
[PyTorch] Tensor ๋ฐ์ดํ„ฐ ์กฐ์ž‘ ํ•จ์ˆ˜ ์ •๋ฆฌ

by wonee1 2025. 4. 23.
๋ฐ˜์‘ํ˜•

 

๐Ÿšฉ What is a Tensor?

  • A data type designed to run parallel computations efficiently on the GPU.
  • Supports automatic differentiation (autograd) and can be executed on the GPU (see the sketch below).

 

 

๐Ÿ”ง ์ƒ์„ฑ

import torch
import numpy as np

# tensor()
x = torch.tensor([1.0, 2.0, 3.0])

# from_numpy()
arr = np.array([4.0, 5.0, 6.0])
y = torch.from_numpy(arr)

 

  • torch.tensor(data): Creates a tensor from a Python list, NumPy array, etc. (copies the data)
  • torch.from_numpy(ndarray): Converts a NumPy array into a tensor that shares its memory (data is shared; see the sketch below)
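
Since from_numpy() shares its buffer with the source array while torch.tensor() copies, a change on one side shows up on the other only in the first case. A minimal sketch of that difference:

import torch
import numpy as np

arr = np.array([4.0, 5.0, 6.0])
y = torch.from_numpy(arr)   # shares memory with arr
arr[0] = 100.0
print(y)                    # tensor([100., 5., 6.], dtype=torch.float64)

z = torch.tensor(arr)       # independent copy of the data
arr[1] = 999.0
print(z)                    # tensor([100., 5., 6.], dtype=torch.float64), unchanged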

 

 

 

 

๐Ÿ”„ Shape Transformation

 

x = torch.tensor([[1, 2], [3, 4]])

# to()
x.to(torch.float32)

# transpose()
x.T  # or x.transpose(0, 1)

# reshape()
x.reshape(4)

# view()
x.view(-1)  # similar to reshape (requires contiguous memory)

# squeeze()
torch.tensor([[[1], [2], [3]]]).squeeze()  # → [1, 2, 3]

# unsqueeze()
torch.tensor([1, 2, 3]).unsqueeze(0)  # → [[1, 2, 3]]

# permute()
x = torch.randn(2, 3, 4)
x.permute(2, 0, 1).shape  # → (4, 2, 3)

# flatten()
x = torch.tensor([[1, 2], [3, 4]])
x.flatten()  # → [1, 2, 3, 4]

# unflatten()
x = torch.tensor([1, 2, 3, 4])
x.unflatten(0, (2, 2))  # → [[1, 2], [3, 4]]

 

  • to(device): Moves the tensor between CPU and GPU
  • transpose(dim0, dim1): Swaps two dimensions
  • reshape(shape): Returns a tensor with the new shape (may be a view or a copy)
  • view(shape): Similar to reshape, but only works on contiguous memory (see the sketch below)
  • squeeze(dim): Removes dimensions of size 1 (e.g. [1, 3, 1] → [3])
  • unsqueeze(dim): Inserts a dimension of size 1 at dim (e.g. [3] → [1, 3])
  • permute(dims): Rearranges the dimensions in an arbitrary order
  • flatten(start_dim, end_dim): Flattens a range of dimensions into one
  • unflatten(dim, sizes): Reverses flatten by splitting a dimension into the given sizes
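
Two rows above deserve a concrete check: view() refuses non-contiguous tensors (e.g. the result of transpose()), and to() moves tensors between devices as well as changing dtype. A small sketch, with the CUDA move guarded so it also runs without a GPU:

import torch

x = torch.randn(2, 3)
t = x.transpose(0, 1)        # a non-contiguous view of x
print(t.is_contiguous())     # False

# t.view(6) would raise a RuntimeError here;
# reshape() copies when necessary, or make the memory contiguous first
flat = t.reshape(6)
flat2 = t.contiguous().view(6)

# to() also moves the tensor between devices
device = "cuda" if torch.cuda.is_available() else "cpu"
x = x.to(device)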

 

 

 

๐Ÿงฉ Combining and Splitting

 

a = torch.tensor([[1, 2]])
b = torch.tensor([[3, 4]])

# cat()
torch.cat([a, b], dim=0)  # → [[1, 2], [3, 4]]

# stack()
torch.stack([a, b], dim=0)  # → [[[1, 2]], [[3, 4]]]

# split()
x = torch.tensor([1, 2, 3, 4])
torch.split(x, 2)  # → ([1, 2], [3, 4])

# chunk()
x = torch.tensor([1, 2, 3, 4])
torch.chunk(x, 2)  # → ([1, 2], [3, 4])

# unbind()
x = torch.tensor([[1, 2], [3, 4]])
torch.unbind(x, dim=0)  # → (tensor([1, 2]), tensor([3, 4]))

 

  • cat(tensors, dim): Concatenates tensors along an existing dim
  • stack(tensors, dim): Concatenates tensors along a newly created dim
  • split(tensor, split_size, dim): Splits into pieces of the given size (or list of sizes; see the sketch below)
  • chunk(tensor, chunks, dim): Splits into a given number of roughly equal pieces
  • unbind(tensor, dim): Removes the given dimension and returns a tuple of slices along it
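
split() also accepts a list of piece sizes instead of a single split_size, and both split() and chunk() take a dim argument; a brief sketch:

import torch

x = torch.arange(6)                  # tensor([0, 1, 2, 3, 4, 5])
print(torch.split(x, [1, 2, 3]))     # (tensor([0]), tensor([1, 2]), tensor([3, 4, 5]))

m = torch.arange(12).reshape(3, 4)
cols = torch.chunk(m, 2, dim=1)      # split the 4 columns into 2 groups
print(cols[0].shape, cols[1].shape)  # torch.Size([3, 2]) torch.Size([3, 2])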

 

 

 

๐Ÿ“Œ ์ธ๋ฑ์‹ฑ ๋ฐ ์Šฌ๋ผ์ด์‹ฑ

 

x = torch.tensor([[10, 20], [30, 40]])

# ์ผ๋ฐ˜ ์ธ๋ฑ์‹ฑ
x[0, 1]  # → 20

# ์Šฌ๋ผ์ด์‹ฑ
x[:, 1]  # → [20, 40]

# gather()
x = torch.tensor([[1, 2], [3, 4]])
index = torch.tensor([[0, 1], [1, 0]])
torch.gather(x, 1, index)  # → [[1, 2], [4, 3]]

# index_select()
x = torch.tensor([[1, 2, 3], [4, 5, 6]])
idx = torch.tensor([0, 2])
torch.index_select(x, 1, idx)  # → [[1, 3], [4, 6]]

 

  • tensor[i], tensor[i, j]: Basic indexing and slicing
  • gather(dim, index): Selects elements according to index along dim (see the sketch below)
  • index_select(dim, index): Builds a new tensor from the given indices along dim
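
gather() follows the rule out[i][j] = x[index[i][j]][j] when dim=0 (and out[i][j] = x[i][index[i][j]] when dim=1, as in the example above), while index_select() picks whole slices. A short sketch of the dim=0 case:

import torch

x = torch.tensor([[1, 2], [3, 4]])

# gather along dim=0: out[i][j] = x[index[i][j]][j]
index = torch.tensor([[1, 0], [0, 1]])
print(torch.gather(x, 0, index))      # tensor([[3, 2], [1, 4]])

# index_select along dim=0 picks whole rows
idx = torch.tensor([1, 0])
print(torch.index_select(x, 0, idx))  # tensor([[3, 4], [1, 2]])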

 

 

 

 

โž• Operations

Arithmetic operations

a = torch.tensor([1, 2])
b = torch.tensor([3, 4])

a.add(b)  # → [4, 6]
a.sub(b)  # → [-2, -2]
a.mul(b)  # → [3, 8]
a.div(b)  # → [0.3333, 0.5]
  • add, sub, mul, div: Addition, subtraction, multiplication, division (broadcasting supported; see the sketch below)
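
The broadcasting note means operands of different shapes are virtually expanded to a common shape before the element-wise operation; a small sketch:

import torch

a = torch.tensor([[1, 2], [3, 4]])   # shape (2, 2)
b = torch.tensor([10, 20])           # shape (2,), broadcast across the rows
print(a.add(b))                      # tensor([[11, 22], [13, 24]])

s = torch.tensor(2)                  # a scalar broadcasts against anything
print(a.mul(s))                      # tensor([[2, 4], [6, 8]])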

 

Aggregation operations

x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])

x.sum()    # → 10.0
x.mean()   # → 2.5
x.max()    # → 4.0
x.min()    # → 1.0
x.std()    # → standard deviation (≈ 1.2910, unbiased by default)

 

  • sum, mean, max, min, std: Compute the tensor's sum, mean, maximum, minimum, standard deviation, etc. (a dim can be specified; see the sketch below)
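
With dim specified, the reduction runs along that axis and returns one value per remaining slice; max() and min() then also return the argmax/argmin indices. A short sketch:

import torch

x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])

print(x.sum(dim=0))     # tensor([4., 6.])          column sums
print(x.mean(dim=1))    # tensor([1.5000, 3.5000])  row means

values, indices = x.max(dim=1)   # per-row maximum and its position
print(values)   # tensor([2., 4.])
print(indices)  # tensor([1, 1])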

 

 

๋น„๊ต ์—ฐ์‚ฐ

a = torch.tensor([1, 2, 3])
b = torch.tensor([2, 2, 1])

a.eq(b)  # → [False, True, False]
a.gt(b)  # → [False, False, True]
a.lt(b)  # → [True, False, False]

 

 

  • eq, ne, gt, lt: Element-wise equal, not-equal, greater-than, less-than; the result is returned as a Boolean tensor (see the sketch below)
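
The Boolean tensors these comparisons return are typically used as masks for filtering or counting; a short sketch:

import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([2, 2, 1])

mask = a.gt(b)       # tensor([False, False,  True])
print(a[mask])       # tensor([3]), keep only the elements where a > b
print(a.ne(b))       # tensor([ True, False,  True])
print(mask.sum())    # tensor(1), number of True entries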

 

๐Ÿ’ Other Functions

x = torch.tensor([1.0, 2.0, 3.0])

# numpy()
x.numpy()  # → convert to a NumPy array

# tolist()
x.tolist()  # → [1.0, 2.0, 3.0]

# clone()
y = x.clone()

# to()
x.to(torch.float64)

# expand()
x = torch.tensor([[1], [2], [3]])  # shape: (3, 1)
x.expand(3, 4)  # shape: (3, 4), values are not copied (memory is shared)

# repeat()
x = torch.tensor([[1], [2], [3]])
x.repeat(1, 3)  # → [[1, 1, 1], [2, 2, 2], [3, 3, 3]]

# detach()
x = torch.tensor([1.0], requires_grad=True)
y = x * 2
y = y.detach()

 

  • numpy(): Converts the tensor to a NumPy array (shares memory)
  • tolist(): Converts the tensor to a Python list
  • clone(): Copies the tensor (an independent copy)
  • to(dtype/device): Changes the tensor's dtype or device
  • expand(): Expands via broadcasting while sharing memory
  • repeat(): Expands by copying values (a different mechanism from expand; see the sketch below)
  • detach(): Detaches the tensor from gradient computation (cuts it out of autograd)
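
The memory notes in this table are the easy part to get wrong: expand() returns a view over the same storage (so it should not be written to in place), repeat() allocates new memory, and numpy() shares memory with a CPU tensor. A sketch that makes the sharing visible:

import torch

x = torch.tensor([[1], [2], [3]])
e = x.expand(3, 4)                    # view: no new memory is allocated
r = x.repeat(1, 4)                    # copy: 3 x 4 elements are allocated
print(e.data_ptr() == x.data_ptr())   # True  (same underlying storage)
print(r.data_ptr() == x.data_ptr())   # False (independent storage)

# numpy() shares memory with a CPU tensor
t = torch.tensor([1.0, 2.0, 3.0])
n = t.numpy()
t[0] = 10.0
print(n)                              # [10.  2.  3.]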

 

 

 

 

 

 

 

๋ฐ˜์‘ํ˜•