In this lesson, we'll learn the basics of PyTorch, a machine learning library used to build dynamic neural networks. We'll cover the fundamentals, such as creating and using tensors.
Set up
We'll import PyTorch and set seeds for reproducibility. Note that PyTorch also requires a seed since we will be generating random tensors.
```python
import numpy as np
import torch
```
```python
SEED = 1234
```
```python
# Set seed for reproducibility
np.random.seed(seed=SEED)
torch.manual_seed(SEED)
```
Basics
We'll first cover some basics with PyTorch such as creating tensors and converting from common data structures (lists, arrays, etc.) to tensors.
```python
# Creating a random tensor
x = torch.randn(2, 3)  # normal distribution (rand(2,3) -> uniform distribution)
print(f"Type: {x.type()}")
print(f"Size: {x.shape}")
print(f"Values: \n{x}")
```
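The conversions mentioned above (lists, NumPy arrays, etc.) can be sketched like this; the dtypes shown are PyTorch's defaults for these inputs:

```python
import numpy as np
import torch

# From a Python list (integers infer torch.int64)
x = torch.tensor([[1, 2], [3, 4]])
print(x.dtype)  # torch.int64

# From a NumPy array (float64 is NumPy's default)
y = torch.from_numpy(np.random.rand(2, 3))
print(y.dtype)  # torch.float64

# And back to NumPy
z = y.numpy()
print(type(z))  # <class 'numpy.ndarray'>
```

Note that `torch.from_numpy` shares memory with the source array, so modifying one modifies the other.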
```python
# Dimensional operations
x = torch.randn(2, 3)
print(f"Values: \n{x}")
y = torch.sum(x, dim=0)  # add each row's value for every column
print(f"Values: \n{y}")
z = torch.sum(x, dim=1)  # add each column's value for every row
print(f"Values: \n{z}")
```
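A quick sketch of how these reductions change shapes: summing over `dim=0` collapses the rows (leaving one value per column), while `dim=1` collapses the columns (one value per row).

```python
import torch

x = torch.randn(2, 3)
y = torch.sum(x, dim=0)  # collapse rows -> one value per column
z = torch.sum(x, dim=1)  # collapse columns -> one value per row
print(y.shape)  # torch.Size([3])
print(z.shape)  # torch.Size([2])
```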
We can determine gradients (rates of change) of our tensors with respect to their constituents using gradient bookkeeping. This will be useful when we're training our models using backpropagation, where we'll use these gradients to optimize our weights with the goal of minimizing our objective function (loss).
Note
Don't worry if you're not familiar with these terms, we'll cover all of them in detail in the next lesson.
```python
# Tensors with gradient bookkeeping
x = torch.rand(3, 4, requires_grad=True)
y = 3*x + 2
z = y.mean()
z.backward()  # z has to be scalar
print(f"x: \n{x}")
print(f"x.grad: \n{x.grad}")
```
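We can sanity-check this bookkeeping by hand: since `z` averages `3*x + 2` over the 12 elements of `x`, each partial derivative is 3/12 = 0.25, which is exactly what autograd reports.

```python
import torch

x = torch.rand(3, 4, requires_grad=True)
y = 3 * x + 2
z = y.mean()   # z = (1/12) * sum(3x + 2), so dz/dx_ij = 3/12 = 0.25
z.backward()
print(x.grad)  # every entry is 0.25
```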
We can also load our tensors onto the GPU for parallelized computation using CUDA (a parallel computing platform and API from Nvidia).
```python
# Is CUDA available?
print(torch.cuda.is_available())
```
```
False
```
If False (CUDA is not available), let's change that by following these steps: Go to Runtime > Change runtime type > Change Hardware accelerator to GPU > Click Save
```python
import torch
```
```python
# Is CUDA available now?
print(torch.cuda.is_available())
```
```
True
```
```python
# Set device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)
```
```
cuda
```
```python
x = torch.rand(2, 3)
print(x.is_cuda)
x = torch.rand(2, 3).to(device)  # Tensor is stored on the GPU
print(x.is_cuda)
```
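A common follow-up, sketched so it runs with or without a GPU: a tensor living on the GPU must be moved back to the CPU before it can be converted to a NumPy array.

```python
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
x = torch.rand(2, 3).to(device)
arr = x.cpu().numpy()  # GPU tensors must be moved to CPU before .numpy()
print(arr.shape)  # (2, 3)
```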