Set up
We'll import PyTorch and set seeds for reproducibility. Note that PyTorch also requires its own seed, since we will be generating random tensors with it.
```python
import numpy as np
import torch
```
```python
SEED = 1234

# Set seed for reproducibility
np.random.seed(seed=SEED)
torch.manual_seed(SEED)
```
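As a quick sanity check (our addition, not part of the original lesson), re-seeding should make the random generator produce the exact same tensor both times:

```python
import torch

torch.manual_seed(1234)
a = torch.randn(2, 3)

torch.manual_seed(1234)
b = torch.randn(2, 3)

# Same seed -> same "random" values
print(torch.equal(a, b))
```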
Basics
We'll first cover some basics with PyTorch such as creating tensors and converting from common data structures (lists, arrays, etc.) to tensors.
```python
# Creating a random tensor
x = torch.randn(2, 3)  # normal distribution (rand(2,3) -> uniform distribution)
print(f"Type: {x.type()}")
print(f"Size: {x.shape}")
print(f"Values: \n{x}")
```
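As mentioned above, we can also create tensors from common data structures like lists and NumPy arrays. A minimal sketch (the example values here are illustrative, not from the lesson):

```python
import numpy as np
import torch

# Tensor from a Python list
x = torch.Tensor([[1, 2], [3, 4]])
print(f"Size: {x.shape}")

# Tensor from a NumPy array
y = torch.Tensor(np.random.rand(2, 2))
print(f"Type: {y.type()}")  # torch.Tensor defaults to torch.FloatTensor
```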
```python
# Dimensional operations
x = torch.randn(2, 3)
print(f"Values: \n{x}")
y = torch.sum(x, dim=0)  # add each row's value for every column
print(f"Values: \n{y}")
z = torch.sum(x, dim=1)  # add each column's value for every row
print(f"Values: \n{z}")
```
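To make the `dim` semantics concrete, here is a small worked example (values chosen by hand for illustration):

```python
import torch

a = torch.tensor([[1., 2., 3.],
                  [4., 5., 6.]])

# dim=0 collapses the rows: one sum per column
print(torch.sum(a, dim=0))  # tensor([5., 7., 9.])

# dim=1 collapses the columns: one sum per row
print(torch.sum(a, dim=1))  # tensor([ 6., 15.])
```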
We can determine gradients (rate of change) of our tensors with respect to their constituents using gradient bookkeeping. The gradient is a vector that points in the direction of greatest increase of a function. We'll be using gradients in the next lesson to determine how to change our weights to affect a particular objective function (ex. loss).
```python
# Tensors with gradient bookkeeping
x = torch.rand(3, 4, requires_grad=True)
y = 3 * x + 2
z = y.mean()
z.backward()  # z has to be scalar
print(f"x: \n{x}")
print(f"x.grad: \n{x.grad}")
```
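We can verify this gradient by hand: z is the mean of 3x + 2 over the 12 elements of x, so ∂z/∂xᵢ = 3/12 = 0.25 for every element. A quick check (this verification is our addition, not from the lesson):

```python
import torch

x = torch.rand(3, 4, requires_grad=True)
y = 3 * x + 2
z = y.mean()
z.backward()

# Every element of x.grad should equal 3 / 12 = 0.25
print(torch.allclose(x.grad, torch.full((3, 4), 0.25)))  # True
```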
We can also load our tensors onto the GPU for parallelized computation using CUDA (a parallel computing platform and API from Nvidia).
```python
# Is CUDA available?
print(torch.cuda.is_available())
```
False
If False (CUDA is not available), let's change that by following these steps: Go to Runtime > Change runtime type > Change Hardware accelerator to GPU > Click Save
```python
import torch
```
```python
# Is CUDA available now?
print(torch.cuda.is_available())
```
True
```python
# Set device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
```
cuda
```python
x = torch.rand(2, 3)
print(x.is_cuda)
x = torch.rand(2, 3).to(device)  # Tensor is stored on the GPU
print(x.is_cuda)
```
False
True
To cite this content, please use:
```bibtex
@article{madewithml,
    author       = {Goku Mohandas},
    title        = {PyTorch - Made With ML},
    howpublished = {\url{https://madewithml.com/}},
    year         = {2023}
}
```