torch.nn.parallel

Creator: Seonglae Cho
Created: 2024 Feb 26 7:56
Editor: Seonglae Cho
Edited: 2024 Feb 26 16:49
Refs
Data Parallelism
torch.nn.parallel members
torch.nn.parallel.DistributedDataParallel
 
 

Use DistributedDataParallel (DDP) instead of DataParallel (DP)

DataParallel — PyTorch 2.2 documentation
This container parallelizes the application of the given module by splitting the input across the specified devices by chunking in the batch dimension (other objects will be copied once per device). In the forward pass, the module is replicated on each device, and each replica handles a portion of the input. During the backwards pass, gradients from each replica are summed into the original module.
https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html
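
To illustrate the chunking behavior the docs describe, here is a minimal DataParallel sketch; the Linear model, batch size, and device handling are placeholder assumptions for the example, not part of the original page:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 5)
if torch.cuda.device_count() > 1:
    # Replicates the module onto each GPU and splits every input
    # batch along dim 0; gradients are summed back onto device 0.
    model = nn.DataParallel(model)
model = model.to("cuda")

x = torch.randn(32, 10, device="cuda")  # the batch of 32 is chunked across GPUs
out = model(x)                          # each replica runs forward on its chunk
```

Because DataParallel is single-process and multi-threaded, the replicas contend for the Python GIL and the model is re-replicated every iteration, which is the main reason DDP is preferred.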
 
 

Recommendations
PyTorch's documentation recommends DistributedDataParallel over DataParallel even on a single machine: DDP runs one process per GPU, so it avoids GIL contention and the per-iteration model replication that DataParallel incurs.
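
A minimal single-machine DDP sketch follows, assuming multiple CUDA GPUs and the NCCL backend; the toy model, master port, and hyperparameters are placeholders chosen for the example:

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def worker(rank: int, world_size: int):
    # One process per GPU; NCCL is the standard backend for CUDA tensors.
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)

    model = nn.Linear(10, 5).to(rank)
    ddp_model = DDP(model, device_ids=[rank])  # gradients sync via all-reduce

    opt = torch.optim.SGD(ddp_model.parameters(), lr=1e-3)
    x = torch.randn(32, 10, device=rank)  # each rank feeds its own data shard
    loss = ddp_model(x).sum()
    loss.backward()                       # gradient all-reduce happens here
    opt.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```

Unlike the DataParallel example above, each process keeps its own model replica for the whole run, so only gradients cross devices each step.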

Copyright Seonglae Cho