Building PyTorch
This page gets you set up to build PyTorch from source on each supported platform.
Prerequisites
PyTorch source code
Conda
We strongly recommend using Conda to manage your PyTorch dependencies and Python environment. You get a high-quality BLAS library (MKL) and controlled dependency versions regardless of your operating system.
You can either install the standard Anaconda distribution, or if you want manual control over your dependencies you can install Miniconda.
CUDA (optional)
If you want to compile with CUDA support, install:
NVIDIA CUDA 10.2 or above
NVIDIA cuDNN v7 or above
A compiler compatible with CUDA. Note: refer to the cuDNN Support Matrix for the cuDNN versions supported by each combination of CUDA version, CUDA driver, and NVIDIA hardware.
ROCm (optional)
If you want to compile with ROCm support, install AMD ROCm 4.0 or above.
ROCm is currently supported only on Linux.
Conda setup
First, create a Conda environment and install the dependencies common to all operating systems. From the root of your PyTorch repo, run:
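As a sketch of that common setup (the exact package list changes between releases, so treat these package names as assumptions and check requirements.txt in your checkout; the environment name pytorch-dev is arbitrary):

```bash
# Create and activate a fresh environment for the build
conda create -n pytorch-dev python=3.8
conda activate pytorch-dev

# Common build dependencies; verify the current list in requirements.txt
conda install cmake ninja mkl mkl-include
pip install -r requirements.txt
```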
Then, install OS-specific dependencies.
Linux
MacOS
Windows
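For illustration, the OS-specific packages typically look like the following (names taken from the PyTorch README of a similar vintage; treat them as a starting point rather than an authoritative list, and run only the lines for your OS):

```bash
# Linux: CUDA builds often need a matching magma package, e.g. for CUDA 11.0
# conda install -c pytorch magma-cuda110

# macOS
# conda install pkg-config libuv

# Windows
# conda install -c conda-forge libuv=1.39
```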
Building PyTorch
Linux
If you are compiling for ROCm, you must run this command first:
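The step in question is commonly the HIPify script, which rewrites the CUDA sources for ROCm before the normal build (script path as found in the PyTorch repo; verify it in your checkout):

```bash
# Rewrite CUDA sources to HIP before building with ROCm support
python tools/amd_build/build_amd.py

# Then build as usual from the repo root
python setup.py develop
```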
MacOS
CUDA is not supported on macOS.
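A typical macOS build from the repo root is the standard setuptools invocation (no CUDA-related flags apply here):

```bash
# Build and install PyTorch in develop mode
python3 setup.py develop
```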
Windows
Choose the correct Visual Studio version.
New versions of Visual Studio sometimes introduce regressions, so it's best to use the same version that PyTorch CI uses: Visual Studio 16.8.5.
PyTorch CI uses Visual C++ BuildTools, which come with Visual Studio Enterprise, Professional, or Community Editions. You can also install the build tools from https://visualstudio.microsoft.com/visual-cpp-build-tools/. The build tools do not come with Visual Studio Code by default.
If you want to build legacy Python code, please refer to Building on legacy code and CUDA.
Build with CPU
Building with CPU only is straightforward.
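Assuming your Conda environment is activated, the CPU-only build is the standard invocation from the repo root:

```bat
:: From an activated conda prompt in the PyTorch repo root
python setup.py develop
```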
Note on OpenMP: The desired OpenMP implementation is Intel OpenMP (iomp). To link against iomp, you'll need to download the library manually and set up the build environment by tweaking CMAKE_INCLUDE_PATH and LIB. The instructions here are an example of setting up both MKL and Intel OpenMP. Without these CMake configurations, the Microsoft Visual C OpenMP runtime (vcomp) will be used.
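For example, the MKL/Intel OpenMP environment setup mirrors the one shown in the PyTorch README ({Your directory} is a placeholder for wherever you unpacked MKL and iomp; it is left as-is here):

```bat
:: Point CMake at the MKL headers and add the MKL/iomp libraries to the linker path
set CMAKE_INCLUDE_PATH={Your directory}\mkl\include
set LIB={Your directory}\mkl\lib;%LIB%
```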
Build with CUDA
NVTX is needed to build PyTorch with CUDA. NVTX is part of the CUDA distribution, where it is called "Nsight Compute". To add it to an existing CUDA installation, run the CUDA installer again and check the corresponding checkbox. Make sure that CUDA with Nsight Compute is installed after Visual Studio.
Currently, VS 2017 / 2019 and Ninja are supported as CMake generators. If ninja.exe is detected in PATH, Ninja will be used as the default generator; otherwise, VS 2017 / 2019 will be used.
If Ninja is selected as the generator, the latest MSVC will get selected as the underlying toolchain.
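If you need to force a particular generator rather than rely on this auto-detection, CMake honors the CMAKE_GENERATOR environment variable, e.g.:

```bat
:: Force the Visual Studio generator instead of auto-detected Ninja
set CMAKE_GENERATOR=Visual Studio 16 2019
```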
Additional libraries such as Magma, oneDNN (a.k.a. MKLDNN or DNNL), and Sccache are often needed. Please refer to the installation-helper to install them.
You can refer to the build_pytorch.bat script for other environment variable configurations.