
Install Transformers, PyTorch and TensorFlow on Ubuntu (2023)

Installing Transformers, PyTorch, and TensorFlow with GPU support on the latest Ubuntu takes several steps. This is how I successfully set it up and ran several models with it.

Please make sure to install the latest NVIDIA drivers. I use an RTX 4090 in this case. Here is the link: https://www.nvidia.com/download/driverResults.aspx/200481/en-us/

If you are using the nouveau driver, you can disable it via:

sudo bash -c "echo blacklist nouveau > /etc/modprobe.d/blacklist-nvidia-nouveau.conf"
sudo bash -c "echo options nouveau modeset=0 >> /etc/modprobe.d/blacklist-nvidia-nouveau.conf"

sudo update-initramfs -u
sudo reboot
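After rebooting, you can confirm that nouveau is no longer loaded; no output from this command means it is disabled:

lsmod | grep nouveau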

Step 1: Install the NVIDIA Drivers

Log out and press CTRL+ALT+F1 to log in to a terminal. Install the software required to build the NVIDIA drivers:

sudo apt install build-essential git pkg-config libglvnd-dev

Then stop the X server to avoid installation errors; you can restart it afterwards:

sudo init 3
chmod a+x NVIDIA-Linux-x86_64-50xxxx.run
sudo ./NVIDIA-Linux-x86_64-50xxxx.run
sudo init 5
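Once the installer completes, verify that the driver can see your card:

nvidia-smi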

Step 2: Install the NVIDIA CUDA Toolkit + cuDNN

Download the CUDA Toolkit:

https://developer.nvidia.com/cuda-11-8-0-download-archive

After downloading, execute the runfile from a terminal. You can skip the NVIDIA driver installation by deselecting the Driver component in the installer menu.
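As a sketch, assuming the 11.8.0 runfile name (adjust the filename to match your download):

sudo sh cuda_11.8.0_520.61.05_linux.run   # filename may differ per release

When the installer finishes, its summary should show: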

Driver:   Not Selected
Toolkit:  Installed in /usr/local/cuda-11.8/

Please make sure that
 -   PATH includes /usr/local/cuda-11.8/bin
 -   LD_LIBRARY_PATH includes /usr/local/cuda-11.8/lib64, or, add /usr/local/cuda-11.8/lib64 to /etc/ld.so.conf and run ldconfig as root

However, the installation paths may not be loaded properly in your shell. Add the following to your ~/.bashrc to load CUDA properly:

export PATH="/usr/local/cuda/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:$LD_LIBRARY_PATH"
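Reload the file so the change takes effect in your current shell, then confirm the compiler is found:

source ~/.bashrc
nvcc --version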

Next, make sure that /usr/local/cuda/ exists.
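It is normally a symlink to the versioned install directory, which you can check with:

ls -ld /usr/local/cuda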

Then go to https://developer.nvidia.com/rdp/cudnn-download and download the cudnn-linux-x86_64-8.9.x version. Extract it and copy the files into the CUDA tree as shown below.
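A rough sketch; the archive name depends on the exact cuDNN version you downloaded, so adjust it accordingly:

tar -xf cudnn-linux-x86_64-8.9.x_cuda11-archive.tar.xz   # adjust to your version
mv cudnn-linux-x86_64-8.9.x_cuda11-archive cudnn

Then copy the headers and libraries into place: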

sudo cp -av cudnn/include/cudnn*.h /usr/local/cuda/include
sudo cp -av cudnn/lib/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
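If you want to confirm which cuDNN version was copied, the version macros live in cudnn_version.h in the cuDNN 8.x layout:

cat /usr/local/cuda/include/cudnn_version.h | grep CUDNN_MAJOR -A 2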

Step 3: Install NVIDIA TensorRT

Download TensorRT and add its library path to your ~/.bashrc:

https://developer.nvidia.com/nvidia-tensorrt-7x-download

export LD_LIBRARY_PATH="/..your...path.../TensorRT-7.2.3.4/lib:$LD_LIBRARY_PATH"
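Reload ~/.bashrc and confirm the TensorRT library directory shows up on the library path:

source ~/.bashrc
echo "$LD_LIBRARY_PATH"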

Step 4: Install the latest PyTorch 2.0 with GPU support

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

Test GPU support with:

python3 -c "import torch; print(torch.cuda.is_available())"
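If it prints True, you can also check which device PyTorch picked up:

python3 -c "import torch; print(torch.cuda.get_device_name(0))"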

Step 5: Install the latest TensorFlow that works with the current Transformers. FYI, TensorFlow 2.10 and above produce some warnings or errors (as of this writing).

pip install tensorflow==2.8

Test GPU capability by running

python3 -c "import os; os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'; import tensorflow as tf; print('Num GPUs Available: ', len(tf.config.list_physical_devices('GPU')))"
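To go one step further and confirm TensorFlow can actually execute an op, run a small computation; the startup logs should show the GPU being registered:

python3 -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"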

Step 6: Install the latest xformers and other modules

pip install xformers evaluate datasets
pip install accelerate -U
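A quick sanity check that the new modules import cleanly:

python3 -c "import xformers, accelerate, datasets, evaluate; print(xformers.__version__, accelerate.__version__)"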

Step 7: Finally, install Transformers!

pip install git+https://github.com/huggingface/transformers
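Verify the install:

python3 -c "import transformers; print(transformers.__version__)"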

Now you can use Transformers on Ubuntu 23.04 with an NVIDIA RTX GPU without issues!
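As a final smoke test, run a pipeline on the GPU; note that this downloads a small default model on first run, and device=0 places it on the GPU:

python3 -c "from transformers import pipeline; print(pipeline('sentiment-analysis', device=0)('It works!'))"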

By the way, you may get this error:

If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

To downgrade protobuf, run:

pip install protobuf==3.20.*
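You can confirm the downgrade took effect with:

python3 -c "import google.protobuf; print(google.protobuf.__version__)"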
