
Force inference or fine-tuning to run on a specific CUDA GPU

The quick way to make model inference or fine-tuning run on a specific NVIDIA GPU is to define this environment variable before executing the script.

For instance, to force it to run on GPU 1:

export CUDA_VISIBLE_DEVICES=1
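The same restriction can be applied from inside the script itself, as long as the variable is set before any CUDA-aware library (PyTorch, TensorFlow, etc.) is imported. A minimal sketch:

```python
import os

# Pin this process to physical GPU 1. Must run before importing
# torch/tensorflow, because CUDA enumerates devices at initialization.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# From here on, the selected card is the only GPU the framework sees,
# and it is exposed as device index 0 (e.g. torch.device("cuda:0")).
print(os.environ["CUDA_VISIBLE_DEVICES"])
```

Note that the indices in `CUDA_VISIBLE_DEVICES` are remapped: with the setting above, the framework reports a single visible device at index 0, which is physical GPU 1. You can also scope the variable to a single command without exporting it, e.g. `CUDA_VISIBLE_DEVICES=1 python train.py`.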
