Finetuning on multiple GPUs

Here are several ways to run finetuning across multiple GPUs. In my case, I have two RTX 4090s doing the training. First, you can use the accelerate module.

For example, I’m using

accelerate config
accelerate launch
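For instance, a two-GPU launch might look like the following sketch (`train.py` is a hypothetical script name, and the flags shown are optional overrides of what `accelerate config` saves):

```shell
# One-time interactive setup; answers are saved to a config file.
accelerate config

# Hypothetical launch: run train.py across 2 GPUs
# (the script name here is an assumption).
accelerate launch --multi_gpu --num_processes 2 train.py
```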

Alternatively, you can export these variables, either in the terminal or in a Python/ipynb file. For example, if you have 4 GPUs:
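A minimal sketch, assuming the variable in question is CUDA_VISIBLE_DEVICES (the same one used in the Python snippet further down):

```shell
# Hypothetical example: make GPUs 0-3 visible to the training process.
export CUDA_VISIBLE_DEVICES=0,1,2,3
echo "$CUDA_VISIBLE_DEVICES"
```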


We can also add a small snippet of code:

import os
import torch

# Expose only GPU 7 to this process; CUDA then re-indexes it as device 0.
gpu_list = [7]
gpu_list_str = ','.join(map(str, gpu_list))
os.environ.setdefault("CUDA_VISIBLE_DEVICES", gpu_list_str)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
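Since I have two GPUs, the same pattern extends naturally. A sketch assuming the two cards have indices 0 and 1 (the indices are an assumption; check yours with nvidia-smi):

```python
import os

# Hypothetical two-GPU setup (indices 0 and 1 are assumptions).
gpu_list = [0, 1]
gpu_list_str = ','.join(map(str, gpu_list))

# Both cards become visible inside the process as cuda:0 and cuda:1.
os.environ.setdefault("CUDA_VISIBLE_DEVICES", gpu_list_str)
print(gpu_list_str)
```

Set this before any CUDA call is made; once PyTorch has initialized CUDA, changing CUDA_VISIBLE_DEVICES has no effect.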
