Run Chrome on different network interfaces in Ubuntu

If you have two different internet providers connected to a single PC, most of the time you will want to split the usage between the different network interfaces / adapters.

For instance, you may want one browser for browsing and another for downloading / uploading. There are several solutions for binding programs to specific ethernet devices, such as bind-address and firejail, but they may not work easily in Ubuntu.

A quick solution is to leverage an open-source LD_PRELOAD project:

First, clone the project

git clone

Second, compile it (make sure gcc is already installed):

gcc -nostartfiles -fpic -shared bindToInterface.c -o bindToInterface.so -ldl -D_GNU_SOURCE

Third, get your network interface information with sudo ifconfig (install the net-tools package if it is missing) and note the interface names. In my case, I have two: enpf0 and enpf1.

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17xxxx  netmask  broadcast

enpf0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.xx.xx  netmask  broadcast 

enpf1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.xxxx  netmask  broadcast         ....
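As a cross-check, the interface names can also be listed from Python's standard library (Linux only):

```python
import socket

# if_nameindex() returns (index, name) pairs for every network
# interface the kernel knows about, e.g. lo, enpf0, enpf1.
for index, name in socket.if_nameindex():
    print(index, name)
```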

Now, here is a quick way to run Google Chrome or Firefox on a specific network interface with public internet access.

Go inside the cloned project (or copy the .so file elsewhere) and run this command. Replace enpf1 with your own network interface:

BIND_INTERFACE=enpf1 LD_PRELOAD=./bindToInterface.so /usr/bin/google-chrome-stable

You have now successfully opened Google Chrome on a specific network interface.
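The same environment can also be assembled from a Python launcher script. The bound_env helper below is hypothetical, and the .so path assumes the compiled library sits in the current directory:

```python
import os
import subprocess

def bound_env(interface, preload="./bindToInterface.so"):
    # Copy the current environment and add the two variables the
    # LD_PRELOAD shim reads: the interface to bind to and the shim path.
    env = dict(os.environ)
    env["BIND_INTERFACE"] = interface
    env["LD_PRELOAD"] = preload
    return env

# Launch Chrome bound to enpf1 (replace with your own interface name):
# subprocess.Popen(["/usr/bin/google-chrome-stable"], env=bound_env("enpf1"))
```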


Solve WordPress NGINX (13: Permission denied)

Quick steps to solve this error from WordPress and NGINX:

crit *2 stat() "index.php" failed (13: Permission denied)

Make sure the web server can traverse the hosted folder by granting the execute bit:

chmod +x /path/website

If the problem still exists, try adding the www-data user to your user's group (here, ubuntu):

sudo gpasswd -a www-data ubuntu

If the page is not found, try restarting your php-fpm service:

sudo service php8.1-fpm stop
sudo service php8.1-fpm start
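That permission error usually means some directory along the path is missing the execute (search) bit for the nginx user. Here is a small sketch to spot the blocking component (find_blockers is a hypothetical helper, not an nginx tool):

```python
import os
import stat

def find_blockers(path):
    # Walk each directory component of `path` and report the ones that
    # lack the execute bit for "other" users, which is the usual cause
    # of nginx's stat() failing with (13: Permission denied).
    blockers = []
    current = os.sep
    for part in os.path.abspath(path).split(os.sep):
        if not part:
            continue
        current = os.path.join(current, part)
        if os.path.isdir(current):
            mode = os.stat(current).st_mode
            if not mode & stat.S_IXOTH:
                blockers.append(current)
    return blockers
```

Any directory it reports needs chmod o+x (or the group-membership fix above).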

Finetuning on multiple GPUs

Here are several ways to run finetuning across multiple GPUs. In my case, I have two RTX 4090s doing the training. First, you can use the accelerate module:

For example, I’m using

accelerate config
accelerate launch

Or export this variable, either in the terminal or in a python/ipynb file. If you have 4 GPUs:

export CUDA_VISIBLE_DEVICES=0,1,2,3

We can also do this with a few lines of code:

import os
import torch

gpu_list = [7]
gpu_list_str = ','.join(map(str, gpu_list))
os.environ.setdefault("CUDA_VISIBLE_DEVICES", gpu_list_str)
# After masking, the selected physical GPU is visible as cuda:0
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

Solve Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

When finetuning a model, you may encounter this error message telling you CUDA is out of memory (OOM), with the detail:

RuntimeError: CUDA error: out of memory; Compile with TORCH_USE_CUDA_DSA to enable device-side assertions

In my case, I had upgraded the NVIDIA driver from version 525 to 530, which caused this problem.

So, the solution is to downgrade my NVIDIA driver back to version 525 and use a recent Transformers and Torch installation.


Fix AttributeError: ‘DatasetDict’ object has no attribute ‘to’ in Huggingface Dataset

If you load a dataset and want to export it with "to_json" or "to_csv", or access "features", and you get this error:

Fix AttributeError: 'DatasetDict' object has no attribute 'to' 

The first step is to check the structure of the dataset using print:

DatasetDict({
    train: Dataset({
        features: ['id', 'category', 'instruction', 'context', 'response'],
        num_rows: 15015
    })
})

In this case, the DatasetDict contains only a train split (it often also has test and other splits). Exporting to JSON fails on the DatasetDict itself; you need to access the split key on the DatasetDict first.


Force running inference / finetuning on a specific CUDA device or GPU

The quick way to make model inference or fine-tuning run on a specific NVIDIA GPU card is to define the CUDA_VISIBLE_DEVICES variable before executing the script.

For instance, I forced it to run on GPU 1 (replace your_script.py with your own script) by:

CUDA_VISIBLE_DEVICES=1 python your_script.py

What are Passage Retrieval Methods?

Passage retrieval methods refer to techniques and algorithms used to retrieve relevant passages or segments of text from a larger document or corpus. These methods are commonly employed in information retrieval systems and question-answering systems, where the goal is to locate specific information within a large amount of text.
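As a toy illustration of the idea, here is a purely lexical retriever that ranks passages by query-term frequency (real systems use scorers such as BM25 or dense embeddings):

```python
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"\w+", text.lower())

def retrieve(query, passages, top_k=1):
    # Score each passage by how often the query's terms occur in it,
    # a crude stand-in for lexical scorers such as BM25.
    query_terms = set(tokenize(query))
    scored = []
    for passage in passages:
        counts = Counter(tokenize(passage))
        scored.append((sum(counts[t] for t in query_terms), passage))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [passage for _, passage in scored[:top_k]]

passages = [
    "The capital of France is Paris.",
    "Python is a programming language.",
    "Paris hosted the Olympic games in 2024.",
]
print(retrieve("capital of France", passages))
# -> ['The capital of France is Paris.']
```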


How to uninstall CUDA and replace it with a new version

The quick fix for uninstalling the CUDA currently installed in Ubuntu (when it was not installed via software packages) is to use the bundled uninstaller. For instance, I use CUDA 11.8 and I need to downgrade it to 11.6.

So, I need to find the path and run this command:

sudo /usr/local/cuda-11.8/bin/cuda-uninstaller

Last, we can clean up the entire cuda folder:

sudo rm -rf /usr/local/cuda

If you have an issue with the GCC version required by the installed CUDA and need to downgrade or upgrade it, you can follow this (set MAX_GCC_VERSION to the maximum GCC version your CUDA release supports):

sudo apt install gcc-$MAX_GCC_VERSION g++-$MAX_GCC_VERSION
sudo ln -s /usr/bin/gcc-$MAX_GCC_VERSION /usr/local/cuda/bin/gcc 
sudo ln -s /usr/bin/g++-$MAX_GCC_VERSION /usr/local/cuda/bin/g++

Fix CUDA error: no kernel image is available for execution on the device

When generating question answers from datasets using a third-party project, I got this error:

RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

I believe this error occurs because my GPU is an RTX 4090, and the Ada Lovelace architecture is not supported by torch 1.12. To solve this, I upgraded torch to the next version, 1.13:

pip install torch==1.13.0

Previously, with torch 1.12, the warning showed:

NVIDIA GeForce RTX 4090 with CUDA capability sm_89 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA GeForce RTX 4090 GPU with PyTorch, please check the instructions at

And it works!


Fix RuntimeError: unscale_() has already been called on this optimizer since the last update().

When finetuning a model using LoRA and HuggingFace Transformers, I received this error:

RuntimeError: unscale_() has already been called on this optimizer since the last update().

This error occurs with the latest transformers version, transformers-4.31.0.dev0. The solution is to revert back to transformers 4.30.2 with:

pip install transformers==4.30.2