This guide compiles the complete, step-by-step process from our conversation to set up NVIDIA drivers with CUDA support on a Proxmox VE 8.4 host (Debian 12-based) for an RTX 4090 GPU, verify it, and configure a privileged LXC container for shared GPU access (concurrent CUDA compute across containers). The setup enables running AI applications (e.g., PyTorch, Transformers) directly on the host or in isolated containers.
Prerequisites:
A Proxmox VE 8.4 host (Debian 12-based) with root shell access; pveversion should report pve-manager 8.4.x.
An RTX 4090 visible on the PCIe bus: lspci | grep -i nvidia should list the card.
The GPU must not be assigned to a VM via PCIe passthrough (check each VM's config).
Update System and Install Prerequisites:
apt update && apt full-upgrade -y
apt install pve-headers-$(uname -r) build-essential gcc dirmngr ca-certificates apt-transport-https dkms curl software-properties-common -y
add-apt-repository contrib non-free-firmware
apt update
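If add-apt-repository is unavailable or you prefer editing the sources directly, the components can be appended by hand; a minimal sketch, assuming the stock single-line bookworm entries in /etc/apt/sources.list:
# Append contrib and non-free-firmware to the default bookworm entries
sed -i 's/^deb \(.*bookworm.*main\)$/deb \1 contrib non-free-firmware/' /etc/apt/sources.list
apt update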
Blacklist Nouveau Driver (Open-Source Alternative):
echo -e "blacklist nouveau\noptions nouveau modeset=0" > /etc/modprobe.d/blacklist-nouveau.conf
update-initramfs -u -k all
reboot
After reboot, verify: lsmod | grep nouveau (should return nothing).
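Optionally, confirm that the blacklist file made it into the rebuilt initramfs (a quick sanity check; the initrd filename may differ on your system):
lsinitramfs /boot/initrd.img-$(uname -r) | grep blacklist-nouveau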
Install NVIDIA Drivers with CUDA Support Using .run Installer:
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/580.65.06/NVIDIA-Linux-x86_64-580.65.06.run
chmod +x NVIDIA-Linux-x86_64-580.65.06.run
./NVIDIA-Linux-x86_64-580.65.06.run --dkms
reboot
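For scripted or repeat installs (e.g., rebuilding the host later), the .run installer can also run unattended instead of interactively; a sketch, assuming the same driver version:
# Unattended install with DKMS registration (no interactive prompts)
./NVIDIA-Linux-x86_64-580.65.06.run --dkms --silent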
Install CUDA Toolkit (Full Userland Support):
wget https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/cuda-keyring_1.1-1_all.deb
dpkg -i cuda-keyring_1.1-1_all.deb
apt update
apt install cuda-toolkit=13.0.0-1 -y
echo 'export PATH=/usr/local/cuda-13.0/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-13.0/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
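If other users or non-interactive shells also need the CUDA tools, the same exports can go into a system-wide profile script instead of root's ~/.bashrc; a sketch assuming the cuda-13.0 install path used above:
cat > /etc/profile.d/cuda.sh <<'EOF'
export PATH=/usr/local/cuda-13.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-13.0/lib64:$LD_LIBRARY_PATH
EOF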
Verify Installation on Host:
nvidia-smi # Should show RTX 4090, driver 580.65.06, CUDA 13.0
nvcc --version # Should show CUDA 13.0 details
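Optionally, a small end-to-end smoke test that nvcc can compile and the driver can launch a kernel (file names here are arbitrary):
# Write, compile, and run a one-thread CUDA kernel
cat > /tmp/cuda_check.cu <<'EOF'
#include <cstdio>
__global__ void add_one(int *x) { *x += 1; }
int main() {
    int h = 41, *d;
    cudaMalloc(&d, sizeof(int));
    cudaMemcpy(d, &h, sizeof(int), cudaMemcpyHostToDevice);
    add_one<<<1, 1>>>(d);
    cudaMemcpy(&h, d, sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(d);
    printf("GPU returned %d (expected 42)\n", h);
    return h == 42 ? 0 : 1;
}
EOF
nvcc /tmp/cuda_check.cu -o /tmp/cuda_check && /tmp/cuda_check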
Install Python and Test AI (Optional on Host; Skip if Using Container Only):
apt install python3-venv python3-pip -y
python3 -m venv ai_env
source ai_env/bin/activate
pip install --upgrade pip
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
pip install transformers
python -c "import torch; print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0))"
python -c "from transformers import pipeline; classifier = pipeline('sentiment-analysis'); print(classifier('This is great!'))"
deactivate
Download Debian 12 Template (If Not Already Done):
In the Proxmox web UI, open your template storage (e.g., local) > Content > Templates and download the Debian 12 Standard template (e.g., debian-12-standard_12.7-1_amd64.tar.zst), or use the CLI as sketched below.
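The same template can be fetched from the host shell instead of the web UI; a sketch, assuming the template name shown above is still the current one in the catalog:
pveam update
pveam available | grep debian-12
pveam download local debian-12-standard_12.7-1_amd64.tar.zst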
Create Privileged LXC Container:
pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst --hostname ai-container --rootfs local-zfs:50 --cores 128 --memory 131072 --net0 name=eth0,bridge=vmbr0,ip=dhcp --unprivileged 0 --features nesting=1
pct start 101
Adjust the storage (e.g., local-zfs), rootfs size (50 GB), and cores/RAM as needed.
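To confirm the container came up and the options landed in its config, a quick check from the host:
pct status 101
pct config 101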
Bind GPU Devices for Shared Access:
Edit /etc/pve/lxc/101.conf on the host (e.g., via nano) and add at the end:
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-caps dev/nvidia-caps none bind,create=dir,optional 0 0
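The character-device major numbers in the allow rules (195 and 509 above) can differ between driver versions and hosts, particularly for nvidia-uvm; check the actual numbers on the host before restarting and adjust the lines accordingly:
# Major numbers appear before the comma in the listing (e.g., 195, 0)
ls -l /dev/nvidia*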
pct stop 101; pct start 101
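After the restart, the device nodes should be visible inside the container; a quick check from the host:
pct exec 101 -- sh -c 'ls -l /dev/nvidia*'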
Install NVIDIA Userland Tools and CUDA Toolkit in Container:
pct enter 101
apt update && apt full-upgrade -y
apt install software-properties-common gnupg curl build-essential -y
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/580.65.06/NVIDIA-Linux-x86_64-580.65.06.run
chmod +x NVIDIA-Linux-x86_64-580.65.06.run
./NVIDIA-Linux-x86_64-580.65.06.run --no-kernel-module
wget https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/cuda-keyring_1.1-1_all.deb
dpkg -i cuda-keyring_1.1-1_all.deb
apt update
apt install cuda-toolkit -y
echo 'export PATH=/usr/local/cuda/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
Verify inside the container: nvidia-smi (shows the GPU) and nvcc --version (reports CUDA 13.0).
Install Python and Test AI in Container:
apt install python3-venv python3-pip -y
python3 -m venv ai_env
source ai_env/bin/activate
pip install --upgrade pip
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
pip install transformers
python -c "import torch; print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0))"
python -c "from transformers import pipeline; classifier = pipeline('sentiment-analysis'); print(classifier('This is great!'))"
deactivate
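To see shared access in action, start a CUDA workload in the container (e.g., the PyTorch test above) and watch the GPU from the host at the same time; host and container processes appear in the same list because they share one driver instance:
# On the Proxmox host, while a job runs in the container; the process
# table at the bottom of nvidia-smi lists host and container processes together
watch -n 1 nvidia-smi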
Troubleshooting:
If nvidia-smi fails in the container, check the device nodes (ls /dev/nvidia*) and the host kernel modules (lsmod | grep nvidia).
After a host kernel upgrade, rerun dkms autoinstall and reboot.
To uninstall on the host: ./NVIDIA-Linux-x86_64-580.65.06.run --uninstall; apt purge cuda-toolkit.
This setup is complete and tested as of August 2025.
Kun L. & Baraban (2025). Full Setup Guide: NVIDIA RTX 4090 with CUDA on Proxmox Host and Shared in LXC Container. https://KintaroAI.com/blog/2025/08/09/full-setup-guide-nvidia-rtx-4090-with-cuda-on-proxmox-host-and-shared-in-lxc-container/ (KintaroAI)
@misc{llmkun2025fullsetupguidenvidiartx4090withcudaonproxmoxhostandsharedinlxccontainer,
author = {LLM Kun and Baraban},
title = {Full Setup Guide: NVIDIA RTX 4090 with CUDA on Proxmox Host and Shared in LXC Container},
year = {2025},
url = {https://KintaroAI.com/blog/2025/08/09/full-setup-guide-nvidia-rtx-4090-with-cuda-on-proxmox-host-and-shared-in-lxc-container/},
}