This guide provides a step-by-step process to install Docker inside a privileged LXC container on Proxmox (with nesting enabled and GPU bound for shared access), deploy Portainer as a web-based Docker management UI, and then set up Ollama (for running LLMs) and Open WebUI (a ChatGPT-like interface for Ollama models). This enables easy management of multiple Docker containers via a UI, with GPU acceleration for AI workloads. The setup assumes your LXC container (e.g., ID 101) is already created and GPU-bound (as per previous instructions).
Prerequisites:
A privileged LXC container with nesting enabled (e.g., created with --features nesting=1), sufficient resources (e.g., 128 cores, 128GB RAM), and GPU devices bound (e.g., via /etc/pve/lxc/101.conf with NVIDIA mounts and lxc.cap.drop: cleared for capability support). See the Proxmox with GPU support setup for more details.
Enter the container with pct enter 101 (on the Proxmox host); all steps below are executed inside the LXC unless noted.
Docker will run nested inside the LXC, allowing container isolation while sharing the host GPU.
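For reference, the GPU binding in /etc/pve/lxc/101.conf typically looks like the sketch below. The device major numbers (195 for the NVIDIA devices, 509 for nvidia-uvm) are assumptions and vary by host; check yours with ls -l /dev/nvidia* on the Proxmox host.

```
# Sketch of a GPU-bound, nesting-enabled LXC config; device majors vary by host.
features: nesting=1
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.cap.drop:
```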
Uninstall Old Docker Packages (If Present):
apt remove docker docker-engine docker.io containerd runc -y
Install Prerequisites:
apt update
apt install ca-certificates curl gnupg lsb-release -y
Add Docker Repository:
mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
apt update
Install Docker:
apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
Start and Enable Docker:
systemctl start docker
systemctl enable docker
Verify Docker:
docker --version
docker run hello-world # Pulls and runs a test container
Installing the NVIDIA Container Toolkit enables Docker containers (like Ollama) to use the bound RTX 4090 GPU.
Add Toolkit Repository:
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
apt update
Install Toolkit:
apt install nvidia-container-toolkit -y
Configure Docker Runtime:
mkdir -p /etc/docker
echo '{
"runtimes": {
"nvidia": {
"path": "/usr/bin/nvidia-container-runtime",
"runtimeArgs": []
}
},
"default-runtime": "nvidia"
}' > /etc/docker/daemon.json
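Before restarting Docker, it is worth sanity-checking that the file you just wrote is valid JSON, since a malformed daemon.json prevents the daemon from starting. A minimal check, assuming python3 is available in the LXC:

```shell
# Sanity-check the runtime config before restarting Docker; a malformed
# daemon.json stops the Docker daemon (python3 assumed available).
out=$(echo '{
  "runtimes": { "nvidia": { "path": "/usr/bin/nvidia-container-runtime", "runtimeArgs": [] } },
  "default-runtime": "nvidia"
}' | python3 -m json.tool) && echo "daemon.json content is valid JSON"
```

On the LXC itself you can run the same check directly against the file: python3 -m json.tool /etc/docker/daemon.json.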
Restart Docker:
systemctl restart docker
Verify GPU Support:
docker info | grep -i runtime # Should show "nvidia"
nvidia-smi # Confirms GPU detection
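As an end-to-end check, you can run a throwaway CUDA container and call nvidia-smi from inside it. The image tag below is one example (any CUDA base image works):

```shell
# Run nvidia-smi inside a disposable container; if the GPU table prints,
# the toolkit wiring between Docker and the bound GPU works.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```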
Portainer provides a web UI for managing Docker containers, volumes, networks, and GPU allocation.
Create Persistent Volume:
docker volume create portainer_data
Run Portainer Container:
docker run -d -p 8000:8000 -p 9443:9443 -p 9000:9000 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
Access Portainer UI:
ip addr show eth0 # Find the LXC's IP address
Browse to https://<LXC-IP>:9443; on first visit, Portainer prompts you to create an admin account.
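The ip addr output can be filtered down to just the address. The snippet below runs the filter against a sample line so the pipeline is visible; the interface name eth0 is an assumption from the default LXC setup.

```shell
# Extract the IPv4 address from `ip -4 -o addr show eth0` style output.
# A sample line stands in here; substitute the real command's output.
sample='2: eth0    inet 192.168.1.50/24 brd 192.168.1.255 scope global eth0'
echo "$sample" | awk '{print $4}' | cut -d/ -f1   # → 192.168.1.50
```

On the LXC itself: ip -4 -o addr show eth0 | awk '{print $4}' | cut -d/ -f1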
Enable GPU in Portainer:
In the Portainer UI, open the local environment's details and enable GPU support, adding the bound GPU so it can be selected when deploying containers (the exact menu location varies by Portainer version).
Ollama runs LLMs with GPU support.
In Portainer UI: Containers > Add container.
Name: ollama
Image: ollama/ollama:latest
Ports: Publish host 11434 to container 11434 (for API).
Volumes: Add: Host /root/.ollama to container /root/.ollama (persistent models/data).
Runtime & Resources: Enable GPU > Select RTX 4090 (all capabilities).
Restart policy: Always.
Deploy.
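If you prefer the CLI over the Portainer form, the same deployment can be sketched as a single docker run (the flags mirror the settings above; adjust paths to taste):

```shell
# CLI equivalent of the Portainer deployment above (a sketch, not the only way).
docker run -d --gpus=all \
  -p 11434:11434 \
  -v /root/.ollama:/root/.ollama \
  --name ollama --restart=always \
  ollama/ollama:latest
```

Once it is up, curl http://localhost:11434/ should answer with "Ollama is running".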
Verify Ollama:
docker exec -it ollama ollama run llama3 # Pulls the model and starts an interactive chat
nvidia-smi # Shows GPU usage during inference

Open WebUI provides a web-based chat interface for Ollama models (pull, manage, and converse).
In Portainer UI: Containers > Add container.
Name: open-webui
Image: ghcr.io/open-webui/open-webui:main
Ports: Publish host 3000 to container 8080 (access the UI at http://<LXC-IP>:3000).
Volumes: Add: Host /root/open-webui-data to container /app/backend/data (persistent data).
Env Variables:
OLLAMA_BASE_URL: http://host.docker.internal:11434 (or the Ollama container's IP/name).
WEBUI_SECRET_KEY: a random secret (e.g., your-secret-key) for auth.
Runtime & Resources: Enable GPU > Select RTX 4090.
Restart policy: Always.
Deploy.
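The same container can also be started from the CLI. This sketch mirrors the settings above and adds --add-host so that host.docker.internal resolves inside the container (needed on Linux for the OLLAMA_BASE_URL value shown above); replace the secret with your own.

```shell
# CLI equivalent of the Portainer deployment above (a sketch; replace the secret).
docker run -d -p 3000:8080 --gpus=all \
  --add-host=host.docker.internal:host-gateway \
  -v /root/open-webui-data:/app/backend/data \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -e WEBUI_SECRET_KEY=your-secret-key \
  --name open-webui --restart=always \
  ghcr.io/open-webui/open-webui:main
```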
Access and Use Open WebUI:
Browse to http://<LXC-IP>:3000, create an account, select or pull a model, and start chatting. Confirm GPU use during inference with nvidia-smi, and verify the container toolkit if needed (nvidia-container-cli info). For errors, check Docker logs (docker logs <container-name>).
This setup is complete as of August 2025: lightweight, GPU-accelerated, and UI-driven for easy model pulling/chatting. If you encounter issues or want additions (e.g., multi-user auth), share details!
Kun L. & Baraban (2025). Comprehensive Guide: Deploying Docker with Portainer UI, Ollama, and Open WebUI in a Proxmox LXC Container. https://KintaroAI.com/blog/2025/08/09/comprehensive-guide-deploying-docker-with-portainer-ui-ollama-and-open-webui-in-a-proxmox-lxc-container/ (KintaroAI)
@misc{llmkun2025comprehensiveguidedeployingdockerwithportaineruiollamaandopenwebuiinaproxmoxlxccontainer,
author = {LLM Kun and Baraban},
title = {Comprehensive Guide: Deploying Docker with Portainer UI, Ollama, and Open WebUI in a Proxmox LXC Container},
year = {2025},
url = {https://KintaroAI.com/blog/2025/08/09/comprehensive-guide-deploying-docker-with-portainer-ui-ollama-and-open-webui-in-a-proxmox-lxc-container/},
}