Comprehensive Guide: Deploying Docker with Portainer UI, Ollama, and Open WebUI in a Proxmox LXC Container
This guide provides a step-by-step process to install Docker inside a privileged LXC container on Proxmox (with nesting enabled and GPU bound for shared access), deploy Portainer as a web-based Docker management UI, and then set up Ollama (for running LLMs) and Open WebUI (a ChatGPT-like interface for Ollama models). This enables easy management of multiple Docker containers via a UI, with GPU acceleration for AI workloads. The setup assumes your LXC container (e.g., ID 101) is already created and GPU-bound (as per previous instructions).
Prerequisites:
- A privileged LXC container on Proxmox with nesting enabled (`--features nesting=1`), sufficient resources (e.g., 128 cores, 128 GB RAM), and GPU devices bound (e.g., via `/etc/pve/lxc/101.conf` with NVIDIA device mounts and `lxc.cap.drop:` cleared for capability support). See the Proxmox GPU setup guide for details.
- Internet access inside the container.
- A shell inside the container: run `pct enter 101` on the Proxmox host. All steps below are executed inside the LXC unless noted otherwise.
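For orientation, the GPU binding in the container config might look like the sketch below. This is illustrative only: the device nodes present and the character-device major numbers (195 for `nvidia*`, and the `nvidia-uvm` major, shown here as 511) vary per host, so check `ls -l /dev/nvidia*` before copying anything.

```
# /etc/pve/lxc/101.conf — illustrative sketch; device majors differ per host
features: nesting=1
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 511:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.cap.drop:
```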
Section 1: Install Docker in the LXC Container
Docker will run nested inside the LXC, allowing container isolation while sharing the host GPU.
Uninstall old Docker packages (if present):

```bash
apt remove docker docker-engine docker.io containerd runc -y
```

Install prerequisites:

```bash
apt update
apt install ca-certificates curl gnupg lsb-release -y
```

Add the Docker repository:

```bash
mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
apt update
```

Install Docker:

```bash
apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
```

Start and enable Docker:

```bash
systemctl start docker
systemctl enable docker
```

Verify Docker:

```bash
docker --version
docker run hello-world   # pulls and runs a test container
```
Section 2: Install NVIDIA Container Toolkit for GPU Support
This enables Docker containers (like Ollama) to use the bound RTX 4090 GPU.
Add Toolkit Repository:
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \ sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \ tee /etc/apt/sources.list.d/nvidia-container-toolkit.list apt updateInstall Toolkit:
apt install nvidia-container-toolkit -yConfigure Docker Runtime:
mkdir -p /etc/docker echo '{ "runtimes": { "nvidia": { "path": "/usr/bin/nvidia-container-runtime", "runtimeArgs": [] } }, "default-runtime": "nvidia" }' > /etc/docker/daemon.jsonRestart Docker:
systemctl restart dockerVerify GPU Support:
docker info | grep -i runtime # Should show "nvidia" nvidia-smi # Confirms GPU detection
Section 3: Deploy Portainer (Docker Management UI)
Portainer provides a web UI for managing Docker containers, volumes, networks, and GPU allocation.
Create a persistent volume:

```bash
docker volume create portainer_data
```

Run the Portainer container:

```bash
docker run -d -p 8000:8000 -p 9443:9443 -p 9000:9000 \
  --name portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```

Access the Portainer UI:
- Find the container IP: `ip addr show eth0`
- Browser: http://&lt;container-ip&gt;:9000 (or https://&lt;container-ip&gt;:9443 for HTTPS).
- Set the admin username/password on first login.
- Connect to the "Local" Docker environment.
Enable GPU in Portainer:
- In UI: Home > Local environment > Setup (or Settings > Environments > Edit Local > Host Setup).
- Toggle “Show GPU in the UI” ON.
- Click “Add GPU” > Select your RTX 4090 > Save.
Section 4: Deploy Ollama Container via Portainer UI
Ollama runs LLMs with GPU support.
In the Portainer UI: Containers > Add container.
- Name: ollama
- Image: ollama/ollama:latest
- Ports: publish host 11434 to container 11434 (for the API).
- Volumes: map host `/root/.ollama` to container `/root/.ollama` (persistent models/data).
- Runtime & Resources: enable GPU > select the RTX 4090 (all capabilities).
- Restart policy: Always.
- Click Deploy.
Verify Ollama:
- In Portainer: check the container logs for "Listening on 0.0.0.0:11434".
- CLI test: `docker exec -it ollama ollama run llama3` (pulls the model and starts an interactive chat).
- Monitor the GPU: `nvidia-smi` (shows usage during inference).
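Ollama also exposes a REST API on the published port, which is what Open WebUI talks to in the next section. The sketch below assembles a request for Ollama's `/api/generate` endpoint ("llama3" is assumed to be a model you have already pulled); the `curl` call itself is left commented since it needs the running container.

```shell
# Request body for Ollama's /api/generate endpoint ("llama3" assumed pulled).
payload='{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'

# Sanity-check that the body is valid JSON before sending it.
echo "$payload" | python3 -m json.tool

# Send it to the API (requires the ollama container to be running):
# curl http://localhost:11434/api/generate -d "$payload"
```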
Section 5: Deploy Open WebUI Container via Portainer UI
Open WebUI provides a web-based chat interface for Ollama models (pull, manage, and converse).
In the Portainer UI: Containers > Add container.
- Name: open-webui
- Image: ghcr.io/open-webui/open-webui:main
- Ports: publish host 3000 to container 8080 (UI at http://&lt;container-ip&gt;:3000).
- Volumes: map host `/root/open-webui-data` to container `/app/backend/data` (persistent data).
- Env variables: `OLLAMA_BASE_URL`: `http://host.docker.internal:11434` (or the Ollama container's IP/name; note that on Linux, `host.docker.internal` only resolves if the container is started with `--add-host=host.docker.internal:host-gateway`). Optional: `WEBUI_SECRET_KEY`: a random secret used for signing auth sessions.
- Runtime & Resources: enable GPU > select the RTX 4090.
- Restart policy: Always.
- Click Deploy.
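If you set `WEBUI_SECRET_KEY`, any sufficiently random string works; one way to generate one (assuming `openssl` is available in the LXC):

```shell
# Print a 64-character hex string suitable for WEBUI_SECRET_KEY
openssl rand -hex 32
```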
Access and use Open WebUI:
- Browser: http://&lt;container-ip&gt;:3000
- Sign up (the first user becomes admin).
- Settings > Connections > Ollama: confirm the connection (auto-detected, or enter the URL manually).
- Pull models: search for and pull a model (e.g., "llama3"); downloads and inference use the GPU.
- Chat: create a new chat, select a model, and send a prompt (e.g., "This is great!").
Verification and Management
- Test end-to-end: in Open WebUI, pull a model, chat, and monitor the GPU with `nvidia-smi`.
- Manage in Portainer: use the UI for logs, stop/start, volumes, or to deploy more containers (e.g., databases).
- Troubleshooting: if the GPU is not detected, verify the toolkit with `nvidia-container-cli info`; for other errors, check the Docker logs (`docker logs <container-name>`).
- Scaling: add more containers via Portainer; the GPU is shared concurrently.
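As an alternative to clicking through the UI twice, the same two services can be described as a Compose stack and deployed from Portainer's Stacks view. The fragment below is a sketch mirroring the settings above (ports, volumes, GPU reservation); adjust the host paths and GPU selection to your setup.

```yaml
# docker-compose.yml — sketch mirroring the UI settings above
services:
  ollama:
    image: ollama/ollama:latest
    restart: always
    ports:
      - "11434:11434"
    volumes:
      - /root/.ollama:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    restart: always
    depends_on:
      - ollama
    ports:
      - "3000:8080"
    environment:
      # Containers on the same Compose network reach Ollama by service name
      - OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - /root/open-webui-data:/app/backend/data
```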
This setup is complete as of August 2025—lightweight, GPU-accelerated, and UI-driven for easy model pulling/chatting. If you encounter issues or want additions (e.g., multi-user auth), share details!
Citations
Kun, L. & Baraban (2025). Comprehensive Guide: Deploying Docker with Portainer UI, Ollama, and Open WebUI in a Proxmox LXC Container. https://KintaroAI.com/blog/2025/08/09/comprehensive-guide-deploying-docker-with-portainer-ui-ollama-and-open-webui-in-a-proxmox-lxc-container/ (KintaroAI)

```bibtex
@misc{llmkun2025comprehensiveguidedeployingdockerwithportaineruiollamaandopenwebuiinaproxmoxlxccontainer,
  author = {LLM Kun and Baraban},
  title = {Comprehensive Guide: Deploying Docker with Portainer UI, Ollama, and Open WebUI in a Proxmox LXC Container},
  year = {2025},
  url = {https://KintaroAI.com/blog/2025/08/09/comprehensive-guide-deploying-docker-with-portainer-ui-ollama-and-open-webui-in-a-proxmox-lxc-container/},
}
```