This post is a bit of a field-notes dump from getting OpenClaw running in my own Docker environment, wiring it up to Discord/GitHub, and then realizing I wanted Azure AI Foundry models – so I ended up introducing LiteLLM as a proxy.
I will be honest: I expected to spend more time fighting glue code than actually using the thing. Instead, once I got Discord and GitHub dialed in (and stopped being surprised by what was blocked vs what was actually broken), it stopped feeling like a demo and started feeling like something I can actually keep around.
TL;DR
- I run OpenClaw in Docker, but I had to extend the image to include tools needed by certain skills (e.g., gh, ffmpeg).
- I wanted Azure OpenAI / AI Foundry models; OpenClaw didn’t support them directly in my setup, so I added a LiteLLM container and pointed OpenClaw at it.
- Discord setup works, but the terminology is a little quirky (“guild” == server) and you often need channel IDs, not friendly names.
Once I had the Discord + GitHub pieces working, and the container had a modern .NET SDK plus the usual build tools, it could clone repos, compile, and generally do the ‘go run the boring stuff’ loop pretty well. I can see myself using it to iterate on apps I’ve already written (or to triage bugs when I’m feeling lazy).
1) Running OpenClaw in Docker
I’m running OpenClaw in Docker. Out of the box, OpenClaw starts fine, but I quickly hit a non-obvious issue: many skills are initially marked blocked because the container is missing required binaries. Some skills make it obvious why; others are less explicit.
The fix was simple: install the missing tools in the Docker image. In my case, I primarily needed GitHub CLI and a newer .NET SDK for build/test workflows.
Dockerfile additions
# Install GitHub CLI via APT in Docker (non-interactive, root)
USER root
RUN apt-get update && apt-get install -y curl gpg && curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg && chmod go+r /usr/share/keyrings/githubcli-archive-keyring.gpg && echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" > /etc/apt/sources.list.d/github-cli.list && apt-get update && apt-get install -y gh && rm -rf /var/lib/apt/lists/*

# --------------------------
# Install .NET 10 SDK
# --------------------------
USER root
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates curl bash gh unzip ffmpeg && rm -rf /var/lib/apt/lists/*

# Install latest .NET 10 SDK
ENV DOTNET_ROOT=/usr/share/dotnet
ENV PATH="${DOTNET_ROOT}:${PATH}"
ENV DOTNET_CLI_TELEMETRY_OPTOUT=1
RUN curl -fsSL https://builds.dotnet.microsoft.com/dotnet/scripts/v1/dotnet-install.sh -o /tmp/dotnet-install.sh && bash /tmp/dotnet-install.sh --channel 10.0 --install-dir "${DOTNET_ROOT}" --no-path && rm /tmp/dotnet-install.sh

# Optional: verify installation
RUN dotnet --info && dotnet --list-sdks && dotnet --list-runtimes
I inserted these changes right before the WORKDIR is set.
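A quick sanity check that the extra tools actually landed in the built image (the openclaw:latest tag matches the build script below, so adjust it to whatever you use; --entrypoint sidesteps whatever entrypoint the image defines):

docker run --rm --entrypoint sh openclaw:latest -c "gh --version && ffmpeg -version && dotnet --version"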
2) A simple build + “portable image” workflow
I also added a build.sh to my OpenClaw clone to build the image and save it to a known location as a compressed tarball. This makes it easy to move/load the image elsewhere.
#!/bin/bash
set -euo pipefail

IMAGE_NAME="${1:-openclaw:latest}"
BACKUP_PATH="${2:-/mnt/scratch/openclaw.tar.gz}"
REPO_PATH="${3:-$(pwd)}"
PRUNE_CONFIRM="${4:-no}"

echo "Image: $IMAGE_NAME"
echo "Backup: $BACKUP_PATH"
echo "Repository path: $REPO_PATH"

if ! command -v docker &> /dev/null; then
  echo "Error: docker command not found"
  exit 1
fi

cd "$REPO_PATH"

echo "Pruning Docker system (auto-confirmed)..."
docker system prune -a -f

echo "Building OpenClaw Docker image '$IMAGE_NAME'..."
docker build -t "$IMAGE_NAME" -f Dockerfile .

echo "Backing up image '$IMAGE_NAME' to '$BACKUP_PATH'..."
mkdir -p "$(dirname "$BACKUP_PATH")"
docker save "$IMAGE_NAME" | gzip > "$BACKUP_PATH"

echo "Done! Image saved to '$BACKUP_PATH'."
To load it back into Docker later:
gunzip -c /mnt/scratch/openclaw.tar.gz | docker load
3) Gotchas: models (Azure AI Foundry) and LiteLLM
I wanted to use Azure OpenAI / AI Foundry hosted models. In my case, OpenClaw didn’t support that target directly, so I introduced a LiteLLM container as a proxy layer.
That meant:
- Running LiteLLM alongside OpenClaw (e.g., in docker-compose; see the sketch after this list).
- Pointing OpenClaw’s model endpoint at LiteLLM.
- Extra .env configuration.
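For context, here is roughly what the compose wiring looks like. This is a sketch rather than my exact file: the service names, volume paths, and the idea of pointing OpenClaw at http://litellm:4000 are assumptions about my setup, while the LiteLLM image, default port, and --config flag come from the LiteLLM docs.

# docker-compose.yml (sketch)
services:
  litellm:
    image: ghcr.io/berriai/litellm:main-latest
    command: ["--config", "/app/config.yaml"]
    env_file: .env
    volumes:
      - ./litellm-config.yaml:/app/config.yaml:ro
    ports:
      - "4000:4000"
  openclaw:
    image: openclaw:latest   # the image built above
    env_file: .env
    depends_on:
      - litellm
    # OpenClaw's model endpoint gets pointed at LiteLLM's OpenAI-compatible
    # API (e.g. http://litellm:4000/v1); the exact config key depends on your setup.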
One non-obvious hiccup: on my first attempt, LiteLLM’s config didn’t behave like “standard” YAML env-var expansion, so it took some trial-and-error to get the env wiring correct.
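The documented way to reference environment variables in LiteLLM's config is an os.environ/ prefix on the value rather than ${VAR}-style interpolation, which is probably the surprise I hit. A minimal sketch, reusing the variable names from the .env example at the end of this post (model and deployment names are placeholders):

# litellm-config.yaml (sketch)
model_list:
  - model_name: gpt-5.2                      # the name OpenClaw asks for
    litellm_params:
      model: azure/YOUR-DEPLOYMENT           # azure/<deployment name>
      api_base: os.environ/OPENCLAW_AZURE_ENDPOINT
      api_key: os.environ/OPENCLAW_AZURE_KEY
      api_version: "2024-xx-xx"
general_settings:
  master_key: os.environ/LITELLM_MASTER_KEY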
4) Discord setup: “guilds”, bots, and channel IDs
Discord setup is doable, but it is a bit tedious the first time:
- Create a Discord server (aka “guild”).
- Create a Discord application.
- Create a bot inside the application.
- Invite the bot to your server and grant permissions.
- Use the channel ID for configuration (the docs sometimes talk in friendly names, but IDs are what you often need).
Docs I used: https://docs.openclaw.ai/channels/discord
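Two ways to get a channel ID: enable Developer Mode in Discord and right-click the channel → Copy Channel ID, or query the Discord REST API directly. The snippet below is plain Discord API (nothing OpenClaw-specific); the env var names are placeholders and jq is only there for readability:

curl -s -H "Authorization: Bot $DISCORD_BOT_TOKEN" \
  "https://discord.com/api/v10/guilds/$GUILD_ID/channels" | jq '.[] | {name, id}'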
5) Skills: “blocked” doesn’t mean broken
The last thing that surprised me: on first boot, a bunch of skills show as blocked. That’s not necessarily a bug – often it’s simply missing binaries in the container. After installing tools (and then enabling the skill(s) in openclaw.json), they started working as expected.
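If a skill is blocked and you're not sure why, a quick check from inside the running container usually answers it (the container name and tool list here are specific to my setup; swap in your own):

docker exec openclaw sh -c 'for t in gh ffmpeg dotnet; do command -v "$t" >/dev/null || echo "missing: $t"; done'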
Next up
- Write down my final LiteLLM config + environment wiring in a way that’s repeatable.
- Document the exact OpenClaw config changes I made for skills and channels.
.env (example)
I keep the actual values in .env and only check in an .env.example. Here’s a simplified (redacted) example of the variables I ended up needing:
SERVER_TZ=America/Los_Angeles
DOCKER_DATA=/path/to/docker-data

# OpenClaw ⇄ LiteLLM / Azure OpenAI
OPENCLAW_AZURE_MODEL=gpt-5.2
OPENCLAW_AZURE_KEY=REDACTED
OPENCLAW_AZURE_ENDPOINT=https://YOUR-RESOURCE.openai.azure.com/
OPENCLAW_AZURE_DEPLOYMENT=YOUR-DEPLOYMENT
OPENCLAW_AZURE_API_VERSION=2024-xx-xx

# LiteLLM
LITELLM_MASTER_KEY=REDACTED
LITELLM_AZURE_MODEL=gpt-5.2
LITELLM_AZURE_DEPLOYMENT=YOUR-DEPLOYMENT

# (Optional) Claude keys if you use them
CLAUDE_AI_SESSION_KEY=REDACTED
CLAUDE_WEB_SESSION_KEY=REDACTED
CLAUDE_WEB_COOKIE=REDACTED