Chapter 17: Using Other Languages with Docker
TL;DR
- Write an MCP-compliant HTTP server in any language. It must handle `POST /` requests and listen on port `8080`.
- Create a `Dockerfile` for your server:

```dockerfile
# Use a base image for your language
FROM python:3.11-slim

# Set up the container
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Expose the port MCP will use
EXPOSE 8080

# The command to run your server
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8080"]
```

- Build the image:

```shell
docker build -t my-python-server:latest .
```

- Configure `aip.json` to use your custom image:

```json
"mcp_servers": {
  "main_svc": {
    "image": "my-python-server:latest"
  }
}
```
While the Worka CLI provides first-class support for Rust and Node.js, the platform is designed to be language-agnostic. Thanks to containerization, you can write your pack's backend logic in any language that can run an HTTP server.
The Core Requirement
To create a custom MCP server, you only need to meet two conditions:
- Your application must be an HTTP server that correctly handles the MCP `tool.call` JSON-RPC format, as described in Chapter 14.
- Your server must be packaged in a Docker (or OCI-compliant) container image.
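Both requirements above boil down to speaking one wire format. As a concrete sketch (exact field names follow Chapter 14's JSON-RPC format), here is what a `tool.call` request and a successful response look like:

```python
import json

# A tool.call request as the Worka Host might send it (field names per Chapter 14)
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tool.call",
    "params": {"name": "greet", "arguments": {"name": "Ada"}},
}

# A successful response echoes the request id and carries the tool's result
response = {
    "jsonrpc": "2.0",
    "id": request["id"],
    "result": {"greeting": "Hello, Ada!"},
}

print(json.dumps(request))
print(json.dumps(response))
```

Any language that can parse this request and produce this response over HTTP can back a pack.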
Example: A Python MCP Server
Let's demonstrate this by creating a simple `greet` tool using Python and the popular FastAPI framework.
main.py

```python
from fastapi import FastAPI, Request
from pydantic import BaseModel

app = FastAPI()

class GreetParams(BaseModel):
    name: str

def greet(arguments: dict) -> dict:
    # Validate the raw arguments dict with the Pydantic model
    params = GreetParams(**arguments)
    return {"greeting": f"Hello, {params.name}!"}

# A map of our tool handlers
tool_handlers = {"greet": greet}

@app.post("/")
async def handle_mcp_request(req: Request):
    body = await req.json()
    req_id = body.get("id")
    if body.get("method") != "tool.call":
        # Reject anything that isn't a tool.call request
        return {"jsonrpc": "2.0", "id": req_id,
                "error": {"code": -32600, "message": "Invalid Request"}}
    tool_name = body["params"]["name"]
    tool_args = body["params"]["arguments"]
    handler = tool_handlers.get(tool_name)
    if handler:
        result = handler(tool_args)
        return {"jsonrpc": "2.0", "id": req_id, "result": result}
    # No handler registered under that name
    return {"jsonrpc": "2.0", "id": req_id,
            "error": {"code": -32601, "message": "Tool not found"}}
```
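The Dockerfile in the next step installs dependencies from a `requirements.txt`. For this example it only needs the web framework and the server (version pins omitted for brevity):

```
fastapi
uvicorn
```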
Step 1: Create a Dockerfile
Next, we need to create a `Dockerfile` that tells Docker how to build an image containing our Python application and its dependencies.
Dockerfile

```dockerfile
# 1. Start from an official Python base image
FROM python:3.11-slim

# 2. Set the working directory inside the container
WORKDIR /app

# 3. Copy and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# 4. Copy the rest of the application code
COPY . .

# 5. Expose the port the server will listen on
EXPOSE 8080

# 6. Define the command to run when the container starts
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8080"]
```
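Because step 4's `COPY . .` copies the entire build context into the image, it is worth excluding local artifacts with a `.dockerignore` file next to the Dockerfile (optional, but a common companion):

```
__pycache__/
*.pyc
.venv/
.git/
```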
Step 2: Build the Docker Image
With the `Dockerfile` in place, you can now build your image from the terminal. Make sure Docker or Podman is running.
```shell
# The -t flag tags the image with a memorable name
docker build -t my-python-server:latest .
```
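Before wiring the image into Worka, you can smoke-test it directly (assuming port 8080 is free on your machine; the request body mirrors the `tool.call` format from Chapter 14):

```shell
# Run the container, mapping the MCP port to localhost
docker run --rm -p 8080:8080 my-python-server:latest

# In another terminal, send a tool.call request
curl -s -X POST http://localhost:8080/ \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tool.call","params":{"name":"greet","arguments":{"name":"Ada"}}}'
```

If everything is wired up, the response's `result` field should contain the greeting for "Ada".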
Step 3: Configure aip.json
This is the final step. You need to tell Worka to use your custom image instead of a pre-configured one. You do this by editing the `mcp_servers` block in your `aip.json` file.
```json
{
  ...
  "mcp_servers": {
    "main_svc": {
      "image": "my-python-server:latest",
      "command": null
    }
  },
  "main_mcp_server": "main_svc",
  ...
}
```
When the Worka Host needs to start your pack's backend, it will now read this configuration and run the equivalent of `podman run my-python-server:latest`. Because your server implements the MCP standard, the Host can communicate with it seamlessly, regardless of the language it was written in.