Prerequisites

Goal

Make sure your workstation can run a FastMCP server and launch the ContextForge Gateway. Everything below works on macOS, Linux, and Windows (via WSL2).

1. Core tools

| Tool | Why you need it | Install tips |
|------|-----------------|--------------|
| Python 3.11+ | Runs FastMCP servers and the gateway | Use your OS package manager or python.org/downloads |
| uv | Fast dependency + virtualenv manager used by FastMCP (uvx, uv run) | curl -LsSf https://astral.sh/uv/install.sh \| sh |
| git | Clone this workshop and your server repo | brew install git, sudo apt install git, or the Windows installer |
| Podman or Docker (optional) | Needed only if you want to run the Gateway or servers in containers | Install from podman.io or docker.com |
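
If you are unsure whether your python3 meets the 3.11 floor, this stdlib one-liner exits silently when it does and raises with the offending version when it does not:

python3 -c 'import sys; assert sys.version_info >= (3, 11), sys.version'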
Windows

Install WSL2 with Ubuntu, then follow the Linux instructions inside the WSL shell. Podman Desktop or Docker Desktop also work if you prefer a GUI runtime.

2. Quick OS notes

Linux (Debian/Ubuntu)

sudo apt update && sudo apt install git podman python3 python3-venv -y
curl -LsSf https://astral.sh/uv/install.sh | sh

Log out and back in (or source your shell profile) so uv is on your PATH.
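
Recent uv installers place the binary under ~/.local/bin; if uv is not found right away, a minimal fix for the current shell (the path is an assumption, so check the installer's output):

export PATH="$HOME/.local/bin:$PATH"   # assumes the installer used ~/.local/bin
uv --version                           # should now print a version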

macOS

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew install git python@3.11 podman
curl -LsSf https://astral.sh/uv/install.sh | sh  # or `brew install uv`
podman machine init --memory 4096 --cpus 2
podman machine start
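
To confirm the Podman VM is actually up, a quick check; quay.io/podman/hello is Podman's standard demo image, not part of this workshop:

podman machine list                    # the machine should show as "Currently running"
podman run --rm quay.io/podman/hello   # pulls a tiny test image and prints a greeting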

Windows (WSL2)

wsl --install -d Ubuntu
wsl --status

Inside WSL:

sudo apt update && sudo apt install git podman python3 python3-venv -y
curl -LsSf https://astral.sh/uv/install.sh | sh

3. Verify your environment

python3 --version
uv --version
git --version
podman --version   # optional

If uv is on your PATH, you can run the rest of the workshop commands exactly as shown.
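
As an end-to-end sanity check, uvx can run a tool in a throwaway environment; pycowsay here is just the demo package from uv's own docs, so this assumes network access to PyPI:

uvx pycowsay "environment ready"   # fetches pycowsay on the fly and prints a cow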

4. Helpful extras

  • fastmcp CLI – ships with the FastMCP package; once installed you can run fastmcp --help (see the sketch after this list).
  • ContextForge Gateway – clone IBM/mcp-context-forge to use the proxy locally.
  • MCP Inspector – the official GUI testing tool: npx @modelcontextprotocol/inspector
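
A hedged setup sketch for these extras; the clone URL is inferred from the IBM/mcp-context-forge name above, and the Inspector assumes Node.js is installed:

# FastMCP CLI, inside a uv-managed virtualenv
uv venv && source .venv/bin/activate
uv pip install fastmcp
fastmcp --help

# ContextForge Gateway source (URL inferred from the repo name)
git clone https://github.com/IBM/mcp-context-forge.git

# MCP Inspector (needs Node.js / npx)
npx @modelcontextprotocol/inspector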

Run LLMs locally to test your MCP servers offline:

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull the recommended IBM Granite 4 model (2.1 GB)
ollama pull granite4:3b

# Or try a larger variant (4.2 GB)
ollama pull granite4:tiny-h

# Or the smallest variant
ollama pull granite4:350m-h

Why Granite 4? IBM's Granite 4.0 models (released October 2025) feature a hybrid Mamba-2/transformer architecture with:

  • 70-80% less memory usage than traditional transformers
  • Strong tool-calling and instruction-following capabilities
  • Apache 2.0 licensing and ISO 42001 certification
  • Efficient laptop performance, making them well suited to local testing
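
Once a model is pulled, a quick smoke test; swap in whichever tag you pulled above:

# Interactive chat session (Ctrl+D to exit)
ollama run granite4:3b

# One-shot prompt
ollama run granite4:3b "Explain MCP in one sentence."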

Learn More

  • Ollama – run large language models on your laptop
  • Granite 4 – LLMs that can run anywhere, with variants requiring 350 MB, 700 MB, 1.6 GB, 1.9 GB, 3.3 GB, or 19 GB of VRAM/memory; runs on CPU or GPU
  • MCP Official Site – protocol overview and getting started
  • MCP Specification – technical specification
  • FastMCP 2.0 Docs – Python MCP framework documentation
  • Enterprise MCP Guide – production architecture and security

5. Next steps

Once the commands above work, you are ready to:

  1. Run existing MCP servers to see MCP in action
  2. Build your own server with FastMCP
  3. Launch ContextForge Gateway for central routing and management (see the container sketch below)
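
For step 3, a minimal container sketch; the image name ghcr.io/ibm/mcp-context-forge and port 4444 are assumptions about the project's published defaults, so verify both against the repo README before running:

# Assumed image and port; Docker works the same way as Podman here
podman run --rm -p 4444:4444 ghcr.io/ibm/mcp-context-forge:latest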