# Prerequisites
**Goal:** Make sure your workstation can run a FastMCP server and launch the ContextForge Gateway. Everything below works on macOS, Linux, and Windows (via WSL2).
## 1. Core tools
| Tool | Why you need it | Install tips |
|---|---|---|
| Python 3.11+ | Runs FastMCP servers and the gateway | Use your OS package manager or python.org/downloads |
| `uv` | Fast dependency and virtualenv manager used by FastMCP (`uvx`, `uv run`) | `curl -LsSf https://astral.sh/uv/install.sh \| sh` |
| `git` | Clone this workshop and your server repo | `brew install git`, `sudo apt install git`, or download for Windows |
| Podman or Docker (optional) | Needed only if you want to run the Gateway or servers in containers | Install from podman.io or docker.com |
**Windows:** Install WSL2 with Ubuntu, then follow the Linux instructions inside the WSL shell. Podman Desktop or Docker Desktop also work if you prefer a GUI runtime.
## 2. Quick OS notes
## 3. Verify your environment
If `uv` is on your PATH, you can run the rest of the workshop commands exactly as shown.
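One quick way to check is to loop over the tools from the table above and print whichever versions are found. This is a minimal sketch, not part of the workshop tooling; adjust the tool list to match what you actually installed:

```shell
# Report the version of each prerequisite, or a hint if it is missing.
for tool in python3 uv git podman; do
  if command -v "$tool" >/dev/null 2>&1; then
    printf '%-8s %s\n' "$tool" "$("$tool" --version 2>&1 | head -n 1)"
  else
    printf '%-8s not found on PATH\n' "$tool"
  fi
done
```

Any tool reported as `not found on PATH` should be installed (or its install script re-run) before continuing.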
## 4. Helpful extras
- fastmcp CLI - ships with the FastMCP package; once installed you can run `fastmcp --help`.
- ContextForge Gateway - clone IBM/mcp-context-forge to use the proxy locally.
- MCP Inspector - official GUI testing tool: `npx @modelcontextprotocol/inspector`
## Local LLM with Ollama (Optional but Recommended)
Run LLMs locally to test your MCP servers offline:
```shell
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull the IBM Granite 4 model (recommended, 2.1 GB)
ollama pull granite4:3b

# Or try a larger variant (4.2 GB)
ollama pull granite4:tiny-h

# Or the smallest
ollama pull granite4:350m-h
```
**Why Granite 4?** IBM's Granite 4.0 models (released October 2025) feature a hybrid Mamba-2/transformer architecture with:

- 70-80% less memory usage than traditional transformers
- Strong tool-calling and instruction-following capabilities
- Apache 2.0 licensing and ISO 42001 certification
- Efficient laptop performance, making them well suited to local testing
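After installing, you can confirm the Ollama daemon is up and a Granite model is available. This is a sketch that assumes Ollama's default port (11434) on localhost:

```shell
# Check whether the Ollama daemon is reachable on its default port,
# and whether any granite4 model has been pulled yet.
if curl -fsS http://localhost:11434/api/tags >/dev/null 2>&1; then
  echo "Ollama is running"
  ollama list | grep granite4 || echo "no granite4 model pulled yet"
else
  echo "Ollama is not running; start it with: ollama serve"
fi
```

If the daemon is up and a model is pulled, `ollama run granite4:3b` gives you an interactive prompt for a quick smoke test.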
## Learn More
- Ollama - Run large language models on your laptop!
- Granite 4 - LLMs that can run anywhere, with variants requiring 350 MB, 700 MB, 1.6 GB, 1.9 GB, 3.3 GB, or 19 GB of VRAM/memory; runs on CPU or GPU.
- MCP Official Site - Protocol overview and getting started
- MCP Specification - Technical specification
- FastMCP 2.0 Docs - Python MCP Framework documentation
- Enterprise MCP Guide - Production architecture and security
## 5. Next steps
Once the commands above work, you are ready to:
- Run existing MCP servers to see MCP in action
- Build your own server with FastMCP
- Launch ContextForge Gateway for central routing and management