Context Hub: Andrew Ng's Fix for Agents That Hallucinate APIs
There’s a class of AI coding agent failure that’s particularly annoying because it’s so avoidable.
The agent writes openai.ChatCompletion.create(). That was deprecated two versions ago. You fix it, run again — same mistake, different parameter. The agent has no idea. Its training data is frozen, and frozen data doesn’t know about API updates from last month.
Andrew Ng just released a direct fix: Context Hub.
The Problem in Concrete Terms
When an AI agent generates code that calls an external API, it’s drawing on training data. That data has a cutoff. APIs don’t respect cutoffs — they ship breaking changes, rename parameters, deprecate endpoints, and add required fields on their own schedule.
The result: agents confidently generate code that was correct six months ago and fails today. And because they learn nothing between sessions, they make the same mistake tomorrow.
This isn’t a reasoning failure. It’s an information problem. The agent isn’t stupid — it just doesn’t have the right docs.
What Context Hub Does
Context Hub (CLI: chub) is a versioned, curated documentation registry built specifically for agents to query at runtime.
```
chub get openai/chat --lang py
chub get stripe/payment-intents --lang ts
chub get aws/s3/put-object --lang py
```
One command. Current docs. Right format for the language you’re working in.
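To ground this, here is a minimal sketch of how an agent harness might shell out to `chub` before generating code. The command shape follows the article's examples; the wrapper function and its fallback behavior are assumptions, not part of Context Hub itself.

```python
import shutil
import subprocess


def fetch_docs(api_path: str, lang: str = "py") -> str:
    """Fetch current docs for an API via `chub get <path> --lang <lang>`.

    Hypothetical integration helper: returns the docs text, or "" if
    chub is not installed or the lookup fails, so the agent can degrade
    gracefully to training-data knowledge.
    """
    if shutil.which("chub") is None:
        return ""  # chub not on PATH; run without fresh docs
    result = subprocess.run(
        ["chub", "get", api_path, "--lang", lang],
        capture_output=True,
        text=True,
        check=False,
    )
    return result.stdout if result.returncode == 0 else ""
```

The point of the fallback is that doc fetching should never hard-fail a coding session; stale knowledge is still better than a crashed agent.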
It ships with coverage across 68+ APIs out of the box: OpenAI, Anthropic, Stripe, AWS, Asana, GitHub, Twilio, and more. MIT license, completely free.
The Part That Makes It Smarter Over Time
The docs query is useful on its own. But the annotation system is what makes Context Hub genuinely different.
After a coding session, agents can annotate the docs with what they learned:
- “The `model` parameter no longer accepts `gpt-4` — use `gpt-4o` instead”
- “`streaming=True` was renamed to `stream=True` in v1.0”
- “`max_tokens` is now `max_completion_tokens` for o-series models”
Those annotations are versioned and shared. Every agent that queries those docs afterward sees the accumulated lessons of every prior session.
The same mistake stops happening — not by training, but by memory.
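The article doesn't show the annotation interface, so here is a deliberately minimal local sketch of the idea only — versioned notes keyed by doc path, visible to every later session. The function names and JSON storage are illustrative assumptions; Context Hub's actual annotation mechanism may look quite different.

```python
import json
from pathlib import Path


def annotate(store: Path, doc_path: str, note: str) -> None:
    """Append a lesson to the shared annotation log for a doc path.

    Illustrative only: models the 'agents write down what they learned'
    idea with a plain JSON file, not Context Hub's real storage.
    """
    data = json.loads(store.read_text()) if store.exists() else {}
    data.setdefault(doc_path, []).append(note)
    store.write_text(json.dumps(data, indent=2))


def lessons(store: Path, doc_path: str) -> list[str]:
    """Return the accumulated notes from all prior sessions for a doc path."""
    if not store.exists():
        return []
    return json.loads(store.read_text()).get(doc_path, [])
```

The mechanism matters more than the storage format: the write path is cheap, and the read path puts past failures in front of the next agent before it repeats them.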
Why This Is Underrated
The AI dev tooling conversation is dominated by context window sizes, reasoning models, and coding benchmarks. Nobody talks much about the docs problem, even though it’s responsible for a huge fraction of real-world agent failures.
Think about how often you’ve seen (or written) a prompt that includes “use the current API, not deprecated methods” — and the agent ignores it anyway because it has no idea what “current” means. Context Hub makes that instruction unnecessary by just giving the agent current docs.
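Concretely, the replacement for that futile instruction is to put fetched docs directly into the prompt. This is a hedged sketch of one way a harness might do it — the prompt wording and function are assumptions, not a Context Hub API:

```python
def build_prompt(task: str, docs: str) -> str:
    """Ground the agent in fetched documentation instead of telling it
    to 'use the current API' (which it cannot interpret on its own).

    Hypothetical prompt assembly; the exact framing is an assumption.
    """
    if not docs:
        return task  # no docs available: fall back to the bare task
    return (
        "Current API documentation (overrides anything in training data):\n"
        f"{docs}\n\n"
        f"Task: {task}"
    )
```

With the docs inline, "current" stops being an instruction the model must guess at and becomes text it can simply read.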
How It Fits With the Rest of the Stack
Context Hub solves one layer of a three-layer problem:
| Layer | Problem | Solution |
|---|---|---|
| External APIs | Stale training data → hallucinated calls | Context Hub |
| Your codebase | No understanding of structure → wrong imports, wrong patterns | SocratiCode |
| Project context | Forgets decisions, architecture, constraints | soul.py |
A coding agent running all three doesn’t hallucinate APIs, doesn’t misunderstand the codebase it’s editing, and doesn’t forget what you decided last week. That’s a meaningfully different tool than what most people are running today.
The Andrew Ng Signal
Ng building in this space matters beyond the tool itself. He co-founded Coursera and founded DeepLearning.AI — his instinct is to find the thing practitioners are struggling with and make it accessible. Context Hub reads like that: a practitioner’s fix to a practitioner’s problem, distributed in the simplest possible form.
The fastest-growing new repo on GitHub doesn’t happen by accident. 68 APIs covered at launch means someone did the unsexy work of actually writing and curating docs before shipping. That’s a good sign.
Related: SocratiCode: MCP Codebase Intelligence for AI Agents · OpenViking: ByteDance’s Context Database for AI Agents · soul.py: Persistent Memory for LLM Agents