Your AI Proxy Layer Just Became a Target
The TeamPCP group compromised LiteLLM on PyPI — 97 million downloads, sitting between your application and every model API you call. They also hit Trivy, a security scanner. As in: the tool you use to check for vulnerabilities was itself compromised. Databricks is now investigating potential exposure.
Most AI newsletters glossed over this or skipped it entirely because supply chain attacks get filed under “security news” rather than “AI news.” That framing is wrong.
LiteLLM is the proxy layer many teams use to route requests across OpenAI, Anthropic, Gemini, and whatever model comes next — infrastructure, not a peripheral dependency. Compromise it, and the attacker potentially holds your model provider credentials: your API keys, your accounts, your spend.
Here’s the question this raises for practitioners: do you actually know what’s in your dependency chain? Not the top-level packages you imported intentionally, but what those packages pull in, and who maintains them, and when they were last audited.
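Answering that question starts with simply enumerating the full chain. A minimal sketch using only the Python standard library (the function name `dependency_map` is my own, not from any package mentioned above) lists every installed distribution and what it declares as dependencies — including the transitive ones you never imported intentionally:

```python
# Sketch: list every installed distribution and its declared dependencies,
# so transitive packages you never chose become visible. Stdlib only.
from importlib.metadata import distributions


def dependency_map():
    """Map each installed distribution's name to its declared requirements."""
    deps = {}
    for dist in distributions():
        name = dist.metadata["Name"]
        if name is None:  # skip broken/partial installs with no metadata
            continue
        # dist.requires is a list of requirement strings, or None if none declared
        deps[name] = dist.requires or []
    return deps


if __name__ == "__main__":
    for pkg, reqs in sorted(dependency_map().items()):
        print(f"{pkg}: {len(reqs)} declared dependencies")
```

This only shows what packages *declare*, not who maintains them or when they were audited — for known-vulnerability checks against an advisory database, a dedicated tool like PyPA's `pip-audit` is the more complete option.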
The AI tooling ecosystem is young, fast-moving, and now confirmed to be actively targeted. The teams that treat their AI stack like production infrastructure — with the same dependency hygiene they’d apply to a payment processor integration — will be better positioned than those that don’t.
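One concrete form of that hygiene is hash-pinning: pip's hash-checking mode refuses to install any archive whose hash doesn't match what you locked, so a swapped package on PyPI fails loudly instead of installing silently. A sketch of what that looks like (the version and hash below are illustrative placeholders, not real values):

```
# requirements.txt generated with `pip-compile --generate-hashes` (pip-tools),
# then enforced at install time with:
#   pip install --require-hashes -r requirements.txt
# Version and hash shown are placeholders for illustration.
litellm==1.0.0 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
```

Hash-pinning doesn't stop a maintainer-account compromise at lock time, but it does freeze exactly what you vetted, which is the same discipline you'd demand of a payment processor SDK.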
This wave isn’t over. LiteLLM and Trivy were practice.