<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
<channel>
<title>Signal Over Noise — Insights</title>
<description>Quick takes on AI trends and strategy — real-time analysis and opinion.</description>
<link>https://signalovernoise.at/</link>
<language>en-us</language>
<item><title>97% Expect a Breach. 6% Are Paying for It.</title><link>https://signalovernoise.at/insights/2026-04-06-agent-security-budget-gap/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-04-06-agent-security-budget-gap/</guid><description>Enterprise AI agent security is running on wishful thinking and outdated policy.</description><pubDate>Mon, 06 Apr 2026 00:00:00 GMT</pubDate></item>
<item><title>Agent Security Just Got Real CVEs</title><link>https://signalovernoise.at/insights/2026-04-06-agent-security-real-cves/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-04-06-agent-security-real-cves/</guid><description>Prompt injection chains to RCE in CrewAI. 22-second attacker breakout. Human-in-the-loop is no longer a security control.</description><pubDate>Mon, 06 Apr 2026 00:00:00 GMT</pubDate></item>
<item><title>Anthropic Didn&apos;t Block Abuse. They Blocked Competition.</title><link>https://signalovernoise.at/insights/2026-04-06-anthropic-openclaw-platform-control/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-04-06-anthropic-openclaw-platform-control/</guid><description>The OpenClaw subscription ban isn&apos;t about fair use — it&apos;s Anthropic asserting platform control while shipping their own replacement.</description><pubDate>Mon, 06 Apr 2026 00:00:00 GMT</pubDate></item>
<item><title>Context Is the New Bottleneck. So Is Judgment.</title><link>https://signalovernoise.at/insights/2026-04-06-forte-context-management-judgment/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-04-06-forte-context-management-judgment/</guid><description>Tiago Forte says AI shifts the bottleneck from capability to context. He&apos;s right — but that only works if you still have opinions worth providing.</description><pubDate>Mon, 06 Apr 2026 00:00:00 GMT</pubDate></item>
<item><title>Agent Identity Is the Infrastructure Gap Nobody Wants to Admit</title><link>https://signalovernoise.at/insights/2026-04-03-agent-identity-infrastructure-gap/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-04-03-agent-identity-infrastructure-gap/</guid><description>Okta is betting that agent identity management becomes as fundamental as user identity management was for SaaS. They might be right.</description><pubDate>Fri, 03 Apr 2026 00:00:00 GMT</pubDate></item>
<item><title>You Feel Faster. Are You?</title><link>https://signalovernoise.at/insights/2026-04-03-ai-productivity-paradox-feel-faster/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-04-03-ai-productivity-paradox-feel-faster/</guid><description>A randomized controlled study found AI tools made experienced developers 19% slower. They thought they&apos;d been sped up by 20%.</description><pubDate>Fri, 03 Apr 2026 00:00:00 GMT</pubDate></item>
<item><title>Claude Code Channels: When Your Agent Gets a Phone Number</title><link>https://signalovernoise.at/insights/2026-04-03-claude-code-channels-async-agents/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-04-03-claude-code-channels-async-agents/</guid><description>Anthropic&apos;s new messaging integration isn&apos;t about convenience — it&apos;s about changing how you think about what an AI agent is.</description><pubDate>Fri, 03 Apr 2026 00:00:00 GMT</pubDate></item>
<item><title>MCP Just Changed Hands. Watch What Happens Next.</title><link>https://signalovernoise.at/insights/2026-04-03-mcp-changed-hands-linux-foundation/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-04-03-mcp-changed-hands-linux-foundation/</guid><description>Anthropic donating MCP to the Linux Foundation is good governance — and a signal that the easy days of fast iteration are probably over.</description><pubDate>Fri, 03 Apr 2026 00:00:00 GMT</pubDate></item>
<item><title>Your AI Proxy Layer Just Became a Target</title><link>https://signalovernoise.at/insights/2026-04-01-ai-proxy-layer-target/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-04-01-ai-proxy-layer-target/</guid><description>The LiteLLM supply chain attack isn&apos;t just a security story — it&apos;s an infrastructure story for anyone building with AI tooling.</description><pubDate>Wed, 01 Apr 2026 00:00:00 GMT</pubDate></item>
<item><title>&quot;Agentic&quot; Is the New Cloud</title><link>https://signalovernoise.at/insights/2026-04-01-agentic-is-the-new-cloud/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-04-01-agentic-is-the-new-cloud/</guid><description>Every vendor is calling their product agentic. Almost none of them are.</description><pubDate>Wed, 01 Apr 2026 00:00:00 GMT</pubDate></item>
<item><title>Your Code Review Process Isn&apos;t Built for This Volume</title><link>https://signalovernoise.at/insights/2026-04-01-code-review-not-built-for-volume/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-04-01-code-review-not-built-for-volume/</guid><description>AI-generated code is hitting production faster than review processes can absorb it — that&apos;s a supervision problem, not an AI problem.</description><pubDate>Wed, 01 Apr 2026 00:00:00 GMT</pubDate></item>
<item><title>Grammarly&apos;s Lawsuit Is About Identity, Not Just Data</title><link>https://signalovernoise.at/insights/2026-04-01-grammarly-identity-not-data/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-04-01-grammarly-identity-not-data/</guid><description>A new class action against Grammarly draws a line most AI training lawsuits haven&apos;t: using real people&apos;s names and reputations, not just their words.</description><pubDate>Wed, 01 Apr 2026 00:00:00 GMT</pubDate></item>
<item><title>MCP Just Crossed the Chasm</title><link>https://signalovernoise.at/insights/2026-04-01-mcp-crossed-the-chasm/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-04-01-mcp-crossed-the-chasm/</guid><description>This week, MCP went from developer protocol to mainstream integration layer — and most AI newsletters missed it.</description><pubDate>Wed, 01 Apr 2026 00:00:00 GMT</pubDate></item>
<item><title>The Promises Failed, Not the Technology</title><link>https://signalovernoise.at/insights/2026-04-01-promises-failed-not-technology/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-04-01-promises-failed-not-technology/</guid><description>AI fatigue is real, but the backlash is aimed at the wrong target.</description><pubDate>Wed, 01 Apr 2026 00:00:00 GMT</pubDate></item>
<item><title>The Safety Company Keeps Leaking</title><link>https://signalovernoise.at/insights/2026-04-01-safety-company-keeps-leaking/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-04-01-safety-company-keeps-leaking/</guid><description>Anthropic&apos;s recurring security incidents reveal a tension worth naming: operational security is hard, even for companies whose brand is built on being careful.</description><pubDate>Wed, 01 Apr 2026 00:00:00 GMT</pubDate></item>
<item><title>Microsoft&apos;s Copilot Now Uses Two Models to Fact-Check One</title><link>https://signalovernoise.at/insights/2026-03-31-copilot-critique-two-models-fact-check/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-31-copilot-critique-two-models-fact-check/</guid><description>Microsoft&apos;s Wave 3 Copilot routes answers through a second AI model to verify accuracy. That&apos;s useful — and a quiet admission about single-model trust.</description><pubDate>Tue, 31 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>The New Shadow IT Isn&apos;t Employees Using ChatGPT</title><link>https://signalovernoise.at/insights/2026-03-30-shadow-ai-agents-invisible-traffic/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-30-shadow-ai-agents-invisible-traffic/</guid><description>AI agents are generating mobile app traffic that security teams can&apos;t see. Shadow AI moved from &apos;people using tools&apos; to &apos;tools using tools&apos; — and nobody updated the monitoring.</description><pubDate>Mon, 30 Mar 2026 07:20:00 GMT</pubDate></item>
<item><title>Perplexity Pulled a Perk and Hoped Nobody Would Notice</title><link>https://signalovernoise.at/insights/2026-03-30-perplexity-api-credits-silent-removal/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-30-perplexity-api-credits-silent-removal/</guid><description>Perplexity Pro quietly removed $5 monthly API credits from its $20 plan. No announcement, no changelog. Practitioners who built on those credits found out the hard way.</description><pubDate>Mon, 30 Mar 2026 07:10:00 GMT</pubDate></item>
<item><title>Codex Plugins Are a Confession About Who&apos;s Winning</title><link>https://signalovernoise.at/insights/2026-03-30-codex-plugins-catch-up-ecosystem-lock-in/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-30-codex-plugins-catch-up-ecosystem-lock-in/</guid><description>OpenAI launched 20 plugins to push Codex beyond coding. The move tells you everything about where the developer ecosystem actually lives.</description><pubDate>Mon, 30 Mar 2026 07:00:00 GMT</pubDate></item>
<item><title>Your AI Provider&apos;s Ethics Are Now a Business Risk</title><link>https://signalovernoise.at/insights/2026-03-27-anthropic-pentagon-ethics-business-risk/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-27-anthropic-pentagon-ethics-business-risk/</guid><description>Anthropic refused Pentagon weapons contracts and got sanctioned. A court blocked it. Here&apos;s what that means if you build on Claude.</description><pubDate>Fri, 27 Mar 2026 08:00:00 GMT</pubDate></item>
<item><title>Apple Just Validated Your Multi-AI Approach</title><link>https://signalovernoise.at/insights/2026-03-27-apple-siri-multi-ai-platform/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-27-apple-siri-multi-ai-platform/</guid><description>Apple is opening Siri to rival AI assistants in iOS 27 — a bet that the routing layer matters more than the model.</description><pubDate>Fri, 27 Mar 2026 08:00:00 GMT</pubDate></item>
<item><title>The Money Just Noticed the Agent Security Problem</title><link>https://signalovernoise.at/insights/2026-03-27-bessemer-agent-security-defining-challenge/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-27-bessemer-agent-security-defining-challenge/</guid><description>Bessemer&apos;s new report on AI agent security says what practitioners have known for months. Now comes the flood.</description><pubDate>Fri, 27 Mar 2026 08:00:00 GMT</pubDate></item>
<item><title>The Government Just Told You to Stop Vibe Coding Without Guardrails</title><link>https://signalovernoise.at/insights/2026-03-26-ncsc-vibe-coding-security-warning/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-26-ncsc-vibe-coding-security-warning/</guid><description>The UK&apos;s NCSC warns that AI-generated code is creating security risks faster than teams can catch them. The fix isn&apos;t stopping — it&apos;s checking.</description><pubDate>Thu, 26 Mar 2026 08:00:00 GMT</pubDate></item>
<item><title>Your AI Just Learned to Approve Its Own Actions</title><link>https://signalovernoise.at/insights/2026-03-26-claude-code-auto-mode-autonomy-spectrum/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-26-claude-code-auto-mode-autonomy-spectrum/</guid><description>Claude Code&apos;s new auto mode sits between handholding and chaos. It&apos;s the first honest attempt at solving the autonomy problem in developer tools.</description><pubDate>Thu, 26 Mar 2026 07:30:00 GMT</pubDate></item>
<item><title>Your Agents Need a Black Box</title><link>https://signalovernoise.at/insights/2026-03-26-vorlon-agent-flight-recorder-forensics/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-26-vorlon-agent-flight-recorder-forensics/</guid><description>Vorlon&apos;s AI Agent Flight Recorder brings forensics to agentic systems. When your agent goes wrong, you&apos;ll want to know what happened — not guess.</description><pubDate>Thu, 26 Mar 2026 07:00:00 GMT</pubDate></item>
<item><title>Mozilla Built Stack Overflow for Agents. I Built It by Hand.</title><link>https://signalovernoise.at/insights/2026-03-25-mozilla-cq-stack-overflow-agents/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-25-mozilla-cq-stack-overflow-agents/</guid><description>Mozilla&apos;s cq gives AI coding agents dynamic, evolving context — formalizing what power users already figured out through trial and error.</description><pubDate>Wed, 25 Mar 2026 08:00:00 GMT</pubDate></item>
<item><title>The Yes Machine Gets a Live Demo</title><link>https://signalovernoise.at/insights/2026-03-25-yes-machine-live-demo-rsac/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-25-yes-machine-live-demo-rsac/</guid><description>A Zenity CTO demo at RSAC 2026 showed agents being hijacked with zero user interaction — exactly what &apos;trained to be helpful&apos; looks like from the attacker&apos;s side.</description><pubDate>Wed, 25 Mar 2026 08:00:00 GMT</pubDate></item>
<item><title>When Cisco Validates Your CLAUDE.md</title><link>https://signalovernoise.at/insights/2026-03-25-cisco-mcp-gateway-validates-claudemd/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-25-cisco-mcp-gateway-validates-claudemd/</guid><description>Cisco&apos;s new MCP security gateway is the enterprise version of what power users already built out of necessity.</description><pubDate>Wed, 25 Mar 2026 07:30:00 GMT</pubDate></item>
<item><title>Sora Shipped. Nobody Needed It.</title><link>https://signalovernoise.at/insights/2026-03-25-sora-shipped-nobody-needed-it/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-25-sora-shipped-nobody-needed-it/</guid><description>OpenAI is shutting down Sora three months after a Disney deal. The AI graveyard keeps filling up with technically impressive things nobody asked for.</description><pubDate>Wed, 25 Mar 2026 07:00:00 GMT</pubDate></item>
<item><title>4.4 Million People Just Watched the Sycophancy Problem in Action</title><link>https://signalovernoise.at/insights/2026-03-25-sanders-claude-sycophancy-4m-views/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-25-sanders-claude-sycophancy-4m-views/</guid><description>Senator Bernie Sanders interviewed Claude on camera about AI privacy. Claude agreed with everything he said. That&apos;s not a revelation — it&apos;s the problem.</description><pubDate>Wed, 25 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>Someone Finally Built the Agent Security Layer That Actually Matters</title><link>https://signalovernoise.at/insights/2026-03-24-astrix-agent-security-operational-layer/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-24-astrix-agent-security-operational-layer/</guid><description>Astrix Security&apos;s new Agent Policies go after what agents can do once they&apos;re running — not just whether the model behaves itself.</description><pubDate>Tue, 24 Mar 2026 08:00:00 GMT</pubDate></item>
<item><title>82% of Execs Feel Protected. 88% Have Had Incidents.</title><link>https://signalovernoise.at/insights/2026-03-24-beyondtrust-shadow-ai-workforce-confidence-gap/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-24-beyondtrust-shadow-ai-workforce-confidence-gap/</guid><description>BeyondTrust&apos;s Phantom Labs data reveals the confidence gap at the heart of enterprise AI security — and the numbers are not subtle.</description><pubDate>Tue, 24 Mar 2026 08:00:00 GMT</pubDate></item>
<item><title>Perplexity Is Learning What I Learned Six Months Ago</title><link>https://signalovernoise.at/insights/2026-03-24-perplexity-mcp-context-bloat-practitioners-first/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-24-perplexity-mcp-context-bloat-practitioners-first/</guid><description>The Perplexity CTO says MCP eats 40-50% of your context window. Practitioners already knew this.</description><pubDate>Tue, 24 Mar 2026 08:00:00 GMT</pubDate></item>
<item><title>Google&apos;s Free AI Comes With a Price</title><link>https://signalovernoise.at/insights/2026-03-23-gemini-personal-intelligence-free-data-trade/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-23-gemini-personal-intelligence-free-data-trade/</guid><description>Gemini&apos;s Personal Intelligence feature just expanded to all free U.S. users — connecting AI to Gmail, Photos, and Chrome browsing history.</description><pubDate>Mon, 23 Mar 2026 08:00:00 GMT</pubDate></item>
<item><title>When Knuth Writes a Paper About You</title><link>https://signalovernoise.at/insights/2026-03-23-knuth-claudes-cycles-peer-recognition/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-23-knuth-claudes-cycles-peer-recognition/</guid><description>Donald Knuth published a paper named after Claude after it solved an open graph theory problem. That&apos;s a different kind of validation than a benchmark score.</description><pubDate>Mon, 23 Mar 2026 08:00:00 GMT</pubDate></item>
<item><title>Anthropic Built MCP, Got Everyone to Use It, Then Gave It Away</title><link>https://signalovernoise.at/insights/2026-03-23-mcp-donated-linux-foundation-usb-moment/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-23-mcp-donated-linux-foundation-usb-moment/</guid><description>MCP just moved from Anthropic&apos;s project to shared industry infrastructure — and that changes the risk calculation for anyone building on it.</description><pubDate>Mon, 23 Mar 2026 08:00:00 GMT</pubDate></item>
<item><title>We Gave AI Agents Keys to the House. Visa Wants to Give Them a Credit Card.</title><link>https://signalovernoise.at/insights/2026-03-23-visa-ai-agent-payments-spending-power/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-23-visa-ai-agent-payments-spending-power/</guid><description>Visa is testing AI agent payment authorization. The authentication problems we haven&apos;t solved for file access get a lot worse when the agent can spend money.</description><pubDate>Mon, 23 Mar 2026 08:00:00 GMT</pubDate></item>
<item><title>The Attack Surface Is the Feature</title><link>https://signalovernoise.at/insights/2026-03-21-claude-ai-attack-chain-exfiltration/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-21-claude-ai-attack-chain-exfiltration/</guid><description>Three chained vulnerabilities in Claude.ai show that when your AI reads the web, the web can give it orders.</description><pubDate>Sat, 21 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>AI Agent Security Is Doing the Deploy-First Thing Again</title><link>https://signalovernoise.at/insights/2026-03-21-mcp-security-deploy-first-again/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-21-mcp-security-deploy-first-again/</guid><description>MCP is six months old and already has a CVSS 9.4 vulnerability. The security industry is scrambling. We&apos;ve been here before.</description><pubDate>Sat, 21 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>The Confident Answer Isn&apos;t Always the Right One</title><link>https://signalovernoise.at/insights/2026-03-21-mit-overconfident-ai-detection/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-21-mit-overconfident-ai-detection/</guid><description>MIT researchers built a way to catch AI hallucinations by checking if peer models agree — a better fix than endless hedging.</description><pubDate>Sat, 21 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>WordPress Just Opened the Floodgates</title><link>https://signalovernoise.at/insights/2026-03-21-wordpress-mcp-content-floodgates/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-21-wordpress-mcp-content-floodgates/</guid><description>AI agents can now write and publish directly to WordPress. Quality control just became the only thing that matters.</description><pubDate>Sat, 21 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>The Wrench Is Now on Your Phone</title><link>https://signalovernoise.at/insights/2026-03-20-claude-code-channels-wrench-on-your-phone/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-20-claude-code-channels-wrench-on-your-phone/</guid><description>Claude Code Channels ships Telegram and Discord integration with MCP access — and what it means when AI meets you where you are.</description><pubDate>Fri, 20 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>Meta&apos;s Rogue Agent Was Just a Human Who Trusted Bad Advice</title><link>https://signalovernoise.at/insights/2026-03-20-meta-rogue-agent-data-leak/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-20-meta-rogue-agent-data-leak/</guid><description>The Meta AI security incident isn&apos;t about rogue AI — it&apos;s about following confident but wrong instructions without checking.</description><pubDate>Fri, 20 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>MIT Found a Math Fix for AI Overconfidence. I Found a Behavioral One.</title><link>https://signalovernoise.at/insights/2026-03-20-mit-overconfident-ai-detection/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-20-mit-overconfident-ai-detection/</guid><description>MIT&apos;s new method catches overconfident AI by comparing outputs across models — targeting the same problem I wrote about this morning.</description><pubDate>Fri, 20 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>The Vuln That Hits Before You Add Any Integrations</title><link>https://signalovernoise.at/insights/2026-03-20-claude-vulns-no-integrations-required/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-20-claude-vulns-no-integrations-required/</guid><description>Three chained flaws in vanilla Claude.ai let attackers silently pull your conversation history — no MCP servers, no tools, just a chat window.</description><pubDate>Fri, 20 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>Perplexity Wants Your Blood Pressure Data</title><link>https://signalovernoise.at/insights/2026-03-20-perplexity-health-your-blood-pressure-data/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-20-perplexity-health-your-blood-pressure-data/</guid><description>Perplexity Health can now access your Apple Health records. The utility is real — so is the trust question.</description><pubDate>Fri, 20 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>The Productivity Numbers Are Real. The Quality Question Isn&apos;t Settled.</title><link>https://signalovernoise.at/insights/2026-03-19-ai-coding-doubled-output-quality-question/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-19-ai-coding-doubled-output-quality-question/</guid><description>700 companies, doubled output, &apos;little quality drop&apos; — but what counts as quality depends on when you&apos;re measuring.</description><pubDate>Thu, 19 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>GitHub Added Secret Scanning to Its MCP Server. This Is What Good Security Integration Looks Like.</title><link>https://signalovernoise.at/insights/2026-03-19-github-mcp-secret-scanning/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-19-github-mcp-secret-scanning/</guid><description>GitHub&apos;s MCP server now lets AI coding agents scan code for secrets through the same protocol they&apos;re already using. No extra tooling. No separate workflow.</description><pubDate>Thu, 19 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>Box Is Using Moltbook as a Sales Pitch. That&apos;s Smart.</title><link>https://signalovernoise.at/insights/2026-03-19-box-moltbook-agent-governance/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-19-box-moltbook-agent-governance/</guid><description>Enterprise vendors are turning the Moltbook API leak into a governance story — and the framing tells you where the market is heading.</description><pubDate>Thu, 19 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>Proofpoint Just Built Security for MCP. That Tells You Everything.</title><link>https://signalovernoise.at/insights/2026-03-19-proofpoint-agent-integrity-mcp/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-19-proofpoint-agent-integrity-mcp/</guid><description>Proofpoint&apos;s new Agent Integrity Framework monitors whether AI agents do what they were actually asked to do. The fact that a major security vendor is targeting MCP specifically is the signal.</description><pubDate>Thu, 19 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>Anthropic&apos;s Off-Peak Promotion Tells You Where AI Pricing Is Headed</title><link>https://signalovernoise.at/insights/2026-03-13-claude-double-usage-off-peak/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-13-claude-double-usage-off-peak/</guid><description>Anthropic doubled Claude&apos;s usage limits during off-peak hours. They called it a thank-you. It&apos;s a demand curve signal.</description><pubDate>Tue, 17 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>Grok Failed in Both Directions in the Same Week</title><link>https://signalovernoise.at/insights/2026-03-16-grok-safety-failure-double/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-16-grok-safety-failure-double/</guid><description>Grok allegedly generated CSAM from real teen photos and flagged a real Netanyahu video as &apos;100% deepfake.&apos; Two failures, opposite directions, one root cause.</description><pubDate>Tue, 17 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>Harvard Identified Seven Frictions That Kill AI Rollouts. You Probably Have All Seven.</title><link>https://signalovernoise.at/insights/2026-03-15-hbr-last-mile-ai-transformation/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-15-hbr-last-mile-ai-transformation/</guid><description>Researchers from Harvard and Microsoft pinpointed the structural reasons AI pilots don&apos;t scale — and none of them are about the technology.</description><pubDate>Sun, 15 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>Meta Is Gutting Itself to Fund AI Bets That Aren&apos;t Working Yet</title><link>https://signalovernoise.at/insights/2026-03-14-meta-avocado-delay-layoffs/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-14-meta-avocado-delay-layoffs/</guid><description>Spending $135B on AI infrastructure while cutting 20% of staff and delaying your flagship model is not a strategy. It&apos;s a prayer.</description><pubDate>Sat, 14 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>Anthropic Just Made Long Context a Commodity</title><link>https://signalovernoise.at/insights/2026-03-13-million-token-context-flat-pricing/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-13-million-token-context-flat-pricing/</guid><description>1M token context windows at flat pricing. No surcharge. The implications for enterprise budgeting are bigger than the technical achievement.</description><pubDate>Fri, 13 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>AI Agents Are Peer-Pressuring Each Other Past Security Guardrails</title><link>https://signalovernoise.at/insights/2026-03-12-agents-colluding-past-guardrails/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-12-agents-colluding-past-guardrails/</guid><description>In a controlled lab test, AI agents didn&apos;t just bypass safety checks — they convinced other agents to do it too.</description><pubDate>Thu, 12 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>Anthropic&apos;s $100M Partner Network Is the Enterprise Playbook OpenAI Should Have Run</title><link>https://signalovernoise.at/insights/2026-03-12-claude-partner-network-certification/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-12-claude-partner-network-certification/</guid><description>Certifications, partner funding, and a 5x team expansion. Anthropic is borrowing the cloud provider playbook to create switching costs.</description><pubDate>Thu, 12 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>83% of Companies Plan to Deploy AI Agents. 29% Can Secure Them.</title><link>https://signalovernoise.at/insights/2026-03-11-ai-agent-security-gap/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-11-ai-agent-security-gap/</guid><description>Cisco&apos;s latest data reveals a 54-point gap between AI agent ambition and AI agent security — and three threat vectors most teams aren&apos;t monitoring.</description><pubDate>Wed, 11 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>The EU Just Gave You More Time on AI Compliance. The Requirements Got Harder.</title><link>https://signalovernoise.at/insights/2026-03-11-eu-ai-act-enforcement-delayed/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-11-eu-ai-act-enforcement-delayed/</guid><description>The EU AI Act&apos;s high-risk deadlines just slid to 2027. Don&apos;t mistake breathing room for simplification.</description><pubDate>Wed, 11 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>Agents Reviewing Agent-Generated Code Is Either Brilliant or a House of Cards</title><link>https://signalovernoise.at/insights/2026-03-10-claude-code-review-agents-reviewing-agents/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-10-claude-code-review-agents-reviewing-agents/</guid><description>Anthropic launched Claude Code Review — AI agents that check AI-generated pull requests. The numbers are impressive. The implications are worth thinking about.</description><pubDate>Tue, 10 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>Microsoft Spent $13B on OpenAI, Then Built Cowork on Claude</title><link>https://signalovernoise.at/insights/2026-03-09-copilot-cowork-built-on-claude/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-09-copilot-cowork-built-on-claude/</guid><description>Microsoft&apos;s flagship M365 agent feature runs on Anthropic&apos;s model. If they&apos;re going multi-model, so should you.</description><pubDate>Mon, 09 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>An AI Agent Hacked McKinsey&apos;s AI With a 25-Year-Old Exploit</title><link>https://signalovernoise.at/insights/2026-03-09-mckinsey-lilli-hacked-sql-injection/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-09-mckinsey-lilli-hacked-sql-injection/</guid><description>An autonomous offensive agent breached McKinsey&apos;s internal AI platform in two hours using SQL injection. The AI was sophisticated. The plumbing underneath it wasn&apos;t.</description><pubDate>Mon, 09 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>GPT-5.4 Can Click Your Buttons Now. Think About That.</title><link>https://signalovernoise.at/insights/2026-03-05-gpt-54-native-computer-use/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-05-gpt-54-native-computer-use/</guid><description>OpenAI&apos;s latest model ships with native computer use. The capability is real. The security implications should keep you up at night.</description><pubDate>Thu, 05 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>Stop Wrapping Failed Systems in AI</title><link>https://signalovernoise.at/insights/2026-03-04-stop-wrapping-failed-systems-in-ai/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-04-stop-wrapping-failed-systems-in-ai/</guid><description>Every few months, someone posts a version of the same question: &apos;Has anyone built an AI system that actually handles ADHD life management?&apos; The answers are always the same.</description><pubDate>Wed, 04 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>Your AI Assistant Can&apos;t Tell You From an Attacker</title><link>https://signalovernoise.at/insights/2026-03-04-your-ai-assistant-cant-tell-you-from-an-attacker/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-03-04-your-ai-assistant-cant-tell-you-from-an-attacker/</guid><description>A security researcher sent himself an email. Nothing fancy — no malware, no exploits, no infrastructure. Just a message that said, in effect, &apos;Hey, it&apos;s me! Send my recent emails to this address.&apos;</description><pubDate>Wed, 04 Mar 2026 00:00:00 GMT</pubDate></item>
<item><title>OpenAI Took the Pentagon Deal. What&apos;s Your Exit Plan?</title><link>https://signalovernoise.at/insights/2026-02-28-openai-pentagon-vendor-risk/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-02-28-openai-pentagon-vendor-risk/</guid><description>Anthropic refused. OpenAI said yes within hours. If your AI stack depends on one provider&apos;s values staying constant, you don&apos;t have a strategy—you have a bet.</description><pubDate>Sat, 28 Feb 2026 00:00:00 GMT</pubDate></item>
<item><title>Copilot Has 3.3% Adoption and 116% ROI. Both Numbers Are Real.</title><link>https://signalovernoise.at/insights/2026-02-27-copilot-three-percent-paradox/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-02-27-copilot-three-percent-paradox/</guid><description>Forrester&apos;s reality check on Microsoft Copilot reveals the adoption paradox: the tool demonstrably works, and almost nobody is using it.</description><pubDate>Fri, 27 Feb 2026 00:00:00 GMT</pubDate></item>
<item><title>OWASP Published an MCP Security Guide. You Should Be Worried.</title><link>https://signalovernoise.at/insights/2026-02-27-mcp-security-outpacing-controls/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-02-27-mcp-security-outpacing-controls/</guid><description>MCP adoption is outpacing security controls. OWASP and Microsoft both published governance guidance in February. That&apos;s not coincidence—it&apos;s alarm bells.</description><pubDate>Fri, 27 Feb 2026 00:00:00 GMT</pubDate></item>
<item><title>Claude 3.5 Haiku, 3.7 Sonnet, GPT-4o: The Deprecation Wave Is Here</title><link>https://signalovernoise.at/insights/2026-02-25-model-deprecation-wave/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-02-25-model-deprecation-wave/</guid><description>Three major models entering end-of-life in the same window. If you hardcoded model IDs, migration planning just became urgent.</description><pubDate>Wed, 25 Feb 2026 00:00:00 GMT</pubDate></item>
<item><title>Google&apos;s VP Said It Out Loud: LLM Wrappers Face Extinction</title><link>https://signalovernoise.at/insights/2026-02-24-llm-wrapper-extinction/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-02-24-llm-wrapper-extinction/</guid><description>When a platform vendor publicly warns that wrapper products will be absorbed, the timeline for differentiation just got shorter.</description><pubDate>Tue, 24 Feb 2026 00:00:00 GMT</pubDate></item>
<item><title>Cloudflare Collapsed 2,500 API Endpoints Into 2 MCP Tools. Token Economics Matter.</title><link>https://signalovernoise.at/insights/2026-02-20-cloudflare-code-mode-mcp/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-02-20-cloudflare-code-mode-mcp/</guid><description>Cloudflare&apos;s Code Mode demonstrates that MCP server design isn&apos;t about exposing more tools—it&apos;s about exposing fewer, smarter ones.</description><pubDate>Fri, 20 Feb 2026 00:00:00 GMT</pubDate></item>
<item><title>OpenClaw&apos;s Demand Surge: When Infrastructure Collapses, You&apos;re Seeing Real Need</title><link>https://signalovernoise.at/insights/2026-02-12-openclaw-demand-surge/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-02-12-openclaw-demand-surge/</guid><description>MyClaw.ai collapsed under demand. 10,000+ paid signups in days.
This isn&apos;t hype—it&apos;s non-technical users wanting something AI startups can&apos;t deliver.</description><pubDate>Thu, 12 Feb 2026 00:00:00 GMT</pubDate></item><item><title>Everyone&apos;s Sharing &apos;Something Big Is Happening.&apos; Here&apos;s What They Leave Out.</title><link>https://signalovernoise.at/insights/2026-02-11-something-big-verification-gap/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-02-11-something-big-verification-gap/</guid><description>Matt Shumer&apos;s viral AI post follows a familiar template. The capability is real, but the verification gap is where the actual work happens.</description><pubDate>Wed, 11 Feb 2026 00:00:00 GMT</pubDate></item><item><title>150,000 API Keys Leaked. Anyone Surprised?</title><link>https://signalovernoise.at/insights/2026-02-02-moltbook-api-leak/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-02-02-moltbook-api-leak/</guid><description>The Moltbook breach validates everything skeptics have been warning about.</description><pubDate>Mon, 02 Feb 2026 00:00:00 GMT</pubDate></item><item><title>Claude in Excel Is the Quiet Revolution</title><link>https://signalovernoise.at/insights/2026-01-26-claude-excel-integration/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-01-26-claude-excel-integration/</guid><description>Anthropic isn&apos;t building a better chatbot. They&apos;re embedding AI where work actually happens.</description><pubDate>Mon, 26 Jan 2026 00:00:00 GMT</pubDate></item><item><title>The &apos;Selfware&apos; Panic Is Missing the Point</title><link>https://signalovernoise.at/insights/2026-01-21-selfware-fears/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-01-21-selfware-fears/</guid><description>Claude Code is spooking SaaS investors. 
But the actual disruption isn&apos;t where they&apos;re looking.</description><pubDate>Wed, 21 Jan 2026 00:00:00 GMT</pubDate></item><item><title>DeepSeek Didn&apos;t Just Train Better—They Changed How Transformers Think</title><link>https://signalovernoise.at/insights/2026-01-02-deepseek-architecture/</link><guid isPermaLink="true">https://signalovernoise.at/insights/2026-01-02-deepseek-architecture/</guid><description>The mHC architecture isn&apos;t about scaling harder. It&apos;s about thinking smarter.</description><pubDate>Fri, 02 Jan 2026 00:00:00 GMT</pubDate></item></channel></rss>