Hasty Briefs

Perplexity Says MCP Sucks

4 hours ago
  • #AI Security
  • #Data Privacy
  • #MCP Protocol
  • Perplexity's CTO announced a move away from MCP due to high token consumption: tool descriptions from just three servers consumed 143K of a 200K-token context window (about 72%) before any user query was processed, making MCP more expensive and less reliable than direct integrations.
  • MCP's design prioritizes dynamic tool discovery for arbitrary clients in heterogeneous environments, which is necessary for products like Kimono (an AI-powered CRM) that work with unknown agents and models, but this generality comes with token overhead.
  • A major security issue with MCP is the lack of trust mechanisms: after OAuth authorization, sensitive data (e.g., PII, medical records) can be routed to any inference provider without content-aware restrictions, risking compliance violations under regulations like HIPAA and GDPR.
  • To address the trust gap, proposals include adding sensitivity annotations to MCP responses, creating a trust-tier registry for inference providers, and implementing runtime enforcement to block or reroute data based on content and provider guarantees.
  • WebMCP (a browser-based initiative) faces similar trust problems, where agents can access tools across origins without distinguishing between public and private data, highlighting a protocol-agnostic need for content classification in AI systems.
  • Upcoming developments, such as the MCP Dev Summit, may focus on trust and content awareness, as current MCP adoption lacks a roadmap for these critical aspects, despite growing regulatory requirements like the EU AI Act.
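The context-window figures in the first bullet can be sanity-checked with simple arithmetic (the 143K and 200K numbers come from the article; the breakdown below is just a back-of-the-envelope check):

```python
# Back-of-the-envelope check of the context-window figures cited above.
CONTEXT_WINDOW = 200_000    # total context tokens available to the model
TOOL_DESC_TOKENS = 143_000  # tokens consumed by tool descriptions from three servers

overhead = TOOL_DESC_TOKENS / CONTEXT_WINDOW
remaining = CONTEXT_WINDOW - TOOL_DESC_TOKENS

print(f"overhead: {overhead:.1%}")                 # 71.5%, i.e. the ~72% cited
print(f"tokens left for query + answer: {remaining}")  # 57000
```

This is why the overhead scales so badly: every connected server's full tool catalog is paid for on every request, before the user's query contributes a single token.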
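The dynamic-discovery overhead described in the second bullet comes from MCP's `tools/list` exchange: clients fetch every tool's name, description, and input schema over JSON-RPC 2.0 and inject them into the model's context. A minimal sketch of that shape (the `tools/list` method and the name/description/inputSchema fields follow the MCP spec; the specific tool entry here is invented for illustration):

```python
import json

# Illustrative tools/list exchange (JSON-RPC 2.0, as MCP uses). The tool
# entry is made up, but its shape follows the MCP spec.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "search_tickets",
                "description": "Search support tickets by keyword, status, and date range.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                },
            },
        ]
    },
}

# Every discovered tool's description and schema ends up in the prompt;
# with many servers, this is where the token overhead accumulates.
prompt_fragment = "\n".join(
    f"{t['name']}: {t['description']}" for t in response["result"]["tools"]
)
print(prompt_fragment)
```

A direct integration, by contrast, can hard-code only the tools it actually needs and describe them as tersely as the product allows, which is the trade-off the bullet describes.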
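The trust-gap proposals (sensitivity annotations, a trust-tier registry, runtime enforcement) could be combined into a single routing gate. The sketch below is hypothetical: the sensitivity levels, tier names, and provider names are all invented, and no such mechanism exists in MCP today.

```python
# Hypothetical sketch of the runtime-enforcement proposal: responses carry a
# sensitivity annotation, inference providers sit in a trust-tier registry,
# and a gate blocks or reroutes payloads accordingly. All names are invented.
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    REGULATED = 2  # e.g. PII or medical records covered by HIPAA/GDPR

# Trust-tier registry: the highest sensitivity each provider is cleared for.
TRUST_REGISTRY = {
    "onprem-model": Sensitivity.REGULATED,  # strongest compliance guarantees
    "cloud-model": Sensitivity.INTERNAL,
    "free-tier-api": Sensitivity.PUBLIC,
}

def route(payload_sensitivity: Sensitivity, preferred: str) -> str:
    """Return a provider cleared for the payload, rerouting if necessary."""
    if TRUST_REGISTRY.get(preferred, Sensitivity.PUBLIC) >= payload_sensitivity:
        return preferred
    # Reroute: pick any provider with sufficient clearance.
    for provider, tier in TRUST_REGISTRY.items():
        if tier >= payload_sensitivity:
            return provider
    # Block: nothing in the registry may receive this payload.
    raise PermissionError("no provider cleared for this sensitivity level")

print(route(Sensitivity.REGULATED, "free-tier-api"))  # reroutes to onprem-model
print(route(Sensitivity.PUBLIC, "cloud-model"))       # allowed as-is
```

The design choice worth noting is that enforcement happens at routing time, after OAuth, which is exactly the stage the third bullet identifies as unguarded today.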