Below are the pages tagged “LLM”.
Gradio MCP Support: Building AI Tools in Just 5 Lines of Code
Gradio Now Supports the Model Context Protocol (MCP)
Gradio, the popular Python library for building ML interfaces, now officially supports the Model Context Protocol (MCP). This means any Gradio app can be called as a tool by Large Language Models (LLMs) like Claude and GPT-4.
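The five lines in question look roughly like the sketch below. It assumes a recent Gradio release with the `mcp_server` flag on `launch()`; the `letter_counter` tool is just an illustrative example:

```python
import gradio as gr

# Any Gradio function can become an MCP tool; the docstring and type hints
# help describe the tool to calling LLMs.
def letter_counter(word: str, letter: str) -> int:
    """Count how many times a letter appears in a word."""
    return word.count(letter)

demo = gr.Interface(fn=letter_counter, inputs=["text", "text"], outputs="number")
demo.launch(mcp_server=True)  # serves the web UI and an MCP server together
```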
What is MCP?
The Model Context Protocol standardizes how applications provide context to LLMs. It allows models to interact with external tools such as image generators, file systems, and APIs. By providing a standardized protocol for tool-calling, MCP extends LLMs’ capabilities beyond text generation.
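Concretely, MCP clients and servers exchange JSON-RPC 2.0 messages: a client discovers available tools with a `tools/list` request and invokes one with `tools/call`. The sketch below shows the shape of a call request as a Python dict; the tool name and arguments are hypothetical:

```python
# Shape of an MCP tool invocation (a JSON-RPC 2.0 request). The tool name
# and arguments here are hypothetical; a client would first discover the
# server's tools via a "tools/list" request.
tool_call_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "letter_counter",
        "arguments": {"word": "strawberry", "letter": "r"},
    },
}
```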
A Survey of AI Agent Protocols: Framework and Future
A recent research paper from Shanghai Jiao Tong University and the ANP Community provides the first comprehensive analysis of existing agent protocols, proposing a systematic two-dimensional classification: context-oriented versus inter-agent protocols on one axis, and general-purpose versus domain-specific protocols on the other.
The paper highlights a critical issue in the rapidly evolving landscape of LLM agents: the lack of standardized protocols for communication with external tools or data sources. This standardization gap makes it difficult for agents to work together effectively or scale across complex tasks, ultimately limiting their potential for tackling real-world problems.
Long Context Models & RAG: Insights from Google DeepMind (Release Notes Podcast)
Explore the synergy between long context models and Retrieval Augmented Generation (RAG) in this episode of the Release Notes podcast. Google DeepMind’s Nikolay Savinov joins host Logan Kilpatrick to discuss scaling context windows into the millions, recent quality improvements, RAG versus long context, and what’s next in the field.

Watch the episode: YouTube
Listen to the podcast: Apple Podcasts | Spotify
Episode Summary
- Defining tokens and context windows: What is a token, why do LLMs use them, and how does tokenization affect model behavior and limitations? (A tokenization sketch follows this list.)
- Long context vs. RAG: When is RAG still necessary, and how do long context models change the landscape for knowledge retrieval?
- Scaling context windows: The technical and economic challenges of moving from 1M to 10M+ tokens, and what breakthroughs are needed.
- Quality improvements: How recent models (Gemini 1.5 Pro, 2.5 Pro) have improved long context quality, and what benchmarks matter.
- Practical tips: Context caching, combining RAG with long context, and best practices for developers (a caching sketch follows the quote at the end of this post).
- The future: Predictions for superhuman coding assistants, agentic use cases, and the role of infrastructure.
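To make the token discussion concrete, here is a small sketch using OpenAI's `tiktoken` library, purely for illustration (Gemini uses its own tokenizer, but the idea, text split into subword units, is the same):

```python
# pip install tiktoken
# Illustration only: tiktoken is OpenAI's tokenizer; Gemini uses its own,
# but the mechanics of subword tokenization are the same.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "Long context models can attend to millions of tokens."
tokens = enc.encode(text)

print(len(text.split()), "words ->", len(tokens), "tokens")
print([enc.decode([t]) for t in tokens])  # the subword pieces the model actually sees
```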
Chapters
- 0:00 Intro
- 0:52 Introduction & defining tokens
- 5:27 Context window importance
- 9:53 RAG vs. Long Context
- 14:19 Scaling beyond 2 million tokens
- 18:41 Long context improvements since 1.5 Pro release
- 23:26 Difficulty of attending to the whole context
- 28:37 Evaluating long context: beyond needle-in-a-haystack
- 33:41 Integrating long context research
- 34:57 Reasoning and long outputs
- 40:54 Tips for using long context
- 48:51 The future of long context: near-perfect recall and cost reduction
- 54:42 The role of infrastructure
- 56:15 Long-context and agents
Notable Quotes
“You can rely on context caching to make it both cheaper and faster to answer.”
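In code, that tip looks roughly like the sketch below, written against Google's `google-genai` Python SDK; treat the model name, config fields, and TTL as assumptions that may differ between SDK versions:

```python
# Sketch of context caching with the google-genai SDK. The model name,
# config fields, and TTL are assumptions and may vary by SDK version.
from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the environment

# Cache a large, reusable prefix (e.g. a long document) once...
cache = client.caches.create(
    model="gemini-1.5-pro",
    config=types.CreateCachedContentConfig(
        contents=["<a very long document goes here>"],
        ttl="3600s",
    ),
)

# ...then point follow-up requests at the cache instead of resending the
# prefix, which is what makes repeated questions cheaper and faster.
response = client.models.generate_content(
    model="gemini-1.5-pro",
    contents="What does the document say about long context?",
    config=types.GenerateContentConfig(cached_content=cache.name),
)
print(response.text)
```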