May 5, 2025
Long Context Models & RAG: Insights from Google DeepMind (Release Notes Podcast)
Explore the synergy between long context models and Retrieval Augmented Generation (RAG) in this episode of the Release Notes podcast. Google DeepMind’s Nikolay Savinov joins host Logan Kilpatrick to discuss scaling context windows into the millions, recent quality improvements, RAG versus long context, and what’s next in the field.

Watch the episode: YouTube
Listen to the podcast: Apple Podcasts | Spotify
Episode Summary
- Defining tokens and context windows: What is a token, and why do LLMs use them? How does tokenization affect model behavior and limitations? (See the token-counting sketch after this list.)
- Long context vs. RAG: When is RAG still necessary, and how do long context models change the landscape for knowledge retrieval?
- Scaling context windows: The technical and economic challenges of moving from 1M to 10M+ tokens, and what breakthroughs are needed.
- Quality improvements: How recent models (Gemini 1.5 Pro, 2.5 Pro) have improved long context quality, and what benchmarks matter.
- Practical tips: Context caching, combining RAG with long context, and best practices for developers.
- The future: Predictions for superhuman coding assistants, agentic use cases, and the role of infrastructure.
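
As a concrete illustration of the token counting discussed in the first point above, here is a minimal sketch using the google-genai Python SDK; the API key and model name are placeholders, and the exact API surface may differ across SDK versions.

```python
from google import genai

# Placeholder API key and model name; substitute your own.
client = genai.Client(api_key="YOUR_API_KEY")

# Ask the API how many tokens a prompt occupies in the context window.
result = client.models.count_tokens(
    model="gemini-2.0-flash-001",
    contents="The quick brown fox jumps over the lazy dog.",
)
print(result.total_tokens)
```

Counting tokens before sending a request is a simple way to estimate how much of the context window (and cost) a prompt will consume.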
Chapters
- 0:00 Intro
- 0:52 Introduction & defining tokens
- 5:27 Context window importance
- 9:53 RAG vs. Long Context
- 14:19 Scaling beyond 2 million tokens
- 18:41 Long context improvements since 1.5 Pro release
- 23:26 Difficulty of attending to the whole context
- 28:37 Evaluating long context: beyond needle-in-a-haystack
- 33:41 Integrating long context research
- 34:57 Reasoning and long outputs
- 40:54 Tips for using long context
- 48:51 The future of long context: near-perfect recall and cost reduction
- 54:42 The role of infrastructure
- 56:15 Long context and agents
Notable Quotes
“You can rely on context caching to make it both cheaper and faster to answer.”
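
That quote refers to context caching, covered in the practical-tips segment. The sketch below shows one way a developer might apply it, assuming the google-genai Python SDK; the model name, document contents, and TTL are illustrative placeholders, not a definitive implementation.

```python
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

# Cache a large context once so repeated questions don't resend it.
# The model name and document text are placeholders.
cache = client.caches.create(
    model="gemini-2.0-flash-001",
    config=types.CreateCachedContentConfig(
        contents=["<full text of a large document>"],
        system_instruction="Answer questions about the attached document.",
        ttl="3600s",  # keep the cached context alive for one hour
    ),
)

# Later requests reference the cache instead of resending the context,
# which is what makes answers both cheaper and faster.
response = client.models.generate_content(
    model="gemini-2.0-flash-001",
    contents="What are the key findings?",
    config=types.GenerateContentConfig(cached_content=cache.name),
)
print(response.text)
```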