Context Rot: How Increasing Input Tokens Impacts LLM Performance

Large language models have transformed the way we build software systems. In our latest research report, Kelly Hong shares her findings on what we’re calling Context Rot: the degradation of large language model performance as the number of input tokens grows.

View the full report on our research site: https://research.trychroma.com/context-rot

0:00 – Intro
1:44 – Models struggle with long context
3:09 – Ambiguity compounds challenges
4:28 – Models struggle with distractions
5:30 – Models are not reliable computing systems
6:24 – Context Engineering

Chroma is the open-source AI application database. Batteries included.

Embeddings, vector search, document storage, full-text search, metadata filtering, and multi-modal. All in one place. Retrieval that just works. As it should be.
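For a sense of what that looks like in practice, here is a minimal sketch using Chroma's Python client; the collection name, documents, and metadata below are illustrative placeholders, not part of the report.

import chromadb

# Create an in-memory client (a persistent client is also available).
client = chromadb.Client()

# Hypothetical collection for demonstration purposes only.
collection = client.create_collection(name="demo")

# Add a couple of example documents with metadata for filtering.
collection.add(
    ids=["doc1", "doc2"],
    documents=[
        "Context Rot describes performance degradation at long input lengths.",
        "Retrieval keeps only the most relevant text in the model's context.",
    ],
    metadatas=[{"topic": "research"}, {"topic": "retrieval"}],
)

# Query by text; Chroma embeds the query and returns the nearest documents.
results = collection.query(query_texts=["Why do long prompts hurt accuracy?"], n_results=1)
print(results["documents"])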

Try it today: https://trychroma.com/signup
