Cursor has unveiled a new approach to minimizing the context size of requests sent to large language models (LLMs). The method, dubbed dynamic context discovery, moves away from the conventional practice of providing extensive static context upfront. Instead, it lets agents dynamically retrieve only the information they need, reducing token usage and avoiding the pitfalls of including irrelevant or confusing details.
At the core of Cursor’s dynamic context discovery are five distinct techniques, all centered on using files as the primary interface for LLM-based tools. This strategy streamlines the agent’s interactions and reduces the risk of overwhelming the context window with excessive information. The versatility and simplicity of files as a storage medium make them a powerful primitive in the evolving landscape of coding agents.
As coding agents quickly improve, files have been a simple and powerful primitive to use, and a safer choice than yet another abstraction that can’t fully account for the future.
The first technique implemented by Cursor involves writing large outputs, such as those from shell commands or tools, directly to files. No relevant information is lost, and the agent can access the data at any time, for example by tailing the file to consult the latest entries.
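The pattern described above can be sketched in a few lines. This is a hypothetical illustration, not Cursor's actual implementation: the full command output is spooled to a file, and only the last few lines are returned for inclusion in the prompt.

```python
import subprocess
from pathlib import Path

# Hypothetical sketch: capture a command's full output to a file,
# but hand the agent only the tail as context.
def run_and_spool(cmd: list[str], log_path: str, tail_lines: int = 20) -> str:
    result = subprocess.run(cmd, capture_output=True, text=True)
    full_output = result.stdout + result.stderr
    Path(log_path).write_text(full_output)  # nothing is lost
    lines = full_output.splitlines()
    return "\n".join(lines[-tail_lines:])  # only the tail enters the prompt

snippet = run_and_spool(["echo", "hello"], "/tmp/agent_cmd.log", tail_lines=5)
```

If the tail turns out to be insufficient, the agent can always go back and read more of the log file on demand.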
Another key aspect is Cursor’s mechanism for preserving the full context history. When lengthy context requires summarizing to fit within token limits, the complete history is saved into a file. This strategy allows agents to retrieve any essential missing details whenever necessary. Additionally, domain-specific capabilities are organized in files, thereby enabling agents to discover and utilize relevant functionalities through Cursor’s semantic search tools.
For MCP (Model Context Protocol) tools, the methodology departs from including every tool from connected MCP servers at the outset. Instead, agents initially retrieve only the tool names and fetch complete definitions on an as-needed basis. This approach significantly reduces the total token count:
The agent now only receives a small bit of static context, including names of the tools, prompting it to look up tools when the task calls for it. In an A/B test, we found that in runs that called an MCP tool, this strategy reduced total agent tokens by 46.9% (statistically significant, with high variance based on the number of MCPs installed).
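The lazy-loading strategy can be illustrated with a toy registry (the tool names and schemas below are invented stand-ins for what an MCP server would advertise): names are cheap and go into static context; full schemas are an on-demand lookup.

```python
# Hypothetical sketch of lazy MCP tool loading: only names are loaded
# upfront; full schemas are fetched when the task calls for a tool.
TOOL_SCHEMAS = {  # stand-in for an MCP server's tool catalog
    "search_issues": {"description": "Search the issue tracker", "params": {"query": "string"}},
    "create_pr": {"description": "Open a pull request", "params": {"title": "string"}},
}

def list_tool_names() -> list[str]:
    # Cheap: only names enter the agent's static context.
    return sorted(TOOL_SCHEMAS)

def describe_tool(name: str) -> dict:
    # Expensive detail, retrieved only on demand.
    return TOOL_SCHEMAS[name]

names = list_tool_names()
schema = describe_tool("create_pr")
```

With many servers installed, the savings compound, which is consistent with the reported high variance based on the number of MCPs installed.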
One notable advantage of this framework is that it allows the agent to keep track of each MCP tool’s operational status. For example, if a specific MCP server requires re-authentication, the agent is capable of notifying the user, ensuring that critical information doesn’t get overlooked.
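Status tracking of this kind might be modeled as follows; the states and server names are illustrative assumptions rather than Cursor's internal representation.

```python
from dataclasses import dataclass

# Hypothetical sketch: track per-server status so conditions like an
# expired token surface to the user instead of being silently dropped.
@dataclass
class ServerStatus:
    name: str
    state: str          # e.g. "ready", "needs_auth", "unreachable"
    detail: str = ""

def pending_user_actions(statuses: list[ServerStatus]) -> list[str]:
    return [f"{s.name}: {s.detail}" for s in statuses if s.state == "needs_auth"]

statuses = [
    ServerStatus("github", "ready"),
    ServerStatus("linear", "needs_auth", "token expired; please re-authenticate"),
]
alerts = pending_user_actions(statuses)
```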
Moreover, the methodology ensures that outputs from all terminal sessions are synchronized with the file system. This organization enables the agent to effectively address user inquiries about any failing commands. By storing outputs in files, the agent can utilize grep to isolate only the relevant information, further streamlining the context size.
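With terminal output mirrored to files, the grep step reduces to a simple pattern search; the sketch below is a minimal illustration assuming a log file path of our own choosing.

```python
import re
from pathlib import Path

# Hypothetical sketch: terminal output is mirrored to a file, so the
# agent can grep for relevant lines instead of loading the whole log.
def grep_log(log_path: str, pattern: str) -> list[str]:
    regex = re.compile(pattern)
    return [line for line in Path(log_path).read_text().splitlines() if regex.search(line)]

Path("/tmp/agent_terminal.log").write_text(
    "npm install\nnpm ERR! code E404\nnpm ERR! 404 Not Found\nDone.\n"
)
errors = grep_log("/tmp/agent_terminal.log", r"ERR!")
```

Only the matching lines need to enter the context window, so a question like "why did that command fail?" costs a handful of tokens rather than the entire session transcript.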
However, some users have expressed concerns regarding the trade-off between token reduction and latency. For instance, user @glitchy acknowledged the importance of minimizing tokens but raised questions about latency implications. In contrast, @NoBanksNearby emphasized the benefits of dynamic context discovery for developer efficiency when managing multiple MCP servers. @casinokrisa echoed this sentiment:
Reducing tokens by nearly half cuts costs and speeds up responses, especially across multiple servers.
Finally, @anayatkhan09 proposed potential enhancements for the future:
The next step is exposing that dynamic context policy to users so we can tune recall aggressiveness per repo instead of treating all tools the same.
According to Cursor, users can expect dynamic context discovery to be broadly available in the coming weeks, promising a more efficient and effective interaction with large language models.
Inspired by: Source

