Context is Everything: Lessons from AI-Assisted Development

AI is powerful, but not limitless

Aaron Taylor

As large language models (LLMs) evolve, one of the most important advancements is the expansion of their context windows—the amount of data, code, and documentation that a model can consume at once. This enables developers to work with AI tools in more complex, real-world scenarios: multi-file refactors, large component trees, deeply integrated systems. It’s powerful, but also imperfect.

We often forget that AI, as it exists today, is still a young technology. Even the best models have limitations, and those limitations show up in the everyday workflows of engineering teams. You quickly run into edge cases, context loss, or hallucinated results—especially as you increase the volume of context you expect a model to understand and apply consistently.

The Challenges of Teaching AI

Lately, I’ve been experimenting with Cursor, an AI-first coding environment, to modernize parts of our legacy infrastructure at Agility CMS. Cursor offers strong contextual awareness and a natural extension of the IDE experience. It allows me to move quickly—rewriting files, generating tests, and summarizing blocks of code with speed and clarity. It’s a great foundation.

But the more I try to get the AI to understand the specifics of our platform—especially tools like our CLI that use all three of our major SDKs (Management, Sync, and Fetch)—the more I find myself hitting the upper limits of general-purpose AI tools.

While these SDKs don’t directly interact with one another, they’re often used together in tools that orchestrate complex behaviour. Teaching an LLM to understand how these packages are meant to be used collectively requires a lot of guidance and a very deliberate setup.

The Context Window: Your Best Friend and Biggest Bottleneck

Even with 100k+ token windows available in newer models like Claude and Gemini, most models still show a strong preference for the first and last stretches of a context window (on the order of the first and last 16k tokens), a pattern often called the "lost in the middle" effect. This means instructions placed in the middle of long prompts can be deprioritized or ignored outright.

While newer architectures are improving at parsing and retaining mid-context relevance, the day-to-day reality is that AI-assisted tasks can still fall short—particularly when they rely on deeply nested logic or sequential instructions.

In my workflow, I’ve tried solving this by crafting structured prompts that walk the AI through how our SDKs are used, with inline examples, explanations of expected outputs, and reusable conventions. When the prompt is small, it performs well.

But once I include more files, usage patterns, and logic branches, critical instructions begin to get lost. The output becomes less predictable and less aligned with what I’d expect from a human developer reading the same materials.
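One mitigation that has worked reasonably well for me is to "sandwich" the prompt: put the critical conventions at the very start, restate them at the very end, and let the bulky reference material sit in the middle, where attention is weakest. Here's a minimal sketch of that idea; the function and field names are hypothetical illustrations, not part of any Agility CMS tooling:

```typescript
// Assemble a prompt so the critical instructions appear at both the start
// and the end of the context, where models attend most reliably.
// All names here are illustrative, not a real Agility CMS API.

interface PromptParts {
  criticalInstructions: string[]; // conventions the model must not drop
  referenceMaterial: string[];    // bulky SDK docs, file contents, examples
  task: string;                   // the specific request
}

function buildSandwichedPrompt(parts: PromptParts): string {
  const critical = parts.criticalInstructions
    .map((rule, i) => `${i + 1}. ${rule}`)
    .join("\n");

  return [
    "## Critical conventions (read first)",
    critical,
    "## Reference material",
    parts.referenceMaterial.join("\n\n---\n\n"),
    "## Task",
    parts.task,
    "## Reminder: critical conventions (restated)",
    critical, // restated at the end so the rules survive long contexts
  ].join("\n\n");
}

// Example usage with placeholder content:
const prompt = buildSandwichedPrompt({
  criticalInstructions: ["Use the Fetch SDK only for read paths."],
  referenceMaterial: ["(long SDK docs would go here)"],
  task: "Generate a sync command for the CLI.",
});
```

It's a blunt instrument, and it spends tokens on repetition, but in practice restating the rules at the end has recovered instructions that otherwise got lost once the reference material grew past a few files.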

Where Should the Focus Be?

As the Developer Experience engineer here at Agility CMS, a big part of my job is improving how we build software internally. That includes exploring how AI can be a force multiplier for our engineering teams. I don’t want developers spending time on boilerplate, repetitive scaffolding, or writing wrapper tests—we should be focusing on high-value, product-impacting work. If we can get to a point where the boring tasks are abstracted away and the creative work is front and centre, that’s a huge win for a company trying to keep it fun!

The path forward, in my view, isn’t just about bigger models or more context—it’s about specificity. We need models that are tailored to our architecture, familiar with our conventions, and responsive to our code standards. That might mean building internal agents trained on our repositories, or exploring ways to inject structure into long prompts so they remain effective even as complexity grows.

The tools are catching up fast. The challenge now is to make sure they work for us—not just in theory, but in practice.

About the Author
Aaron Taylor

A versatile full-stack developer with a passion for supporting other developers' experiences. A long-time end user of Agility, Aaron is new to the Agility CMS team, having joined in 2024. That won't stop him from trying to make a big impact!

He oversees the Starters, SDKs, CLIs, and more, so look for some awesome new frameworks and tools coming to the Agility CMS platform!

When Aaron's not turning coffee into code, he's probably off with his two boys on a weekend adventure! 
