Want better AI outputs? Try context engineering.
Editor’s note: A version of this blog was originally published in The GitHub Insider newsletter. Sign up now for more technical content.
If you’ve ever felt like GitHub Copilot could be even stronger with just a little more context, you’re right. Context engineering is quickly becoming one of the most important ways developers shape, guide, and improve AI-assisted development.
What is context engineering?
Context engineering is the evolution of prompt engineering. It’s focused less on clever phrasing and more, as Braintrust CEO Ankur Goyal puts it, on “bringing the right information (in the right format) to the LLM.”
At GitHub Universe this past fall, Harald Kirschner—principal product manager at Microsoft and longtime VS Code and GitHub Copilot expert—outlined three practical ways developers can apply context engineering today:
- Custom instructions
- Reusable prompts
- Custom agents
Each technique gives Copilot more of the information it needs to produce code matching your expectations, your architecture, and your team’s standards.
Let’s explore all three, so you can see how providing better context helps Copilot work the way you do.
1. Custom instructions: Give Copilot the rules it should follow
Custom instruction files help Copilot understand your:
- Coding conventions
- Language preferences
- Naming standards
- Documentation style
You can use:
- Global rules: .github/copilot-instructions.md
- Task-specific rules: .github/instructions/*.instructions.md
For example, you might define how React components should be structured, how errors should be handled in a Node service, or how you want API documentation formatted. Copilot then applies those rules automatically as it works.
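As a rough sketch, a repository-wide instructions file is just markdown prose that Copilot reads alongside your request. The conventions below (React, Node error handling, JSDoc) are a hypothetical team's rules, not a required schema:

```markdown
# Copilot instructions for this repository

## Coding conventions
- Use TypeScript with strict mode; avoid `any`.
- Prefer functional React components with hooks over class components.

## Error handling
- In Node services, wrap async route handlers and return structured JSON errors.

## Documentation
- Document every exported function with a JSDoc block that includes a usage example.
```

Because the file lives in the repository, everyone on the team (and Copilot) works from the same rules without restating them in every prompt.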
Learn how to set up custom instructions 👉
2. Reusable prompts: Standardize your common workflows
Reusable prompt files let you turn frequent tasks—like code reviews, scaffolding components, generating tests, or initializing projects—into prompts that you can call instantly and consistently.
Use:
- Prompt files: .github/prompts/*.prompts.md
- Slash commands such as /create-react-form to trigger structured tasks
This helps teams enforce consistency, speed up onboarding, and execute repeatable workflows the same way every time.
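As an illustration, the body of a prompt file is simply the prompt you would otherwise retype, saved under the naming pattern above so it can be invoked as a slash command. The file name and requirements here are hypothetical:

```markdown
<!-- .github/prompts/create-react-form.prompts.md (hypothetical example) -->
# Create a React form component

Scaffold a new React form component for the feature the user names.

Requirements:
- Use a functional component with typed props.
- Validate inputs before submit and surface errors inline.
- Include a colocated test file covering the happy path and one failure case.
```

Running /create-react-form then applies these requirements the same way every time, for every teammate.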
See examples of reusable prompt files 👉
3. Custom agents: Create task-specific AI personas
Custom agents allow you to build specialized AI assistants with well-defined responsibilities and scopes. For example:
- An API design agent to review interfaces
- A security agent that performs static analysis tasks
- A documentation agent that rewrites comments or generates examples
Agents can include their own tools, instructions, constraints, and behavior models. And yes, you can even enable handoff between agents for more complex workflows.
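The exact file format and location for custom agents depend on your editor and Copilot surface, so treat the following as a sketch: an agent definition that pairs a narrow role with its own instructions and constraints. The frontmatter fields and tool names are illustrative assumptions, not a documented schema:

```markdown
---
# illustrative frontmatter; field names and tool names are assumptions
description: Reviews public API surfaces for naming, consistency, and breaking changes
tools: ['codebase', 'search']
---

You are an API design reviewer. For every change to an exported interface:
- Flag breaking changes and suggest a deprecation path instead.
- Check that names follow the repository's naming standards.
- Do not modify implementation code; comment only on the interface.
```

Keeping each agent's scope this narrow is what makes handoffs useful: one agent reviews the interface, another handles documentation, and neither drifts outside its lane.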
Learn how to create and configure custom agents 👉
Why context engineering matters
The goal isn’t just better outputs; it’s helping Copilot better understand your intent. When you provide Copilot with clearer context:
- You get more accurate and reliable code.
- You reduce back-and-forth prompting.
- You increase consistency across files and repositories.
- You stay in flow longer instead of rewriting or correcting results.
And the more you experiment with context engineering, the more you’ll discover how deeply it can shape your development experience.
More resources
- A practical guide to context engineering for developers 👉
- How to build reliable AI workflows with agentic primitives 👉
- What makes a great agents.md file? Lessons from 2,500+ examples 👉