Agent rule files are deceptively simple. That simplicity makes it easy to underestimate how much influence they have on agent behavior.
While iterating on rule setups across a few repos, I kept running into the same issue. The agent would behave inconsistently. Sometimes it applied the right rules. Other times it ignored them or pulled in something unrelated. The surprising part was that the problem usually wasn’t the content of the rules. It was how they were being selected.
The Problem
My initial approach was fairly straightforward. I wrote a decent number of rules, added detailed descriptions, used globs where it made sense, and marked a few rules as global. On paper, everything looked reasonable.
The results varied. Some rules triggered when they shouldn’t have. Others didn’t show up at all. The same prompt could produce different behavior across runs, and it wasn’t always clear why. That made debugging difficult. It also made it harder to trust the system, since I couldn’t easily predict which rules would be in play. The core issue turned out to be selection, not instruction.
Rule Files Have Two Jobs
A useful way to think about rule files is that they do two separate things. First, they decide when they should apply. Second, they define what should happen once they are applied. Early on, I mixed those responsibilities together. Descriptions included implementation details. Rules tried to handle too many scenarios. Some rules attempted to be both global guidance and context-specific behavior at the same time. Once I started separating these concerns, things improved. Selection became clearer, and the rule bodies became easier for the agent to follow.
The Three Ways Rules Get Applied
In practice, rule selection is controlled by three mechanisms: alwaysApply, globs, and description. Each plays a different role, and most of my earlier issues came from not being intentional about how I used them.
alwaysApply is for invariants
Rules marked with alwaysApply are always present. This is useful for constraints that should never depend on context, such as response formats, naming conventions, or logging requirements.
```
---
alwaysApply: true
---
All API responses must follow the standard envelope format.
```
Using this removes any ambiguity about whether the rule is available. It simply always is. At the same time, adding too many global rules creates noise. When everything is always present, it becomes harder for the agent to distinguish what actually matters for a given task. I found it helpful to reserve alwaysApply for rules that would feel reasonable in almost any file and any task.
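As a rough mental model, always-applied rules are an unconditional include step that runs before any context-dependent selection. A minimal sketch, assuming a hypothetical `Rule` type (the names here are mine, not Cursor's):

```python
from dataclasses import dataclass, field


@dataclass
class Rule:
    name: str
    body: str
    always_apply: bool = False          # stand-in for alwaysApply
    globs: list[str] = field(default_factory=list)
    description: str = ""


def global_rules(rules: list[Rule]) -> list[Rule]:
    # Always-applied rules are included regardless of files or task.
    return [r for r in rules if r.always_apply]


rules = [
    Rule("envelope", "All API responses must follow the standard envelope format.",
         always_apply=True),
    Rule("controllers", "Controllers should not contain business logic.",
         globs=["src/api/**/*.ts"]),
]
# Only the invariant rule is unconditionally in context.
print([r.name for r in global_rules(rules)])  # ['envelope']
```

The glob-scoped rule stays out of this pass entirely; it only becomes relevant once matching files enter the context.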
glob is for file scoping, not discovery
Globs initially felt like the most reliable tool because they are explicit. They tie a rule to a set of files, which seems like a clear way to control applicability. That held up when working on targeted changes. If the agent was editing a file that matched the glob, the rule would typically be available and applied as expected.
```
---
globs: src/api/**/*.ts
---
Controllers should not contain business logic.
```
However, this approach became less reliable for broader tasks. When I asked the agent to do something like “refactor the auth flow,” the relevant files were not always part of the initial context. Since the agent builds context incrementally, rules tied to those files often didn’t appear early enough to influence the overall approach. In practice, globs worked best for localized work where the files were already known. They were less effective as a way to guide open-ended or cross-cutting changes.
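The file-scoping behavior can be sketched with simple pattern matching. This assumes, purely for illustration, that a glob rule fires only when a matching file is already in the agent's working context; Python's fnmatch is a stand-in for the real matcher:

```python
from fnmatch import fnmatch


def rule_applies(globs: list[str], files_in_context: list[str]) -> bool:
    # The rule activates only if some file already in context matches a glob.
    # Note: fnmatch treats ** loosely; real glob engines differ in detail.
    return any(fnmatch(f, g) for g in globs for f in files_in_context)


globs = ["src/api/**/*.ts"]
print(rule_applies(globs, ["src/api/users/controller.ts"]))  # True
print(rule_applies(globs, ["src/auth/session.ts"]))          # False
```

The second call is the failure mode described above: if a broad task starts from files outside the glob, the rule contributes nothing to the initial plan, even though the task will eventually touch matching files.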
description is the routing signal
The biggest improvement came from rethinking how descriptions are used. Descriptions are not there to explain the contents of a rule. They help the agent decide when to use it. In other words, they act as a routing signal.
Early descriptions I wrote focused on what the rule contained:
- “Defines API best practices”
- “Handles database rules”
- “Service layer architecture”
These descriptions were accurate, but not useful for selection. They didn’t give the agent a clear signal about when the rule should apply.
Rewriting them to focus on trigger conditions made a noticeable difference:
- “Use when adding or modifying API endpoints”
- “Use when working on database schema or migrations”
- “Use when implementing service layer business logic”
These descriptions map directly to tasks the agent can recognize. That made it easier for the agent to pull in the right rules at the right time.
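Why trigger-style descriptions route better can be illustrated with a naive keyword-overlap score. The real selection is done by the model, not by word counting; this is only a sketch of the intuition that trigger phrasing shares vocabulary with task prompts while summary phrasing does not:

```python
def score(task: str, description: str) -> int:
    # Count words the task prompt and the description have in common.
    task_words = set(task.lower().split())
    return len(task_words & set(description.lower().split()))


task = "modifying API endpoints for user signup"
print(score(task, "Defines API best practices"))                  # 1 (summary)
print(score(task, "Use when adding or modifying API endpoints"))  # 3 (trigger)
```

The summary only shares "API" with the prompt; the trigger version also shares "modifying" and "endpoints", the words that actually describe the task.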
What Makes a Good Description
The descriptions that worked best shared a few characteristics. They were short, specific, and tied to a recognizable task.
A simple pattern that held up well was:
Use when [context or task]
For example:
- “Use when creating new React components”
- “Use when modifying API request validation”
- “Use when adding caching logic”
This keeps the description focused on selection. The details of how the rule works belong in the body.
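Because the pattern is so regular, it is also easy to enforce mechanically. A hypothetical lint check, assuming the "Use when" convention above:

```python
def is_trigger_style(description: str) -> bool:
    # A description should state when to apply the rule, not what it contains.
    return description.lower().startswith("use when ")


descriptions = [
    "Use when creating new React components",
    "Defines API best practices",
]
for d in descriptions:
    print(f"{'ok ' if is_trigger_style(d) else 'FIX'} {d}")
```

Running something like this over a rules directory is a quick way to catch summary-style descriptions before they cause selection problems.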
Keep the Rule Body Focused
Once a rule is selected, the body should be clear and concrete. This is where constraints, patterns, and examples belong.
```
---
description: Use when implementing API controllers
---
Controllers should only:
- parse requests
- call services
- format responses

Do not:
- include business logic
- access the database directly
```
In this structure, the description determines when the rule appears, and the body defines what the agent should do. Keeping those responsibilities separate made the rules easier to reason about and easier for the agent to follow.
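The two responsibilities map directly onto the file layout: frontmatter for selection, body for behavior. A sketch of splitting the two, assuming the simple `---`-delimited format shown above (not a full YAML parser):

```python
def parse_rule(text: str) -> tuple[dict, str]:
    # Split "---\n<frontmatter>\n---\n<body>" into its two halves.
    _, front, body = text.split("---", 2)
    meta = {}
    for line in front.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.strip()


rule = """---
description: Use when implementing API controllers
---
Controllers should only:
- parse requests
"""
meta, body = parse_rule(rule)
print(meta["description"])  # Use when implementing API controllers
```

Keeping the split this clean is what makes the rules auditable: you can review all the selection signals in one pass without reading any of the bodies.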
Organizing Rule Files
As the number of rules grew, a flat structure became harder to manage. It also made it easier to accidentally introduce overlapping rules.
Grouping rules by domain worked better in practice:
```
.cursor/rules/
  api.mdc
  database.mdc
  frontend.mdc
  global.mdc
```
Each file handles a single area of responsibility. This makes it easier to understand the intent of each rule and reduces the chance of conflicting guidance.
Using Markdown as the Source of Truth
One pattern that worked well was moving the actual rule content into plain markdown files, and using rule files primarily for selection.
```
docs/
  api.md
  database.md
.cursor/rules/
  api.mdc
  database.mdc
```
Rule file:
```
---
description: Use when working on API endpoints
---
Follow docs/api.md
```
In this setup, the markdown files define behavior, and the rule files determine when that behavior should be applied. This separation made the system easier to maintain and easier to reuse across different agents.
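The indirection can be sketched as a small resolver that follows the "Follow docs/api.md" convention used above. The function name and the convention's exact shape are illustrative, not part of any tool:

```python
from pathlib import Path


def resolve_rule_body(body: str, repo_root: Path) -> str:
    # If the rule body delegates to a markdown doc, inline that doc's content.
    if body.startswith("Follow "):
        doc = repo_root / body.removeprefix("Follow ").strip()
        return doc.read_text()
    return body
```

Because the markdown files carry the actual guidance, the same docs can back rules for different agents, or be read directly by humans, without duplicating content.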
What I Expected vs What Happened
What I expected: adding more rules would make the agent more accurate.
What happened: adding more rules made selection harder.
Reducing the number of rules and improving how they were triggered led to more consistent results.
What Actually Helped
A few changes consistently improved behavior:
- writing descriptions as trigger conditions instead of summaries
- using globs for file-scoped work where files are already in context
- limiting alwaysApply to true invariants
- keeping rule files small and focused
- moving rule content into shared markdown where possible
These changes didn’t add more guidance. They made it easier for the agent to choose the right guidance.
