AI Coding Agents: Rules, Commands, Skills, MCP and Hooks Explained
9 min read


If you’re using tools like Claude Code, GitHub Copilot Workspace, or similar, you’ve probably noticed there’s technical jargon that goes beyond simply “chatting with AI”. I’m talking about terms like rules, commands, skills, MCP, and hooks.

These concepts are the architecture that makes AI agents truly useful for software development. They’re not just fancy marketing words — each one serves a specific function in how the agent works.

Let’s break them down one by one in a clear way.

First things first: What is an AI Coding Agent?

An AI coding agent is a system that doesn’t just answer questions, but executes actions in your codebase. It can read files, modify code, run commands, and make decisions based on context.

The key difference from a normal chatbot: an agent has tools and can operate autonomously within certain limits.

Now, let’s dive into the concepts that make these agents work.


Rules: Permanent Instructions

Rules are instructions that you configure once and the agent always respects, in every interaction. Think of them as the agent’s global configuration.

What they’re for

Rules define:

  • What the agent should do
  • What it should NOT do (limitations)
  • How to behave in specific situations
  • Preferred output formats
  • Security precautions

Real example

Imagine you’re working on a Laravel project and you have a rule that says:

Always use Laravel factories to generate test data, never use hardcoded data or direct SQL inserts.

This means that every time the agent needs to create tests or sample data, it will automatically use factories without you having to remind it.

When to use rules

Use rules when:

  • ✅ You have restrictions that apply always
  • ✅ You want consistency across all responses
  • ✅ You need guard rails to prevent unwanted behavior
  • ✅ You have format preferences that must always be respected

Where to configure them

Depending on the tool, rules can be in:

  • Configuration files (e.g. CLAUDE.md for Claude Code)
  • Application settings
  • Specific project files
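As an illustration, a minimal CLAUDE.md for the Laravel project above might look like the following; the specific rules are examples, not a required format:

```markdown
# Project rules

- Always use Laravel factories to generate test data; never use hardcoded
  data or direct SQL inserts.
- Run the test suite before proposing a commit.
- Never modify files under config/ without asking first.
```

Because the file lives in the repository, the whole team shares the same guard rails.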

Commands: Shortcuts for Recurring Tasks

Commands are predefined shortcuts that execute a specific task. They’re like macros: a single command triggers a complex sequence of actions.

How they work

When you invoke a command, the agent:

  1. Expands the command into its full prompt
  2. Executes the associated task
  3. Returns the result to you

Real example

If you have a /commit command, the agent could:

  1. Read git status
  2. Analyze changes
  3. Generate a descriptive commit message
  4. Execute git commit with that message

All this with a single command: /commit.
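In Claude Code, for example, custom commands are plain Markdown prompt files. A hypothetical .claude/commands/commit.md implementing the steps above could read (the exact wording is illustrative):

```markdown
Read the output of `git status` and `git diff`, group related changes,
and write a concise, descriptive commit message. Then run `git commit`
with that message. Do not push; show me the message first.
```

The file name becomes the command name, so this one is invoked as /commit.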

Types of commands

There are two main categories:

1. Built-in commands (come with the tool)

  • /help - Shows help
  • /clear - Clears context
  • /commit - Creates commits (in some tools)

2. Custom commands (you define them)

  • /review - Reviews current code
  • /test - Generates tests for what you’re editing
  • /deploy - Runs your deployment pipeline

Advantages of using commands

  • Speed: You don’t have to write the same prompt over and over
  • Consistency: It’s always done the same way
  • Documentation: Commands serve as documentation of your workflow
  • Hidden complexity: You can chain many actions in a simple command

When to create commands

Create commands for:

  • ✅ Tasks you repeat frequently
  • ✅ Multi-step processes that are always done the same way
  • ✅ Complex workflows you want to simplify

Skills: Specialized Capabilities

Skills are packages of knowledge and capabilities that an agent can load on demand. Think of them as plugins or extensions: the agent has them available, but only loads them when needed.

Difference vs commands

  • Commands: Execute a specific task
  • Skills: Add new capabilities to the agent

A skill can include multiple commands, rules, and specialized logic.

Real example

A “code-review” skill could include:

  • Knowledge about clean code patterns
  • Rules for what to check first
  • Commands for security analysis
  • Logic to prioritize critical issues
  • Specific report format
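As a sketch, in tools that support file-based skills (Claude Code, for instance, uses a SKILL.md file with YAML frontmatter), that “code-review” skill might start like this; the field names and rules shown are illustrative:

```markdown
---
name: code-review
description: Reviews diffs for security, correctness, and clean-code issues
---

When reviewing code:
1. Check security issues first (injection, leaked secrets, unsafe input).
2. Then correctness, then readability.
3. Report findings as a prioritized list: critical, major, minor.
```

The agent only pulls this file into context when a review is actually requested, which keeps everyday interactions lean.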

Common skills in current tools

  • Git management: Commits, branches, PRs
  • Testing: Test generation, execution, coverage
  • Documentation: Generating docs, updating READMEs
  • Security: Vulnerability scanning
  • Refactoring: Improving existing code
  • Database: Migrations, queries, optimization

Why skills matter

Skills allow:

  • Specialization: The agent knows about specific domains
  • Composability: Combine skills for complex tasks
  • Updates: You can improve skills independently
  • Sharing: You can distribute skills to your team

When to use skills

Use skills when:

  • ✅ A domain requires specialized knowledge
  • ✅ You want to share capabilities across different agents
  • ✅ You need to update knowledge without changing the core

MCP: Model Context Protocol

MCP (Model Context Protocol) is an open standard that allows AI agents to connect with external tools and data sources in a standardized way.

The problem it solves

Before MCP, each AI tool had its own format for connecting with things:

  • Reading filesystem files
  • Querying databases
  • Making HTTP calls
  • Running commands

It was chaos. Every integration was ad-hoc.

The MCP solution

MCP defines:

  • How the agent communicates with a tool
  • What information is passed (inputs/outputs)
  • How errors are handled
  • How available capabilities are discovered

Real example

Imagine an MCP server that exposes database access:

# The MCP server exposes tools
tools:
  - name: "query_database"
    description: "Executes SQL queries in the database"
    input_schema:
      type: "object"
      properties:
        query:
          type: "string"
          description: "SQL query to execute"

The agent can:

  1. Discover that this tool exists
  2. Know how to use it (what inputs it accepts)
  3. Execute queries safely
  4. Receive structured results
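The discovery step can be sketched with the JSON-RPC message shapes MCP uses. This is a simplified illustration of a tools/list exchange, not a complete client or server:

```python
import json

# JSON-RPC request the agent sends to discover available tools
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# JSON-RPC response the server might return (mirrors the YAML schema above)
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "query_database",
                "description": "Executes SQL queries in the database",
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        "query": {
                            "type": "string",
                            "description": "SQL query to execute",
                        }
                    },
                    "required": ["query"],
                },
            }
        ]
    },
}

# The agent now knows the tool exists and what input it accepts
tool = response["result"]["tools"][0]
print(tool["name"])                     # query_database
print(tool["inputSchema"]["required"])  # ['query']
```

Everything the agent needs — the tool’s name, purpose, and input schema — arrives in one structured response, which is what makes integrations interchangeable.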

MCP Servers

An MCP server is a program that exposes functionality through the MCP protocol. It can be:

  • Local: Runs on your machine
  • Remote: Runs on a server
  • Hybrid: Mix of both

Examples of MCP servers:

  • Filesystem: Read/write files
  • Database: SQL queries
  • Git: Repository operations
  • API: Calls to external services
  • Memory: Persistent storage for the agent

Advantages of MCP

  • Open standard: You’re not locked into a single vendor
  • Community: There are hundreds of MCP servers already created
  • Security: You control what tools the agent can use
  • Composability: Combine multiple MCP servers
  • Extensible: Easy to create new servers

Why MCP is important for AI Coding Agents

MCP allows agents to:

  • Read your code (filesystem server)
  • Understand your git history (git server)
  • Query documentation (docs server)
  • Run tests (test runner server)
  • Connect to APIs (HTTP server)

All this in a standardized and secure way.


Hooks: Event-Based Customization

Hooks are insertion points where you can execute custom logic in response to specific events in the agent’s lifecycle.

How they work

A hook is defined as:

  1. When to execute (which event)
  2. What to execute (command or script)
  3. How to use the result (modify behavior)

Real example

A pre-commit hook could:

  1. Detect that the agent is about to commit
  2. Run a linter on changed files
  3. If there are errors, prevent the commit
  4. If everything is fine, allow the commit

Types of common hooks

1. Lifecycle hooks

  • before-work - Before starting a task
  • after-work - After completing a task
  • before-edit - Before modifying a file
  • after-edit - After modifying a file

2. Event hooks

  • on-error - When something fails
  • on-success - When something succeeds
  • on-retry - When something is retried

3. Tool-use hooks

  • before-command - Before executing a command
  • after-command - After executing a command
  • before-tool-use - Before using a tool
  • after-tool-use - After using a tool

What to use hooks for

Hooks are useful for:

  • Validation: Verify that the agent does the right thing
  • Logging: Register what the agent does
  • Modification: Change behavior based on context
  • Integration: Connect with external tools
  • Security: Prevent dangerous actions

Audio hook example (my personal configuration)

A practical use of hooks is receiving audio feedback when the agent performs actions. In my Claude Code configuration I have hooks that play sounds:

{
  "hooks": {
    "Notification": [
      {
        "matcher": "*",
        "hooks": [
          {
            "type": "command",
            "command": "afplay /System/Library/Sounds/Funk.aiff"
          }
        ]
      }
    ],
    "Stop": [
      {
        "matcher": "*",
        "hooks": [
          {
            "type": "command",
            "command": "afplay /System/Library/Sounds/Hero.aiff"
          }
        ]
      }
    ]
  }
}

This means:

  • Every time there’s a notification, a “Funk” sound plays
  • When the agent finishes its work, a “Hero” sound plays

It’s a simple but effective way to know what the agent is doing without constantly looking at the screen.

Security hook example

hooks:
  before-tool-use:
    - if: tool.name == "bash" && command.contains("rm -rf")
      then: block("rm -rf not allowed without explicit confirmation")

This hook prevents the agent from executing dangerous commands without approval.
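The same check can be a small runnable script. This is a sketch, assuming a hook contract like Claude Code’s, where the tool call arrives as JSON on stdin and a non-zero exit code blocks it; the payload field names are assumptions:

```python
import json

def check(payload: dict) -> tuple[int, str]:
    """Return (exit_code, message): a non-zero code blocks the tool call."""
    tool = payload.get("tool_name", "")
    command = payload.get("tool_input", {}).get("command", "")
    if tool == "bash" and "rm -rf" in command:
        return 2, "rm -rf not allowed without explicit confirmation"
    return 0, ""

# Simulate the JSON the agent runtime would pipe into the hook
event = json.loads('{"tool_name": "bash", "tool_input": {"command": "rm -rf /tmp/x"}}')
code, message = check(event)
print(code, message)  # 2 rm -rf not allowed without explicit confirmation
```

In a real hook you would read the payload with `json.load(sys.stdin)`, write the message to stderr, and call `sys.exit(code)`.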

Hooks vs Rules

Aspect         Rules                           Hooks
When applied   Always, in every interaction    At specific events
Purpose        General behavior                Response to events
Flexibility    Static                          Dynamic, can evaluate context
Example        “Don’t use words in images”     “Before running bash, verify the command”

How everything works together

The interesting thing isn’t each concept separately, but how they work together.

Example: A complete workflow

Imagine you tell your agent: “Add authentication to this API endpoint”.

This could trigger:

  1. Hook before-work: Verifies you have the necessary permissions
  2. Rule: The agent knows it should use JWT, not sessions (global configuration)
  3. Command /auth-boilerplate: Expands an authentication code template
  4. Skill laravel-auth: Applies Laravel-specific knowledge
  5. MCP server docs: Queries official Laravel documentation
  6. MCP server git: Verifies you’re not modifying protected files
  7. Hook after-edit: Runs automatic tests after each modified file
  8. Hook before-command: Before executing php artisan migrate, asks for confirmation

All this without you having to specify each step.


Final analogy: The agent as a junior developer

To understand it all together, think of an AI agent as a very capable junior developer:

  • Rules = Your team’s guidelines (coding standards, processes)
  • Commands = The scripts and shortcuts you use daily
  • Skills = The specialized technologies they know (React, Laravel, Docker)
  • MCP = The tools they have available (IDE, terminal, APIs)
  • Hooks = Code reviews and checkpoints during development

The difference is this “junior”:

  • Works 24/7
  • Doesn’t get tired
  • Executes instantly
  • Has access to your entire codebase at once
  • Can read documentation faster than any human

Tools that use these concepts

Claude Code

  • Rules: CLAUDE.md and CLAUDE.local.md files
  • Commands: /commit, /help, etc.
  • Skills: Built-in, with more coming
  • MCP: Full support for MCP servers
  • Hooks: Configurable for lifecycle events

GitHub Copilot Workspace

  • Rules: Per-repository configuration
  • Commands: Predefined workflows
  • Skills: Integrated with GitHub ecosystem

Cursor

  • Rules: Per-project configuration
  • Commands: Customizable shortcuts
  • MCP: Growing support

Conclusion

Rules, commands, skills, MCP, and hooks are not just technical jargon. They’re the building blocks that make AI agents useful for real software development.

Understanding these concepts allows you to:

  • Configure your tools better
  • Automate repetitive workflows
  • Integrate the agent with your existing stack
  • Maintain control over what the agent does

AI won’t replace developers. But developers who understand how to configure and use these agents will be much more productive.

In 2026, understanding how to work with AI agents is increasingly valuable.

