Claude Code with LSP: from searching text to understanding code

I have been using Claude Code daily for months, and there is one configuration that has completely changed how it works with my code. It is not a new plugin, a more powerful model, or a magic prompt. It is something that has existed since 2016 and that most developers use without knowing it every time they open VS Code: the Language Server Protocol (LSP).

Karan Bansal published an excellent article explaining in detail how to enable LSP in Claude Code and why it matters. After trying it, I can confirm the difference is real and significant.

The problem: Claude Code treats your code as plain text

By default, when Claude Code needs to find a function, a class, or understand how your project components connect, it uses grep, glob, and read. These are text search tools. They work, but they have a fundamental problem: they treat code as if it were any ordinary text file.

If you ask it to find where User is defined, it will find the User class, but also the comment that says // Create a new User, the string "User not found", the import, the type alias, and dozens more matches in files that have nothing to do with what you need. Filtering that takes time. According to Karan’s measurements, between 30 and 60 seconds per query.

With LSP, the same query takes about 50 milliseconds. And it is 100% accurate. No false positives. No filtering needed. The language server knows exactly where the definition is because it understands the code structure, not just the text.
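To see why text search is so noisy, here is a minimal sketch. The file content and names are invented for illustration: a plain regex match over a small TypeScript-like source counts every textual occurrence of "User", while only one line is the actual class definition a language server would return.

```typescript
// Hypothetical source file content (all names invented for illustration).
const source = `
// Create a new User
import { User } from "./models";
type UserAlias = User;
class User { name = ""; }
const err = "User not found";
`;

// A text search matches every occurrence of the word "User"...
const textMatches = source.match(/User/g)?.length ?? 0;

// ...but only one line is the actual class definition.
const definitions = source
  .split("\n")
  .filter((line) => /^class User\b/.test(line.trim())).length;

console.log(textMatches, definitions); // → 6 1
```

Six textual hits, one real definition: that ratio is exactly the filtering work a language server makes unnecessary.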

What LSP is and why you should care

Before 2016, every code editor needed its own implementation of support for each language. If you wanted Python autocomplete in Sublime Text, someone had to write a Python plugin for Sublime Text. If you wanted the same in Atom, another plugin. It was an M times N problem: M editors times N languages.

Microsoft created LSP to solve this. It is a protocol that separates language intelligence from the editor interface. A language server (like Pyright for Python or gopls for Go) understands the code. Any editor that speaks LSP can use it. The M times N problem becomes M plus N.
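The arithmetic can be made concrete with example sizes. The counts 10 and 20 are arbitrary, chosen only for illustration:

```typescript
// Hypothetical ecosystem: 10 editors, 20 languages.
const editors = 10;
const languages = 20;

// Without LSP: every editor needs its own plugin for every language.
const withoutLSP = editors * languages; // M × N integrations

// With LSP: one protocol client per editor, one server per language.
const withLSP = editors + languages; // M + N integrations

console.log(withoutLSP, withLSP); // → 200 30
```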

What makes this interesting for Claude Code is that the same capabilities that power your IDE (definition navigation, reference search, type error detection) can be put at the service of an AI agent.

What changes in practice

With LSP enabled, Claude Code gains two types of capabilities:

Passive capabilities (automatic)

The language server sends diagnostics in real time. If Claude Code edits a file and introduces a type error, it detects the problem immediately, without needing to compile or run tests. Missing imports, undefined variables, type mismatches: everything is caught before you even see it.

In my experience, this has the most impact. When Claude Code writes code with type errors, it normally needs another editing cycle to correct them. With LSP, it fixes them in the same turn because it receives feedback instantly.
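The mechanism behind this instant feedback is the textDocument/publishDiagnostics notification from the LSP specification, which the server pushes to the client after each change. A minimal sketch of such a message follows; the method name and payload shape come from the spec, but the file URI and error message are invented for illustration.

```typescript
// Hedged sketch of the diagnostics push a language server sends after an edit.
// The structure follows the LSP spec; the URI and message are made up.
const notification = {
  jsonrpc: "2.0",
  method: "textDocument/publishDiagnostics",
  params: {
    uri: "file:///project/src/user.ts",
    diagnostics: [
      {
        // Zero-based positions, per the LSP spec.
        range: {
          start: { line: 11, character: 4 },
          end: { line: 11, character: 12 },
        },
        severity: 1, // 1 = Error in the LSP spec
        message: "Type 'string' is not assignable to type 'number'.",
      },
    ],
  },
};

console.log(notification.params.diagnostics.length); // → 1
```

Because this arrives as a push, the agent never has to ask "did my edit break anything?"; the answer is already in hand when it plans the next edit.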

Active capabilities (on-demand)

When you ask about your code, Claude Code can use semantic operations instead of text searches:

  • goToDefinition: Go directly to where a function or class is defined
  • findReferences: Find all places where a symbol is used
  • hover: See type signatures and documentation
  • workspaceSymbol: Search symbols across the entire project
  • goToImplementation: Find concrete implementations of interfaces
  • incomingCalls/outgoingCalls: Trace call hierarchies

The important thing is that you do not need special commands. You ask in natural language and Claude Code chooses the appropriate LSP operation. If you say “where is authenticate defined”, it uses goToDefinition. If you say “who uses UserService”, it uses findReferences.
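Under the hood, each of these operations is a JSON-RPC request defined by the LSP specification. As a sketch, this is roughly what a goToDefinition request looks like; the method name and parameter shape follow the spec, while the file URI and position are invented for illustration.

```typescript
// Hedged sketch of the JSON-RPC message behind goToDefinition.
// Method and params shape are from the LSP spec; URI and position are made up.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "textDocument/definition",
  params: {
    textDocument: { uri: "file:///project/src/auth.ts" },
    position: { line: 41, character: 17 }, // zero-based, per the spec
  },
};

console.log(JSON.stringify(request).includes("textDocument/definition")); // → true
```

The server replies with the exact location of the definition, which is why there is nothing to filter afterwards.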

How to enable it

The setup is straightforward. Karan describes it as “two minutes”, and he is right.

1. Enable the flag

Add to ~/.claude/settings.json:

{
  "env": {
    "ENABLE_LSP_TOOL": "1"
  }
}

This flag is not officially documented yet. It was discovered through GitHub Issue #15619.

2. Install the language servers

You need to install the server corresponding to your languages:

  Language        Command
  Python          npm i -g pyright
  TypeScript/JS   npm i -g typescript-language-server typescript
  Go              go install golang.org/x/tools/gopls@latest
  Rust            rustup component add rust-analyzer
  Java            brew install jdtls
  C/C++           brew install llvm

3. Install and enable the plugins

claude plugin marketplace update claude-plugins-official
claude plugin install pyright-lsp
claude plugin enable pyright-lsp

4. Restart Claude Code

On startup, you will see in the debug logs that the language servers initialize. Python, Go, and TypeScript start up in under a second. Java takes about 8 seconds due to the JVM.

My real experience

I work primarily with TypeScript and Go. After enabling LSP, the most noticeable change is in refactoring. When I ask Claude Code to rename a function or move a module, it now finds all call sites reliably. Before, with grep, there was always some stray reference that would show up later as an error in CI.

The other significant change is in exploring code I do not know. When I work in a new repository, Claude Code with LSP can trace call chains and understand the architecture much faster than doing recursive greps. Instead of “search User in all .ts files”, it does findReferences and gets exactly the relevant files, with type context included.

A practical tip

Karan suggests adding instructions to your CLAUDE.md to make Claude Code prioritize LSP over grep. It is good advice. Something like:

Use LSP operations (goToDefinition, findReferences) for code navigation.
Only use grep for text pattern or string searches.

This ensures Claude Code uses the fast path by default instead of falling back to text searches by inertia.

A 900x improvement you can feel

The difference between 30-60 seconds and 50 milliseconds is not an incremental improvement. It is a category change. It is the difference between Claude Code working as an assistant that searches files and working as a developer that understands your code.

It is two minutes of configuration. And once you try it, there is no going back.
