Mozilla AI surprises again: AI agents that work just by opening an HTML file
A few days ago I came across a Mozilla AI project that really caught my attention: WebAssembly Agents. After 30 years of watching the industry complicate life with dependencies, installations, and configurations, seeing something that works just by “opening an HTML file” made me smile.
The problem it solves (and we all know it)
How many times have you tried to test an AI project and encountered this?
- pip install this
- npm install that
- configure this API
- download this model
- install this framework
And after an hour of installing things, it turns out it doesn’t work with your Python version, or the model doesn’t fit on your GPU, or there’s a dependency conflict. Frustrating, right?
Mozilla AI tackles this problem elegantly: AI agents that run directly in the browser, packaged as standalone HTML files. No installations, no complex configuration, no headaches.
The magic behind it: WebAssembly + Pyodide
The technical solution is genuinely elegant: WebAssembly together with Pyodide, to run Python directly in the browser. For those not familiar with them:
- WebAssembly: A binary instruction format that lets browsers execute code compiled from languages like C, C++, and Rust at near-native speed
- Pyodide: A distribution of CPython compiled to WebAssembly, so the Python interpreter itself runs in the browser
The result: you can run complex Python code, including many popular libraries, directly in your browser, without installing anything on your machine.
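The whole “just open an HTML file” trick boils down to a page that pulls in the Pyodide runtime and hands it Python source. A minimal sketch of the pattern (the pinned Pyodide version here is my assumption, not something from the post; check the Pyodide docs for the current release):

```html
<!DOCTYPE html>
<!-- Minimal sketch: a self-contained page that loads Pyodide from a CDN
     and runs Python in the browser. No local installation required. -->
<html>
<head>
  <script src="https://cdn.jsdelivr.net/pyodide/v0.26.1/full/pyodide.js"></script>
</head>
<body>
  <pre id="out">loading Python…</pre>
  <script type="module">
    // loadPyodide() fetches and starts the CPython-on-WebAssembly runtime.
    const pyodide = await loadPyodide();
    // runPythonAsync() evaluates Python and returns its last expression.
    const result = await pyodide.runPythonAsync(`
import sys
f"Hello from Python {sys.version.split()[0]} inside your browser"
    `);
    document.getElementById("out").textContent = result;
  </script>
</body>
</html>
```

Save it as a .html file, double-click it, and Python runs; extra packages can be pulled in at runtime with pyodide.loadPackage(). An agent page is this same skeleton with the agent framework's code in the Python string.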
What I liked about the approach
As a developer who has worked with all kinds of technologies, several things caught my attention:
1. Distribution simplicity: A single HTML file contains both the UI and the agent code. It reminds me of the old days, when web applications were simple and self-contained.
2. Sandboxing by design: Because they run in the browser, agents live in a sandboxed environment by default. This matters especially when working with AI, where security is critical.
3. Multi-model compatibility: Although it works with OpenAI out of the box, you can also use local models through Ollama, vLLM, or Hugging Face TGI. That flexibility seems fundamental to me.
The practical examples
Mozilla AI has included several demos showing the possibilities:
- hello_agent.html: A basic conversational agent
- handoff_demo.html: A multi-agent system that routes requests to specialized agents
- tool_calling.html: An agent with integrated tools (including the famous question “how many Rs are in strawberry?”)
- ollama_local.html: To use local models completely offline
The limitations (because there always are)
Mozilla AI is honest about current limitations, and I appreciate that transparency:
1. Dependency on a specific framework: For now it only works with openai-agents; other frameworks like smolagents have their own limitations under Pyodide.
2. CORS everywhere: As soon as you want your agent to reach external APIs or local models, you’ll run into CORS problems. It’s the price of web security.
3. Heavy models: Not every computer can run large models. On a MacBook M3 you can run qwen3:8b without problems, but on a Raspberry Pi 5 you start to suffer.
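On the CORS point specifically, local model servers usually have to be told to allow the page’s origin. Ollama documents the OLLAMA_ORIGINS environment variable for exactly this (e.g. starting the server with OLLAMA_ORIGINS="*" for local experiments). A hedged sketch of a page talking to a local Ollama, assuming the default port 11434 and the qwen3:8b model the post mentions:

```html
<!DOCTYPE html>
<!-- Sketch: a standalone page calling a local Ollama server.
     Ollama must be launched with CORS enabled for this origin, e.g.:
       OLLAMA_ORIGINS="*" ollama serve
     The model name and default port 11434 are assumptions. -->
<html>
<body>
  <pre id="answer">thinking…</pre>
  <script type="module">
    // /api/generate is Ollama's completion endpoint; with stream set to
    // false it returns a single JSON object instead of a token stream.
    const res = await fetch("http://localhost:11434/api/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "qwen3:8b",
        prompt: "In one sentence: what is WebAssembly?",
        stream: false,
      }),
    });
    const data = await res.json();
    document.getElementById("answer").textContent = data.response;
  </script>
</body>
</html>
```

Without the OLLAMA_ORIGINS setting, the browser blocks the request before it ever reaches the model, which is the wall you hit first when wiring these agents to anything local.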
My experience testing it
I’ve been playing with the examples and the experience is surprisingly good. The setup is as simple as:
- Download the HTML files
- Configure your API key (OpenAI or local)
- Open the HTML in the browser
- That’s it, it’s working!
The speed is respectable, although obviously not as fast as running native Python. But for prototyping and experimentation, it’s more than enough.
Reflections from a veteran developer
This project has made me reflect on several things:
Back to basics: It reminds me of when web development was simpler, with a single HTML file that contained everything you needed. Sometimes simplicity is the best feature.
The power of the modern browser: It’s impressive what current browsers can do. Running complex AI models directly on the client would have been science fiction 10 years ago.
Barrier-free experimentation: This completely eliminates the entry barriers to experimenting with AI agents. It’s exactly what we need so that more people can try things and learn.
Is it the future?
As the author of the Mozilla AI post says: “I don’t know if WebAssembly agents are a great idea or just a fun hack.” But to me, they seem to touch on something important.
In my work philosophy, I’ve always said that “for every minute of planning, 2 minutes less of development”. Well, this project completely eliminates setup and configuration time. You can go straight to the point: experimenting with your agent.
Conclusion
WebAssembly Agents is one of those projects that makes you think “why didn’t I think of it before?”. The idea of packaging AI agents as standalone HTML files is brilliant in its simplicity.
Is it perfect? No. Is it useful? Absolutely. Is it worth experimenting with? Without a doubt.
As I always say: errors and problems always happen, and you have to take that into account. But this project minimizes many of the typical problems of AI development, and that alone is a great advance.
I recommend you check out the GitHub repository and try the examples. It’s one of those projects that makes you feel technology can be simple and powerful at the same time.
Have you tried WebAssembly Agents? What do you think about the idea of running AI directly in the browser? I’d love to hear your experiences.