
WebLLM Roadmap

WebLLM is on a journey to make AI a native part of the web platform. This roadmap outlines our current status, near-term goals, and long-term vision.

Status: Available Now

We’re currently in the Chrome Extension phase, where WebLLM works through a browser extension that provides the navigator.llm API to web pages. The extension manages providers, handles permissions, and provides a unified API for AI capabilities.

We’re focused on expanding the capabilities and reach of the extension while continuing to validate the API design.
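
For orientation, here is a minimal sketch of what calling the API from a page might look like. The method names and shapes used here (createSession, prompt, the model option) are illustrative assumptions for this example, not the published interface; see the repository for the actual API.

```ts
// Illustrative sketch only: createSession/prompt and the options shape are
// assumptions for this example, not the actual navigator.llm interface.
export {};

declare global {
  interface Navigator {
    llm?: {
      createSession(options?: { model?: string }): Promise<{
        prompt(text: string): Promise<string>;
      }>;
    };
  }
}

async function summarize(text: string): Promise<string> {
  // Feature-detect: the property only exists when the extension is installed.
  const llm = navigator.llm;
  if (!llm) {
    throw new Error("WebLLM extension not available");
  }
  // The extension picks a provider and handles permission prompts for us.
  const session = await llm.createSession({ model: "default" });
  return session.prompt(`Summarize in one sentence:\n${text}`);
}
```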

We’re exploring support for additional browsers beyond Chrome. Firefox is the natural next step, followed by other Chromium-based browsers (Edge, Brave). Safari presents unique technical challenges that we’re investigating.

Core improvements:

  • Streaming responses for real-time generation (a possible API shape is sketched after this list)
  • Vision support for image analysis
  • Multi-modal model handling
  • Cost optimization and smart provider routing
  • Conversation memory and context management
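
Streaming is still on the roadmap, so its final shape is undecided. One plausible design, sketched below purely as an assumption, is an async-iterable response that lets a page render tokens as they arrive; the promptStreaming name is hypothetical.

```ts
// Hypothetical streaming shape: assumes a future promptStreaming() method that
// yields text chunks as an AsyncIterable. Not part of the current API.
interface StreamingSession {
  promptStreaming(text: string): AsyncIterable<string>;
}

async function renderStream(
  session: StreamingSession,
  prompt: string,
  el: HTMLElement,
): Promise<void> {
  let text = "";
  // Append each chunk as it arrives so the user sees partial output immediately.
  for await (const chunk of session.promptStreaming(prompt)) {
    text += chunk;
    el.textContent = text;
  }
}
```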

Developer tooling:

  • Framework-specific integrations (React hooks, Vue composables, Svelte stores), with a hypothetical hook sketched after this list
  • Testing utilities and mock providers
  • Enhanced debugging tools
  • Better documentation and examples
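
As an example of what a framework-specific integration could look like, here is a hypothetical React hook that wraps the session shape assumed in the earlier sketch. The hook name, state shape, and error handling are illustrative, not part of any published WebLLM package.

```ts
import { useCallback, useState } from "react";

// Hypothetical hook: assumes the createSession/prompt shape sketched earlier.
// Nothing here is a published WebLLM API; it only shows the integration style.
export function useLLMPrompt() {
  const [output, setOutput] = useState<string | null>(null);
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState<Error | null>(null);

  const run = useCallback(async (prompt: string) => {
    setLoading(true);
    setError(null);
    try {
      const llm = (navigator as any).llm;
      if (!llm) throw new Error("WebLLM extension not available");
      const session = await llm.createSession();
      setOutput(await session.prompt(prompt));
    } catch (e) {
      setError(e as Error);
    } finally {
      setLoading(false);
    }
  }, []);

  return { output, loading, error, run };
}
```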

As the extension matures and gains adoption, we plan to engage with the W3C standardization process. This involves:

  • Drafting a complete technical specification
  • Security and privacy analysis
  • Presenting to the Web Incubator Community Group
  • Gathering feedback from browser vendors
  • Iterating based on real-world usage and vendor input

Web standards take time and require demonstrated demand, technical maturity, and vendor buy-in. The extension serves as a proving ground for the API design.

See: Platform Support Roadmap

Exploring support beyond web browsers:

  • Desktop application integration (Electron, Tauri)
  • Mobile browser support
  • Native mobile SDKs

The ultimate goal is for navigator.llm to be built directly into browsers, without requiring an extension. This would mean:

For users:

  • Seamless integration without installation
  • System-level key and model management
  • Better performance and privacy controls

For developers:

  • Universal API that works everywhere
  • Wider adoption and reach
  • Simplified integration

For the web:

  • AI as a standard web capability, like WebGL or WebRTC
  • Open, interoperable, vendor-neutral

Each browser vendor would implement the standard according to their architecture and priorities. The path to native integration depends on demonstrating value, achieving technical maturity, and earning vendor support.

Looking further ahead, standardized browser AI could enable:

  • Agentic systems with multi-step reasoning
  • Privacy-preserving personalization and fine-tuning
  • Collaborative learning approaches
  • Advanced multi-modal capabilities

These remain research areas, dependent on both technical feasibility and careful consideration of privacy and security implications.

How to get involved:

  • Use WebLLM - Install and try it
  • Request support - Ask websites to add WebLLM
  • Provide feedback - Tell us what works and what doesn’t
  • Spread the word - Share with friends and communities
  • Build with WebLLM - Add AI to your projects
  • Share your experience - Write articles, tutorials
  • Contribute code - Submit PRs to improve WebLLM
  • Report bugs - Help us improve quality
  • Adopt WebLLM - Use it in your products
  • Sponsor development - Support the project financially
  • Contribute resources - Engineering time, infrastructure
  • Advocate - Talk to browser vendors
  • Engage with the spec - Review and provide feedback
  • Experiment - Implement origin trials
  • Commit - Help make WebLLM a standard

How we make decisions:

Users always have final say over:

  • Which providers to use
  • Where data goes
  • What gets shared

Privacy is not optional:

  • Local-first when possible
  • Transparent data flows
  • No tracking or telemetry

Make it easy to build:

  • Simple, intuitive API
  • Great documentation
  • Helpful error messages

No lock-in:

  • Open specification
  • Open source implementation
  • Community-driven development

Don’t break existing code:

  • Stable API guarantees
  • Clear migration paths
  • Versioned specifications

Follow our progress on GitHub: iplanwebsites/webllm

This roadmap is not set in stone. We adapt it based on community feedback, technical realities, and vendor input; it is a living document that reflects our current thinking.

We can’t put a date on standardization. Web standards require demonstrated value, technical maturity, and multi-vendor support, and the timeline depends on many factors outside our control. The extension provides a practical path forward regardless.

The roadmap is open to your input: we prioritize based on user feedback, developer adoption, and technical feasibility. Share your thoughts on GitHub Discussions.


Together, we’re making AI a native part of the web. Join us!