
Interactive Demo

Try WebLLM directly in your browser with two types of demos:

  • Quick Tests: Streamlined examples showing exactly what’s being sent
  • Live Examples: Full interactive demos with request summaries and source code

Test WebLLM with these streamlined examples. Each shows exactly what's being sent:

"Write a haiku about coding"

```javascript
await promptInstall();
const result = await generateText({
  prompt: 'Write a haiku about coding'
});
console.log(result.text);
```

"Write a short story about a robot learning to paint" (task: creative, quality: high)

```javascript
await promptInstall();
const result = await generateText({
  prompt: 'Write a short story about a robot learning to paint',
  task: 'creative',
  hints: { quality: 'high' }
});
console.log(result.text);
```

"What is the capital of France?" (task: qa, speed: fastest)

```javascript
await promptInstall();
const result = await generateText({
  prompt: 'What is the capital of France?',
  task: 'qa',
  hints: { speed: 'fastest' }
});
console.log(result.text);
```

"Write a JavaScript function that checks if a string is a palindrome" (task: coding, quality: best, capabilities: codeGeneration)

```javascript
await promptInstall();
const result = await generateText({
  prompt: 'Write a JavaScript function that checks if a string is a palindrome',
  task: 'coding',
  hints: {
    quality: 'best',
    capabilities: { codeGeneration: true }
  }
});
console.log(result.text);
```

Multi-turn conversation (3 messages, task: qa)

```javascript
await promptInstall();
const result = await generateText({
  messages: [
    // 3 messages in conversation
  ],
  task: 'qa'
});
console.log(result.text);
```

Generate a Haiku

The simplest WebLLM example - generate creative text from a prompt

```javascript
// Ensure extension is installed
await promptInstall();

// Generate a haiku
const result = await generateText({
  prompt: 'Write a haiku about coding'
});

// Display the result
console.log(result.text);
```

Check WebLLM Availability

Detect if the extension is installed and check browser compatibility

```javascript
// Check if extension is available
const available = isAvailable();
console.log('Extension installed:', available);

// Get browser information
const browserInfo = getBrowserInfo();
console.log('Browser:', browserInfo.browserName);
console.log('Supported:', browserInfo.isSupported);

if (browserInfo.isSupported && browserInfo.installUrl) {
  console.log('Install URL:', browserInfo.installUrl);
}

if (!browserInfo.isSupported && browserInfo.reason) {
  console.log('Not supported:', browserInfo.reason);
}
```
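The fields checked above can be folded into a single status string. A small helper along these lines (our own sketch, not part of the WebLLM API; it only assumes the object shape shown in the example) works on any value shaped like getBrowserInfo()'s return:

```javascript
// Summarize a browserInfo-shaped object into one human-readable message.
// The shape (isSupported, browserName, installUrl, reason) is assumed
// from the example above.
function describeSupport(browserInfo) {
  if (!browserInfo.isSupported) {
    return 'Not supported' + (browserInfo.reason ? ': ' + browserInfo.reason : '');
  }
  if (browserInfo.installUrl) {
    return 'Supported on ' + browserInfo.browserName +
      ' - install from ' + browserInfo.installUrl;
  }
  return 'Supported on ' + browserInfo.browserName;
}
```

This keeps the branching logic in one place, so UI code can just render the returned string.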

Translation Task

Use task hints to guide provider selection for translation

```javascript
// Ensure extension is ready
await promptInstall();

// Generate translation with task hints
const result = await generateText({
  task: 'translation',
  hints: {
    quality: 'high',
    capabilities: { multilingual: true }
  },
  prompt: 'Translate to Spanish: Hello, how are you today?'
});

console.log('Translation:', result.text);
```

Quick Question & Answer

Fast factual responses optimized for speed

```javascript
await promptInstall();

// Quick Q&A with speed optimization
const result = await generateText({
  task: 'qa',
  hints: {
    speed: 'fastest',
    quality: 'standard'
  },
  prompt: 'What is the capital of France?'
});

console.log('Answer:', result.text);
console.log('Tokens used:', result.usage?.totalTokens ?? 'unknown');
```

Generate Code

Generate code with reasoning capabilities

```javascript
await promptInstall();

// Code generation with quality hints
const result = await generateText({
  task: 'coding',
  hints: {
    quality: 'best',
    capabilities: {
      codeGeneration: true,
      reasoning: true
    }
  },
  prompt: 'Write a JavaScript function that checks if a string is a palindrome'
});

console.log('Generated code:');
console.log(result.text);
```
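For reference, a hand-written version of the function this prompt asks for might look like the following (our own implementation, shown only so you can compare it against whatever the model returns; outputs will vary):

```javascript
// Check whether a string reads the same forwards and backwards,
// ignoring case and non-alphanumeric characters.
function isPalindrome(str) {
  const cleaned = str.toLowerCase().replace(/[^a-z0-9]/g, '');
  return cleaned === [...cleaned].reverse().join('');
}
```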

Multi-turn Conversation

Maintain context across multiple messages

```javascript
await promptInstall();

// Multi-turn conversation
const result = await generateText({
  messages: [
    { role: 'user', content: 'What is TypeScript?' },
    { role: 'assistant', content: 'TypeScript is a typed superset of JavaScript that compiles to plain JavaScript.' },
    { role: 'user', content: 'What are its main benefits?' }
  ]
});

console.log('AI Response:');
console.log(result.text);
```

Below are full HTML examples you can copy and run in your own projects.

The most basic WebLLM example - generate text from a prompt.

```html
<!DOCTYPE html>
<html>
  <head>
    <title>WebLLM Simple Demo</title>
  </head>
  <body>
    <h1>Simple Text Generation</h1>
    <button id="generate">Generate Haiku</button>
    <pre id="result"></pre>

    <script type="module">
      import { promptInstall, generateText } from 'webllm';

      document.getElementById('generate').onclick = async () => {
        try {
          // Ensure extension is installed
          await promptInstall();

          // Generate text
          const result = await generateText({
            prompt: 'Write a haiku about coding'
          });

          document.getElementById('result').textContent = result.text;
        } catch (error) {
          document.getElementById('result').textContent = 'Error: ' + error.message;
        }
      };
    </script>
  </body>
</html>
```

What this does:

  • Checks if the WebLLM extension is installed (prompts user to install if needed)
  • Generates a creative haiku when the button is clicked
  • Displays the result or any errors

Build a simple chat interface with conversation history.

```html
<!DOCTYPE html>
<html>
  <head>
    <title>WebLLM Chat Demo</title>
    <style>
      #messages { max-height: 400px; overflow-y: auto; border: 1px solid #ccc; padding: 10px; margin: 10px 0; }
      .message { margin: 5px 0; padding: 8px; border-radius: 4px; }
      .user { background: #e3f2fd; }
      .assistant { background: #f5f5f5; }
    </style>
  </head>
  <body>
    <h1>Chat with AI</h1>
    <div id="messages"></div>
    <input type="text" id="input" placeholder="Type your message..." style="width: 300px;">
    <button id="send">Send</button>

    <script type="module">
      import { promptInstall, generateText } from 'webllm';

      const messages = [];
      let ready = false;

      // Initialize extension
      (async () => {
        try {
          await promptInstall();
          ready = true;
        } catch (error) {
          addMessage('system', 'Failed to initialize: ' + error.message);
        }
      })();

      function addMessage(role, content) {
        const div = document.createElement('div');
        div.className = `message ${role}`;
        div.textContent = `${role}: ${content}`;
        document.getElementById('messages').appendChild(div);
      }

      document.getElementById('send').onclick = async () => {
        if (!ready) {
          alert('WebLLM is not ready yet');
          return;
        }

        const input = document.getElementById('input');
        const userMessage = input.value.trim();
        if (!userMessage) return;

        // Add user message
        addMessage('user', userMessage);
        messages.push({ role: 'user', content: userMessage });
        input.value = '';

        try {
          // Generate response
          const result = await generateText({
            messages: messages
          });

          // Add assistant message
          addMessage('assistant', result.text);
          messages.push({ role: 'assistant', content: result.text });
        } catch (error) {
          addMessage('system', 'Error: ' + error.message);
        }
      };

      // Send on Enter key
      document.getElementById('input').onkeypress = (e) => {
        if (e.key === 'Enter') document.getElementById('send').click();
      };
    </script>
  </body>
</html>
```

What this does:

  • Creates a chat interface that maintains conversation history
  • Sends the full conversation context with each request
  • Allows pressing Enter to send messages
  • Handles errors gracefully
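Because the full messages array is resent with every request, long chats grow without bound. One simple option is to cap the history before sending it; the helper below is our own sketch, not a WebLLM feature:

```javascript
// Keep only the most recent messages, preserving order.
// maxMessages counts individual messages, not user/assistant pairs.
function trimHistory(messages, maxMessages = 10) {
  if (messages.length <= maxMessages) return messages;
  return messages.slice(messages.length - maxMessages);
}
```

Usage inside the click handler would then be `generateText({ messages: trimHistory(messages) })`. Note that truncating this way drops the oldest context, so very long conversations may lose earlier facts.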

WebLLM in a React application with hooks.

```jsx
import { useState } from 'react';
import { promptInstall, generateText, isAvailable } from 'webllm';

function App() {
  const [input, setInput] = useState('');
  const [output, setOutput] = useState('');
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState(null);

  async function handleGenerate() {
    setLoading(true);
    setError(null);
    try {
      // Prompt for installation if needed
      if (!isAvailable()) {
        await promptInstall();
      }

      // Generate text
      const result = await generateText({
        task: 'creative',
        hints: { quality: 'high' },
        prompt: input
      });

      setOutput(result.text);
      console.log('Tokens used:', result.usage?.totalTokens);
    } catch (err) {
      setError(err.message);
    } finally {
      setLoading(false);
    }
  }

  return (
    <div className="App">
      <h1>WebLLM React Demo</h1>
      <textarea
        value={input}
        onChange={(e) => setInput(e.target.value)}
        placeholder="Enter your prompt..."
        rows={4}
        style={{ width: '100%', marginBottom: '10px' }}
      />
      <button
        onClick={handleGenerate}
        disabled={loading || !input}
        style={{ padding: '10px 20px' }}
      >
        {loading ? 'Generating...' : 'Generate'}
      </button>
      {error && (
        <div style={{ color: 'red', marginTop: '10px' }}>
          Error: {error}
        </div>
      )}
      {output && (
        <div style={{ marginTop: '20px' }}>
          <h3>Result:</h3>
          <pre style={{ background: '#f5f5f5', padding: '15px', borderRadius: '4px' }}>
            {output}
          </pre>
        </div>
      )}
    </div>
  );
}

export default App;
```

Installation for React:

```shell
npm install webllm
```

Option 1: Using a Module Bundler (Recommended)

Setup:

```shell
# Create new project
npm create vite@latest my-webllm-demo -- --template vanilla

# Install WebLLM
cd my-webllm-demo
npm install webllm

# Or install with Vercel AI SDK
npm install webllm webllm-ai-provider ai
```

Then copy any example code into your project and run:

```shell
npm run dev
```

Option 2: Using Import Maps (No Build Step)


For simple HTML files, use import maps:

```html
<!DOCTYPE html>
<html>
  <head>
    <script type="importmap">
      {
        "imports": {
          "webllm": "https://esm.sh/webllm@latest",
          "ai": "https://esm.sh/ai@latest",
          "webllm-ai-provider": "https://esm.sh/webllm-ai-provider@latest"
        }
      }
    </script>
  </head>
  <body>
    <!-- Your example code here -->
    <script type="module">
      import { generateText } from 'webllm';
      // ... rest of your code
    </script>
  </body>
</html>
```

You can also run these examples on online code editors:

  1. Open CodePen or JSFiddle
  2. Add import map in HTML settings
  3. Copy the example code
  4. Make sure you have the WebLLM extension installed in your browser

If promptInstall() shows an error:

  • Make sure you’re using Chrome, Edge, or another Chromium browser
  • Check that you don’t already have the extension installed
  • Try refreshing the page after installation

If you get an error saying no providers are configured:

  1. Click the WebLLM extension icon in your browser toolbar
  2. Go to the “Providers” tab
  3. Add at least one provider (Anthropic, OpenAI, or local model)
  4. Add your API key or download a local model

If you see module import errors:

  • Make sure you’ve installed webllm: npm install webllm
  • Check that your bundler supports ES modules
  • For plain HTML, use import maps (see Option 2 above)

If the “Run Demo” buttons don’t work:

  • Make sure JavaScript is enabled in your browser
  • Check the browser console for any errors
  • Ensure you have the WebLLM extension installed
  • Try refreshing the page