Generate images, chat with LLMs, train ML models, and visualize data — all running locally in your browser with WebGPU. Open source and completely private.
Scribbler runs AI models directly in your browser using WebGPU. No servers to manage, no APIs to pay for, no data leaving your device.
All AI runs on your device. Your data never leaves the browser — no server, no tracking.
No backend, no install, no npm, no Python. Open a URL and start running AI instantly.
Leverages WebGPU for near-native performance on LLMs, image generation, and ML inference.
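As a quick sanity check, a notebook cell can probe for WebGPU before loading GPU-backed models. This is a minimal sketch using the standard `navigator.gpu` API; it is not part of Scribbler itself:

```javascript
// Hedged sketch: detect WebGPU support before loading GPU-backed models.
// Uses only the standard navigator.gpu API (illustrative, not Scribbler's API).
async function hasWebGPU() {
  if (typeof navigator === "undefined" || !navigator.gpu) return false;
  try {
    // requestAdapter() resolves to null when no suitable GPU is available.
    const adapter = await navigator.gpu.requestAdapter();
    return adapter !== null;
  } catch {
    return false;
  }
}
```

In browsers without WebGPU (or outside a browser entirely) the function resolves to `false`, so a notebook can fall back to a WASM backend instead of failing at model load time.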
Dynamically import TensorFlow.js, ONNX Runtime, Transformers.js, Plotly, and more from CDNs.
Save notebooks as .jsnb files, share via URL, or push directly to GitHub.
Mix JavaScript, HTML, CSS, and Markdown in live cells. See AI output as you code.
WebGPU and JavaScript are unlocking a new era of on-device AI — accessible to everyone, everywhere.
No Python. No backend. No GPU setup. Scribbler runs entirely in your browser — everything stays on your device.
| | Scribbler | Google Colab | Backend / Server | Cloud APIs |
|---|---|---|---|---|
| Language | JavaScript | Python | Python / Node / etc. | Any |
| Runs On | Your browser | Google servers | Your server / cloud VM | Provider's cloud |
| Setup Time | None | Google login | Install + configure | API keys + billing |
| GPU Required | WebGPU auto | Runtime allocation | CUDA / drivers | Provider-managed |
| Data Privacy | Never leaves device | Sent to Google | On your infra | Sent to provider |
| Cost | Free forever | Free tier + paid GPU | Server costs | Per-request billing |
| Works Offline | Yes | No | Depends on setup | No |
Run Stable Diffusion, LLM chat, and text-to-speech directly on your device using WebNN and ONNX Runtime Web. No downloads, no cloud, no API keys — your browser's GPU does all the work.
From generating images to running LLMs to crunching data — all in the browser with no infrastructure.
See what others are building:

- Run Stable Diffusion and other diffusion models directly in the browser via WebGPU.
- Chat with Llama, Phi, Gemma, and other LLMs locally using WebLLM — fully private.
- Analyze datasets and create interactive charts with Plotly, D3, and built-in tools.
No login, no download, no subscription. Just open the app and run LLMs, generate images, or visualize data — instantly.
Get started in seconds. Load AI models, write code, and see results — all in interactive notebook cells.
Load an LLM in one line:
await scrib.loadWebLLM("Llama-3.1-8B-q4f16")
Plot a chart from data:
range(0,10,0.01).map(Math.sin).plot()
Show any output inline:
scrib.show("Hello World")
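Under the hood, the plotting one-liner above just maps an array of x values to y values. A plain-JavaScript sketch of the same data pipeline, where `range` is an illustrative stand-in for Scribbler's notebook helper:

```javascript
// Hedged sketch: the data pipeline behind range(0, 10, 0.01).map(Math.sin).plot().
// range() here is an illustrative stand-in, not Scribbler's actual helper.
function range(start, end, step) {
  const out = [];
  for (let x = start; x < end; x += step) out.push(x);
  return out;
}

const xs = range(0, 10, 0.01); // ~1000 sample points
const ys = xs.map(Math.sin);   // y = sin(x) at each point
// In a notebook cell, .plot() would render xs/ys as an interactive line chart.
```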
Browse 50+ AI and data examples in the Gallery, or explore the examples on GitHub. Each notebook can be opened instantly in the app via URL.
Dynamically import libraries like TensorFlow.js, Transformers.js, WebLLM, ONNX Runtime, Plotly, and D3 from CDNs — no bundler needed. Libraries load on demand and stay cached.
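A minimal sketch of the on-demand pattern, assuming jsDelivr's `/+esm` ES-module endpoints; the helper names are illustrative, not Scribbler's API:

```javascript
// Resolve a package name to a jsDelivr ESM URL (the /+esm suffix serves
// a browser-ready ES module build of the package).
function cdnUrl(pkg, version) {
  return `https://cdn.jsdelivr.net/npm/${pkg}@${version}/+esm`;
}

// Dynamic import() fetches the module the first time it is needed;
// later calls hit the browser's module cache.
async function loadFromCdn(pkg, version) {
  return import(cdnUrl(pkg, version));
}

// Example (in a notebook cell):
// const Plotly = await loadFromCdn("plotly.js-dist-min", "2.35.2");
```

Because `import()` is native to the browser, no bundler or build step is involved; the module graph is resolved at runtime.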
Save notebooks as .jsnb files and share via URL — anyone can open them instantly. Push to and pull from GitHub directly. Export notebooks or just the output as HTML.
Scribbler is pure static files — host it on any web server, S3 bucket, or GitHub Pages. No backend process, no database, no containers. Perfect for air-gapped or enterprise environments.