Open Source & 100% In-Browser

Run & Share
AI in the Browser. No Server. No Install. No Login.

Generate images, chat with LLMs, train ML models, and visualize data — all running locally in your browser with WebGPU. Open source and completely private.

Image Generation · LLMs in Browser · Machine Learning · Data Visualization
Live Chat Demo

Live chat widget at app.scribbler.live: ask "Explain what AI is in simple terms" and Scribbler AI answers on-device. Running locally in your browser, no server, no API.
Why Scribbler?

AI Without the Infrastructure

Scribbler runs AI models directly in your browser using WebGPU. No servers to manage, no APIs to pay for, no data leaving your device.

100% Private

All AI runs on your device. Your data never leaves the browser — no server, no tracking.

Zero Setup

No backend, no install, no npm, no Python. Open a URL and start running AI instantly.

WebGPU Accelerated

Leverages WebGPU for near-native performance on LLMs, image generation, and ML inference.

Load Any Library

Dynamically import TensorFlow.js, ONNX Runtime, Transformers.js, Plotly, and more from CDNs.

Share & Collaborate

Save notebooks as .jsnb files, share via URL, or push directly to GitHub.

Interactive Notebooks

Mix JavaScript, HTML, CSS, and Markdown in live cells. See AI output as you code.

AI Meets the Browser

WebGPU and JavaScript are unlocking a new era of on-device AI — accessible to everyone, everywhere.

100% Client-Side · 0 Servers Required · 50+ AI Examples · Seconds to First Output

How It's Different

Not Another Cloud Notebook

No Python. No backend. No GPU setup. Scribbler runs entirely in your browser — everything stays on your device.

No Python Required · No Backend Needed · No GPU Setup · Runs Locally
                Scribbler             Google Colab           Backend / Server          Cloud APIs
Language        JavaScript            Python                 Python / Node / etc.      Any
Runs On         Your browser          Google servers         Your server / cloud VM    Provider's cloud
Setup Time      None                  Google login           Install + configure       API keys + billing
GPU Setup       WebGPU (automatic)    Runtime allocation     CUDA / drivers            Provider-managed
Data Privacy    Never leaves device   Sent to Google         On your infra             Sent to provider
Cost            Free forever          Free tier + paid GPU   Server costs              Per-request billing
Works Offline   Yes                   No                     Depends on your setup     No
Live Demo

WebNN & ONNX
Right in Your Browser

Run Stable Diffusion, LLM chat, and text-to-speech directly on your device using WebNN and ONNX Runtime Web. No downloads, no cloud, no API keys — your browser's GPU does all the work.

  • Image Generation — Stable Diffusion via WebNN + ONNX Runtime
  • LLM Chat — Converse with language models on-device
  • Text to Speech — Kokoro TTS running entirely client-side
scribbler.live/webnn-sample
What Can You Build?

Use Cases

From generating images to running LLMs to crunching data — all in the browser with no infrastructure.

See what others are building

Image Generation

Run Stable Diffusion and other diffusion models directly in the browser via WebGPU.

Try It

Highlights

  • Text-to-image generation on-device.
  • No API keys or cloud costs.
  • Experiment with prompts interactively.
  • Share generated images and notebooks.

LLMs in Browser

Chat with Llama, Phi, Gemma, and other LLMs locally using WebLLM — fully private.

Try It

Highlights

  • Run open-source LLMs on-device.
  • Build chat UIs and AI agents.
  • Text summarization and extraction.
  • Zero cost, zero latency to cloud.

Machine Learning

Train and run ML models with TensorFlow.js, Brain.js, and ONNX Runtime Web.

Try It

Highlights

  • Train neural networks in the browser.
  • Run pre-trained model inference.
  • Classification, regression, clustering.
  • Visualize training loss and metrics.
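
Under the hood, "training in the browser" is just an optimization loop over plain arrays. A library-free sketch, fitting y = 2x by gradient descent on mean squared error (TensorFlow.js and similar libraries automate exactly this loop, with tensors and autodiff):

```javascript
// Fit w in y = w * x to data generated with w = 2, using gradient descent
// on mean squared error. This is the loop that ML libraries automate.
const xsTrain = [0, 1, 2, 3, 4];
const ysTrain = xsTrain.map(x => 2 * x);

let w = 0;        // initial guess
const lr = 0.02;  // learning rate
for (let step = 0; step < 200; step++) {
  // dMSE/dw = (2/n) * sum_i (w * x_i - y_i) * x_i
  const grad = (2 / xsTrain.length) *
    xsTrain.reduce((s, x, i) => s + (w * x - ysTrain[i]) * x, 0);
  w -= lr * grad;
}
console.log(w.toFixed(3)); // converges to 2.000
```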

Data Analysis & Visualization

Analyze datasets and create interactive charts with Plotly, D3, and built-in tools.

Try It

Highlights

  • Interactive Plotly and D3 charts.
  • Load CSV, JSON, and API data.
  • Statistical analysis and transforms.
  • Export visualizations as HTML.
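
Loading tabular data in a cell usually means a fetch() plus a small parser. A minimal sketch (naive on purpose: no quoted fields, no type coercion):

```javascript
// Minimal CSV-to-objects parser, the kind of helper you might write in a
// notebook cell before charting the result. Ignores quoted fields.
function parseCSV(text) {
  const [header, ...rows] = text.trim().split("\n");
  const cols = header.split(",");
  return rows.map(line => {
    const vals = line.split(",");
    return Object.fromEntries(cols.map((c, i) => [c, vals[i]]));
  });
}

const data = parseCSV("city,temp\nParis,21\nOslo,14");
console.log(data); // [{ city: "Paris", temp: "21" }, { city: "Oslo", temp: "14" }]
```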

Start running AI in your browser now.

No login, no download, no subscription. Just open the app and run LLMs, generate images, or visualize data — instantly.

For enterprise use and partnerships, reach out to us.

How It Works

Get started in seconds. Load AI models, write code, and see results — all in interactive notebook cells.

Getting Started

  • No installation needed. Open app.scribbler.live and start immediately.
  • Or download / clone the GitHub repo to self-host.
  • No npm, no Python, no Docker. Just a browser with WebGPU support.

Quick Examples

Load an LLM in one line:

await scrib.loadWebLLM("Llama-3.1-8B-q4f16")

Plot a chart from data:

range(0,10,0.01).map(Math.sin).plot()

Show any output inline:

scrib.show("Hello World")
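
The plotting one-liner above leans on Scribbler's built-in array helpers: range and .plot() are Scribbler extensions, not standard JavaScript. The equivalent data preparation in plain JavaScript looks like this:

```javascript
// Equivalent of range(0, 10, 0.01).map(Math.sin) in plain JavaScript:
// 1000 samples of sin(x) for x in [0, 10) with step 0.01.
const step = 0.01;
const xs = Array.from({ length: 1000 }, (_, i) => i * step);
const ys = xs.map(Math.sin);
// A charting library (e.g. Plotly loaded from a CDN) would then take
// { x: xs, y: ys } as a trace; .plot() wraps that step in Scribbler.
console.log(ys.length); // 1000
```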

Example Notebooks

Browse 50+ AI and data examples in the Gallery, or explore the examples on GitHub. Each notebook can be opened instantly in the app via URL.

Interactive Cells

  • Each notebook is made of code and doc cells. Code cells run JavaScript; doc cells render HTML/Markdown.
  • AI model outputs — generated images, chat responses, charts — render inline below the cell.
  • Press ▶ or Cmd/Ctrl-Enter to execute. Rearrange, add, or delete cells freely.
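
A code cell is, at its core, a string of JavaScript evaluated on demand, with the result captured for inline display. A toy sketch of that idea (illustrative and expression-only; the real engine also handles statements, async code, errors, and DOM output):

```javascript
// Toy model of a code cell: evaluate a JavaScript expression string and
// return its value so a notebook UI could render it below the cell.
function runCell(source) {
  return new Function(`"use strict"; return (${source});`)();
}

console.log(runCell("1 + 2"));                  // 3
console.log(runCell("[1, 2].map(x => x * 2)")); // [ 2, 4 ]
```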

Load Any AI Library

Dynamically import libraries like TensorFlow.js, Transformers.js, WebLLM, ONNX Runtime, Plotly, and D3 from CDNs — no bundler needed. Libraries load on demand and stay cached.
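
On-demand loading with caching can be sketched as a memoized dynamic import(); the CDN URL in the comment is illustrative, and the real app may resolve libraries differently:

```javascript
// On-demand library loading with caching: the first call for a URL kicks off
// a dynamic import(); later calls reuse the same in-flight module promise.
const moduleCache = new Map();
function loadLib(url) {
  if (!moduleCache.has(url)) moduleCache.set(url, import(url));
  return moduleCache.get(url);
}

// In a notebook cell the URL would be a CDN ESM build, e.g. (illustrative):
//   const Plotly = await loadLib("https://cdn.jsdelivr.net/npm/plotly.js-dist-min/+esm");
```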

Share & Collaborate

Save notebooks as .jsnb files and share via URL — anyone can open them instantly. Push to and pull from GitHub directly. Export notebooks or just the output as HTML.
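
A .jsnb file is plain JSON, which is why it travels so easily as a file or URL. The shape below is illustrative only (the field names are assumptions; the authoritative schema lives in the Scribbler GitHub repo):

```javascript
// Illustrative only: field names here are assumptions, not the official
// .jsnb schema. The point is that a whole notebook is one JSON document.
const notebook = {
  name: "sine-demo",
  cells: [
    { type: "code", content: "range(0,10,0.01).map(Math.sin).plot()" },
    { type: "doc", content: "# Sine wave\nDoc cells render Markdown/HTML." },
  ],
};

const serialized = JSON.stringify(notebook, null, 2); // what gets saved/shared
const roundTrip = JSON.parse(serialized);             // what gets loaded
console.log(roundTrip.cells.length); // 2
```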

Self-Host Anywhere

Scribbler is pure static files — host it on any web server, S3 bucket, or GitHub Pages. No backend process, no database, no containers. Perfect for air-gapped or enterprise environments.