Try It Now
Load an AI model and run inference right in your browser
WebGPU is required for model inference. Please use Chrome, Edge, or another Chromium-based browser.
Select the task type that matches your model
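The WebGPU requirement above can be checked up front before any model is fetched. A minimal sketch (the helper name `webgpuAvailable` is our own; in browsers, WebGPU support is exposed as `navigator.gpu`):

```typescript
// Minimal WebGPU feature check. Browsers that support WebGPU expose it as
// `navigator.gpu`; Chromium-based browsers (Chrome, Edge) ship it enabled.
// The parameter defaults to the real `navigator` but can be injected for tests.
export function webgpuAvailable(
  nav: { gpu?: unknown } | undefined = (globalThis as any).navigator
): boolean {
  // Optional chaining keeps this safe in environments without `navigator`.
  return typeof nav?.gpu !== "undefined";
}

// Typical use: gate the model loader and show the browser notice otherwise.
if (webgpuAvailable()) {
  console.log("WebGPU available: model inference can run on the GPU.");
} else {
  console.log("WebGPU missing: use Chrome, Edge, or another Chromium-based browser.");
}
```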
How It Works
Choose a Model
Load a model from any URL or upload your own ONNX files
Browser Downloads
Models download directly to your browser's cache
GPU Acceleration
WebGPU uses your graphics card for fast inference
100% Private
Everything runs locally; no data is sent to any server
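The four steps above can be sketched in code. This assumes the Transformers.js runtime (`@huggingface/transformers`) and an illustrative model name; the page may use a different loader, so treat this as a sketch of the flow rather than the site's actual implementation:

```typescript
// Sketch of the in-browser flow, assuming the Transformers.js runtime
// ("@huggingface/transformers") and an illustrative model name.
// Everything runs client-side; the only network traffic is the one-time
// model download, which the browser then caches.
export async function runLocalInference(text: string): Promise<unknown> {
  // Step 3 guard: inference needs WebGPU (see the browser notice above).
  const nav = (globalThis as any).navigator ?? {};
  if (typeof nav.gpu === "undefined") {
    throw new Error("WebGPU not available; use a Chromium-based browser.");
  }

  // Steps 1-2: pipeline() fetches the ONNX weights once and stores them in
  // the browser cache, so later loads skip the download.
  // @ts-ignore - package is resolved at runtime in the browser bundle
  const { pipeline } = await import("@huggingface/transformers");
  const classify = await pipeline(
    "text-classification", // task type chosen in the UI
    "Xenova/distilbert-base-uncased-finetuned-sst-2-english", // illustrative model
    { device: "webgpu" } // step 3: run inference on the GPU
  );

  // Step 4: the input text never leaves the page.
  return classify(text);
}
```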