
SoftwiredTech was instrumental in advancing NiiVue—adding WebAssembly plugins, accelerating AI models, and creating compelling user interfaces. They bring a terrific mix of technical expertise and a deep understanding of the real challenges teams face in medical imaging. Truly exceptional partners to work with.
Chris Rorden, University of South Carolina

We relied on SoftwiredTech to help get Brainchop.org running on WebGPU, and they delivered exceptional results. Beyond providing example shaders, their technical expertise shone when we encountered a bug in tinygrad’s WebGPU exports. They stepped in, fixed the bug, and managed the upstream merge. They are a highly professional team that is incredibly easy to work with.
Sergey Plis, brainchop.org

Great work with WebGPU! I'm really happy to have that wgpu-py dependency gone.
George Hotz, the tiny corp


In tinygrad, we replaced the wgpu-py WebGPU runtime with Dawn, the WebGPU engine used in Google Chrome. Using clang2py to auto-generate the Python interface, we interact with Dawn directly, bypassing third-party wrappers, so we can pick up new Dawn features immediately instead of waiting for wrapper updates. The Dawn integration also let us finally add f16 support, which was not possible with wgpu.
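As a rough illustration of the approach, the sketch below calls Dawn's C API through ctypes bindings generated by clang2py. The module name webgpu, the library it is linked against, and the exact generated names are assumptions for illustration, not the actual files shipped with tinygrad.

    # Sketch only: `webgpu` stands for a ctypes module generated with clang2py
    # from Dawn's webgpu.h and loaded against the Dawn shared library.
    import ctypes
    import webgpu  # hypothetical auto-generated bindings

    # Call Dawn's C entry point directly, with no wgpu-py layer in between.
    desc = webgpu.WGPUInstanceDescriptor()               # zero-initialized descriptor
    instance = webgpu.wgpuCreateInstance(ctypes.byref(desc))
    assert instance, "Dawn did not return a WebGPU instance"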
The next chapter? We’re working on making it incredibly easy to export models to WebGPU. Stay tuned!


We ported the Brainchop brain segmentation models to tinygrad with WebGPU, enabling fast whole-brain segmentation directly in the browser. This allows users to process brain imaging data entirely client-side, improving privacy and speed without relying on cloud services.
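To give a flavor of the compute path, here is a minimal sketch that routes a tinygrad forward pass through the WEBGPU backend. The tiny convolution is only a stand-in for a Brainchop segmentation layer, and it assumes a tinygrad build with the WebGPU backend available.

    # Sketch only: a placeholder layer, not the actual Brainchop model.
    from tinygrad import Tensor, Device

    Device.DEFAULT = "WEBGPU"          # assumes the WebGPU backend is available
    x = Tensor.randn(1, 1, 64, 64)     # placeholder input, not a real MRI volume
    w = Tensor.randn(8, 1, 3, 3)       # placeholder segmentation-layer weights
    out = x.conv2d(w).relu()           # executed on the WebGPU device when realized
    print(out.numpy().shape)           # (1, 8, 62, 62)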


We took the powerful Stable Diffusion model and ported it to WebGPU using tinygrad, enabling fast, high-quality image generation right in the browser. With the performance boost from WebGPU, users can now run Stable Diffusion locally, eliminating the need for expensive cloud-based computation.
If part of your system breaks, we dive in - patch it, fork it, or rewrite it. We're fluent across the stack and work where others hesitate.
We take responsibility at every layer - digging to the root, improving what's upstream, and delivering fixes that last, not patches that crumble.
We tackle deep debugging, dependency dives, performance tuning, and compiler-level work. Customizing, extending, or rebuilding - this is where we excel.