Type the task. Get an architecture you can read, edit, and export as PyTorch. 14 structural checks run as you build, so the wiring bug shows up before the GPU bill does.
Your sketch has no tensor shapes, no validation, no link to running code. By Monday you can't read it either.
print(model) isn't an architecture
A 200-line tree dump won't show data flow, won't catch the wiring bug, won't fit in a Slack message.
Figure 2 is pretty. The actual layer order, init, and shape contracts are buried in 40 pages of methods.
3-minute walkthrough — problem description to working PyTorch model
+RandAugment. Generated vit_cifar.py + training script.
Every step above is a real action in the app, not a script. Try it yourself →
Names anonymized while we collect attribution permission · roles representative of actual users
Vision model for catching manufacturing defects from camera feeds. Lightweight for edge inference.
Classify and route customer emails by urgency and department. Transformer-based, exports fine-tuning code.
Detect equipment failures early from sensor readings. LSTM/GRU tuned for time-series patterns.
Paste an arXiv URL and get the architecture on the canvas. Diff it against your baseline. Export and train.
Two-tower neural collaborative filtering. User and item embedding model for product recommendations.
Lightweight architecture for mobile inference. Quantization preview, ONNX export, hardware fit analysis built in.
"Classify customer support tickets" or "10-class image classifier on 224×224 RGB". The agent asks back if it needs to.
The agent picks the layers, wires the connections, and propagates tensor shapes. 14 lint rules check your work as it builds.
Run training. AI reads the loss curves, diagnoses overfitting or underfitting, and applies targeted fixes.
PyTorch, Keras, ONNX, Jupyter notebooks. Clean readable code that runs without any Neurarch import.
286 layer types + 70 macro blocks. Tensor shapes propagate automatically. Dimension mismatches caught before you train.
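The kind of bug that check exists for, in plain PyTorch (an illustrative sketch, not Neurarch output; the dimensions are invented for the example):

```python
import torch
import torch.nn as nn

# A classic wiring bug: the Linear expects 32*8*8 features, but a 32x32
# input pooled once actually flattens to 32*16*16.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),            # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),  # wrong: should be 32 * 16 * 16
)

x = torch.randn(1, 3, 32, 32)
model(x)  # RuntimeError: mat1 and mat2 shapes cannot be multiplied
```

Without a shape trace, this only surfaces when a real batch hits forward(); with one, the mismatch is flagged at the layer where it happens.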
The agent has access to your selected layers, the shape trace, and the lint output. So when you ask "why's this exploding?", it answers from your model — not a textbook one.
Catches vanishing gradients, missing normalization, GQA head mismatches, MoE aux-loss issues, overfitting risk, and ordering errors before you waste compute.
Paste a paper URL. The agent parses the architecture and builds it on the canvas. Diff against your version.
Paste an HF model ID, get a clickable architecture. Useful when you want to read what BERT or LLaMA actually does instead of skimming the config.
PyTorch, Keras, ONNX, Jupyter notebooks, PDF reports. Full training script with optimizer and scheduler.
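A rough sketch of what "clean code with no Neurarch import" looks like in practice; the class name and layer choices below are illustrative, not an actual export:

```python
import torch
import torch.nn as nn

class EmailRouter(nn.Module):
    """Plain nn.Module, no framework-specific imports (illustrative sketch)."""
    def __init__(self, vocab_size=30522, d_model=256, num_classes=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        encoder_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))   # (batch, seq, d_model)
        return self.head(h.mean(dim=1))           # mean-pool, then classify

model = EmailRouter()
print(model(torch.randint(0, 30522, (2, 64))).shape)  # torch.Size([2, 6])
```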
Drop a CSV, pick a HuggingFace dataset, upload an image-zip, or resume from a checkpoint URL. A10G / A100, real loss curves, AI diagnoses results live and proposes the patch.
Save architecture checkpoints. Diff any two versions. Compare training runs side-by-side, mark a best run, re-use a config in one click.
Live cursors, shared canvas, team model library. WebSocket collab is in the box, no Pusher / Liveblocks dependency.
Every layer carries provenance. One canvas state generates a TikZ figure, a "Methods" paragraph, and a BibTeX file with the right entries cited. Authoring path for ML papers, not toy demos.
Drop a screenshot of any architecture diagram. Vision LLM rebuilds nodes + edges (residuals included). Skip-edges get bowed routing so the result reads like the original figure.
Point at a paper and your nn.Module source. Per-layer drift report: dimensions, activations, ordering, missing residuals. Catches re-implementation bugs before review.
Edit code in Monaco, parse it back, and the canvas updates only what changed (matched by name first, type-ordinal second). BERT's two norm layers don't collide; residuals survive the round-trip.
Latency / FLOPs / size / energy on 7 platforms (mobile, browser-WebGPU, Coral edge-TPU, A100, …). Quantization variants table, perf-budget gauge, smell detector with clickable layers, comparison vs LeNet → Llama-3.
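A back-of-envelope version of the size column, for context (a minimal sketch using a stock torchvision model; Neurarch's report also covers latency, FLOPs, and energy per target platform):

```python
import torchvision.models as models

# Count parameters and convert to rough fp32 / int8 footprints.
net = models.mobilenet_v3_small()
params = sum(p.numel() for p in net.parameters())
print(f"{params / 1e6:.2f} M params")
print(f"~{params * 4 / 1e6:.1f} MB at fp32, ~{params / 1e6:.1f} MB at int8")
```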
Real ONNX export through Modal — drop the file back in the browser to run inference live (WASM + WebGPU). Top-5 softmax bar, no server round-trip. CSV upload auto-infers the task before you even build.
| Capability | Neurarch | HF AutoTrain | Google AutoML | Write code |
|---|---|---|---|---|
| Natural language → architecture | ✓ | — | — | — |
| Visual canvas + tensor shape trace | ✓ | — | — | — |
| Architecture lint (gradient / overfit / order) | ✓ | — | — | — |
| Import arXiv paper → canvas | ✓ | — | — | — |
| Import any HuggingFace model | ✓ | ✓ | — | manual |
| Import .onnx / .safetensors → editable canvas | ✓ | — | — | — |
| PyTorch + ONNX code export (you own it) | ✓ | — | — | ✓ |
| AI reads training curves + targeted fixes | ✓ | — | limited | — |
| Real-time team collaboration | Pro+ | — | — | — |
| Free to start | ✓ | ✓ | trial | ✓ |
AutoML hands you a black box and a number. Great if the number is good enough. If it isn't, you have nowhere to start debugging. Neurarch shows you the layers, the shapes, and the code — so when it goes wrong, you know which line to change.
| Capability | Whiteboard / Excalidraw | print(model) | Netron | Neurarch |
|---|---|---|---|---|
| See full layer graph | manual | tree only | ✓ | ✓ |
| Tensor shapes verified end-to-end | — | if forward runs | ✓ | ✓ |
| Edit graph + re-export code | — | — | read-only | ✓ |
| Architecture lint (gradient/overfit/order) | — | — | — | ✓ |
| Build from prompt or arXiv paper | — | — | — | ✓ |
| AI reads training curves + applies fixes | — | — | — | ✓ |
| Shareable URL of your architecture | screenshot | — | file only | ✓ |
Drop a .onnx or .safetensors file onto Neurarch and we reconstruct the full graph as editable canvas nodes.
Modify layers, fix architecture issues, run training, and export clean PyTorch code.
Netron shows you what a model is. Neurarch lets you change it.
Beta pricing — rates lock in for life when you subscribe during early access.
Need invoicing, a custom seat count, or on-prem? — contact us →
No accounts required for the canvas, agent, or export. When you BYOK, the request goes browser → provider — we're not in the path.
Architecture, prompts, and pasted code live in browser memory only. Refresh the tab to wipe them.
Your Anthropic / Gemini key lives in sessionStorage, never touches our servers, and is cleared automatically when the tab closes.
Browser → Anthropic / Google. We're not a man-in-the-middle. Open the network tab and verify it yourself.
Local snapshots stay in browser memory. Cloud save (Pro) and team workspaces (Pro Plus) are explicit actions — one-click delete from the dashboard at any time.
Drag-and-drop architecture builder with automatic tensor shape propagation and a lint checker with 20+ rules.
Tell the agent what you're building. It picks the layers, wires them up, and patches its own mistakes when the advisor flags something.
Paste any arXiv URL or HF model ID and the architecture appears on your canvas instantly.
Production-ready code, full training scripts, Jupyter notebooks — you own the output.
One-click training on A10G / A100. Drop a CSV, pick a HuggingFace dataset, upload an image-zip, or resume from a checkpoint URL. Real loss curves, early stopping, AI diagnosis of results.
Export the canvas to a real .onnx file via Modal. Drop the file back in the browser to run inference live (WASM + WebGPU). Plus Docker / FastAPI / training-script bundles for deploy anywhere.
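If you want to sanity-check the exported file outside the browser, a minimal sketch with onnxruntime in Python (the file name echoes the walkthrough above and is hypothetical; the input size assumes CIFAR):

```python
import numpy as np
import onnxruntime as ort

# Load the exported graph and run one dummy image through it.
session = ort.InferenceSession("vit_cifar.onnx")
input_name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 32, 32).astype(np.float32)
logits = session.run(None, {input_name: x})[0]
print(np.argsort(logits[0])[::-1][:5])  # top-5 class indices
```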
Every layer carries provenance (paper / HF / manual). Generate a TikZ figure, a "Methods" paragraph, and a BibTeX file with the right entries cited — straight from canvas state.
Live cursors, shared canvas, team model library. Pro Plus feature launching with Stripe billing.
Pick a base model (Llama, Mistral, Phi), pick a HuggingFace instruction dataset, and Modal handles the LoRA loop. Tabular and image fine-tuning already work today.
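For reference, the kind of LoRA setup that loop wraps, as a minimal sketch with HuggingFace PEFT; the base model and hyperparameters below are placeholders, not Neurarch defaults:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder base model; any causal LM with q_proj / v_proj modules works.
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                    lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()  # adapters are a small fraction of the base
```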
Trained model + auto-generated card → pushed to your HF account in one button. Local Docker / FastAPI deploy bundles already work today.
Full canvas + AI agent + code export on the free plan.
Drop your email to get Pro updates and early access pricing.
No spam. Or just open the app now →
Add this badge to your README, notebook, or paper to show the architecture was built with Neurarch.
Click to copy Markdown
Bug reports, feature requests, partnerships, design partners, investor intros — fastest way to reach us is email or GitHub.