✦ Visual ML architecture, with an agent in the loop

From problem description
to working PyTorch model

Type the task. Get an architecture you can read, edit, and export as PyTorch. 14 structural checks run as you build, so the wiring bug shows up before the GPU bill does.

For ML engineers replacing whiteboards · researchers reading papers · teams onboarding new hires
Start building free → ▶ Watch demo (3 min)
✓ Free to start ✓ No credit card required ✓ You own the exported code ✓ API keys never leave your browser
engineers signed up
AI messages sent
training runs
142 layer types
The problem

Building the model is easy.
Knowing it's the right one is hard.

📐

Whiteboards die in your notebook

Your sketch has no tensor shapes, no validation, no link to running code. By Monday you can't read it either.

🐍

print(model) isn't an architecture

A 200-line tree dump won't show data flow, won't catch the wiring bug, won't fit in a Slack message.

📚

Papers leave the wiring out

Figure 2 is pretty. The actual layer order, init, and shape contracts are buried in 40 pages of methods.

neurarch.com
PROJECT
✦ Neurarch
📝 Support ticket classifier
▶ val_acc 87.3% · ep 12
📊 3.2M params · 12.4 GF
ADVISOR · 20 RULES
✓ 0 lint errors
⚠ overfitting → fixed
LIBRARY · 286 LAYERS · 70 BLOCKS
📄 arXiv · 🤗 HF · py source
📦 PyTorch · Keras · ONNX
INPUT
tokens · 512
EMBEDDING
+ pos · 512d
[B, T, 512]
TRANSFORMER
encoder_0 · 8h
TRANSFORMER
encoder_1 · 8h
POOL
cls_token
[B, 512]
DROPOUT
p=0.3
LINEAR
classifier · 4
OUTPUT
logits [B, 4]
⚙ LINEAR · classifier
in_features: 512
out_features: 4
init: kaiming
✦ AGENT · CLAUDE
Built architecture: BERT-style classifier · 2 encoders → pool → dropout → linear. val_acc 87.3% @ ep 12.
⚠ Overfitting detected: val_loss diverged at epoch 8 (gap +0.18). Suggesting Dropout before classifier.
x = F.dropout(x, p=0.3)
return self.classifier(x)
+ dropout · val_loss ↓ 0.09 · PyTorch ✓
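The exported code for the classifier mocked up above is straightforward PyTorch. A hedged sketch of roughly what that export might look like — vocab size, feed-forward width, and max sequence length are assumptions not shown in the panel:

```python
import torch
import torch.nn as nn

class TicketClassifier(nn.Module):
    """BERT-style ticket classifier matching the canvas sketch:
    embedding (+pos) -> 2 transformer encoders -> CLS pool -> dropout -> linear."""

    def __init__(self, vocab_size=8000, d_model=512, n_heads=8,
                 n_layers=2, n_classes=4, max_len=512, p_drop=0.3):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.drop = nn.Dropout(p_drop)
        self.classifier = nn.Linear(d_model, n_classes)
        nn.init.kaiming_uniform_(self.classifier.weight)  # init: kaiming, per the panel

    def forward(self, tokens):                       # tokens: [B, T]
        positions = torch.arange(tokens.size(1), device=tokens.device)
        x = self.tok(tokens) + self.pos(positions)   # [B, T, 512]
        x = self.encoder(x)                          # [B, T, 512]
        x = self.drop(x[:, 0])                       # CLS-token pool -> [B, 512]
        return self.classifier(x)                    # logits [B, 4]
```

Note there is no framework import beyond torch — the exported module drops into any codebase.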

See it in action

3-minute walkthrough — problem description to working PyTorch model

A 30-second walkthrough

What it actually feels like

YOU
Build a vision transformer for CIFAR-10
NEURARCH
ViT-Tiny — patch embedding → 12× attention block → CLS head. ~5.5M params, all shapes verified. → canvas updated · 0 lint errors · ready to train
YOU
Make it deeper, drop the CLS token, use GAP head
NEURARCH
Diff applied: 12 → 24 attention blocks, removed CLS, swapped to global average pooling. → params 5.5M → 11.0M · gradient flow OK · advisor: consider stochastic depth above 18 layers
YOU
Train it on Modal, then export PyTorch
NEURARCH
Training on A10G — epoch 30/50, val_acc 78.4%. Diagnosing now: slight underfitting, suggesting +RandAugment. Generated vit_cifar.py + training script. → download · or one-click re-run with the fix

Every step above is a real action in the app, not a script. Try it yourself →

What people say

From early beta users

Names anonymized while we collect attribution permission · roles representative of actual users

★★★★★
"The biggest defense of this product is workflow integration — drop an arXiv link or a PyTorch file and land in an interactive ML-specific editor. The long-term moat is the deep engineering: AST parsers, intelligent graph routing, academic export pipelines."
J
J.M.
Member of Technical Staff · frontier AI research lab
★★★★★
"This aligns directly with what we're building. It would save us most of the early-stage architecture-design work — the part our team currently rebuilds from scratch every time on Transformer-based architectures."
L
L.K.
CEO · early-stage AI startup
★★★★★
"The idea and the visual interface are great — clean and intuitive. I uploaded a research paper, and they shipped 7 fixes to the import pipeline within 24 hours. If the full paper-to-runnable-code path lands end-to-end, this tool is unbeatable."
T
T.W.
CS PhD researcher · US research university
Use cases

What are people building?

🏭

Industrial defect detection

Vision model for catching manufacturing defects from camera feeds. Lightweight for edge inference.

"Build a CNN classifier for defect detection in manufacturing images"
Try this prompt →
🎫

Support ticket routing

Classify and route customer emails by urgency and department. Transformer-based, exports fine-tuning code.

"Route customer support tickets by urgency and department"
Try this prompt →
📡

IoT anomaly detection

Detect equipment failures early from sensor readings. LSTM/GRU tuned for time-series patterns.

"Anomaly detector for IoT sensor data"
Try this prompt →
📄

Replicate a paper

Paste an arXiv URL and get the architecture on the canvas. Diff it against your baseline. Export and train.

"Replicate ResNet-50 from the original paper"
Try this prompt →
🛒

E-commerce recommendations

Two-tower neural collaborative filtering. User and item embedding model for product recommendations.

"Recommendation model with user and item embeddings"
Try this prompt →
📱

On-device ML

Lightweight architecture for mobile inference. Quantization preview, ONNX export, hardware fit analysis built in.

"Compact transformer for on-device NLP on mobile CPU"
Try this prompt →
How it works

Four steps from idea to code

01

Describe your problem

"Classify customer support tickets" or "10-class image classifier on 224×224 RGB". The agent asks back if it needs to.

02

AI designs the architecture

The agent picks the layers, wires the connections, and propagates tensor shapes. 14 lint rules check your work as it builds.
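Shape propagation is the core of step 02. As a toy sketch of the idea — the layer-spec format here is illustrative, not Neurarch's actual internal representation — each layer maps an incoming shape to an outgoing one, and a wiring bug surfaces as a mismatch at a specific layer:

```python
def propagate_shapes(layers, input_shape):
    """Walk a layer list, computing each output shape and flagging wiring bugs.

    `layers` is a list of (name, kind, params) tuples; shapes use None for
    the batch dimension. Toy version -- real inference covers many more kinds.
    """
    shape, trace, errors = input_shape, [], []
    for name, kind, params in layers:
        if kind == "linear":
            if shape[-1] != params["in_features"]:
                errors.append(f"{name}: in_features={params['in_features']} "
                              f"but incoming dim is {shape[-1]}")
            shape = shape[:-1] + (params["out_features"],)
        elif kind == "pool_cls":                  # [B, T, D] -> [B, D]
            shape = (shape[0], shape[-1])
        elif kind in ("dropout", "transformer_encoder"):
            pass                                  # shape-preserving
        trace.append((name, shape))
    return trace, errors

layers = [
    ("encoder_0", "transformer_encoder", {}),
    ("pool", "pool_cls", {}),
    ("classifier", "linear", {"in_features": 512, "out_features": 4}),
]
trace, errors = propagate_shapes(layers, (None, 512, 512))
# trace ends at ("classifier", (None, 4)); errors is empty
```

Swap the classifier's `in_features` to 256 and the same walk returns a one-line error naming the layer — the mismatch is caught before anything runs on a GPU.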

03

Train & get AI diagnosis

Run training. AI reads the loss curves, diagnoses overfitting or underfitting, and applies targeted fixes.
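The diagnosis in step 03 comes down to reading the train/val curves. A toy version of the overfitting check — the gap and loss thresholds here are arbitrary assumptions for illustration, not Neurarch's actual rules:

```python
def diagnose(train_loss, val_loss, gap_threshold=0.15, high_loss=0.5):
    """Toy training diagnosis from per-epoch loss lists.

    Overfitting: val loss bottomed out early while the train/val gap grew.
    Underfitting: train loss itself is still high. Thresholds are illustrative.
    """
    best_val = min(range(len(val_loss)), key=val_loss.__getitem__)
    gap = val_loss[-1] - train_loss[-1]
    if gap > gap_threshold and best_val < len(val_loss) - 1:
        return ("overfitting", best_val, round(gap, 2))   # divergence epoch
    if train_loss[-1] > high_loss:
        return ("underfitting", None, round(gap, 2))
    return ("ok", None, round(gap, 2))
```

On a curve where val loss bottoms out at epoch 4 and climbs while train loss keeps falling, this returns the "overfitting" verdict with the divergence epoch — the shape of the diagnosis shown in the agent panel above.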

04

Export & ship

PyTorch, Keras, ONNX, Jupyter notebooks. Clean readable code that runs without any Neurarch import.

Features

What's in the box

The basics
🏗

Visual canvas + shape inference

286 layer types + 70 macro blocks. Tensor shapes propagate automatically. Dimension mismatches caught before you train.

AI agent that sees your canvas

The agent has access to your selected layers, the shape trace, and the lint output. So when you ask "why's this exploding?", it answers from your model — not a textbook one.

🔍

Architecture advisor (20+ rules)

Catches vanishing gradients, missing normalization, GQA head mismatches, MoE aux-loss, overfitting risk, ordering errors before you waste compute.
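To make the advisor concrete, here is a toy version of a single rule in that family — a long run of weighted layers with no normalization risks vanishing or exploding gradients. The threshold and layer-kind sets are illustrative assumptions, not Neurarch's actual rule set:

```python
WEIGHTED = {"linear", "conv2d"}
NORMS = {"layernorm", "batchnorm"}

def lint_missing_norm(layer_kinds, max_unnormalized=4):
    """Toy advisor rule: flag a run of weighted layers with no normalization."""
    findings, run = [], 0
    for i, kind in enumerate(layer_kinds):
        if kind in WEIGHTED:
            run += 1
            if run == max_unnormalized + 1:   # fire once per run
                findings.append(f"layer {i}: {run} weighted layers without "
                                "normalization -- consider LayerNorm")
        elif kind in NORMS:
            run = 0
    return findings
```

A stack of six bare linears trips the rule once; interleave a LayerNorm every couple of layers and it stays quiet. The real advisor applies 20+ such checks continuously as the graph changes.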

📄

arXiv paper → canvas

Paste a paper URL. The agent parses the architecture and builds it on the canvas. Diff against your version.

🤗

HuggingFace import

Paste an HF model ID, get a clickable architecture. Useful when you want to read what BERT or LLaMA actually does instead of skimming the config.

Production code export

PyTorch, Keras, ONNX, Jupyter notebooks, PDF reports. Full training script with optimizer and scheduler.

Real GPU training via Modal

Drop a CSV, pick a HuggingFace dataset, upload an image-zip, or resume from a checkpoint URL. A10G / A100, real loss curves, AI diagnoses results live and proposes the patch.

📸

Snapshots + run history

Save architecture checkpoints. Diff any two versions. Compare training runs side-by-side, mark a best run, re-use a config in one click.

👥

Real-time collaboration

Live cursors, shared canvas, team model library. WebSocket collab is in the box, no Pusher / Liveblocks dependency.

The moat: deep engineering
🎓

Academic export pipeline

Every layer carries provenance. One canvas state generates a TikZ figure, a "Methods" paragraph, and a BibTeX file with the right entries cited. Authoring path for ML papers, not toy demos.

🖼

Paper figure → architecture

Drop a screenshot of any architecture diagram. Vision LLM rebuilds nodes + edges (residuals included). Skip-edges get bowed routing so the result reads like the original figure.

🔬

Cross-validation (Paper ↔ Code)

Point at a paper and your nn.Module source. Per-layer drift report: dimensions, activations, ordering, missing residuals. Catches re-implementation bugs before review.

🔁

Canvas ↔ Code identity merge

Edit code in Monaco, parse back, the canvas only updates what changed (matched by name first, type-ordinal second). BERT's two norm layers don't collide; residuals survive on round-trip.

🚀

Deploy advisor

Latency / FLOPs / size / energy on 7 platforms (mobile, browser-WebGPU, Coral edge-TPU, A100, …). Quantization variants table, perf-budget gauge, smell detector with clickable layers, comparison vs LeNet → Llama-3.
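FLOPs estimates of this kind are standard multiply-accumulate counting. A back-of-envelope sketch of the formulas involved — this is the textbook arithmetic, not Neurarch's actual profiler:

```python
def linear_flops(in_features, out_features):
    """Dense layer: one multiply-accumulate per weight -> 2*in*out FLOPs/sample."""
    return 2 * in_features * out_features

def conv2d_flops(c_in, c_out, k, h_out, w_out):
    """Conv2d: 2 * Cout * Cin * k^2 FLOPs per output spatial position."""
    return 2 * c_out * c_in * k * k * h_out * w_out

def attention_flops(seq_len, d_model):
    """Self-attention: ~4*T^2*d for QK^T and AV, plus 8*T*d^2 for the
    four projection matrices (Q, K, V, output)."""
    return 4 * seq_len * seq_len * d_model + 8 * seq_len * d_model * d_model
```

The quadratic `seq_len` term in attention versus the linear terms elsewhere is exactly why a deploy advisor matters: the same architecture can fit a perf budget at T=128 and blow it at T=2048.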

Browser ONNX inference

Real ONNX export through Modal — drop the file back in the browser to run inference live (WASM + WebGPU). Top-5 softmax bar, no server round-trip. CSV upload auto-infers the task before you even build.

Why Neurarch

AutoML gives you a model.
Neurarch gives you the model.

Capability · Neurarch · HF AutoTrain · Google AutoML · Write code
Natural language → architecture
Visual canvas + tensor shape trace
Architecture lint (gradient / overfit / order)
Import arXiv paper → canvas
Import any HuggingFace model · manual
Import .onnx / .safetensors → editable canvas
PyTorch + ONNX code export (you own it)
AI reads training curves + targeted fixes · limited
Real-time team collaboration · Pro+
Free to start · trial

AutoML hands you a black box and a number. Great if the number is good enough. If it isn't, you have nowhere to start debugging. Neurarch shows you the layers, the shapes, and the code — so when it goes wrong, you know which line to change.

For when you'd reach for a sketchpad or print(model)

Capability · Whiteboard / Excalidraw · print(model) · Netron · Neurarch
See full layer graph · manual · tree only
Tensor shapes verified end-to-end · if forward runs
Edit graph + re-export code · read-only
Architecture lint (gradient/overfit/order)
Build from prompt or arXiv paper
AI reads training curves + applies fixes
Shareable URL of your architecture · screenshot · file only
👁
Already using Netron to inspect models? Drop any .onnx or .safetensors file onto Neurarch — we reconstruct the full graph as editable canvas nodes. Modify layers, fix architecture issues, run training, and export clean PyTorch code. Netron shows you what a model is. Neurarch lets you change it.
Pricing

Free to start. Pay when you scale.

Beta pricing — rates lock in for life when you subscribe during early access.

Free
$0 forever
Full canvas, AI agent, and core code export. No card, no time limit.
  • Full visual canvas — 142 layer types + 70 blocks
  • 15 Gemini + 30 Groq AI messages per day
  • Claude via your own Anthropic key
  • PyTorch, Keras, ONNX, training script export
  • Architecture advisor (14 rules)
  • 5 local snapshots + URL sharing
Start for free →
Most popular
Pro
$19 /mo
Everything single-user — every importer, every export, every research tool.
  • 500 Gemini / month + unlimited Groq
  • Imports: HuggingFace, arXiv paper, AI code parser
  • Exports: Notebook, Report, Slides, Study Guide, Model Card
  • Cost estimator, quant preview, receptive field, scaling calc
  • Architecture compare + profiler import + fusion hints
  • 20 snapshots + cloud save
  • 7-day free trial
Start free trial →
Pro Plus
$39 /mo
Pro + Claude quota, team collaboration, and the heavy research suite.
  • 2,000 Gemini + 200 Claude Sonnet / month (no key needed)
  • Real-time team collaboration + workspaces
  • Unlimited snapshots
  • Loss landscape, logit lens, attribution explorer
  • Live training dashboard, training replay, Bayesian opt
  • GPU profiling, auto-viz generator, custom layer editor
  • Architecture marketplace + browser
  • SSO + audit log + priority support
  • Everything in Pro · 7-day free trial
Start free trial →

Need invoicing, a custom seat count, or on-prem? — contact us →

Privacy & data

Your code never leaves your browser

No accounts required for the canvas, agent, or export. When you BYOK, the request goes browser → provider — we're not in the path.

01

Type or paste in the canvas

Architecture, prompts, and pasted code live in browser memory only. Refresh the tab to wipe them.

02

API key stays in sessionStorage

Your Anthropic / Gemini key never touches our servers and is cleared automatically when the tab closes.

03

BYOK calls go direct to the provider

Browser → Anthropic / Google. We're not a man-in-the-middle. Open the network tab and verify it yourself.

04

Nothing persisted unless you opt in

Local snapshots stay in browser memory. Cloud save (Pro) and team workspaces (Pro Plus) are explicit actions — one-click delete from the dashboard at any time.

FAQ

Common questions

Does it actually train models, or just simulate training?
Both. By default it runs an architecture-aware training simulation — curves reflect your architecture's real quality signals (dropout, residuals, normalization, depth) so the AI's interpretation is meaningful. For real GPU training, you can connect Modal.com with a one-time setup. Either way the exported PyTorch code runs anywhere.

Do I need to know ML to use it?
No. The "Problem → Model" wizard asks a few questions and the agent picks layers for you. But you'll get more out of it once you start recognizing what each block does. The canvas and the advisor are designed to teach you that as you go — not to hide it from you.

Can you see my code or API keys?
No. API keys live in browser session storage and never reach our servers — BYOK calls go straight from your browser to the provider. We proxy Gemini and Groq calls for signed-in users so we can enforce per-plan quotas (15 Gemini/day on Free, 500/mo on Pro, 2,000/mo on Pro Plus). Claude proxy is Pro Plus only — Free and Pro users supply their own Anthropic key.

How is this different from asking ChatGPT to write the model?
ChatGPT gives you a code block you can't see inside. Neurarch gives you a visual architecture where every layer is inspectable, shapes are verified across the full graph, and 14 structural rules run automatically. You can import from HuggingFace or arXiv, run training, get AI diagnosis of the curves, and export. The agent is architecture-aware, not just code-aware.

Can I use it with an existing codebase?
Yes. Use the code importer to load an existing architecture, or load a HuggingFace model ID to visualize it. The exporter generates clean nn.Module classes you can paste into any existing codebase.

Can I cancel anytime?
Yes. Cancel from the billing portal anytime. Pro access continues until the end of the period. Full refund within 7 days of first payment if you're not satisfied.
Roadmap

What's built & what's coming

Visual canvas with 286 layer types + 70 blocks · Shipped

Drag-and-drop architecture builder with automatic tensor shape propagation and 20+ rule lint checker.

AI architecture agent · Shipped

Tell the agent what you're building. It picks the layers, wires them up, and patches its own mistakes when the advisor flags something.

arXiv + HuggingFace import · Shipped

Paste any arXiv URL or HF model ID and the architecture appears on your canvas instantly.

PyTorch / Keras / ONNX / Notebook export · Shipped

Production-ready code, full training scripts, Jupyter notebooks — you own the output.

Real GPU training via Modal.com · Shipped

One-click training on A10G / A100. Drop a CSV, pick a HuggingFace dataset, upload an image-zip, or resume from a checkpoint URL. Real loss curves, early stopping, AI diagnosis of results.

Real ONNX export + browser inference · Shipped

Export the canvas to a real .onnx file via Modal. Drop the file back in the browser to run inference live (WASM + WebGPU). Plus Docker / FastAPI / training-script bundles for deploy anywhere.

Academic export pipeline · Shipped

Every layer carries provenance (paper / HF / manual). Generate a TikZ figure, a "Methods" paragraph, and a BibTeX file with the right entries cited — straight from canvas state.

Team collaboration · In progress

Live cursors, shared canvas, team model library. Pro Plus feature launching with Stripe billing.


LoRA / PEFT fine-tuning on LLMs · Coming Q3 2026

Pick a base model (Llama, Mistral, Phi), pick a HuggingFace instruction dataset, and Modal handles the LoRA loop. Tabular and image fine-tuning already works today.


One-click push to HuggingFace Hub · Coming Q4 2026

Trained model + auto-generated card → pushed to your HF account in one button. Local Docker / FastAPI deploy bundles already work today.

Start free today

Full canvas + AI agent + code export on the free plan.
Drop your email to get Pro updates and early access pricing.

No spam. Or just open the app now →

Share your model

Add this badge to your README, notebook, or paper to show the architecture was built with Neurarch.

⚡ Designed with Neurarch
[![Designed with Neurarch](https://img.shields.io/badge/Designed%20with-Neurarch-7c3aed?style=flat&logo=data:image/svg%2bxml;base64,PHN2ZyB3aWR0aD0iMTYiIGhlaWdodD0iMTYiIHZpZXdCb3g9IjAgMCAxNiAxNiIgZmlsbD0ibm9uZSIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj48Y2lyY2xlIGN4PSI4IiBjeT0iOCIgcj0iNyIgc3Ryb2tlPSJ3aGl0ZSIgc3Ryb2tlLXdpZHRoPSIxLjUiLz48cGF0aCBkPSJNNSA4aDZNOCA1djYiIHN0cm9rZT0id2hpdGUiIHN0cm9rZS13aWR0aD0iMS41IiBzdHJva2UtbGluZWNhcD0icm91bmQiLz48L3N2Zz4=)](https://neurarch.com)

Click to copy Markdown

Contact

Talk to us

Bug reports, feature requests, partnerships, design partners, investor intros — fastest way to reach us is email or GitHub.

Email us
neurarch.ai@gmail.com
📅
Book a demo
15-min walkthrough
GitHub
github.com/neurarch-ai
📝
Draft feedback
Opens your mail client