NemoClaw · March 17, 2026 · 7 min read

NemoClaw Shipped. Here's What It Actually Is.

NemoClaw is live. The reality is more specific than pre-launch coverage suggested — it's a sandbox orchestration system, not a generic agent framework. Here's what that means for teams building on it.

NemoClaw is live

NemoClaw shipped March 16 at GTC 2026, right on schedule. Jensen Huang presented it in the keynote, the repo went public, and the coverage was roughly what everyone expected — NVIDIA's answer to the OpenClaw acquisition, an open-source AI agent platform for enterprise teams.

But now that the code is public and people are actually running it, the reality is more specific (and more interesting) than the pre-launch coverage suggested. NemoClaw isn't a generic AI agent framework. It's a sandbox orchestration system. That distinction matters if you're planning to build on it.

What it actually is

NemoClaw is specifically a plugin for OpenClaw — NVIDIA's open-source fork/extension of the original Claw project. When you install NemoClaw, you're installing a CLI that provisions a fully isolated Linux sandbox via NVIDIA OpenShell and runs OpenClaw inside it.

Each sandbox enforces isolation at the kernel level. Landlock restricts filesystem access. Seccomp filters syscalls. Network namespaces control egress. This isn't container-level isolation — it's OS-level enforcement. Every agent runs in its own locked-down environment, and the sandbox policy is not optional.
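To make the policy shape concrete, here is a toy model of the allowlist semantics that Landlock enforces in-kernel: access is permitted only under an explicitly granted subtree, and everything else is denied by default. This is an illustration of the concept only — not a Landlock binding, and not NemoClaw's actual policy format.

```python
from pathlib import PurePosixPath

def allowed(path: str, allowlist: list[str]) -> bool:
    """Deny-by-default filesystem policy: a path is accessible only if
    it falls under one of the explicitly granted root directories."""
    p = PurePosixPath(path)
    return any(p.is_relative_to(root) for root in allowlist)
```

The real enforcement happens in the kernel, so an agent process inside the sandbox cannot opt out of it — which is the difference the article draws between OS-level and container-level isolation.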

Inference calls from inside the sandbox are intercepted by OpenShell and transparently routed to NVIDIA's cloud. The agent doesn't know or care where the model lives. From the agent's perspective, it's making a local API call. From the infrastructure's perspective, every call is hitting NVIDIA's NIM endpoints with full telemetry and access control.

The CLI interface

This is the part that surprised people. NemoClaw isn't a web UI or a drag-and-drop workflow builder. It's a CLI. Here's what using it actually looks like:

# Install NemoClaw + OpenShell sandbox runtime
nemoclaw install
# Connect to a running sandbox
nemoclaw <name> connect
# Check sandbox health
nemoclaw <name> status
# Stream logs from inside the sandbox
nemoclaw <name> logs

That's it. You install the runtime, provision a sandbox, connect to it, and monitor it. There's no visual orchestration layer, no marketplace, no templates UI. It's infrastructure tooling for running isolated agents, and it does that one thing well.
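Because the interface is just a CLI, scripting it is straightforward. A minimal sketch, assuming only the four commands shown above — the wrapper functions themselves (`nemoclaw_cmd`, `run`) are hypothetical names, not part of NemoClaw:

```python
import subprocess

def nemoclaw_cmd(sandbox, action):
    """Build a nemoclaw CLI invocation. `install` takes no sandbox name;
    `connect`, `status`, and `logs` target a named sandbox."""
    if action == "install":
        return ["nemoclaw", "install"]
    if sandbox is None:
        raise ValueError(f"action {action!r} requires a sandbox name")
    return ["nemoclaw", sandbox, action]

def run(sandbox, action):
    # Shell out to the CLI and capture output for logging/monitoring.
    return subprocess.run(nemoclaw_cmd(sandbox, action),
                          capture_output=True, text=True)
```

Wrappers like this are also where the fleet-management gap discussed below starts to bite: the CLI is the whole surface you get.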

The inference layer

NIM provides the model serving underneath. You get OpenAI-compatible endpoints — hit /v1/chat/completions with a bearer token and get completions back. The production NemoClaw model is nvidia/nemotron-3-super-120b-a12b — NVIDIA's Nemotron-3-Super-120B, a mixture-of-experts model specifically tuned for enterprise agent tasks.
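An OpenAI-compatible endpoint means a request in the standard chat-completions shape. A minimal sketch of what that request looks like — the path and model ID are the ones named above; the base URL and key are whatever your deployment exposes:

```python
import json
import urllib.request

def build_request(base_url, api_key, prompt):
    """Standard OpenAI-style /v1/chat/completions request with a
    bearer token, targeting the Nemotron model the article names."""
    body = {
        "model": "nvidia/nemotron-3-super-120b-a12b",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Any OpenAI-compatible client library should work the same way — point it at the endpoint, set the bearer token, and use the model ID above.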

The clever part is the routing. Inside the sandbox, the agent makes inference calls like it's hitting a local endpoint. OpenShell intercepts those calls and routes them to NVIDIA's cloud transparently. The agent code doesn't need any cloud configuration, API keys, or endpoint URLs — the sandbox runtime handles all of that.

The production gap

NemoClaw is good at what it does: sandbox provisioning, agent isolation, and inference routing. What it doesn't have is the rest of what you need to actually run this in production for real customers.

Sandbox fleet management is the first gap. If you have 50 tenants each running 3 sandboxes, you need to provision, monitor, restart, and destroy those sandboxes programmatically. NemoClaw gives you the CLI for a single sandbox. It doesn't give you the fleet management layer.
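What a fleet-management layer has to do is desired-state reconciliation: compare how many sandboxes each tenant should have against what's running, and emit provision/destroy actions to hand to the CLI. A sketch of that logic — the function, the naming scheme, and the plan format are all illustrative assumptions, not NemoClaw features:

```python
def reconcile(desired, actual):
    """Compare desired sandbox counts per tenant against running counts
    and return the actions a fleet manager would execute via the CLI."""
    plan = {"provision": [], "destroy": []}
    for tenant in sorted(set(desired) | set(actual)):
        want, have = desired.get(tenant, 0), actual.get(tenant, 0)
        if want > have:
            # Tenant is under-provisioned: create the missing sandboxes.
            plan["provision"] += [f"{tenant}-sbx-{i}" for i in range(have, want)]
        elif have > want:
            # Tenant is over-provisioned: tear down the extras.
            plan["destroy"] += [f"{tenant}-sbx-{i}" for i in range(want, have)]
    return plan
```

At 50 tenants × 3 sandboxes, a loop like this (plus monitoring, restarts, and failure handling) is the layer you end up building yourself.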

Configuration management is the second. Teams want to define a workflow config once, version it, and deploy it across tenants. NemoClaw doesn't have a blueprint or template system — you're managing config files manually per sandbox.

Then there's the standard enterprise stack: auth, billing, multi-tenancy, run history, step-level logging, and human-in-the-loop approvals for high-risk actions. None of that is in NemoClaw, and none of it should be. A sandbox runtime is supposed to be a sandbox runtime. But if you're building a product, you'll need all of it.

How flowClaw integrates

flowClaw wraps NemoClaw with the production layer it's missing. We shipped the integration on March 17 — one day after NemoClaw went public.

The integration uses a dual-path execution model. In production on Linux, flowClaw talks to NemoClaw's CLI directly — provisioning real kernel-isolated sandboxes, managing their lifecycle, and streaming logs back through our API. In development on macOS (where Landlock/seccomp aren't available), we fall back to NIM HTTP calls so you can develop and test workflows without a Linux box. The engine auto-selects the right path based on environment.
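The path-selection logic reduces to a platform check. A minimal sketch of how an engine might auto-select, assuming the two paths described above (the function name and return labels are illustrative, not flowClaw's actual API):

```python
import platform

def select_execution_path(system=None):
    """Pick the execution path: real kernel-isolated NemoClaw sandboxes
    on Linux, direct NIM HTTP calls elsewhere (e.g. macOS, where
    Landlock/seccomp aren't available)."""
    system = system or platform.system()
    return "nemoclaw-cli" if system == "Linux" else "nim-http"
```

The point of the fallback path is that workflows exercise the same inference layer in development, so only the isolation mechanism differs between environments.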

On top of that, we built a blueprint system. Teams define NemoClaw workflow configs as versioned blueprints through flowClaw's API. Deploy a blueprint and a sandbox is automatically provisioned, configured, and monitored. The full flow: POST /api/nemoclaw/blueprints/:id/deploy → sandbox provisioned → agent running → health polling every 60 seconds.
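The deploy flow above can be sketched as a small state machine: provision, mark running, then poll health on an interval. This is a simplified model of the flow, not flowClaw's implementation — `provision` and `health_check` are injected stand-ins for the real CLI/API integration, and the 60-second production interval is parameterized so the sketch runs instantly:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Deployment:
    blueprint_id: str
    status: str = "pending"
    checks: list = field(default_factory=list)

def deploy(blueprint_id, provision, health_check, polls=3, interval=0.0):
    """Provision a sandbox for the blueprint, mark it running, then
    record the result of each health poll."""
    dep = Deployment(blueprint_id)
    provision(blueprint_id)
    dep.status = "running"
    for _ in range(polls):
        dep.checks.append("healthy" if health_check(blueprint_id) else "unhealthy")
        time.sleep(interval)
    return dep
```

In the real system the poll loop runs indefinitely and unhealthy checks trigger alerting or restarts; the sketch only captures the happy-path sequence.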

What comes next

NemoClaw is alpha software — no official versioned releases yet, and the API surface will change. That's fine. The sandbox isolation model is sound, and the inference routing through OpenShell is genuinely clever. The foundation is solid.

If you're a team planning to build on NemoClaw and you don't want to spend months on the production infrastructure layer, that's exactly the problem flowClaw solves. We're accepting early access signups now.


Skip the production infrastructure work

Join the waitlist and get early access to flowClaw — the managed hosting platform for NemoClaw.

Get Early Access