NemoClaw · part 1
[AI Agent] What Is NemoClaw? NVIDIA's AI Agent Framework for DGX Spark Explained
TL;DR
NemoClaw (v0.1.0, GTC 2026) bundles OpenClaw agent core + OpenShell security sandbox + NVIDIA Agent Toolkit into a single installable framework. The key addition is OpenShell: a policy enforcement layer that runs between the user and the agent, sandboxing file access and restricting tool execution. Without it, OpenClaw is a capable but permissive agent runtime. With it, OpenClaw becomes deployable in contexts where you can't trust the agent to self-limit.
Plain-Language Version: What Does NemoClaw Actually Do?
If you've used ChatGPT or Claude, you know AI can chat and answer questions. But there's a big gap between "chatting" and "doing things for you." You can ask ChatGPT how to organize your photos — it'll give you the steps, but it won't actually sort the files on your hard drive.
That's the problem NemoClaw wants to solve. It's an open-source framework NVIDIA announced at GTC 2026, built to let AI go beyond talking and start doing — reading your files, running commands, remembering previous conversations. Think of it as an assistant that lives inside your computer.
Sounds powerful, but also risky, right? An AI that can read and write files — what if it goes rogue? This is where NemoClaw is fundamentally different from just installing a random AI agent. It ships with a built-in security sandbox called OpenShell. Think of it as a fence around the AI — the AI can do anything inside the fence, but it can't get out. Which folders it can access, which tools it can use — you define all of it.
I installed NemoClaw on my NVIDIA DGX Spark (a desktop AI workstation) and ran it for a while. This article covers what's inside it, how the architecture works, and whether it's worth installing.
What NemoClaw Actually Is
NemoClaw is a composition of three things:
- OpenClaw — the open-source AI agent framework. It handles the agent loop: tool calls, reasoning, memory, conversation context. If you've used it, it feels similar to LangGraph or AutoGen but with a focus on local/private inference.
- OpenShell — a security sandbox layer that sits in front of OpenClaw. It runs as a separate process (port 8080 by default) and intercepts all agent-initiated tool calls. File system access is scoped to /sandbox and /tmp. Commands that aren't in the policy allowlist are blocked before they reach the agent core.
- NVIDIA Agent Toolkit — a set of NVIDIA-maintained integrations and utilities: Nemotron model connectors, tool adapters, and onboarding scaffolding.
The short version: NemoClaw = OpenClaw + containment layer + NVIDIA batteries.
Why This Exists
OpenClaw was already open source before NemoClaw shipped. You could run it today without NemoClaw. So why does NemoClaw exist?
The honest answer is that OpenClaw alone has no security boundary. The agent can read files, execute shell commands, make network requests — bounded only by what tools you define for it. For personal use, this is fine. For anything where you'd hand the agent a real filesystem or let it act on behalf of someone else, it's a problem.
OpenShell is the answer to that problem. It enforces that the agent operates inside a defined perimeter. The sandbox restriction isn't just "we recommend you scope this" — it's enforced at the gateway level. The agent doesn't get to negotiate.
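To make "enforced at the gateway level" concrete, here's a minimal sketch of what gateway-side path scoping can look like. This is illustrative, not OpenShell's actual code: the check runs before the agent core ever sees the request, and paths are normalized first so a `..` segment can't walk out of the sandbox.

```python
import os

# Illustrative sketch of gateway-level path scoping (not OpenShell's actual
# implementation). ALLOWED_ROOTS mirrors NemoClaw's /sandbox and /tmp scope.
ALLOWED_ROOTS = ("/sandbox", "/tmp")

def path_allowed(path):
    # Resolve symlinks and ".." segments before checking the prefix,
    # so "/sandbox/../etc/passwd" is judged as "/etc/passwd".
    real = os.path.realpath(path)
    return any(real == root or real.startswith(root + os.sep)
               for root in ALLOWED_ROOTS)

print(path_allowed("/sandbox/notes.txt"))      # inside the perimeter
print(path_allowed("/sandbox/../etc/passwd"))  # ".." escape attempt: rejected
```

The normalization step is the point: a prefix check on the raw string would pass the escape attempt, which is exactly the kind of negotiation the agent doesn't get to have.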
The NVIDIA Agent Toolkit wraps this in tooling that makes it easier to go from zero to a running, policy-enforced agent without writing the plumbing yourself.
So the real question NemoClaw answers isn't "how do I run an agent" — it's "how do I run an agent I can actually give to someone else."
Architecture: How It Works
The request flow through NemoClaw looks like this:
```
User / Client
      ↓
OpenShell Gateway (port 8080)
  │ - policy enforcement
  │ - file access restriction (/sandbox, /tmp only)
  │ - tool allowlist
      ↓
OpenClaw Agent Core
  │ - agent loop
  │ - tool execution
  │ - context management
      ↓
Inference Backend
  ├─ Nemotron (NVIDIA cloud, requires account)
  └─ or local vLLM endpoint (e.g., qwen3.5-35b on port 8000)
```
OpenShell is not optional scaffolding — it's the intended entry point. The NemoClaw CLI (nemoclaw onboard) configures both layers together. You're not meant to bypass it and talk to OpenClaw directly.
The inference backend is pluggable. NemoClaw defaults to Nemotron cloud endpoints (which do require a paid NVIDIA account), but the configuration accepts any OpenAI-compatible API endpoint. Running it against a local vLLM instance works, which is what I'll cover in the next part.
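"Any OpenAI-compatible API endpoint" means the only things that change between Nemotron cloud and a local backend are the base URL and the model name; the endpoint path and payload shape stay the same. A stdlib-only sketch (the port and model name come from the diagram above; the helper name is mine, not part of NemoClaw):

```python
import json
from urllib import request

# Sketch of a request to an OpenAI-compatible backend — here a local vLLM
# instance on its default port. Swapping backends only changes these two lines.
BASE_URL = "http://localhost:8000/v1"
MODEL = "qwen3.5-35b"

def build_chat_request(prompt):
    """Build a chat-completions request in the OpenAI-compatible wire format."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("Summarize the files in /sandbox")
# request.urlopen(req) would actually send it; omitted so the sketch stays offline.
```

Because vLLM, Ollama, and Nemotron cloud all speak this same format, the switch is purely a configuration change rather than a code change.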
What Was Gained
What cost the most time: separating OpenClaw from NemoClaw in the documentation. NVIDIA's announcement materials treat them as one thing. The distinction only becomes clear when you ask "what does NemoClaw add that OpenClaw doesn't already have" — and the answer is entirely OpenShell.
Transferable diagnostic: When a tool is described as "built on X," the interesting question is always what the wrapper adds, not what it inherits. In NemoClaw's case, the wrapper is the entire security story. Strip it, and you're back to a general-purpose agent with no deployment boundary.
The pattern that applies everywhere: Open-source agent framework + containment layer is the recurring pattern for every agent tool trying to move from personal use to something you'd hand to someone else. NemoClaw isn't the first to do this, and the specific shape of OpenShell (gateway process, policy file, scoped filesystem) is worth noting as a reference point for anyone designing the same thing.
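If you're designing that same shape, the pattern reduces to a small core: a declarative policy loaded by the gateway process, and a gate that every tool call must pass before reaching the agent core. A sketch (all field names are illustrative, not OpenShell's actual schema):

```python
# Minimal shape of the gateway + policy-file pattern. The policy would
# normally be loaded from a file; the schema here is illustrative only.
POLICY = {
    "filesystem_roots": ["/sandbox", "/tmp"],
    "tool_allowlist": ["read_file", "write_file", "list_dir"],
}

def gate(tool_name, policy=POLICY):
    """Allow a tool call only if the policy explicitly lists it."""
    return tool_name in policy["tool_allowlist"]

# Default-deny: anything not named in the allowlist is blocked,
# which is what makes the boundary enforceable rather than advisory.
assert gate("read_file")
assert not gate("shell_exec")
```

The deciding design choice is default-deny: the gateway blocks anything the policy doesn't name, rather than naming the things it forbids.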
What's Next
Installation and first run on the GX10 are covered in Part 2 — including the actual bugs, and pointing NemoClaw at a local vLLM instance instead of the Nemotron cloud.
Also in this series: Part 2 — NemoClaw Onboard: Pointing It at Local vLLM Instead of Nemotron Cloud (coming soon)
FAQ
- What is NemoClaw and how is it different from OpenClaw?
- NemoClaw is NVIDIA's all-in-one AI agent framework announced at GTC 2026 (v0.1.0). It bundles OpenClaw (the open-source agent runtime), OpenShell (a security sandbox that enforces filesystem and tool restrictions at the gateway level), and the NVIDIA Agent Toolkit. The key difference from standalone OpenClaw is OpenShell — without it, OpenClaw has no security boundary.
- What does OpenShell do in NemoClaw?
- OpenShell is a security sandbox layer that runs as a separate process (port 8080 by default) between the user and the OpenClaw agent core. It intercepts all agent-initiated tool calls and enforces policy: file system access is scoped to /sandbox and /tmp, and only allowlisted commands can execute. The agent cannot bypass or negotiate these restrictions.
- Can NemoClaw run with a local model instead of NVIDIA's cloud?
- Yes. NemoClaw's inference backend accepts any OpenAI-compatible API endpoint. While it defaults to Nemotron cloud (which requires a paid NVIDIA account), you can point it at a local vLLM or Ollama instance by editing the per-sandbox config.yaml file.
- Is NemoClaw open source?
- Yes. NemoClaw, OpenClaw, and OpenShell are all open source. OpenClaw was open source before NemoClaw existed. NemoClaw packages them together with NVIDIA Agent Toolkit integrations and onboarding tooling.