What is NemoClaw?
NemoClaw is NVIDIA's open-source security wrapper for OpenClaw, the AI agent platform that lets you run always-on assistants on WhatsApp, Telegram, Discord, and more.
If you've read our NemoClaw vs OpenClaw comparison, you know the pitch: OpenClaw is the agent, NemoClaw is the cage. It adds kernel-level sandboxing, network policy enforcement, and privacy routing so your AI agent can't go rogue.
Released at GTC 2026 on March 16, NemoClaw is in early alpha preview under the Apache 2.0 license. It's free, open-source, and installable right now.
Let's get it running.
Prerequisites
Before you install, make sure your system meets these requirements.
Hardware
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 4 vCPU | 4+ vCPU |
| RAM | 8 GB | 16 GB |
| Disk | 20 GB free | 40 GB free |
The sandbox image is about 2.4 GB compressed. During setup, Docker, k3s, and the OpenShell gateway run alongside the export pipeline, which buffers decompressed layers in memory. On machines with less than 8 GB of RAM, this can trigger the OOM killer. If you can't add memory, configure at least 8 GB of swap.
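If you do go the swap route, the steps are standard Linux administration, not anything NemoClaw-specific. A minimal sketch (requires root; adjust the path and size to taste):

```shell
# Create an 8 GB swap file. Generic Linux steps, not NemoClaw-specific.
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile        # swap files must not be world-readable
sudo mkswap /swapfile
sudo swapon /swapfile
# Persist across reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```

Verify with `swapon --show` before re-running the installer.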
Software
| Dependency | Version |
|---|---|
| Linux | Ubuntu 22.04 LTS or later |
| Node.js | 20 or later |
| npm | 10 or later |
| Container runtime | Docker installed and running |
| OpenShell | Installed (the installer handles this) |
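Before running the installer, you can sanity-check your versions against the table above. The threshold numbers come from the table; the helper itself is just a generic version-aware comparison using `sort -V`:

```shell
# Generic version check: succeeds if $2 >= $1 (version-aware sort).
version_ok() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# Check installed Node.js and npm against the minimums in the table.
node_ver=$(node --version 2>/dev/null | tr -d 'v')
npm_ver=$(npm --version 2>/dev/null)
version_ok 20 "${node_ver:-0}" && echo "Node.js OK ($node_ver)" || echo "Node.js too old or missing"
version_ok 10 "${npm_ver:-0}"  && echo "npm OK ($npm_ver)"      || echo "npm too old or missing"
```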
Platform Support
| Platform | Supported Runtimes | Notes |
|---|---|---|
| Linux | Docker | Primary supported path |
| macOS (Apple Silicon) | Colima, Docker Desktop | Recommended for macOS setups |
| macOS | Podman | Not supported yet |
| Windows WSL | Docker Desktop (WSL backend) | Supported target path |
If you're on a DGX Spark, NVIDIA has a dedicated setup guide covering Spark-specific prerequisites like cgroup v2 and Docker configuration.
Step 1: Install NemoClaw
One command. That's it.
```shell
curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash
```

This script does the following:
- Installs Node.js if it's not already present
- Installs the NVIDIA OpenShell runtime
- Installs the `nemoclaw` CLI globally via npm
- Runs the guided onboard wizard
The wizard walks you through creating a sandbox, configuring inference, and applying security policies.
If the command isn't found after install
If you use nvm or fnm to manage Node.js, the installer may not update your current shell's PATH. Run one of these:
```shell
source ~/.bashrc
# or for zsh users:
source ~/.zshrc
```

Or just open a new terminal.
Step 2: Get Your NVIDIA API Key
NemoClaw routes inference through NVIDIA's cloud by default, using the Nemotron 3 Super 120B model. You need a free API key.
- Go to build.nvidia.com
- Create an account or sign in
- Generate an API key
- The onboard wizard will prompt you to paste it during setup
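If you want to keep the key around for reuse outside the wizard, store it in a file only your user can read. The path below is just an example, not an official NemoClaw location:

```shell
# Store the key somewhere only your user can read.
# ~/.config/nemoclaw/api_key is an example path, not an official location.
mkdir -p "$HOME/.config/nemoclaw"
printf '%s\n' "nvapi-your-key-here" > "$HOME/.config/nemoclaw/api_key"
chmod 600 "$HOME/.config/nemoclaw/api_key"
```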
That's your inference engine sorted. The agent never calls the model directly. OpenShell intercepts every request and routes it through the NVIDIA cloud provider.
Step 3: Verify the Installation
When the install completes, you'll see a summary like this:
```text
------------------------------------------------------
Sandbox  my-assistant (Landlock + seccomp + netns)
Model    nvidia/nemotron-3-super-120b-a12b (NVIDIA Cloud API)
------------------------------------------------------
Run:     nemoclaw my-assistant connect
Status:  nemoclaw my-assistant status
Logs:    nemoclaw my-assistant logs --follow
------------------------------------------------------
[INFO] === Installation complete ===
```

Check the status:

```shell
nemoclaw my-assistant status
```

Check the underlying sandbox:

```shell
openshell sandbox list
```

Step 4: Connect to Your Agent

Connect to the sandbox shell:

```shell
nemoclaw my-assistant connect
```

This drops you into a sandboxed shell: `sandbox@my-assistant:~$`
From here you have two ways to talk to your agent.
Option A: Interactive TUI
```shell
openclaw tui
```

This opens an interactive chat interface. Type a message, hit enter, get a response. Best for back-and-forth conversation.
Option B: CLI (Single Message)
```shell
openclaw agent --agent main --local -m "hello" --session-id test
```

This sends a single message and prints the full response directly in the terminal. Better for long outputs like code generation.
How the Security Works
This is the part that makes NemoClaw more than just "OpenClaw with extra steps." Four protection layers lock down your agent:
| Layer | What It Protects | When It Applies |
|---|---|---|
| Network | Blocks unauthorized outbound connections | Hot-reloadable at runtime |
| Filesystem | Prevents reads/writes outside /sandbox and /tmp | Locked at sandbox creation |
| Process | Blocks privilege escalation and dangerous syscalls | Locked at sandbox creation |
| Inference | Reroutes model API calls to controlled backends | Hot-reloadable at runtime |
What this means in practice
When your agent tries to reach a host that isn't in the allow list, OpenShell blocks the request and surfaces it in the TUI for you to approve or deny. The agent can't just curl whatever it wants.
The filesystem sandbox means even if a prompt injection tricks your agent into running rm -rf /, it can only touch /sandbox and /tmp. The rest of the system is invisible to it.
Process-level protection uses Landlock, seccomp, and network namespaces. These are kernel-level security mechanisms, not application-level checks, so the agent cannot escalate privileges even if a compromised prompt tells it to try.
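You can probe whether your kernel exposes these primitives before installing. These are generic Linux checks, not NemoClaw commands:

```shell
# Probe for the kernel features the sandbox relies on (generic Linux checks).
sandbox_support() {
  grep -q '^Seccomp' /proc/self/status && echo "seccomp: available"
  if grep -q landlock /sys/kernel/security/lsm 2>/dev/null; then
    echo "landlock: enabled"
  else
    echo "landlock: not enabled (check your kernel's LSM list)"
  fi
  if unshare --net true 2>/dev/null; then
    echo "network namespaces: usable"
  else
    echo "network namespaces: restricted in this environment"
  fi
}
sandbox_support
```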
Architecture Overview
NemoClaw has four main components:
| Component | Role |
|---|---|
| Plugin | TypeScript CLI commands for launch, connect, status, and logs |
| Blueprint | Versioned Python artifact that orchestrates sandbox creation, policy, and inference setup |
| Sandbox | Isolated OpenShell container running OpenClaw with policy-enforced egress and filesystem |
| Inference | NVIDIA cloud model calls, routed through the OpenShell gateway, transparent to the agent |
The blueprint lifecycle follows four stages:
- Resolve the artifact
- Verify its digest
- Plan the resources
- Apply through the OpenShell CLI
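The "verify its digest" stage is standard content addressing. Here's a minimal sketch of what digest pinning looks like with plain `sha256sum`; this is illustrative, not NemoClaw's actual implementation:

```shell
# Verify that a downloaded artifact matches its pinned digest before use.
verify_digest() {  # verify_digest FILE EXPECTED_SHA256
  echo "$2  $1" | sha256sum -c --quiet -
}

# Example with a local file:
printf 'hello\n' > /tmp/artifact.bin
verify_digest /tmp/artifact.bin \
  5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 \
  && echo "digest OK" || echo "digest mismatch: refusing to apply"
```

Pinning a digest rather than a tag means the blueprint can't silently change between runs, which is the point of the verify stage.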
Essential CLI Commands
Here's your cheat sheet:
| Command | What It Does |
|---|---|
| `nemoclaw my-assistant connect` | Connect to the sandbox shell |
| `nemoclaw my-assistant status` | Check sandbox health |
| `nemoclaw my-assistant logs --follow` | Tail the agent logs |
| `openshell sandbox list` | Check underlying sandbox state |
| `openclaw tui` | Interactive chat (inside sandbox) |
| `openclaw agent --agent main --local -m "message"` | Send a single message (inside sandbox) |
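Since every command targets a named sandbox, a few shell aliases save typing. This is purely a convenience for your `~/.bashrc` or `~/.zshrc`, not part of NemoClaw:

```shell
# Optional shell-rc additions; "my-assistant" is the sandbox name from setup.
export NEMOCLAW_SANDBOX=my-assistant
alias ncw-connect='nemoclaw "$NEMOCLAW_SANDBOX" connect'
alias ncw-status='nemoclaw "$NEMOCLAW_SANDBOX" status'
alias ncw-logs='nemoclaw "$NEMOCLAW_SANDBOX" logs --follow'
```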
Supported Hardware
NemoClaw runs on a range of NVIDIA hardware:
- GeForce RTX PCs and laptops (RTX 30/40 series)
- RTX PRO workstations
- DGX Station
- DGX Spark
You don't need an NVIDIA GPU for cloud inference, but you will need one for local inference with Nemotron models. Local inference via Ollama and vLLM is still experimental.
Uninstalling
If you need to remove NemoClaw:
```shell
curl -fsSL https://raw.githubusercontent.com/NVIDIA/NemoClaw/refs/heads/main/uninstall.sh | bash
```

This removes sandboxes, the NemoClaw gateway and providers, Docker images and containers, and the global `nemoclaw` npm package. It does not remove shared system tools like Docker, Node.js, npm, or Ollama.
Useful flags:
| Flag | Effect |
|---|---|
| `--yes` | Skip the confirmation prompt |
| `--keep-openshell` | Leave the `openshell` binary installed |
| `--delete-models` | Also remove NemoClaw-pulled Ollama models |
What's Next?
NemoClaw is alpha software. Interfaces, APIs, and behavior may change without notice. But the core experience is solid: install with one command, get a sandboxed AI agent with enterprise-grade security.
If you're already running OpenClaw (like we do with our Skynet assistant), NemoClaw is worth testing to see how the security wrapper changes the agent experience.
Resources:
- NemoClaw GitHub
- NVIDIA Build Platform (get your API key here)
- Our NemoClaw vs OpenClaw comparison