
How to Install NemoClaw: Step-by-Step Guide

March 20, 2026 · 8 min read · By T.W. Ghost
Tags: NemoClaw, NVIDIA, OpenClaw, AI Agents, Installation Guide

What is NemoClaw?

NemoClaw is NVIDIA's open-source security wrapper for OpenClaw, the AI agent platform that lets you run always-on assistants on WhatsApp, Telegram, Discord, and more.

If you've read our NemoClaw vs OpenClaw comparison, you know the pitch: OpenClaw is the agent, NemoClaw is the cage. It adds kernel-level sandboxing, network policy enforcement, and privacy routing so your AI agent can't go rogue.

Released at GTC 2026 on March 16, NemoClaw is in early alpha preview under the Apache 2.0 license. It's free, open-source, and installable right now.

Let's get it running.


Prerequisites

Before you install, make sure your system meets these requirements.

Hardware

| Resource | Minimum | Recommended |
| --- | --- | --- |
| CPU | 4 vCPU | 4+ vCPU |
| RAM | 8 GB | 16 GB |
| Disk | 20 GB free | 40 GB free |

The sandbox image is about 2.4 GB compressed. During setup, Docker, k3s, and the OpenShell gateway run alongside the export pipeline, which buffers decompressed layers in memory. On machines with less than 8 GB of RAM, this can trigger the OOM killer. If you can't add memory, configure at least 8 GB of swap.
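If you want to sanity-check memory before installing, a quick look at `/proc/meminfo` covers it. This is an illustrative check, not part of the installer; the 8192 MB threshold simply mirrors the 8 GB guidance above, and the commented `fallocate` lines are the usual Ubuntu way to add a swap file (run as root, adjust size and path to taste):

```shell
# Read total RAM and swap (in MB) straight from the kernel
ram_mb=$(awk '/^MemTotal/ {print int($2/1024)}' /proc/meminfo)
swap_mb=$(awk '/^SwapTotal/ {print int($2/1024)}' /proc/meminfo)

if [ "$ram_mb" -lt 8192 ] && [ "$swap_mb" -lt 8192 ]; then
  echo "Under 8 GB RAM and under 8 GB swap: the export pipeline may hit the OOM killer."
  # Typical way to add an 8 GB swap file on Ubuntu (needs root):
  #   sudo fallocate -l 8G /swapfile
  #   sudo chmod 600 /swapfile
  #   sudo mkswap /swapfile
  #   sudo swapon /swapfile
else
  echo "Memory looks OK: ${ram_mb} MB RAM, ${swap_mb} MB swap."
fi
```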

Software

| Dependency | Requirement |
| --- | --- |
| Linux | Ubuntu 22.04 LTS or later |
| Node.js | 20 or later |
| npm | 10 or later |
| Container runtime | Docker installed and running |
| OpenShell | Installed (the installer handles this) |
Platform Support

| Platform | Supported Runtimes | Notes |
| --- | --- | --- |
| Linux | Docker | Primary supported path |
| macOS (Apple Silicon) | Colima, Docker Desktop | Recommended for macOS setups |
| macOS | Podman | Not supported yet |
| Windows (WSL) | Docker Desktop (WSL backend) | Supported target path |

If you're on a DGX Spark, NVIDIA has a dedicated setup guide covering Spark-specific prerequisites like cgroup v2 and Docker configuration.


Step 1: Install NemoClaw

One command. That's it.

```bash
curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash
```

This script does the following:

  • Installs Node.js if it's not already present
  • Installs the NVIDIA OpenShell runtime
  • Installs the nemoclaw CLI globally via npm
  • Runs the guided onboard wizard

The wizard walks you through creating a sandbox, configuring inference, and applying security policies.

If the command isn't found after install

If you use nvm or fnm to manage Node.js, the installer may not update your current shell's PATH. Run one of these:

```bash
source ~/.bashrc
# or for zsh users:
source ~/.zshrc
```

Or just open a new terminal.
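A quick way to confirm the fix worked, using only standard shell built-ins (the helper function here is just for the demo, not part of NemoClaw):

```shell
# check_cli NAME: print where NAME resolves, or a hint if it doesn't
check_cli() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1 found at $(command -v "$1")"
  else
    echo "$1 not on PATH; run 'source ~/.bashrc' (or ~/.zshrc), or open a new terminal"
  fi
}

check_cli nemoclaw
check_cli node
```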


Step 2: Get Your NVIDIA API Key

NemoClaw routes inference through NVIDIA's cloud by default, using the Nemotron 3 Super 120B model. You need a free API key.

  • Go to build.nvidia.com
  • Create an account or sign in
  • Generate an API key
  • The onboard wizard will prompt you to paste it during setup

That's your inference engine sorted. The agent never calls the model directly. OpenShell intercepts every request and routes it through the NVIDIA cloud provider.
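If you prefer not to paste the key interactively every time, one common pattern is to keep it in a mode-600 file and export it, so it never lands in your shell history. This is a hypothetical sketch: `NVIDIA_API_KEY` is a placeholder name, not a documented NemoClaw variable, and the onboard wizard manages the real configuration.

```shell
# Hypothetical pattern; the wizard handles the real key storage.
# NVIDIA_API_KEY is a placeholder name, not a documented NemoClaw variable.
key_file="$HOME/.nvidia_api_key"
touch "$key_file" && chmod 600 "$key_file"
# paste your key from build.nvidia.com into $key_file, then:
export NVIDIA_API_KEY="$(cat "$key_file")"
```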


Step 3: Verify the Installation

When the install completes, you'll see a summary like this:

```
------------------------------------------------------
Sandbox      my-assistant (Landlock + seccomp + netns)
Model        nvidia/nemotron-3-super-120b-a12b (NVIDIA Cloud API)
------------------------------------------------------
Run:         nemoclaw my-assistant connect
Status:      nemoclaw my-assistant status
Logs:        nemoclaw my-assistant logs --follow
------------------------------------------------------

[INFO]  === Installation complete ===
```

Check the status:

```bash
nemoclaw my-assistant status
```

Check the underlying sandbox:

```bash
openshell sandbox list
```

Step 4: Connect to Your Agent

Connect to the sandbox shell:

```bash
nemoclaw my-assistant connect
```

This drops you into a sandboxed shell: `sandbox@my-assistant:~$`


From here you have two ways to talk to your agent.

Option A: Interactive TUI

```bash
openclaw tui
```

This opens an interactive chat interface. Type a message, hit enter, get a response. Best for back-and-forth conversation.

Option B: CLI (Single Message)

```bash
openclaw agent --agent main --local -m "hello" --session-id test
```

This sends a single message and prints the full response directly in the terminal. Better for long outputs like code generation.


How the Security Works

This is the part that makes NemoClaw more than just "OpenClaw with extra steps." Four protection layers lock down your agent:

| Layer | What It Does | When It Applies |
| --- | --- | --- |
| Network | Blocks unauthorized outbound connections | Hot-reloadable at runtime |
| Filesystem | Prevents reads/writes outside /sandbox and /tmp | Locked at sandbox creation |
| Process | Blocks privilege escalation and dangerous syscalls | Locked at sandbox creation |
| Inference | Reroutes model API calls to controlled backends | Hot-reloadable at runtime |

What this means in practice

When your agent tries to reach a host that isn't in the allow list, OpenShell blocks the request and surfaces it in the TUI for you to approve or deny. The agent can't just curl whatever it wants.

The filesystem sandbox means even if a prompt injection tricks your agent into running `rm -rf /`, it can only touch `/sandbox` and `/tmp`. The rest of the system is invisible to it.

Process-level protection uses Landlock, seccomp, and network namespaces. These are kernel-level security mechanisms, not application-level. The agent literally cannot escalate privileges, even if it tries.
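The network layer's behavior is easy to picture with a toy allow-list check. This is an illustrative sketch only, not OpenShell's implementation (the real enforcement happens at the kernel level, inside the network namespace), and the allow-list contents are made up for the demo:

```shell
# Toy egress policy: allow listed hosts, surface everything else for approval
ALLOWED_HOSTS="build.nvidia.com"

check_egress() {
  for host in $ALLOWED_HOSTS; do
    if [ "$host" = "$1" ]; then
      echo "ALLOW $1"
      return 0
    fi
  done
  echo "BLOCK $1 (surfaced in the TUI for approval)"
  return 1
}

check_egress build.nvidia.com
check_egress evil.example.com || true
```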


Architecture Overview

NemoClaw has four main components:

| Component | Role |
| --- | --- |
| Plugin | TypeScript CLI commands for launch, connect, status, and logs |
| Blueprint | Versioned Python artifact that orchestrates sandbox creation, policy, and inference setup |
| Sandbox | Isolated OpenShell container running OpenClaw with policy-enforced egress and filesystem |
| Inference | NVIDIA cloud model calls, routed through the OpenShell gateway, transparent to the agent |

The blueprint lifecycle follows four stages:

  • Resolve the artifact
  • Verify its digest
  • Plan the resources
  • Apply through the OpenShell CLI
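The "verify its digest" stage is plain content addressing: recompute the artifact's SHA-256 and refuse to proceed on a mismatch. A runnable sketch against a stand-in artifact (illustrative only, not NemoClaw's actual code):

```shell
# Stand-in artifact; in real life this is the downloaded blueprint
artifact=$(mktemp)
printf 'demo blueprint v1\n' > "$artifact"

# The resolve stage would hand us a pinned digest like this
pinned="sha256:$(sha256sum "$artifact" | awk '{print $1}')"

# Verify stage: recompute and compare before planning/applying
actual="sha256:$(sha256sum "$artifact" | awk '{print $1}')"
if [ "$actual" = "$pinned" ]; then
  verified=yes
  echo "digest OK, safe to plan and apply"
else
  verified=no
  echo "digest mismatch, refusing to apply" >&2
fi
rm -f "$artifact"
```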

Essential CLI Commands

Here's your cheat sheet:

| Command | What It Does |
| --- | --- |
| `nemoclaw my-assistant connect` | Connect to the sandbox shell |
| `nemoclaw my-assistant status` | Check sandbox health |
| `nemoclaw my-assistant logs --follow` | Tail the agent logs |
| `openshell sandbox list` | Check underlying sandbox state |
| `openclaw tui` | Interactive chat (inside sandbox) |
| `openclaw agent --agent main --local -m "message"` | Send a single message (inside sandbox) |

Supported Hardware

NemoClaw runs on a range of NVIDIA hardware:

  • GeForce RTX PCs and laptops (RTX 30/40 series)
  • RTX PRO workstations
  • DGX Station
  • DGX Spark

You don't need an NVIDIA GPU for cloud inference, but you will need one for local inference with Nemotron models. Local inference via Ollama and vLLM is still experimental.


Uninstalling

If you need to remove NemoClaw:

```bash
curl -fsSL https://raw.githubusercontent.com/NVIDIA/NemoClaw/refs/heads/main/uninstall.sh | bash
```

This removes sandboxes, the NemoClaw gateway and providers, Docker images and containers, and the global nemoclaw npm package. It does not remove shared system tools like Docker, Node.js, npm, or Ollama.

Useful flags:

| Flag | Effect |
| --- | --- |
| `--yes` | Skip the confirmation prompt |
| `--keep-openshell` | Leave the openshell binary installed |
| `--delete-models` | Also remove NemoClaw-pulled Ollama models |

What's Next?

NemoClaw is alpha software. Interfaces, APIs, and behavior may change without notice. But the core experience is solid: install with one command, get a sandboxed AI agent with enterprise-grade security.

If you're already running OpenClaw (like we do with our Skynet assistant), NemoClaw is worth testing to see how the security wrapper changes the agent experience.

