Your AI Has Amnesia
Every AI conversation starts from zero. Claude, ChatGPT, Gemini - none of them remember what you built yesterday. The architecture decisions, the bugs you fixed, the tools you chose, the lessons you learned. Gone.
People work around this. CLAUDE.md files. Session logs. Memory folders. Notion databases. Scattered text files with names like "notes-final-v2-REAL.md."
These all work until they do not. The breaking point is always the same: retrieval. The knowledge exists somewhere in your files, but finding it takes longer than re-learning it from scratch.
An AI second brain solves this. Instead of you remembering where you wrote something down, the AI searches your entire knowledge base and finds it. Instead of re-explaining your project every session, the AI already knows.
The concept is not new. Tiago Forte called it a "second brain." Andrej Karpathy recently described it as an "LLM Wiki." The builder community has been experimenting with different implementations for months. What is new is that the tools are finally good enough to make it practical.
There are three paths. Each one solves the same problem at a different scale.
Path 1: Obsidian (Local, Visual, Manual)
Best for: Individuals who want full control, privacy, and a visual interface. No server required.
Obsidian is a markdown editor with a graph view. Every note is a plain text file on your computer. You create links between notes, and Obsidian shows you the connections visually. Add a few plugins (Dataview for queries, Templater for consistency, Calendar for daily notes) and you have a functional knowledge base.
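For example, a Dataview query can surface notes by metadata. The syntax follows the Dataview plugin's DQL; the `#project` tag and `status` field here are hypothetical frontmatter conventions, not anything Obsidian requires:

```dataview
TABLE status, file.mtime AS "Updated"
FROM #project
WHERE status != "done"
SORT file.mtime DESC
```

Drop a block like this into any note and Dataview renders a live table of every open project note in the vault.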
How It Works as a Second Brain
- Capture - Write daily logs, decision records, project notes, and reference docs
- Link - Connect related notes with wikilinks. Every link creates a navigable relationship.
- Query - Use Dataview to search across your vault by metadata, tags, and dates
- Connect to AI - Point Claude Code at your vault folder in CLAUDE.md. It reads the files directly.
That last step is the key. Your Obsidian vault is a folder of markdown files. Claude Code reads markdown files. The integration is already done. No MCP server. No API. No configuration beyond adding a path to CLAUDE.md.
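A few lines in CLAUDE.md are enough. The vault path below is a placeholder; adjust it to wherever your vault lives:

```markdown
## Knowledge base
My Obsidian vault is at `~/Documents/vault`. Before asking me for
context, search it for prior decisions, session logs, and project notes.
```

Claude Code picks this up at session start and treats the vault as searchable context from then on.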
The Strengths
- Free and local. Your data stays on your machine.
- Visual graph view. See connections between concepts at a glance.
- Rich plugin ecosystem. 1,500+ community plugins for every use case.
- Zero infrastructure. No server, no Docker, no cloud dependency.
- Claude Code reads it natively. Plain markdown is the universal AI format.
Where It Breaks
Obsidian works well up to about 200 notes. Past that, manual linking becomes a chore. You spend more time connecting notes than writing them. Search is keyword-based, so relational questions ("what services depend on our reverse proxy?") fail when the answer is spread across five notes that never use those exact words together.
The bigger issue is access. Obsidian is local. If you run Claude Code on a VPS, a Telegram bot, or any remote agent, it cannot reach your Obsidian vault without extra tooling.
If you are one person working on a few projects from one machine, Obsidian is the right choice. When you outgrow it, everything you built transfers directly to the next path.
Deep dive: AI Second Brain: Obsidian Pro Track
Path 2: LightRAG Local (Graph-Powered, Automatic, No Server)
Best for: Builders with too many notes to link manually who want semantic search without managing a server.
LightRAG is a graph-augmented retrieval system. Feed it documents and it automatically extracts entities (people, tools, concepts, decisions) and the relationships between them. Then it builds a knowledge graph and a vector index. When you ask a question, it searches both the graph and the vectors to find answers.
How It Works as a Second Brain
- Ingest - Drop your notes, docs, and session logs into LightRAG
- Extract - LightRAG uses an LLM to identify entities and relationships automatically
- Query - Ask questions in natural language. LightRAG traverses the graph and searches vectors simultaneously.
- Grow - Add new documents anytime. The graph expands with each ingestion.
The critical difference from Obsidian: you do not create links manually. LightRAG reads your documents and builds the connections itself. A note about your reverse proxy and a note about your automation tool get connected automatically because LightRAG understands they are related, even if you never linked them.
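A toy sketch makes the idea concrete. This is not LightRAG's real pipeline (which uses an LLM for extraction); it just spots capitalized terms as "entities" and links any two that co-occur in the same note:

```python
# Toy illustration of automatic relationship extraction (NOT LightRAG's
# actual code): entities that co-occur in the same note get an edge in
# a simple knowledge graph, with no manual linking.
import re
from collections import defaultdict
from itertools import combinations

def build_graph(notes):
    """Map each entity to the set of entities it co-occurs with."""
    graph = defaultdict(set)
    for text in notes:
        # Crude entity spotting: capitalized words. LightRAG uses an
        # LLM here instead, so it also catches concepts and decisions.
        entities = set(re.findall(r"\b[A-Z][a-zA-Z]+\b", text))
        for a, b in combinations(sorted(entities), 2):
            graph[a].add(b)
            graph[b].add(a)
    return graph

notes = [
    "Traefik routes traffic to the LightRAG container.",
    "LightRAG stores embeddings from Gemini.",
]
graph = build_graph(notes)
# Traefik and Gemini are linked to LightRAG without any wikilink:
print(sorted(graph["LightRAG"]))  # → ['Gemini', 'Traefik']
```

The real system adds vector embeddings on top of the graph, so a query can match on meaning even when no entity name appears in the question.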
Running It Locally
LightRAG runs on Python. Install it with pip, point it at a folder of documents, and it builds the graph on your machine. No Docker. No VPS. No cloud.
```shell
pip install "lightrag-hku[api]"
```

For the LLM and embeddings, you can use local models (Ollama) or cloud APIs (Google Gemini free tier, OpenAI). The free tier of Gemini handles both entity extraction and embeddings at zero cost for moderate usage.
The Strengths
- Automatic relationship extraction. No manual linking.
- Semantic search. Finds answers even when the exact words do not appear.
- Graph traversal. Answers relational questions across multiple documents.
- Local option. Runs on your machine with local models.
- Scales past 200 notes. Handles thousands of documents without manual organization.
Where It Breaks
Local LightRAG is still local. Your Telegram bot cannot reach it. Your VPS agents cannot query it. If you work from multiple devices, you need to run the server somewhere accessible.
Also, LightRAG requires an LLM for entity extraction. Local models work but are slower and less accurate than cloud APIs. The quality of your knowledge graph depends on the quality of the extraction model.
Path 3: LightRAG on a VPS (Production, Remote, Agent-Accessible)
Best for: Builders running AI agents on a server who need a knowledge base accessible from anywhere.
This is what we run. LightRAG deployed on a $7/month VPS, connected to Claude Code via MCP, accessible from Telegram, and auto-ingesting session logs after every working session.
How It Works as a Second Brain
- Deploy - Docker Compose on a VPS. LightRAG server, Gemini API for extraction and embeddings.
- Ingest - Feed it everything: session logs, blog posts, project docs, skill files. Auto-ingest new content via scripts.
- Query from anywhere - Claude Code queries via MCP. Telegram queries via custom skill. Any API client can hit the REST endpoint.
- Agents remember - Before answering a question, Claude Code checks the knowledge graph for relevant context. The AI starts every session already knowing your project history.
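Any HTTP client can be that "API client." A minimal sketch of a query call follows the shape of LightRAG's server API (a `/query` route taking a question and a retrieval `mode`), but verify the route and fields against your deployed version; the host below is a placeholder:

```python
# Minimal client sketch for a LightRAG server's REST endpoint.
# Route and payload fields are assumptions based on LightRAG's API
# server; check your version. Host and port are placeholders.
import json
from urllib import request

def build_query(question, mode="hybrid"):
    """Payload for the query endpoint; 'mode' selects the retrieval
    strategy (e.g. hybrid = graph traversal plus vector search)."""
    return {"query": question, "mode": mode}

def ask(base_url, question):
    payload = json.dumps(build_query(question)).encode()
    req = request.Request(
        f"{base_url}/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Against a live server this would be, e.g.:
    # print(ask("http://your-vps:9621", "What depends on our reverse proxy?"))
    print(build_query("What depends on our reverse proxy?"))
```

The same call works from a laptop, a Telegram bot handler, or another agent on the VPS, which is the whole point of this path.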
Our Stack
- LightRAG Server: Docker on VPS (port 9621)
- LLM: Gemini 2.5 Flash (entity extraction)
- Embeddings: gemini-embedding-001 (vector search)
- MCP Server: @g99/lightrag-mcp-server (30 tools for Claude Code)
- Auto-ingestion: /session-save skill pushes summaries after every session
- Cost: $0 additional (runs on existing VPS, Gemini free tier)
The Strengths
- Remote access. Query from phone, laptop, VPS, or any device.
- Agent integration. Claude Code, OpenClaw, Telegram bots all query the same knowledge base.
- Auto-ingestion. New content flows in without manual effort.
- Production-grade. SSL via Traefik, persistent storage, health checks.
- Graph visualization. Built-in WebUI shows your knowledge graph.
The Trade-offs
You need a VPS. You need to manage Docker containers. You need to monitor Gemini API usage (free tier has rate limits). The setup takes an afternoon, not five minutes.
If you are not already running a VPS for other services, this path has more overhead than the value it adds. Start with Obsidian or local LightRAG. Graduate to a VPS deployment when remote access and agent integration become requirements, not nice-to-haves.
Deep dive: AI Second Brain: LightRAG Pro Track
Choosing Your Path
| | Obsidian | LightRAG Local | LightRAG VPS |
|---|---|---|---|
| Cost | Free | Free | $7/month VPS |
| Setup time | 10 minutes | 30 minutes | 2-3 hours |
| Linking | Manual | Automatic | Automatic |
| Search | Keyword | Semantic + Graph | Semantic + Graph |
| Access | Local only | Local only | Anywhere |
| Agent integration | Claude Code (local) | Claude Code (local) | Claude Code, Telegram, API |
| Best scale | Up to ~200 notes | Hundreds to thousands | Thousands, multi-project |
| Privacy | Everything local | Everything local | VPS-hosted |
Start with Obsidian if you want control and simplicity. One vault, four plugins, daily notes. Your AI reads the files directly.
Move to LightRAG Local if manual linking becomes a bottleneck and you want the AI to build connections automatically.
Deploy LightRAG on a VPS if you need remote agents to query your knowledge base and auto-ingestion from multiple sources.
The paths are not exclusive. Many builders use Obsidian as their writing interface and feed the vault into LightRAG for graph-powered search. Your Obsidian vault is markdown. LightRAG ingests markdown. Nothing is wasted at any step.
The Real Difference
A second brain is not about the tool. It is about retrieval.
A folder of notes you never search is write-only storage. A knowledge base that your AI queries before every answer is compound intelligence. Every session you log, every decision you document, every lesson you capture makes the next session faster.
The builders who move fastest are not the ones with the fanciest tools. They are the ones whose AI already knows their stack, their decisions, and their constraints. They do not waste ten minutes every session re-explaining context.
Pick a path. Any path. The gap between having a second brain and not having one grows wider every week.
*We have been running our AI second brain in production since March 2026. LightRAG on a VPS, 159 entities, 152 relationships, queryable from Claude Code and Telegram. Learn both approaches in our Pro Tracks: Obsidian | LightRAG*