Summary of "Comment je me suis construit un « deuxième cerveau » IA (Obsidian + Claude Code)" ("How I built myself an AI second brain with Obsidian and Claude Code")

Summary

This video demonstrates how to build an “AI second brain” that behaves more like a persistent personal wiki than a chat model with short-term memory. The core problem addressed is that LLMs often respond by re-selecting small chunks from your documents each time—meaning knowledge doesn’t truly accumulate between sessions (a kind of LLM forgetting).

Main idea: a persistent wiki that “self-compiles”

Inspired by a concept discussed by Andrej Karpathy, the approach is to transform Claude into an agent that maintains a growing wiki. Instead of having the LLM reread sources for every question, it builds a 3-layer persistent knowledge system:

  1. Raw sources layer (read-only)

    • Contains PDFs, articles, transcripts, and similar materials.
    • The LLM reads these but does not modify them.
  2. Wiki layer (generated/maintained)

    • Uses Markdown pages (e.g., *.md) as concept pages.
    • Pages represent concepts/entities/sources/summaries, connected with Wikipedia-like links.
    • As new knowledge arrives, existing pages are enriched/updated.
  3. Schema / conventions layer

    • A CLAUDE.md file acts as the contract/schema.
    • It tells the LLM how to structure pages and which naming/content conventions to follow.
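The three layers above can be sketched as a vault layout. This is an illustrative sketch: the folder and file names are assumptions, not necessarily the ones used in the video.

```python
# Sketch of the three-layer vault layout (folder names are assumptions).
from pathlib import Path

vault = Path("my-vault")

# Layer 1: raw sources (PDFs, articles, transcripts); read-only for the agent
(vault / "sources").mkdir(parents=True, exist_ok=True)

# Layer 2: generated/maintained wiki pages (concepts, entities, summaries)
(vault / "wiki").mkdir(exist_ok=True)

# Layer 3: the contract/schema and bookkeeping files
# (CLAUDE.md is Claude Code's conventional instructions file)
(vault / "CLAUDE.md").touch()
(vault / "index.md").touch()
(vault / "log.md").touch()
```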

The system is positioned as:

Goal: a system that self-compiles, self-maintains, and grows as you ingest new sources.


Why Obsidian (features highlighted)


Implementation tutorial (Obsidian + “Claude Code” workflow)

  1. Install Obsidian (the tutorial covers both Windows and Mac).
  2. Create a vault in Obsidian (example name shown like “AI monitoring system”).
  3. Open the vault folder in VS Code.
  4. Launch Claude Code (triggered at the folder level).
  5. Copy the prompt/structure instructions from the GitHub link in the video description to create the required files, including:
    • CLAUDE.md (schema/rules)
    • index.md
    • log.md
    • plus custom command files in a commands directory
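As a rough idea of what the schema file might contain (the actual rules come from the GitHub link in the description; this fragment is a hypothetical sketch):

```markdown
# Wiki conventions (hypothetical sketch of CLAUDE.md)

- Never modify files under sources/; they are read-only inputs.
- Every concept, entity, and source gets its own page under wiki/.
- File names: lowercase, hyphen-separated, no spaces or accents.
- Link related pages with [[wikilinks]] so Obsidian's graph stays connected.
- When new knowledge arrives, enrich existing pages rather than duplicating them.
- Record every ingestion in log.md and keep index.md up to date.
```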

Custom commands (agent operations)

The generated structure includes agent commands such as:
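The summary does not list the commands themselves, but Claude Code custom slash commands are plain Markdown files under .claude/commands/. The ingest command used below might look roughly like this hypothetical sketch:

```markdown
<!-- .claude/commands/ingest.md (hypothetical sketch) -->
Read every new file in sources/ that is not yet recorded in log.md.
For each one, following the conventions in CLAUDE.md:
1. Create or enrich the relevant concept/entity pages under wiki/.
2. Write a summary page linking to those pages with [[wikilinks]].
3. Record the ingestion in log.md and update index.md.
```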


Ingestion flow (browser extension + ingest command)

  1. Install the Obsidian Web Clipper extension (Chrome/Firefox).
  2. Clip web articles into the vault; they are initially saved into the raw sources folder.
  3. Example workflow:
    • Clip multiple articles.
    • Rename ingested files to remove special characters and spaces (e.g., article-1.md, article-2.md).
    • Run ingest so Claude parses content and generates/updates:
      • concept pages
      • entity/project pages
      • summaries
  4. The example shows that added articles result in interconnected wiki pages, visible via Obsidian’s graph links.
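The renaming step above can be automated. This is a hypothetical helper, not something shown in the video; the function names and rules are assumptions matching the convention of lowercase, hyphenated filenames.

```python
# Hypothetical helper: rename clipped notes so filenames contain
# no spaces or special characters before running the ingest command.
import re
from pathlib import Path

def sanitize(name: str) -> str:
    """Lowercase the stem, turn spaces into hyphens, drop other special chars."""
    stem, dot, ext = name.rpartition(".")
    stem = stem or ext  # handle names without an extension
    clean = re.sub(r"[^a-z0-9-]", "", stem.lower().replace(" ", "-"))
    return f"{clean}.{ext}" if dot else clean

def sanitize_folder(folder: Path) -> None:
    """Rename every Markdown file in the raw sources folder in place."""
    for path in folder.glob("*.md"):
        path.rename(path.with_name(sanitize(path.name)))
```

For example, `sanitize("article 1.md")` yields `article-1.md`, matching the renaming shown in the workflow.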

Query + continuous learning behavior

After ingestion, the system can answer questions that cut across the ingested sources.

The agent can then offer to save the answer as a wiki summary page, written back into Obsidian. This creates an iterative loop:

asking → producing synthesis → saving/updating wiki
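The save/update step of that loop could look like the following minimal sketch; the function name and page format are assumptions, not the video's implementation.

```python
# Minimal sketch of writing an answer back into the wiki as a summary page.
from pathlib import Path

def save_summary(vault: Path, title: str, body: str, links: list[str]) -> Path:
    """Create a summary page under wiki/ with [[wikilinks]] to related pages."""
    page = vault / "wiki" / f"{title}.md"
    page.parent.mkdir(parents=True, exist_ok=True)
    related = "\n".join(f"- [[{name}]]" for name in links)
    page.write_text(f"# {title}\n\n{body}\n\n## Related\n{related}\n")
    return page
```

Because each saved page links back to existing concept pages, every question answered makes the graph denser instead of starting from scratch.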


Maintenance / quality control


Main speakers / sources (as mentioned)

Category

Technology

