LLM Wiki Skill: Build a Personal Knowledge Base with InfraNodus Gap Analysis
The LLM Wiki skill (skill-llm-wiki) is a guided, multi-phase Claude Skill that scaffolds a persistent, compounding knowledge base from your raw sources — with InfraNodus wired into every stage that benefits from network analysis: ontology generation, gap detection, research planning, and GraphRAG retrieval.
What InfraNodus specifically does inside the wiki:
- Builds the knowledge graph — every wiki folder (concepts/, sources/, questions/, systems/…) gets a flat ontology in infranodus/ using [[wikilinks]] syntax that you can paste into infranodus.com to visualize the structure of your thinking as a network.
- Finds the gaps in your reading — generate_content_gaps identifies the under-connected concept clusters in your wiki, so you can see what you haven't read yet, not just what you have.
- Turns gaps into research priorities — generate_research_questions and generate_research_ideas convert structural gaps into a prioritized, actionable todo list in todos/.
- Powers GraphRAG retrieval — retrieve_from_knowledge_base queries your wiki with graph-aware retrieval grounded in your materials, not the LLM's training data.
- Detects bias, focus, and dispersion — optimize_text_structure diagnoses the cognitive state of any wiki page and recommends whether to broaden, focus, or bridge.
Instead of re-deriving knowledge from raw documents on every query (the classic RAG pattern), the LLM extracts, cross-references, and synthesizes knowledge once into markdown pages — then InfraNodus keeps the graph view current so you always know where the holes are. The wiki compounds over time; you curate sources and ask questions; the LLM does the bookkeeping; InfraNodus does the structural diagnostics.
View Skill on GitHub · InfraNodus MCP Server · Download from Releases
When to Use This Skill
The skill activates whenever you ask Claude (or another LLM client with skills enabled) to:
- set up a personal knowledge base or research wiki
- organize notes, papers, transcripts, or articles with LLM help
- build a "second brain" or Obsidian + LLM workflow
- create a persistent knowledge graph from a folder of documents
- plan ongoing research priorities and uncover gaps in what you've read
- turn raw PDFs, YouTube transcripts, or web articles into a queryable, linked knowledge structure
It asks a few targeted questions about your domain and goals, then scaffolds the entire wiki structure, schema, and workflows tailored to your needs.
How the Wiki Is Structured
Every LLM Wiki the skill builds has five layers, with InfraNodus owning the structural / network layer:
- Raw sources — immutable input documents (PDFs converted to markdown, transcripts, articles, notes). The LLM reads but never modifies these. Organized by source type (papers, notes, youtube, articles, patents, books, interviews…), not by topic.
- The wiki — LLM-generated markdown pages: summaries, entity pages, concept pages, comparisons, synthesis. The LLM owns this layer entirely.
- The infranodus/ folder — flat folder of [[wikilinks]] ontology files, one per wiki section (concepts-ontology.md, sources-ontology.md, full-wiki-ontology.md…). Generated and incrementally updated by the ontology-creator skill and consumed by the InfraNodus MCP server for graph visualization, gap analysis, and GraphRAG. This folder is the bridge between markdown and network view.
- Output folder — results of your interactions: reports, analyses, gap-analysis exports, slide decks.
- Schema documents (CLAUDE.md and AGENTS.md) — configuration that tells the LLM how the wiki is structured, what InfraNodus tools to call at which stage, and what conventions to follow. Co-evolved by user and LLM over time.
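As a rough illustration, a schema file might wire InfraNodus tools to workflow stages like this (a minimal sketch — the section names and exact tool mapping here are invented for illustration, not the skill's canonical schema):

```markdown
## Wiki structure
- raw/ — immutable sources; read, never edit
- wiki/ — LLM-maintained markdown pages
- infranodus/ — flat [[wikilinks]] ontology files, one per wiki section

## InfraNodus tool wiring
- After every ingest: append new relations to the matching *-ontology.md
  via the ontology-creator skill (append-only, never regenerate)
- On "what should I read next": generate_content_gaps →
  generate_research_questions → write prioritized items to todos/
- On wiki queries: retrieve_from_knowledge_base over ontology + wiki
```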
Core principle: the user never writes the wiki. The LLM maintains the markdown layer; InfraNodus maintains the graph layer; the human curates sources and directs analysis. The two layers stay in sync via the infranodus/ ontology folder.
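Put together, a freshly scaffolded wiki might look like this on disk (an illustrative layout — the actual folder names are chosen per project during the Structure and Scaffold phases):

```text
my-wiki/
├── raw/                 # immutable sources, organized by type
│   ├── papers/
│   └── youtube/
├── wiki/                # LLM-maintained markdown pages
│   ├── concepts/
│   └── sources/
├── infranodus/          # flat [[wikilinks]] ontology files
│   ├── concepts-ontology.md
│   └── full-wiki-ontology.md
├── todos/               # Phase 10 research priorities
├── output/              # reports, analyses, exports
├── CLAUDE.md
└── AGENTS.md
```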
The 10-Phase Workflow
The skill walks through ten distinct phases. Phases 8 and 9 are deliberately separated — acquisition (getting files onto disk) and processing (converting raw materials into wiki pages) are independent, re-runnable operations.
| # | Phase | Purpose | InfraNodus role |
|---|---|---|---|
| 1 | Discover | What domain? What's the goal? What sources? | — |
| 2 | Scope | How big? How deep? What outputs matter? | — |
| 3 | Structure | Directory layout, page types, naming conventions | Reserves the flat infranodus/ folder for ontologies |
| 4 | Schema | Write the CLAUDE.md / AGENTS.md configuration | Encodes when to call which InfraNodus MCP tool |
| 5 | Workflows | Define ingest, query, and lint operations | Wires generate_topical_clusters, generate_content_gaps, retrieve_from_knowledge_base into the workflows |
| 6 | Tooling | Obsidian plugins, CLI tools, search, git | InfraNodus MCP server + infranodus-cli skill registered as the structural-analysis backbone |
| 7 | Scaffold | Create the directory structure and starter files | Creates empty infranodus/ folder + initial ontology stubs |
| 8 | Acquire | Get sources into raw/ — hard-drive import, web fetch, transcription, PDF→md | Optional: analyze_text on each source for an early structural read |
| 9 | Process | Ingest raw/ → wiki/ — summarize, update index | Append-only ontology updates via the ontology-creator skill — never regenerated from scratch |
| 10 | Plan | Analyze gaps, prioritize research directions, create actionable todos | Core InfraNodus phase: generate_content_gaps, generate_research_questions, generate_research_ideas, develop_conceptual_bridges |
Phases 9 and 10 are where InfraNodus does most of its work: keeping the ontology current as new sources land, then surfacing the missing connections so you know what to read next.
Tiered to Your Project Size
The skill classifies each wiki into a tier based on source volume, entity count, and timeframe — so a one-weekend reading project doesn't get the same indexing infrastructure as a multi-year research program.
| Tier | Sources | Entities | Duration | Example |
|---|---|---|---|---|
| Light | 5–20 | Few / none | Days–weeks | Reading a single book, trip planning |
| Medium | 20–100 | Dozens | Weeks–months | Research project, course notes, competitive analysis |
| Heavy | 100+ | Hundreds | Months–years | Ongoing team wiki, long-term research program, personal life wiki |
What InfraNodus Brings to the Wiki
Markdown alone gives you content. The InfraNodus MCP server gives you structure — the network view of how your concepts actually connect. The LLM Wiki skill calls InfraNodus at every point where structure matters.
1. Append-Only Ontology in [[wikilinks]] Format
After each Phase 9 ingest, the ontology-creator skill reads the updated wiki pages and adds new [[entity1]] [relation] [[entity2]] statements to the corresponding ontology file in infranodus/. Existing relationships are never overwritten — ontology files are curated artifacts that accumulate human-reviewed knowledge. You can paste any ontology file directly into infranodus.com to see the network view.
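A few lines of a hypothetical concepts-ontology.md, using the [[entity]] [relation] [[entity]] pattern described above (the entities and relations are invented examples, not output from a real wiki):

```markdown
[[Hobbes]] [defends] [[absolute sovereignty]]
[[Locke]] [grounds legitimacy in] [[consent]]
[[consent]] [underpins] [[social contract]]
[[social contract]] [justifies] [[political obligation]]
```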
2. Gap Analysis Drives Phase 10 Research Planning
This is the part that makes the wiki self-directing. After each ingest, Phase 10 calls:
- generate_topical_clusters — identifies the main topical clusters in the current wiki state
- generate_content_gaps — finds the under-connected pairs of clusters: this cluster and that cluster aren't yet bridged
- generate_research_questions / generate_research_ideas — converts each gap into a concrete question or research direction you can act on
- develop_conceptual_bridges — for the gaps worth bridging, suggests how to connect them
The output lands in todos/ as a prioritized list. The wiki tells you what to read next.
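A resulting todos/ entry might look something like this (hypothetical content — the gap, question, and priority shown here are invented to illustrate the shape of the output):

```markdown
## Gap: [[consent]] cluster ↔ [[civil disobedience]] cluster
- Research question: How do consent-based accounts of legitimacy
  handle justified law-breaking?
- Direction: read secondary literature bridging the two clusters
- Priority: high (two large clusters, no bridging relations yet)
```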
3. GraphRAG Retrieval over Your Own Materials
retrieve_from_knowledge_base queries the ontology + wiki together — you get answers grounded in your own sources with the network context (which clusters, which connections, which gaps) instead of a flat top-k chunk dump. generate_responses_from_graph goes further and answers questions using the structure of the graph itself as context.
4. Structural Diagnostics on Any Page or Cluster
optimize_text_structure reads any wiki page (or the whole wiki) and tells you whether it's biased (tunnel vision on one cluster), focused (well-balanced), diversified (spread across multiple clusters), or dispersed (scattered with no through-line) — then suggests the right development move. develop_latent_topics surfaces topics that are implicit in the text but not yet explicit pages.
5. Real-Time External Data for SEO & Search Wikis
For wikis that track a competitive or content-marketing topic, analyze_google_search_results, analyze_related_search_queries, and generate_seo_report pull live SERP data so the wiki's gap analysis includes what the world is searching, not just what you've already collected.
6. Persistent Memory via Knowledge Graphs
memory_add_relations persists key [[wikilinks]] relationships into a named InfraNodus graph that survives across sessions, so future conversations can call memory_get_relations to pick up where you left off — without re-reading the entire wiki.
7. Optional /actionize Integration
Pipe the Phase 10 todo list into the actionize skill to schedule deadlines and Telegram reminders for the research questions InfraNodus surfaced.
Why LLM Wiki + InfraNodus Beats Plain RAG
Most "chat with your documents" workflows re-derive insight from raw chunks on every query. They don't remember what they figured out yesterday, and they have no way to tell you what's missing — only what's there. The LLM Wiki + InfraNodus combination is different:
- Synthesis is persisted, not recomputed. Pages capture insight as durable markdown.
- The graph is persisted too. The infranodus/ ontology folder is the structural counterpart of the markdown wiki — one for reading, one for navigating.
- Cross-references compound. New sources extend existing pages and append new [[wikilinks]] relations rather than starting fresh.
- You can read your wiki without an LLM. Plain markdown, Obsidian-compatible, git-versioned. You can also visualize it without an LLM by pasting the ontology into infranodus.com.
- Gaps are visible, not hidden. InfraNodus tells you which concept clusters aren't yet bridged — the negative space of your reading.
- The wiki tells you what to read next. Phase 10's gap-driven research questions turn the wiki into an active research partner instead of a passive archive.
- The wiki replaces chat history. Insights from a conversation get written into pages, ontology relations, and named InfraNodus memory graphs — so they survive past the context window.
Install the Skill
Download the skill bundle from the InfraNodus skills releases page and install it like any other Claude Skill. See the main installation guide for step-by-step instructions per LLM client (Claude Web / Desktop, Claude Code, ChatGPT, OpenClaw, Cursor).
Claude Code
cd ~/.claude/skills
curl -L -o skill-llm-wiki.zip https://github.com/infranodus/skills/releases/latest/download/skill-llm-wiki.zip
unzip skill-llm-wiki.zip
Claude Web / Desktop
Go to Settings > Capabilities (or Menu > Customize in the newer UI), enable Code execution and file creation, scroll to the Skills section, and upload the skill-llm-wiki.zip file.
OpenClaw
install this skill: https://github.com/infranodus/skills/releases/latest/download/skill-llm-wiki.zip
Required for the full experience: this skill is designed to call the InfraNodus MCP server for ontology generation, gap analysis, GraphRAG retrieval, and structural diagnostics. Without it, the wiki still works as plain markdown — but you lose Phase 10 entirely (the part that tells you what to read next). Get an InfraNodus API key to lift the free-tier rate limits.
How to Invoke It
Once installed, the skill activates automatically on phrases like "set up an LLM wiki", "build me a second brain", "I have a folder of papers I want to organize", "help me plan my research", or "create a persistent knowledge graph from these documents."
You can also invoke it explicitly:
Use the llm-wiki skill to set up a research wiki for my reading on
political philosophy. I have ~30 PDFs in ~/Dropbox/PoliPhil and I want
to track contradictions between authors and produce an essay at the end.
The skill responds with the discovery questions, scopes the tier (Light / Medium / Heavy), proposes a directory layout (including the infranodus/ folder), writes the schema files with InfraNodus tool wiring baked in, and scaffolds the wiki on disk.
Then, on every new source you add:
Process the new sources in raw/papers/ and update the wiki.
The skill ingests each file, updates the relevant wiki pages, calls the ontology-creator skill to append new [[wikilinks]] relations, and refreshes the infranodus/ folder.
When you want to know what to read next:
Run Phase 10. What are the biggest gaps in the wiki right now and
what should I read next?
The skill calls generate_topical_clusters → generate_content_gaps → generate_research_questions on the current InfraNodus graph and writes a prioritized todo list to todos/. You can then visualize the same graph at infranodus.com to see the gaps as visual negative space between clusters.
When you want to query the wiki:
Use retrieve_from_knowledge_base to answer: how do the authors I've
read so far disagree about the role of consent in legitimacy?
GraphRAG retrieval over the ontology + wiki returns answers grounded in your own materials, with cluster context.