Install MCP Server to Terminal / Local CLI


If you'd like to run the InfraNodus MCP server directly from your CLI (and thus make it available to any AI agent, such as OpenClaw, that has access to your CLI), you can run it via the MCPorter tool. MCPorter can run any MCP server directly from your Terminal CLI or from TypeScript projects. This is useful if you have AI agents installed on your server or local machine and want to give them access to the various tools available via MCP servers. It can also be a quick way for your web app to connect to an external service without the overhead of learning its API.

In this guide, we will show how to install MCPorter and how to run the InfraNodus MCP server tools with it.

Installing MCPorter

If you are already using LLM clients like Cursor or Claude on your machine, you might already have mcporter installed.

To check this, open your Terminal and run the command that lists the available MCP servers:

mcporter list

If there is no error and you see a list of MCP servers, you or your LLM tools can easily call them from the command line or from your agentic execution workflows.

If there is an error and you'd like to install mcporter, you can run:

npm install -g mcporter

This will install mcporter globally on your machine and make it available from the command line. Then run the mcporter list command above to list the available MCP servers.

If you want to install mcporter locally in a specific project where you will be using it, you can run:

npm install mcporter

You can also run MCPorter without installing it on your machine, via npx. In this case, npx calls mcporter from its latest package, so there is no need to install anything. Run the following command to verify it's available (it lists the available MCP servers):

npx mcporter list

In most cases, what you're going to see is:

No MCP servers configured.

If you see an MCP server that you need already configured, its tools can be called and you can use them in your LLM workflows.

Configuring MCPorter and Installing MCP Servers

The next step is to check which configuration files your MCPorter installation is using. Whether you run it from a version installed on your machine or via npx, MCPorter will have access to the servers listed in the config files it can read.

Check which configuration files MCPorter uses:

mcporter config list

Usually, there are 3 types of config files:

  • Global MCPorter config file located at ~/.mcporter/mcporter.json
  • Local MCPorter config file that will be used if you run MCPorter in a specific project: ~/your-project-folder/config/mcporter.json
  • Imported config files that provide access to MCP servers available to your other tools, for instance, ~/.cursor/mcp.json for Cursor or the ~/.claude folder for Claude

Your choice of which configuration file to use depends on your particular use case:

  1. If you are using the MCP server in a Node.js / TypeScript project, install the server configuration locally in that project, via a configuration file where you also provide the API credentials.
  2. If you want the MCP server to be accessible globally to all applications that use mcporter (for instance, OpenClaw), you can install it globally using automatic OAuth authentication.
  3. If you want the MCP server to be accessible locally within the scope of a particular project (for instance, for editing a website or working on code), install it locally, either via automatic OAuth configuration or using the manual configuration file.

IMPORTANT SECURITY NOTE

Assess the security risks first: any LLM application with CLI access will be able to use any of the MCP servers listed in your global config. We therefore recommend against installing MCP servers that provide access to your personal account or sensitive information globally; instead, make them available locally, only in the projects where you need them.

a) Installing a Remote InfraNodus MCP Server with Automatic OAuth Authentication

You can install the remote InfraNodus MCP server. This means that you will authenticate in the browser (via OAuth), and your credentials will be stored in a session rather than in a local configuration file.

An alternative is to install the InfraNodus MCP server via a configuration file where you provide the API key directly. This may be preferable if you use several InfraNodus accounts (e.g. one for text processing, one for GraphRAG workflows, one for memory).


To install your InfraNodus MCP server globally, run:

mcporter config --config ~/.mcporter/mcporter.json add infranodus --url https://mcp.infranodus.com --auth oauth

To install your InfraNodus MCP server locally, run:

mcporter config add infranodus \
  --url https://mcp.infranodus.com/ \
  --transport http \
  --auth oauth \
  --header "accept=application/json, text/event-stream" \
  --scope home

In both cases, you will then need to authenticate the InfraNodus MCP server using your InfraNodus API key available at https://infranodus.com/api-access

After you get the API key, run this command to authenticate:

mcporter auth infranodus --reset

You will then see a browser popup window where you can enter your InfraNodus API key. After it's added, you will see an "Authorization successful" message; you can then close the browser window and return to your CLI.

Your server is authorized.

Verify your installation with this command:

mcporter config list

You should see:

infranodus
  Source: local (/Users/dmt/.mcporter/mcporter.json)
  Transport: http (https://mcp.infranodus.com/)
  Auth: oauth

Project config: /Users/dmt/Software/mcp-server-infranodus/config/mcporter.json (missing)
System config: /Users/dmt/.mcporter/mcporter.json

This confirms that mcporter is using the global system config and has access to the infranodus server via mcp.infranodus.com using OAuth for authentication.

You can also verify access to the MCP server tools with:

mcporter list

This lists the specific servers available:

mcporter 0.7.3 — Listing 1 server(s) (per-server timeout: 30s)
- infranodus (27 tools, 0.8s)
✔ Listed 1 server (1 healthy).

If you see this message, the installation was successful.

b) Installing the InfraNodus MCP Server Locally using the API Key in a Configuration File

This setup is convenient if you want to make the InfraNodus MCP server available to a particular project where you provide the API key manually. It is also useful if you want to use several InfraNodus accounts: you can give each server a different name, e.g. infranodus-expert-in-a, infranodus-kg-memory, etc., and call them according to the task at hand.


Manual global installation of the InfraNodus MCP server

You can also add the InfraNodus MCP server globally by hand: open the ~/.mcporter/mcporter.json file and add the configuration in the same format as is used for Cursor or Claude:

{
	"mcpServers": {
		"infranodus": {
			"command": "npx",
			"args": ["-y", "infranodus-mcp-server"],
			"env": {
				"INFRANODUS_API_KEY": "YOUR_INFRANODUS_API_KEY_HERE"
			}
		}
	}
}
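
If you use several InfraNodus accounts, the same format supports multiple named server entries. For example (the server names follow the multi-account naming suggested earlier; the key placeholders are illustrative):

```json
{
	"mcpServers": {
		"infranodus-expert-in-a": {
			"command": "npx",
			"args": ["-y", "infranodus-mcp-server"],
			"env": {
				"INFRANODUS_API_KEY": "API_KEY_FOR_ACCOUNT_A"
			}
		},
		"infranodus-kg-memory": {
			"command": "npx",
			"args": ["-y", "infranodus-mcp-server"],
			"env": {
				"INFRANODUS_API_KEY": "API_KEY_FOR_MEMORY_ACCOUNT"
			}
		}
	}
}
```

Each entry then appears as a separate server in mcporter list, so your agent can target the right account by name.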

Using Cursor / Claude InfraNodus MCP Server Setup

If you already have the MCP servers you need in Cursor's configuration file, you can import Cursor's config into MCPorter:

mcporter config import cursor --source

Then run to verify:

mcporter config list
mcporter list

What happens here is that mcporter will automatically scan the configuration file you specify (Cursor's, in this case) and use that installation, and the API key provided there, to interact with your MCP server.

If your server is not listed, you might have to copy that config from Cursor into MCPorter's config:

mcporter config import cursor --copy

To verify your installation, run:

mcporter list

You will see something like:

mcporter 0.7.3 — Listing 1 server(s) (per-server timeout: 30s)
- infranodus (27 tools, 0.4s) [source: ~/.cursor/mcp.json]
✔ Listed 1 server (1 healthy).

Running MCP Tools with MCPorter in Terminal / CLI

The most interesting part is the ability to run MCP tools with MCPorter in your Terminal or CLI. As LLM clients have access to the CLI, this means your AI agents can run any of those tools as well.

It may be important, however, to provide a skill file to your LLM client that describes how it should use the server. Some clients, like Cursor, automatically have access to all the tool definitions and descriptions of any MCP server added to them. Other clients, like OpenClaw, do not need the MCP server added at all: they rely on mcporter to provide a schema for each particular server and tool.

To see the schema for the MCP server you're using:

mcporter list infranodus --schema

This will list all the tools available in the MCP server as TypeScript functions with schema definitions. This is sufficient for any LLM tool to understand how to use the tools, which parameters to provide, and what objectives each tool can help accomplish.

You can call the tools automatically via AI agents like OpenClaw, or manually, using the command line or your TypeScript / JavaScript code directly. Below we explain both approaches.
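
If you are calling the tools from your own code, one option is simply to shell out to the mcporter CLI and parse the JSON it prints. The sketch below assumes mcporter is installed and the server is configured; callMcporterTool is a hypothetical helper, not an official mcporter API:

```typescript
import { execFileSync } from "node:child_process";

// Hypothetical helper: run `mcporter call <server>.<tool> key=value ...`
// and parse the JSON the CLI prints to stdout.
function callMcporterTool(
  server: string,
  tool: string,
  params: Record<string, string>
): unknown {
  const args = Object.entries(params).map(([key, value]) => `${key}=${value}`);
  const stdout = execFileSync(
    "mcporter",
    ["call", `${server}.${tool}`, ...args],
    { encoding: "utf8" }
  );
  return JSON.parse(stdout);
}

// Parsing works the same on a saved response, e.g. the tool output shown below:
const sample =
  '{"contentGaps": ["Gap 1: 1. Divine Nourishment (fruit eat god) -> 2. Sacred Sanctuary (tree midst garden)"]}';
const gaps = (JSON.parse(sample) as { contentGaps: string[] }).contentGaps;
console.log(gaps[0]);
```

For example, callMcporterTool("infranodus", "generate_content_gaps", { text: "..." }) would mirror the manual CLI calls described in the following sections.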

Automatic Tool Calling via OpenClaw

For instance, the generate_content_gaps tool schema looks like this:

/**
 * Generate content gaps from text, URL, or an existing graph
 * using knowledge graph analysis.
 *
 * @param text? Text to retrieve content gaps from.
 * @param url? URL to fetch content from or YouTube video URL.
 * @param graphName? Name of an existing InfraNodus graph.
 */
function generate_content_gaps(
  text?: string,
  url?: string,
  graphName?: string
);

For instance, we can ask OpenClaw, a local autonomous AI agent, to run content gap analysis on this text using InfraNodus, with the following prompt:

find content gaps using infranodus in this text: "God said, 'You shall not eat of the fruit of the tree which is in the midst of the garden, neither shall you touch it, lest you die.'" But the serpent said to the woman, "You will not die. For God knows that when you eat of it your eyes will be opened, and you will be like God, knowing good and evil."

OpenClaw will first invoke the MCP server via MCPorter and read its schema. It will then understand that it should use the generate_content_gaps tool and make the following request:

mcporter call infranodus.generate_content_gaps text="God said, 'You shall not eat of the fruit of the tree which is in the midst of the garden, neither shall…"

This request is calling the infranodus.generate_content_gaps tool via the command line available to OpenClaw.

The tool provides this result:

{
  "contentGaps": [
    "Gap 1: 2. Edenic Growth (tree midst fruit garden touch) -> 3. Moral Awareness (knowing good evil)",
    "Gap 2: 1. Divine Consumption (eat god eye open) -> 2. Edenic Growth (tree midst fruit garden touch)",
    "Gap 3: 1. Divine Consumption (eat god eye open) -> 3. Moral Awareness (knowing good evil)"
  ]
}

It will then ingest this information and augment its response with this data:

Here are the content gaps identified by InfraNodus via knowledge graph analysis:

Gap 1: Edenic Growth ↔ Moral Awareness
(tree, midst, fruit, garden, touch) → (knowing, good, evil)
The text doesn't connect the physical setting (the tree, the garden, touching
the fruit) with the moral outcome (knowing good and evil). What's the mechanism?
How does eating a physical fruit produce moral knowledge?

Manual Tool Calling

If you would like to call the tool manually, simply run:

mcporter call infranodus.generate_content_gaps text="God said, 'You shall not eat of the fruit of the tree which is in the midst of the garden, neither shall…"

mcporter will connect to the server, invoke the tool, pass the text parameter to it, and return the response:

{
  "contentGaps": [
    "Gap 1: 1. Divine Nourishment (fruit eat god) -> 2. Sacred Sanctuary (tree midst garden)"
  ]
}

You can also query the content gaps directly from your existing InfraNodus graphs. For instance, if you want to get the gaps in your graph gato-politics-memory:

mcporter call infranodus.generate_content_gaps graphName="gato-politics-memory"

The response:

{
  "contentGaps": [
    "Gap 1: 3. Meaning Justification ([[relational_meaning]] [[semantic_meaning]] ...) -> 6. Governance Accountability ([[transnational]] [[accountability]] ...)",
    "Gap 2: 2. Protocol Coordination ([[protocols]] [[coordination]] ...) -> 3. Meaning Justification ([[relational_meaning]] ...)",
    "Gap 3: 1. Harm Metrics ([[metrics]] [[harm]] ...) -> 3. Meaning Justification ([[relational_meaning]] ...)"
  ]
}

You can then reuse it in your LLM workflow.
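
For instance, to reuse the gaps programmatically, you could split each "Gap N: <cluster A> -> <cluster B>" string into its source and target clusters. parseContentGaps below is a hypothetical helper based on the response format shown above:

```typescript
type GapResponse = { contentGaps: string[] };

// Split each gap string into a [source cluster, target cluster] pair.
function parseContentGaps(response: GapResponse): [string, string][] {
  return response.contentGaps.map((gap) => {
    const body = gap.slice(gap.indexOf(": ") + 2); // drop the "Gap N" prefix
    const [source, target] = body.split(" -> ");
    return [source.trim(), target.trim()];
  });
}

// Sample response taken from the generate_content_gaps output shown earlier:
const response: GapResponse = {
  contentGaps: [
    "Gap 1: 2. Edenic Growth (tree midst fruit garden touch) -> 3. Moral Awareness (knowing good evil)",
  ],
};
const pairs = parseContentGaps(response);
console.log(pairs[0]);
```

Each pair can then be fed into a follow-up prompt, e.g. asking your LLM to write content that bridges the two clusters.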

To read more about the tool definitions available for the InfraNodus MCP server, please go to the MCP Tools page.