
Setup guides & FAQ

Everything you need to get NodeGhost running — from getting your API key to configuring Home Assistant voice AI.

Get your API key


After completing checkout, NodeGhost will automatically send your API key to the email address you used at signup. The key looks like this:

ng-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Keep this key private — it controls access to your plan's request quota. If you lose it, contact [email protected] and we'll reissue one.

Free plan: No credit card required. Sign up and your key is emailed instantly. Free tier includes 50 requests/month.

Using the API


NodeGhost is a drop-in replacement for the OpenAI API. There are three ways to use it depending on how you onboarded.

Path 1 — ng- key (Stripe or USDC)

If you signed up via Stripe or paid with USDC, you have an ng- API key. Register your model endpoint once, then use it like any OpenAI-compatible API:

curl https://nodeghost.ai/v1/chat/completions \
  -H "Authorization: Bearer ng-your-key-here" \
  -H "X-Endpoint-Key: sk-your-model-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-model-here",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
X-Endpoint-Key: Pass your model provider's API key (OpenAI, Groq, DeepSeek, etc.) in the X-Endpoint-Key header. Register your endpoint first at POST /v1/endpoint/register.

Path 2 — Native POKT app stake

If you staked a POKT application wallet directly on Shannon mainnet, use the /pokt/ endpoint. Pass your model provider API key as Authorization and specify your provider with X-Endpoint:

curl https://nodeghost.ai/pokt/v1/chat/completions \
  -H "Authorization: Bearer sk-your-model-api-key" \
  -H "X-Endpoint: https://api.openai.com" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-model-here",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
X-Endpoint accepts any OpenAI-compatible provider URL — https://api.openai.com, https://api.deepseek.com, https://api.groq.com/openai, or your own self-hosted model. Defaults to OpenAI if omitted.
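The same staked path works from Python. Here is a minimal sketch using only the standard library to build the request; the header names match the curl example above, and `pokt_chat_request`, the key, and the model name are illustrative placeholders:

```python
import json
import urllib.request

def pokt_chat_request(model_api_key: str, provider_url: str,
                      model: str, prompt: str) -> urllib.request.Request:
    """Build a request for NodeGhost's native-stake path.

    Your model provider's key goes in Authorization; the provider's
    base URL goes in X-Endpoint (NodeGhost defaults to OpenAI if omitted).
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        "https://nodeghost.ai/pokt/v1/chat/completions",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {model_api_key}",
            "X-Endpoint": provider_url,
            "Content-Type": "application/json",
        },
    )

req = pokt_chat_request("sk-your-model-api-key",
                        "https://api.deepseek.com", "deepseek-chat", "Hello!")
# Send with urllib.request.urlopen(req) once you have staked and delegated.
```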

Path 3 — Python (ng- key)

from openai import OpenAI

client = OpenAI(
    api_key="ng-your-key-here",
    base_url="https://nodeghost.ai/v1",
    default_headers={"X-Endpoint-Key": "sk-your-model-api-key"}
)

response = client.chat.completions.create(
    model="your-model-here",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)

Supported model providers

Provider                   X-Endpoint value             Example model
OpenAI                     https://api.openai.com       gpt-4o-mini
DeepSeek                   https://api.deepseek.com     deepseek-chat
Groq                       https://api.groq.com/openai  llama-3.1-70b-versatile
Anthropic                  https://api.anthropic.com    claude-3-5-haiku-20241022
Self-hosted (Ollama etc.)  https://your-server.com      llama3.2:3b

Rate limits


Each plan includes a monthly request quota. Requests reset on the 1st of each month.

Plan      Monthly requests  Price
Free      50                $0/mo
Basic     2,500             $1/mo
Pro       7,500             $3/mo
Premium   25,000            $10/mo
Business  75,000            $30/mo

1 REQUEST = 1 POKT RELAY = 400,000 COMPUTE UNITS

Home Assistant overview


NodeGhost works as the AI brain for Home Assistant — giving you a genuinely intelligent voice assistant that runs privately, without sending your conversations to Google, Amazon, or Apple.

The full private stack looks like this:

1. Local speech-to-text (Whisper)

Converts your voice to text entirely on your device. Nothing leaves your home at this stage.

2. NodeGhost AI (via POKT)

Your text is routed through the POKT decentralized network to your registered model endpoint. No logging of request content — ever.

3. Local text-to-speech (Piper)

The AI response is converted back to voice on your device. Your assistant speaks back to you.
Total cost: $1/month on the Basic plan. Smarter than Alexa, more private than any cloud assistant.

Install the integration


NodeGhost uses the Extended OpenAI Conversation integration available through HACS (Home Assistant Community Store).

Step 1 — Install HACS

If you don't have HACS installed yet, follow the official guide at hacs.xyz. It adds a community add-on store to your Home Assistant instance.

Step 2 — Install Extended OpenAI Conversation

1. Open HACS in Home Assistant

Go to HACS → Integrations → search for "Extended OpenAI Conversation" → Download.

2. Restart Home Assistant

After downloading, restart Home Assistant to load the new integration.

3. Add the integration

Go to Settings → Devices & Services → Add Integration → search for "Extended OpenAI Conversation".

4. Configure NodeGhost credentials

When prompted, enter the following:

API Key:  ng-your-key-here
Base URL: https://nodeghost.ai/v1
Model:    your-model-here

The model name must match your registered endpoint. Pass your model provider's API key in the X-Endpoint-Key header if your integration supports custom headers — otherwise register your endpoint via POST /v1/endpoint/register first.

5. Set as your conversation agent

Go to Settings → Voice Assistants → select your assistant → set Conversation Agent to "Extended OpenAI Conversation".

Tip: Once configured, you can talk to your assistant by saying "Hey Jarvis" (or whatever wake word you've set) and it will use NodeGhost for the AI response.

Add local voice (Whisper + Piper)


For fully private voice — no cloud at any step — install the local speech add-ons. These run entirely on your Home Assistant hardware.

Install Whisper (speech to text)

1. Go to Settings → Add-ons → Add-on Store

Search for "Whisper" — install the official "Whisper" add-on by Home Assistant.

2. Start the add-on and enable on boot

In the add-on settings, toggle "Start on boot" and "Watchdog", then hit Start.

3. Add the Wyoming integration

Go to Settings → Devices & Services → Add Integration → search "Wyoming Protocol" → configure it pointing to the Whisper add-on.

Install Piper (text to speech)

Repeat the same steps for the "Piper" add-on. Once both are installed, go to Settings → Voice Assistants and set:

Speech-to-text:  Whisper
Text-to-speech:  Piper
Conversation:    Extended OpenAI Conversation (NodeGhost)
Hardware note: Whisper runs best on a Raspberry Pi 4 or 5 with at least 4GB RAM. On older hardware, use the "tiny" model for faster response times.

Remote access


To use the Home Assistant app and your NodeGhost voice assistant when you're away from home, you need a way to reach your Pi remotely. The free option is a Cloudflare Tunnel.

Option A — Cloudflare Tunnel (free)

1. Get a free domain or use one you own

Cloudflare Tunnels require a domain managed by Cloudflare. You can transfer an existing domain or register one at cloudflare.com.

2. Install the Cloudflare Tunnel add-on in HA

In Settings → Add-ons → Add-on Store, install the community Cloudflare Tunnel add-on ("Cloudflared"); add its repository to the store first if it isn't listed. Configure it with your Cloudflare token from the Cloudflare Zero Trust dashboard.

3. Point the HA app at your tunnel URL

In the Home Assistant app settings, set your external URL to your Cloudflare tunnel address (e.g. https://home.yourdomain.com).

Option B — Nabu Casa ($6.50/month)

Nabu Casa provides an easy one-click remote access tunnel. It works alongside NodeGhost — Nabu Casa handles the network tunnel, NodeGhost handles the AI. Go to Settings → Home Assistant Cloud to subscribe.

Note: Either remote access option works with NodeGhost. Your AI requests always route through NodeGhost regardless of how you connect remotely.

Run your own AI model through NodeGhost


NodeGhost is a privacy-preserving inference gateway — not a managed model service. Register your own OpenAI-compatible endpoint and route all inference through NodeGhost's infrastructure, giving you complete control over which AI model processes your data while NodeGhost handles auth, rate limiting, and decentralized routing.

This means you can run an open source model like Llama, Mistral, or Qwen on your own server, point NodeGhost at it, and use the same https://nodeghost.ai/v1 endpoint you already know. Your model does the inference. NodeGhost handles everything else.

Available on all plans. Custom endpoint registration is included at every tier — from Free to Business. You bring the model, NodeGhost brings the infrastructure layer.

Why run your own model?

There are several reasons you might want to bring your own endpoint:

  • You've fine-tuned a model on proprietary data and need inference for it
  • You want complete privacy — no third party sees your prompts at any layer
  • You want to use a specific open source model not available through NodeGhost's default backend
  • You're building a product on top of your own model and need auth and rate limiting infrastructure
  • You want to protect your model endpoint from being exposed publicly

How it works

When you register a custom endpoint, NodeGhost stores it linked to your API key. Every inference request you make is authenticated and rate limited as normal — then forwarded to your endpoint via the POKT network. Your model processes the request and returns the response through NodeGhost back to your application.

Your endpoint is never exposed publicly. All traffic enters through nodeghost.ai/v1 and NodeGhost proxies it to your server privately.

Register your endpoint


Your endpoint must be publicly accessible over HTTPS and OpenAI-compatible. Any server running Ollama, LM Studio, vLLM, or a custom FastAPI wrapper will work.

Step 1 — Run an OpenAI-compatible model server

The most common option is Ollama. Install it on any VPS or server and expose it publicly:

# Install Ollama on your server
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model
ollama pull llama3.2

# Start with HTTPS via nginx reverse proxy
# Your endpoint will be: https://your-server.com/v1/chat/completions
HTTPS required. NodeGhost only accepts endpoints served over HTTPS. Use a reverse proxy like nginx with a Let's Encrypt certificate to secure your Ollama server.
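A minimal nginx server block for this setup might look like the following. This is a sketch, not a hardened config: the certificate paths assume a standard Let's Encrypt layout, and 11434 is Ollama's default port.

```nginx
server {
    listen 443 ssl;
    server_name your-server.com;

    # Let's Encrypt certificate (e.g. obtained via certbot)
    ssl_certificate     /etc/letsencrypt/live/your-server.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your-server.com/privkey.pem;

    # Forward OpenAI-compatible paths to the local Ollama server
    location /v1/ {
        proxy_pass http://127.0.0.1:11434/v1/;
        proxy_set_header Host $host;
    }
}
```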

Step 2 — Register your endpoint with NodeGhost

Call the registration endpoint with your ng- API key:

curl -X POST https://nodeghost.ai/v1/endpoint/register \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_NG_KEY" \
  -d '{
    "endpoint_url": "https://your-server.com",
    "endpoint_name": "My Llama Server"
  }'

NodeGhost will verify your endpoint is reachable and return a confirmation:

{
  "success": true,
  "endpoint_url": "https://your-server.com",
  "endpoint_name": "My Llama Server",
  "message": "Custom endpoint registered. Your inference calls will now route to this endpoint via the POKT network."
}

Step 3 — Use NodeGhost as normal

Nothing changes in your application. Keep using https://nodeghost.ai/v1 with your ng- key. NodeGhost silently routes to your registered model endpoint:

from openai import OpenAI

client = OpenAI(
    base_url="https://nodeghost.ai/v1",
    api_key="YOUR_NG_KEY"
)

# Routed through POKT to your registered endpoint
response = client.chat.completions.create(
    model="llama3.2",  # match your model name
    messages=[{"role": "user", "content": "Hello"}]
)

Check your routing status

curl https://nodeghost.ai/v1/endpoint/register \
  -H "Authorization: Bearer YOUR_NG_KEY"

Remove your custom endpoint

To remove your registered endpoint:

curl -X DELETE https://nodeghost.ai/v1/endpoint/register \
  -H "Authorization: Bearer YOUR_NG_KEY"

Full privacy stack — zero third parties


If complete privacy is your goal — where no third party ever sees the content of your AI requests — this is the architecture that achieves it.

The goal: Your prompts leave your device, get authenticated through NodeGhost, and land on your own server running your own model. Nobody else is in the chain.

The full stack

Here's every component and who controls it:

  • Your application (you control): Home Assistant, custom app, anything OpenAI-compatible
  • NodeGhost gateway, powered by POKT (NodeGhost operates): auth, rate limiting, and decentralized routing via POKT Shannon mainnet; never logs content
  • Your VPS running Ollama (you control): any provider, e.g. Hetzner, DigitalOcean, Vultr
  • Your open source model (you control): Llama, Mistral, Qwen, Phi; weights downloaded once, run locally

What this means for privacy

With this setup your prompts travel from your application to NodeGhost for authentication and routing through the POKT decentralized network, then directly to your own server. The request enters through a gateway with no single point of control — POKT's on-chain verification ensures the routing is transparent and auditable. The model that processes your request runs on hardware you control. No AI company, no cloud provider, no GPU farm ever sees what you're asking.

NodeGhost sees that a request happened — the timestamp and your API key — but never the content. That metadata is used only for rate limiting and is retained for 90 days.

Recommended models for self-hosting

These open source models run well on a modest VPS with 16GB+ RAM:

  • General purpose: Llama 3.2 8B (fast, capable, 8GB RAM minimum)
  • Coding & reasoning: Qwen 2.5 14B (strong reasoning, 16GB RAM recommended)
  • Lightweight: Phi-4 Mini (excellent quality, runs on 4GB RAM)
  • High performance: Mistral 7B (fast inference, 8GB RAM minimum)
VPS recommendation: A Hetzner CX32 (4 vCPU, 8GB RAM, ~€8/month) runs Llama 3.2 8B comfortably via Ollama. For larger models, a CX42 (8 vCPU, 16GB RAM) handles most 14B models well.

What is POKT?


POKT is the native token of Pocket Network — the decentralized infrastructure that NodeGhost is built on. Every AI request you make through NodeGhost is routed through the POKT Shannon network, verified on-chain, and settled between gateway operators and node suppliers.

This is what makes NodeGhost fundamentally different from other AI providers — the routing is on-chain and decentralized, so there's no single company that could log, monitor, or sell your requests even if they wanted to.

For most users, POKT is invisible — you just use your ng- API key and pay via Stripe. But if you're crypto-native and want to interact with the network directly, you can stake your own application wallet and pay per relay in POKT tokens.

Stake your app wallet


Advanced users can bypass the Stripe subscription entirely by staking a POKT application wallet directly on Shannon mainnet. This gives you pay-per-relay access at the rate set by each service — for example ai-inference is priced at $0.0004 per relay. Each service has its own relay price, so the cost depends on which service you stake against.

Advanced: This requires familiarity with blockchain wallets and the POKT CLI. If you're new to crypto, the Stripe plans are much easier to get started with.

What you'll need

A funded POKT wallet with enough tokens to stake an application. Current minimum is 1,000 POKT. You'll also need pocketd installed on your machine.

Stake your application

pocketd tx application stake-application \
  --config=app-stake-config.yaml \
  --keyring-backend=test \
  --from=your-wallet-name \
  --network=main \
  --fees 2000upokt \
  --yes

Your app-stake-config.yaml should specify the ai-inference service:

stake_amount: "1001000000upokt"
service_ids:
  - ai-inference

Once staked, delegate your application to NodeGhost's gateway address:

pocketd tx application delegate-to-gateway pokt1ecrykpsr87juxcpdn2yxq8mnfrvhrs85dk5y3t \
  --from=your-wallet-name \
  --keyring-backend=test \
  --network=main \
  --fees 2000upokt \
  --yes
Required: The delegate step is mandatory — without it you will get a "gateway does not have delegation" error when making requests. Wait ~10 minutes after staking for the session to roll over before testing.

Gateway address for reference:

Gateway address: pokt1ecrykpsr87juxcpdn2yxq8mnfrvhrs85dk5y3t

Using the gateway after staking

Once staked and delegated, hit the native POKT endpoint with your model provider's API key as Authorization and your provider URL as X-Endpoint:

curl https://nodeghost.ai/pokt/v1/chat/completions \
  -H "Authorization: Bearer sk-your-model-api-key" \
  -H "X-Endpoint: https://api.openai.com" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

Supported providers via X-Endpoint: OpenAI, DeepSeek, Groq, Anthropic, or any OpenAI-compatible self-hosted model. If X-Endpoint is omitted, defaults to https://api.openai.com.

Need help? Join the POKT Discord at discord.gg/pokt or email us at [email protected].

Autonomous AI payments


NodeGhost supports fully autonomous agent onboarding via USDC on the Base network. An AI agent can pay for its own inference credits without any human involvement — no email, no credit card, no signup form.

The payment flow is designed for agents that hold their own crypto wallets and need to autonomously manage their inference budget. Every wallet is screened by AnChain.AI on each payment. NodeGhost does not store wallet addresses or payment history — only a transaction hash is retained to prevent duplicate crediting.

Privacy note: NodeGhost stores only the transaction hash, relay count, and your ng- key. Your wallet address is screened and immediately discarded. No payment history is retained.

Payment parameters

Parameter          Value
Network            Base (Ethereum L2)
Token              USDC
USDC contract      0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913
Receiving address  0xCE8C9AaBA3Eff65516AeA85c5a8375Ff8995f332
Minimum payment    $10 USDC
Maximum payment    $500 USDC per transaction
Service fee        3% (covers compliance screening + operations)
Price per relay    $0.0004 (matches POKT 400,000 CU)

$10 USDC → $9.70 after 3% fee → 24,250 relays  |  $500 USDC → $485 after fee → 1,212,500 relays
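The arithmetic above can be checked with exact decimal math. A small sketch (the constants mirror the table above; note the live API examples later in this section show 24,249 for a $10 payment, likely due to rounding down during crediting):

```python
from decimal import Decimal

SERVICE_FEE = Decimal("0.03")        # 3% service fee
PRICE_PER_RELAY = Decimal("0.0004")  # $0.0004 per relay (400,000 CU)

def relays_for(usdc: int) -> int:
    """Relays credited for a USDC payment after the 3% service fee."""
    after_fee = Decimal(usdc) * (1 - SERVICE_FEE)
    return int(after_fee / PRICE_PER_RELAY)

print(relays_for(10))   # 24250
print(relays_for(500))  # 1212500
```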

Step 1 — Check eligibility


Before sending payment, call this endpoint to screen your wallet and get exact payment instructions. Always call this first — it confirms your wallet is eligible and returns the current receiving address.

REQUEST
GET /v1/agent/payment-info?wallet=0xYOUR_WALLET&chain=eth
RESPONSE
{
  "eligible": true,
  "payment": {
    "send_to": "0xCE8C9AaBA3Eff65516AeA85c5a8375Ff8995f332",
    "network": "Base",
    "token": "USDC",
    "min_payment_usdc": 10,
    "max_payment_usdc": 500,
    "service_fee": "3%",
    "price_per_relay": 0.0004,
    "example_min": "$10 → 24,249 relays after 3% fee",
    "example_max": "$500 → 1,212,500 relays after 3% fee"
  },
  "next_steps": [...]
}

If your wallet is flagged by AnChain.AI risk screening, the response will return eligible: false. Do not send payment if ineligible — it will not be credited.

Step 2 — Send payment


Send USDC to the receiving address on the Base network. NodeGhost automatically polls the Base network every 2 minutes for incoming payments — you do not need to notify NodeGhost or call any endpoint. Once detected, your wallet is re-screened by AnChain.AI, the 3% fee is deducted, and relays are credited automatically.

How NodeGhost matches your payment to an account

NodeGhost identifies which account to credit using this priority order:

  • If your ng- key is included in the transaction memo (recommended) — credits that key directly
  • If your wallet address has previously paid — credits the existing account linked to that wallet
  • If neither — creates a new account for your wallet address and credits it
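The priority order above can be sketched in a few lines. This is illustrative only; `match_account` and its arguments are hypothetical names, not part of the API:

```python
def match_account(memo_key, wallet, wallet_accounts):
    """Illustrative sketch of the crediting priority described above."""
    if memo_key:                      # 1. ng- key found in the tx memo
        return ("memo_key", memo_key)
    if wallet in wallet_accounts:     # 2. wallet has paid before
        return ("existing", wallet_accounts[wallet])
    return ("new_account", wallet)    # 3. otherwise a new account is created
```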

Optional but recommended: Include your ng- API key in the transaction input data field (hex encoded). This ensures instant matching with no lookup step:

ENCODING YOUR KEY IN THE MEMO (JavaScript)
// Convert your ng- key to hex for the transaction data field
const apiKey = 'ng-yourkey';
const hexData = '0x' + Buffer.from(apiKey).toString('hex');
// Use hexData as the transaction input/data field when sending USDC
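The same encoding in Python, for agents not running Node (the key shown is a placeholder):

```python
# Convert your ng- key to hex for the transaction data field
api_key = "ng-yourkey"
hex_data = "0x" + api_key.encode("utf-8").hex()
print(hex_data)  # 0x6e672d796f75726b6579
```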

Payment limits

Minimum $10 USDC: Payments below $10 are detected but not credited — they are logged as below_minimum and the funds are not returned. Always send at least $10.
Maximum $500 USDC per transaction: Payments above $500 are capped — relays are credited for $500 worth and the remainder is not refunded. Split large payments into multiple transactions if needed.

Step 3 — Confirm credits


NodeGhost credits relays automatically — you don't need to call any endpoint to trigger it. However, you can poll this endpoint with your transaction hash to confirm that relays have been credited and to check your current balance:

REQUEST
POST /v1/agent/payment-status
Content-Type: application/json

{
  "tx_hash": "0xYOUR_TRANSACTION_HASH",
  "api_key": "ng-yourkey"
}
RESPONSE — credited
{
  "found": true,
  "tx_hash": "0x...",
  "status": "credited",
  "usdc_amount": 10,
  "fee_amount": 0.3,
  "usdc_after_fee": 9.7,
  "relays_credited": 24249,
  "credited_at": "2026-03-26T04:22:04Z",
  "balance": {
    "relays_used": 0,
    "relays_remaining": 24249,
    "relays_total": 24249
  }
}
RESPONSE — below minimum
{
  "found": true,
  "tx_hash": "0x...",
  "status": "below_minimum",
  "usdc_amount": 5,
  "relays_credited": 0
}

If the transaction is not yet found, the poller runs every 2 minutes — wait and try again. Once status is credited, use your ng- key at https://nodeghost.ai/v1 immediately. 1 relay = 1 request = 400,000 CU.
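In an agent loop, the responses above reduce to a simple poll-and-branch. A sketch (`fetch_status` is a stand-in for however your agent POSTs to /v1/agent/payment-status):

```python
import time

def interpret_status(resp: dict) -> str:
    """Map a /v1/agent/payment-status response to an agent action."""
    if not resp.get("found"):
        return "wait"                 # poller runs every ~2 min; try again
    if resp.get("status") == "credited":
        return "ready"                # ng- key is usable immediately
    if resp.get("status") == "below_minimum":
        return "lost"                 # under $10: detected but never credited
    return "wait"

def wait_for_credit(fetch_status, interval: int = 120, attempts: int = 10) -> str:
    """Poll until the payment is credited or clearly failed."""
    for _ in range(attempts):
        action = interpret_status(fetch_status())
        if action != "wait":
            return action
        time.sleep(interval)
    return "timeout"
```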

Full agent payment flow

For autonomous agents, the complete flow from wallet to inference looks like this:

// 1. Check eligibility and get receiving address
GET /v1/agent/payment-info?wallet=0xYOUR_WALLET&chain=eth

// 2. Send USDC on Base to the returned address
// Include ng- key in memo if you have one

// 3. Wait ~2 minutes for auto-detection, then confirm
POST /v1/agent/payment-status { tx_hash, api_key }

// 4. Use your ng- key for inference
POST /v1/chat/completions
Authorization: Bearer ng-yourkey

Hosted models


NodeGhost offers access to hosted open source models routed through the POKT decentralized network. No external API key required — just your ng- key. These models are served by independent node operators on Shannon mainnet, earning POKT relay rewards for every request.

Privacy note: Hosted model requests are routed through POKT the same way as all NodeGhost traffic — decentralized, no logs, no surveillance. The model name is anonymized at the supplier level.
Model                  Endpoint              Model name      Status
Llama 3.2 1B Instruct  /text-generation/v1/  pocket_network  Live

Text generation


Access Llama 3.2 1B Instruct via the POKT decentralized network. No model API key needed — requests are handled by independent POKT node operators. Use pocket_network as the model name.

Example request (curl)

curl https://nodeghost.ai/text-generation/v1/chat/completions \
  -H "Authorization: Bearer ng-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "pocket_network",
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 200
  }'

Example request (Python)

from openai import OpenAI

client = OpenAI(
    api_key="ng-your-key-here",
    base_url="https://nodeghost.ai/text-generation/v1"
)

response = client.chat.completions.create(
    model="pocket_network",
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=200
)

print(response.choices[0].message.content)
Note: Hosted models count against your relay balance at the same rate as standard inference. Max 2,047 tokens per request. Logprobs are not supported.

Tools as POKT services


NodeGhost provides AI tools as first-class POKT services. Each tool call is a single relay — flat cost, no tokens consumed, no context window, completely stateless. Tools return structured data only, never raw web content, making them safe and prompt-injection resistant by design.

All tools require a valid ng- API key. Tool calls count against your relay balance — web search costs 1 relay at 50,000 CU, which is cheaper than inference (400,000 CU). This means your relay balance goes further when agents use search than when they run inference.

Tool            Endpoint               CU per relay  Status
Web search      POST /v1/tools/search  50,000        Live
Tool discovery  GET /v1/tools          Free          Live
Privacy note: All tool calls are stateless. No query is stored. No session is maintained. Each request is completely independent — the tool server has no memory of previous calls.

Tool discovery


Query this endpoint to discover all available NodeGhost tools, their descriptions, parameters, and endpoints. Designed for autonomous agents that need to discover available capabilities without hardcoded configuration.

REQUEST
GET /v1/tools
RESPONSE
{
  "provider": "NodeGhost",
  "description": "Privacy-preserving AI tools routed through POKT",
  "cost": "1 POKT relay per tool call = 400,000 CU",
  "tools": [
    {
      "name": "web-search",
      "description": "Search the web privately via Brave Search...",
      "endpoint": "https://nodeghost.ai/v1/tools/search",
      "method": "POST",
      "parameters": { ... },
      "privacy": "Queries are never logged or stored."
    }
  ]
}

No authentication required for tool discovery. Authentication is required to call individual tools.
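An agent can turn the discovery response into a routing table with a few lines. A sketch (the sample dict below is trimmed from the example response above; `index_tools` is an illustrative helper, not part of the API):

```python
def index_tools(discovery: dict) -> dict:
    """Map each tool name to its (method, endpoint) pair."""
    return {t["name"]: (t["method"], t["endpoint"])
            for t in discovery.get("tools", [])}

sample = {
    "provider": "NodeGhost",
    "tools": [{
        "name": "web-search",
        "method": "POST",
        "endpoint": "https://nodeghost.ai/v1/tools/search",
    }],
}
print(index_tools(sample)["web-search"])
# ('POST', 'https://nodeghost.ai/v1/tools/search')
```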

Frequently asked questions


Can I use NodeGhost without Stripe or USDC?
Yes — crypto-native users can stake a POKT application wallet directly on Shannon mainnet and use the gateway without any subscription or payment. Stake against the ai-inference service, delegate to the NodeGhost gateway address, and hit https://nodeghost.ai/pokt/v1/chat/completions with your own model provider API key. See the "Stake your app wallet" section above for full instructions.
What is the X-Endpoint header?
When using the native POKT stake path (/pokt/v1/chat/completions), the X-Endpoint header tells NodeGhost which model provider to forward your request to — for example https://api.openai.com, https://api.deepseek.com, or https://api.groq.com/openai. If omitted, defaults to OpenAI. NodeGhost never holds your model API key — it's passed directly through to your provider.
Do you log my conversations?
No. NodeGhost does not log, store, or inspect the content of your AI requests. This is an architectural guarantee — requests are routed through a decentralized network where no single entity has visibility into what you're asking. We log only usage metadata (request count, timestamp, API key) for billing and rate limiting purposes, retained for 90 days.
What models does NodeGhost use?
NodeGhost is a bring-your-own-model gateway. Register any OpenAI-compatible endpoint — OpenAI, Groq, Together AI, Anthropic, or your own self-hosted model like Llama or Mistral — and NodeGhost routes your inference through the POKT decentralized network. You choose the model, NodeGhost handles the infrastructure.
Is NodeGhost compatible with Home Assistant?
Yes — NodeGhost works as a drop-in replacement for OpenAI inside the Extended OpenAI Conversation integration. Set the Base URL to https://nodeghost.ai/v1 with your ng- API key, and set the model name to match your registered endpoint. Register your model endpoint first at POST /v1/endpoint/register. See the Home Assistant setup guide above for step-by-step instructions.
How is this different from just using OpenAI directly?
Three main differences: (1) Privacy — requests route through a decentralized POKT network, not a centralized corporate server. (2) Price — NodeGhost plans start at $1/month vs OpenAI's $20/month. (3) Sovereignty — no single company controls the infrastructure your requests flow through.
What happens when I hit my request limit?
Your API key will return a 429 rate limit error until your quota resets on the 1st of the next month. You can upgrade your plan at any time to get more requests immediately. We don't charge overage fees — you just hit the limit and stop.
How is NodeGhost pricing determined?
NodeGhost pricing is designed to stay competitive with native POKT stake cost — the price anyone can access by staking directly on the network. We don't offer price locks because our infrastructure costs are tied to the POKT protocol, which can change. What we do promise is that Stripe pricing will always be close to native stake cost plus a small convenience premium for not needing crypto or a CLI. If you'd rather pay at the protocol level directly, see the native POKT stake section above.
Can I use NodeGhost for commercial projects?
Yes — the Business plan at $30/month is designed for commercial use and includes 75,000 requests/month. For higher volume or custom arrangements, contact [email protected].
What is POKT Network and why does it matter?
Pocket Network (POKT) is a decentralized infrastructure protocol that coordinates thousands of independent node operators. NodeGhost is built on top of POKT's Shannon mainnet — meaning your requests are routed through a network of independent suppliers rather than servers owned by a single company. This is what makes the privacy guarantee architectural rather than just a policy promise.
Do you offer refunds?
Yes — we offer a 7-day refund window on all paid plans. If NodeGhost doesn't work for your use case within the first 7 days, contact [email protected] for a full refund.
I lost my API key — what do I do?
Email [email protected] from the address you signed up with and we'll reissue your key. For security, the old key will be revoked when we issue the new one.