Get started

Install

Linux only, for now. macOS and Windows follow once the Linux flow stabilises.

One-liner

$ curl -fsSL https://intelnav.net/install.sh | sh

Detects your OS, arch, and GPU vendor; pulls the matching intelnav + libllama tarballs from GitHub Releases; drops the binaries into ~/.local/intelnav/bin; and finishes by running intelnav doctor. Re-run the same command to upgrade.
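The platform-detection step boils down to a couple of uname calls. A minimal sketch, assuming a `intelnav-<os>-<arch>.tar.gz` naming scheme (the variable and tarball names here are illustrative, not the script's real ones):

```shell
# Hypothetical sketch of the detection install.sh performs.
os=$(uname -s | tr '[:upper:]' '[:lower:]')   # e.g. "linux"
arch=$(uname -m)                              # e.g. "x86_64"
tarball="intelnav-${os}-${arch}.tar.gz"       # assumed naming scheme
echo "$tarball"
```

On a typical Linux box this prints something like `intelnav-linux-x86_64.tar.gz`.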

From source

If you'd rather build it yourself:

$ git clone https://github.com/IntelNav/intelnav
$ cd intelnav
$ bash scripts/provision.sh
$ cargo build --release -p intelnav-cli -p intelnav-node
$ bash scripts/install-libllama.sh
$ ./target/release/intelnav

provision.sh installs system dependencies plus the Rust toolchain. install-libllama.sh auto-detects your GPU and fetches a prebuilt tarball from the latest IntelNav/llama.cpp release.
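A GPU-vendor probe like the one install-libllama.sh performs can be sketched by checking for each vendor's userspace tooling; the real script's checks may differ, and the `backend` variable here is illustrative:

```shell
# Hedged sketch of a GPU-vendor probe; falls through to cpu.
if command -v nvidia-smi >/dev/null 2>&1; then
  backend=cuda
elif command -v rocminfo >/dev/null 2>&1; then
  backend=rocm
elif command -v vulkaninfo >/dev/null 2>&1; then
  backend=vulkan
else
  backend=cpu       # always-available fallback
fi
echo "libllama backend: $backend"
```

The cpu fallback means a source build always ends up with a working backend, even on headless machines.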

First launch

Just run intelnav. On first launch the TUI:

  1. Writes config.toml with auto-picked free ports.
  2. Generates your peer identity at ~/.local/share/intelnav/peer.key.
  3. Auto-discovers libllama in ~/.cache/intelnav/libllama/bin.
  4. Fetches the bootstrap seed list from GitHub releases.
  5. Probes your GPU and shows the contribution gate.

Pick a slice to host (recommended) or run as a DHT relay, then chat. See the onboarding doc for the full first-run walkthrough.
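To make step 1 concrete, the generated file might look roughly like this. The keys below are purely illustrative — the real auto-generated config.toml schema may differ:

```toml
# Illustrative shape only, not the actual schema.
[node]
peer_key = "~/.local/share/intelnav/peer.key"

[network]
listen_port = 41733   # auto-picked free port
```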

Supported backends

Platform        Backend   Status
Linux x86_64    cpu       shipping
Linux x86_64    vulkan    shipping
Linux x86_64    rocm      shipping (15 GPU archs native)
Linux x86_64    cuda      shipping (sm_86, sm_89)
macOS arm64     metal     tarball ships, runtime path WIP
Windows x64     vulkan    tarball ships, runtime path WIP