Get started
Install
Linux only for now; macOS and Windows will follow once the Linux flow stabilises.
One-liner
```shell
$ curl -fsSL https://intelnav.net/install.sh | sh
```

The script detects your OS, arch, and GPU vendor; pulls the matching intelnav + libllama tarballs from GitHub Releases; drops the binaries into `~/.local/intelnav/bin`; and runs `intelnav doctor` at the end. Re-run it to upgrade.
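The detection step can be sketched roughly as below. This is a hedged illustration only: the vendor probe commands and the tarball naming scheme are assumptions, not install.sh's documented internals.

```shell
# Sketch of an OS/arch/GPU detection step like the one install.sh
# performs. The vendor probes and asset-name scheme are assumptions.

detect_gpu() {
  # Probe for vendor tooling; fall back to plain CPU.
  if command -v nvidia-smi >/dev/null 2>&1; then echo cuda
  elif command -v rocminfo >/dev/null 2>&1; then echo rocm
  elif command -v vulkaninfo >/dev/null 2>&1; then echo vulkan
  else echo cpu
  fi
}

tarball_name() {
  # $1=os $2=arch $3=backend -> hypothetical release asset name
  printf 'intelnav-%s-%s-%s.tar.gz\n' "$1" "$2" "$3"
}

# e.g. prints something like intelnav-linux-x86_64-cpu.tar.gz
tarball_name "$(uname -s | tr '[:upper:]' '[:lower:]')" "$(uname -m)" "$(detect_gpu)"
```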
From source
If you'd rather build it yourself:
```shell
$ git clone https://github.com/IntelNav/intelnav
$ cd intelnav
$ bash scripts/provision.sh
$ cargo build --release -p intelnav-cli -p intelnav-node
$ bash scripts/install-libllama.sh
$ ./target/release/intelnav
```
`provision.sh` installs system deps plus Rust; `install-libllama.sh` auto-detects your GPU and fetches a prebuilt tarball from the latest IntelNav/llama.cpp release.
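If you want the source-built binary on your PATH like the one-liner's layout, a minimal sketch (the `install_bin` helper is illustrative, not part of the repo; the `~/.local/intelnav/bin` destination mirrors the one-liner install):

```shell
# Illustrative helper: copy the release binary into a prefix and mark
# it executable. Destination layout mirrors the one-liner install.

install_bin() {
  # $1 = built binary, $2 = install prefix
  install -Dm755 "$1" "$2/bin/intelnav"
}

# Usage after `cargo build --release`:
#   install_bin target/release/intelnav "$HOME/.local/intelnav"
#   export PATH="$HOME/.local/intelnav/bin:$PATH"
```

Note that `install -D` (create parent directories) is a GNU coreutils extension, which is fine on the Linux-only targets this page covers.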
First launch
Just run `intelnav`. The TUI:

- Writes `config.toml` with auto-picked free ports.
- Generates your peer identity at `~/.local/share/intelnav/peer.key`.
- Auto-discovers libllama in `~/.cache/intelnav/libllama/bin`.
- Fetches the bootstrap seed list from GitHub releases.
- Probes your GPU and shows the contribution gate.
Pick a slice to host (recommended) or run as a DHT relay, then chat. See the onboarding doc for the full first-run walkthrough.
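If something looks off after the first run, you can check for the files it should have created. A sketch using the two paths listed above (`first_run_check` is illustrative; the `config.toml` location is not specified on this page, so it is omitted):

```shell
# Check for first-run artifacts at the paths this page lists.
first_run_check() {
  for p in "$HOME/.local/share/intelnav/peer.key" \
           "$HOME/.cache/intelnav/libllama/bin"; do
    if [ -e "$p" ]; then echo "ok: $p"; else echo "missing: $p"; fi
  done
}

first_run_check
```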
Supported backends
| Platform | Backend | Status |
|---|---|---|
| Linux x86_64 | cpu | shipping |
| Linux x86_64 | vulkan | shipping |
| Linux x86_64 | rocm | shipping (15 GPU archs native) |
| Linux x86_64 | cuda | shipping (sm_86, sm_89) |
| macOS arm64 | metal | tarball ships, runtime path WIP |
| Windows x64 | vulkan | tarball ships, runtime path WIP |
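The table can be read as a platform-to-backend mapping; a hypothetical sketch (the `pick_backend` helper and its preference order are my assumptions, not the CLI's actual selection logic):

```shell
# Hypothetical mapping from the table above: given a platform and a
# detected GPU vendor, pick a backend. Preference order is an assumption.
pick_backend() {
  # $1 = platform, $2 = gpu vendor ("nvidia", "amd", "other", or "none")
  case "$1:$2" in
    "Linux x86_64:nvidia") echo cuda ;;     # shipping (sm_86, sm_89)
    "Linux x86_64:amd")    echo rocm ;;     # shipping (15 GPU archs)
    "Linux x86_64:other")  echo vulkan ;;   # shipping
    "Linux x86_64:none")   echo cpu ;;      # shipping
    "macOS arm64:"*)       echo metal ;;    # runtime path WIP
    "Windows x64:"*)       echo vulkan ;;   # runtime path WIP
    *)                     echo unsupported ;;
  esac
}
```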