## Setup
### Windows

- Install Docker Desktop
- Open the Start menu
- Type `cmd` and press Enter to open a terminal
- Paste the following into the terminal and press Enter:

```
powershell -ep Bypass -c "irm https://get.uni-ai.dev/ps | iex"
```
### Linux

- Install Docker Desktop, Python, Git
- Open a terminal
- Paste the following and press Enter:

```
curl -sSL get.uni-ai.dev/sh | sh
```
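After the script finishes, you can sanity-check that the prerequisites are reachable on your PATH. A minimal sketch (`python3` is assumed as the Python executable name on your system):

```shell
# Report whether a tool is installed and reachable on PATH.
check_tool() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1: found"
    else
        echo "$1: missing"
    fi
}

# Check the three prerequisites from the steps above.
for tool in docker python3 git; do
    check_tool "$tool"
done
```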
### macOS

**Warning:** UNI has not yet been tested on macOS. Some features may not work on ARM.

- Install Docker Desktop, Python, Git
- Open a terminal
- Paste the following and press Enter:

```
curl -sSL get.uni-ai.dev/sh | sh
```
**About Docker Desktop**

Docker is an app that creates isolated environments called containers. UNI lives inside one of these containers, a private box with its own files and settings, so it runs the same on any computer and won't interfere with the rest of your system.
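To see that isolation in action, you can run a throwaway container yourself (assumes Docker is installed and running; `alpine` is just a small example image):

```shell
# Start a disposable Alpine container, create a file inside it, and exit.
# --rm removes the container afterwards; the file never touches the host.
docker run --rm alpine sh -c 'echo hello > /tmp/demo && cat /tmp/demo'
```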
## Launching
To start UNI, open a terminal, type `uni`, and press Enter.
If UNI started successfully, the terminal will display two URLs:
| URL | Description |
|---|---|
| Local | URL for opening UNI in your browser. This is usually https://localhost:6001 |
| Network | URL for opening UNI on your other devices (phone, tablet). Needs firewall setup. |
Open the Local URL, click past the security warning, and follow the setup instructions.
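If the page doesn't load, you can check from the terminal whether the server is answering. This assumes the default `https://localhost:6001` address from the table above; `-k` tells curl to accept the same certificate that triggers the browser's security warning:

```shell
# Print just the HTTP status code from the local UNI server.
# 200 (or a redirect code such as 302) means the server is up.
curl -sk -o /dev/null -w '%{http_code}\n' https://localhost:6001
```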
**Connecting from another device**
To pair one of your other devices (phone, tablet, laptop) on the same network: open the UNI settings interface on your computer, go to Devices → Pair new device, and follow the instructions.
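The Network URL is only reachable from other devices once your firewall allows inbound connections on UNI's port. A sketch for a Linux host using `ufw`; the port `6001` is an assumption based on the default Local URL, and Windows and macOS have their own firewall dialogs:

```shell
# Allow incoming TCP connections on the assumed UNI port (requires root).
sudo ufw allow 6001/tcp
```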
## Updating
To update UNI to the latest version:
- Stop the server by running the `uni server down` command
- Run the `uni --pull` command
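The two steps can be combined into one line; `&&` ensures the pull only runs if the server stopped cleanly:

```shell
# Stop the UNI server, then fetch the latest version.
uni server down && uni --pull
```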
## Hardware requirements
For real-time conversation using a local LLM and lightweight TTS, you'll need an Nvidia or AMD GPU with at least 16 GB VRAM (24+ GB recommended for larger models). The more VRAM you have, the better the models you can run. Multi-GPU systems are supported.
**Compatibility**

Some plugins require an Nvidia GPU with CUDA support.
If you don't have a dedicated GPU, you can connect UNI to a cloud LLM instead. Any OpenAI-compatible API works (e.g., openrouter.ai, together.ai). You still control the client, avoid vendor lock-in, and your conversations usually aren't used for training — though you should review your provider's privacy policy. TTS can still run locally on CPU.
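As an illustration of what "OpenAI-compatible" means, this is the request shape such providers accept. openrouter.ai is used as the example endpoint here; the API key variable and the model name are placeholders, not something UNI itself requires:

```shell
# Send one chat message to an OpenAI-compatible endpoint.
# OPENROUTER_API_KEY must hold your provider API key.
curl -s https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta-llama/llama-3.1-8b-instruct",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```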