UNI is still under development — please check back at the end of March

Open source
AI companion

Talk naturally. Customize everything. Runs at home.

Talk naturally

Low-latency voice interaction. Speak or type. Conversation flows in real time.

Customize everything

Pick your LLM, voice, personality, and avatar. Add new skills with plugins.

Runs at home

Runs on your hardware by default. Data encrypted at rest, cloud optional.

What's included

Character

Avatar, voice, and personality. Choose from presets or build your own character from scratch.

Local LLMs

llama.cpp built in. Run language models on your own hardware by default — no cloud required.

Wake words

Say your trigger word and go hands-free. UNI listens until it hears you, then captures your request.

Memory

Remembers you across sessions — preferences, names, past topics. Stored locally and encrypted.

Tool calling

Search the web, set timers, check the weather. Sensitive actions require your approval.

UI cards

Plugins push interactive content to your screen — weather forecasts, countdowns, or custom widgets.

Camera

Share your camera and UNI can see what you see. Ask about what's on screen or get visual assistance.

Impulses

UNI can reach out to you — remind you of appointments, alert you to events, or check in on a schedule.

Networking

Pair your phone, tablet, or laptop and talk to UNI from anywhere in your home.

Extend with plugins

Voice engines

Voice cloning, natural expression, and audio effects.

Wake words

Go hands-free with custom trigger phrases.

Avatars

2D and 3D characters with reactive expressions.

Skills

Search the web, play music, set timers, and more.

Context

Background knowledge fed into every conversation.

Build your own

Written in Python. Start with the SDK.

See all plugins →
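In spirit, a skill plugin is a small Python class that exposes a named, callable tool. The shape below is an illustrative assumption — the real SDK's class and method names are not shown on this page:

```python
# Illustrative sketch only: the UNI SDK's actual base classes and hooks
# are not documented here, so this class shape is an assumed example.
class TimerSkill:
    """A minimal 'skill' plugin exposing one callable tool."""

    name = "timer"
    description = "Set a countdown timer."

    def run(self, seconds: int) -> str:
        # A real plugin would schedule the timer and push a UI card;
        # here we just return the confirmation text to be spoken.
        return f"Timer set for {seconds} seconds."


skill = TimerSkill()
print(skill.run(300))  # → Timer set for 300 seconds.
```

Consult the SDK documentation for the real registration and lifecycle API.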

Stays with you

UNI can message you through Discord, Matrix, or other channels when you're away from home.

Common questions

What hardware do I need?

For fully local AI, a GPU with 24 GB+ of VRAM is recommended; the more VRAM you have, the larger the models you can run. You can also use cloud APIs if your hardware is limited. Full requirements →

Can I use cloud AI services?

Yes. UNI can connect to any OpenAI-compatible API, including hosted services. Privacy is the default — the important part is that you control what goes where.

Which LLM backends work?

llama.cpp works out of the box. You can also use vLLM, LiteLLM, or any service with an OpenAI-compatible completions endpoint.
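All of these backends speak the same wire format, so swapping one for another is a matter of changing the base URL. A minimal sketch of the request body such an endpoint accepts — the URL and model name are placeholders, not UNI defaults:

```python
import json

# Placeholder address — point this at your llama.cpp server, vLLM,
# LiteLLM, or any other OpenAI-compatible service.
BASE_URL = "http://localhost:8080/v1"


def build_chat_request(prompt: str, model: str = "local-model") -> dict:
    """Build the JSON body for a POST to {BASE_URL}/chat/completions."""
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": prompt},
        ],
        "stream": True,  # stream tokens as they arrive, for low latency
    }


payload = json.dumps(build_chat_request("Hello"))
```

Because the format is shared, the same payload works against a hosted cloud API or a model running on your own GPU.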

How do I access UNI from other devices?

UNI serves a web interface over HTTPS on your local network. Open the URL in any browser, pair your device, and you're connected.

Free. Open source. Forever.

Licensed under AGPLv3.