Open source
AI companion

Talk naturally. Customize everything. Runs at home.

Talk naturally

Speak, pause to think, trail off, interrupt. Or type when you'd rather not.

Customize everything

Pick your LLM, voice, personality, and avatar. Add new skills with plugins.

Runs at home

Runs on your hardware by default. Data encrypted at rest, cloud optional.

UNI showing a weather forecast and timer on screen

Skills

Search the web, set timers, check the weather. Say "tell me the weather every morning at 7" and UNI handles the rest.

What's included

Character

Avatar, voice, and personality. Choose from presets or build your own character from scratch.

Local LLMs

llama.cpp built in. Run language models on your own hardware by default. No cloud required.

Wake words

Say your trigger word and go hands-free. UNI listens for it in the background, then captures your request.

Memory

Remembers you across sessions: preferences, names, past topics. Stored locally and encrypted.

Tool calling

UNI can act on your behalf: searching the web, setting timers, checking the weather. Sensitive actions require your approval.

UI cards

Plugins push interactive content to your screen: weather forecasts, countdowns, or custom widgets.

Camera

Share your camera and UNI can see what you see. Ask about what's on screen or get visual assistance.

Impulses

Plugins can add proactivity to UNI: reminders, notifications, or even spontaneous conversation.

Networking

Pair your phone, tablet, or laptop and talk to UNI from anywhere in your home.

Stays with you

UNI can reach out to you through messaging apps when you're away from home.

Extend with plugins

Voice engines

Voice cloning, natural expression, and audio effects.

Wake words

Go hands-free with custom trigger phrases.

Avatars

2D and 3D characters with reactive expressions.

Skills

Search the web, play music, set timers, and more.

Context

Background knowledge fed into every conversation.

Build your own

Written in Python. Start with the SDK.
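The SDK's actual classes and method names aren't shown on this page, so the following is a purely hypothetical sketch of what a Python skill plugin could look like; every name here (`TimerSkill`, `handle`) is illustrative, not the real API.

```python
# Hypothetical sketch only: the real UNI SDK's plugin interface is not
# documented here, so this class and its method names are illustrative.

class TimerSkill:
    """A toy skill: handle "set a timer for N minutes" style requests."""

    name = "timer"

    def handle(self, request: str) -> str:
        words = request.lower().split()
        if "timer" not in words:
            return "I can only set timers."
        # Treat the first number in the request as a duration in minutes.
        for word in words:
            if word.isdigit():
                return f"Timer set for {word} minutes."
        return "How long should the timer be?"

skill = TimerSkill()
print(skill.handle("Set a timer for 5 minutes"))
# prints "Timer set for 5 minutes."
```

A real plugin would register itself with the SDK rather than being called directly, but the shape (a named skill that maps a request to a response) is the general idea.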

See all plugins →

Common questions

What hardware do I need?

For fully local AI, a GPU with 24 GB+ of VRAM is recommended; the more VRAM you have, the larger the models you can run. You can also use cloud APIs if your hardware is limited. Full requirements →

Can I use cloud AI services?

Yes. UNI can connect to any OpenAI-compatible API, including hosted services. Privacy is the default, but you control what goes where.

Which LLM backends work?

llama.cpp works out of the box. You can also use vLLM, LiteLLM, or any service with an OpenAI-compatible completions endpoint.
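Any OpenAI-compatible backend accepts the same request shape at `POST {base_url}/v1/chat/completions`. As a rough sketch, this is what such a request body looks like; the base URL is an assumption (llama.cpp's server defaults to port 8080, but yours may differ):

```python
# Sketch of an OpenAI-compatible chat completion request body.
# The base URL is an assumption: point it at your llama.cpp, vLLM,
# LiteLLM, or hosted endpoint.
BASE_URL = "http://localhost:8080/v1"  # hypothetical local server

def chat_completion_request(model: str, user_message: str) -> dict:
    """Build the JSON body for POST {BASE_URL}/chat/completions."""
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": user_message},
        ],
    }

payload = chat_completion_request("local-model", "What's the weather like?")
```

Because every listed backend speaks this same schema, swapping backends is just a matter of changing the base URL and model name.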

How do I access UNI from other devices?

UNI serves a web interface over HTTPS on your local network. Open the URL in any browser, pair your device, and you're connected.

Free. Open source. Forever.

Licensed under AGPLv3.