# Jobs & events

## Background jobs

Run code on a background thread according to a schedule using `@uni.job`:
```python
@uni.job(interval="30m")
def refresh_cache():
    update_cached_data()
    log_cache_statistics()

@uni.job(daily="15:00")
def daily_backup():
    cleanup_old_backups()
    create_new_backup()
```
| Format | Units | Examples |
|---|---|---|
| `interval` | `s` (seconds), `m` (minutes), `h` (hours), `d` (days) | `"30s"`, `"2.5h"`, `"1m30s"` |
| `daily` | 24-hour notation | `"15:00"`, `"09:30"` |
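To make the interval grammar concrete, here is a small reader-side sketch that decodes such strings into seconds. This is only an illustration of the grammar in the table above, not uni's internal parser:

```python
import re

_UNIT_SECONDS = {"s": 1, "m": 60, "h": 3600, "d": 86400}

def interval_to_seconds(spec: str) -> float:
    """Decode interval strings such as "30s", "2.5h", or "1m30s"."""
    parts = re.findall(r"(\d+(?:\.\d+)?)([smhd])", spec)
    # Reject anything the number+unit pairs don't fully account for
    if not parts or "".join(num + unit for num, unit in parts) != spec:
        raise ValueError(f"invalid interval: {spec!r}")
    return sum(float(num) * _UNIT_SECONDS[unit] for num, unit in parts)
```

For example, `interval_to_seconds("1m30s")` yields `90.0` and `interval_to_seconds("2.5h")` yields `9000.0`.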
## Event listeners

UNI provides system events for lifecycle management and device interaction.

### Application lifecycle

Hook into load and unload with `@uni.on_load` and `@uni.on_unload`:
```python
@uni.on_load
def load(event: uni.LoadEvent):
    # Loading screen displays until all load events complete
    download_required_models()

@uni.on_unload
def cleanup(event: uni.UnloadEvent):
    release_resources()
```
For the nightly maintenance cycle (conversation history reset, cleanup), use `@uni.on_nightly_reset`:
```python
@uni.on_nightly_reset
def nightly_maintenance(event: uni.NightlyResetEvent):
    consolidate_logs(event.conversation_history)
    clean_temporary_data()
```
### Device events

React to devices connecting and disconnecting with `@uni.on_connect` and `@uni.on_disconnect`. Use `uni.current_device` to identify the current device (`.id` for the stable identifier, `.name` for the display name).
```python
reminders = {}

@uni.tool
def pin_reminder(text: str):
    """
    Pin a reminder on the current device.

    Args:
        text: The reminder contents
    """
    reminders[uni.current_device.id] = text
    uni.push_text_card(
        title="🔔 Reminder",
        text=text,
        id="reminder"
    )

@uni.on_connect
def handle_device_connect(event: uni.DeviceConnectEvent):
    """Restore pinned reminder when device reconnects."""
    if uni.current_device.id in reminders:
        uni.push_text_card(
            title="🔔 Reminder",
            text=reminders[uni.current_device.id],
            id="reminder"
        )

@uni.on_dismiss
def handle_dismiss(event: uni.CardDismissEvent):
    """Discard pinned reminder when dismissed."""
    if event.card_id == "reminder":
        del reminders[uni.current_device.id]
```
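`@uni.on_disconnect` is mentioned above but not demonstrated. A minimal sketch, assuming the event type is named `uni.DeviceDisconnectEvent` by analogy with `DeviceConnectEvent` (unverified), and with `log_event` as a hypothetical helper of your own:

```python
@uni.on_disconnect
def handle_device_disconnect(event: uni.DeviceDisconnectEvent):
    """Note which device dropped, using its display name."""
    log_event(f"{uni.current_device.name} disconnected")
```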
### Audio processing

Process incoming microphone audio live for use cases like speaker recognition:
```python
@uni.on_incoming_audio
def recognize_speaker(event: uni.IncomingAudioEvent):
    # event.chunk: audio samples (16-bit PCM bytes)
    # event.sample_rate: sample rate in Hz
    submit_for_analysis(event.chunk, event.sample_rate)

# Use @uni.user_prompt_modifier to inject current speaker name
```
> **Performance**
>
> These listeners should return quickly to minimize latency. Move heavy processing to a background thread.
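One common way to keep the listener fast is a producer/consumer queue: the listener only enqueues the chunk, and a worker thread does the analysis. A minimal stdlib sketch; the uni decorator appears only as a comment since it needs the uni runtime, and `analyze` is a hypothetical stand-in for real processing:

```python
import queue
import threading

# The listener only enqueues (O(1)); this worker thread does the heavy work.
audio_queue: queue.Queue = queue.Queue()
results = []

def analyze(chunk: bytes, sample_rate: int) -> int:
    # Stand-in for real analysis; just counts 16-bit PCM samples.
    return len(chunk) // 2

def worker():
    while True:
        chunk, sample_rate = audio_queue.get()
        results.append(analyze(chunk, sample_rate))
        audio_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

# Inside an @uni.on_incoming_audio listener you would only call:
#     audio_queue.put((event.chunk, event.sample_rate))
audio_queue.put((b"\x00\x01" * 480, 16000))
audio_queue.join()  # wait until the worker has drained the queue
```

Because the worker thread is a daemon, it exits with the application and needs no explicit shutdown.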
### Response events

Analyze or process LLM-generated responses:
```python
@uni.on_response_generated
def summarize_interaction(event: uni.ResponseGeneratedEvent):
    # event.trigger: "user" or "impulse"
    # event.user_prompt: original user prompt (if trigger == "user")
    # event.impulse_data: impulse data (if trigger == "impulse")
    # event.final_prompt: final processed prompt
    # event.full_response: full LLM response
    if event.trigger == "user":
        def process():
            if todos := extract_todos(event.user_prompt, event.full_response):
                save_todos(todos)
        uni.run_task(process)
```
> **Don't block**
>
> Response listeners should not block for extended durations. Use `uni.run_task()` for heavy processing.