Approach

Less magic. More consent.

The most common failure mode in AI products is confident invisibility — the system does things on your behalf without telling you first, and often without telling you after. For an assistive product that sits between a person and their computer, that failure mode is unacceptable. Our approach centres on three choices we have made carefully, and one we have deferred.

Narration before action.

The agent speaks its intent before it acts. If you ask it to book a meeting, it says which calendar, which attendees, which time — and waits. Only then does it act. This is slower than an invisible assistant. Deliberately. For assistive use, the visible pause is the product.

Local language, not translated English.

French Canadian deployments get a French-language agent — not an English agent wearing a French wrapper. Voice, prompts, narration, error text, all native. We add a language by deploying with a pilot partner in that language, not by running a phrase file through a translation API.

Readable logs, always.

After every session, the agent generates a plain-language log of what it did, readable on the device and exportable. The log is the product's accountability surface: for users, for their support networks, and for institutions that need a record.

Deferred

What we have not decided.

We have not committed to a head-tracking or gaze-based input layer. We believe voice will carry the first generation of pilots; eye and head input can follow if real users ask for them. We would rather ship one input modality that works well than two that work adequately.

We have not priced the product. Pilots in 2026 are quoted individually based on scope and scale. We will publish standard pricing after the first two pilot deployments close.