ai design os
Natural language becomes the interface through back-and-forth between people and machines.
Users interact with AI through chat, voice, or multimodal dialogue, engaging in multi-turn conversations where context and memory matter.
The paradigm spans from simple chatbots to sophisticated assistants like GPT, Claude, or Alexa, where the boundary between a “tool” and a “partner” blurs / this is what makes it a foundational piece of many agentic experiences as well.
Conversational UIs lower the barrier to entry / people can simply ask for what they need. But designing them requires solving for ambiguity, grounding, tone, and trust.
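The multi-turn loop described above — where context and memory matter — can be sketched as a minimal chat wrapper. `generate_reply` is a hypothetical stand-in for any real model call; the point is that the full history travels with every turn:

```python
# Minimal sketch of a multi-turn conversational loop: every turn is
# appended to a running history, and the whole history is passed to
# the model on each call so context carries forward.
# `generate_reply` is a hypothetical stand-in for a real model API.

def generate_reply(history):
    # Placeholder model: reports how much context it received.
    return f"(reply informed by {len(history)} prior messages)"

class Conversation:
    def __init__(self, system_prompt):
        self.history = [{"role": "system", "content": system_prompt}]

    def send(self, user_message):
        self.history.append({"role": "user", "content": user_message})
        reply = generate_reply(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

chat = Conversation("You are a helpful assistant.")
chat.send("What's a conversational UI?")
print(chat.send("Give me an example."))  # second turn sees the first
```

The design choice worth noticing: statelessness lives in the model, so the interface layer owns memory — which is exactly where grounding and trust problems surface.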
new paradigms
conversational
Natural language as the interface. Voice-based interactions where users talk to AI systems / like voice assistants, voice search, or spoken commands in ambient interfaces.
Voice-based interactions
e.g. assistants, search, commands
agentic & assistive
AI agents that act autonomously or semi-autonomously on behalf of users. They initiate actions, make decisions, and carry out tasks with minimal input.
AI agents act on behalf of users
e.g. scheduling, booking, personal agents
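The “acts with minimal input” behavior above can be pictured as a goal decomposed into steps that run without further user turns. The planner and executor here are toy stand-ins, not a real agent framework:

```python
# Sketch of an agentic loop: the user supplies a goal once, the agent
# plans steps and executes them with no user turn in between.
# Planner and executor are toy stand-ins for illustration only.

def plan(goal):
    # Toy planner: a fixed decomposition keyed on the goal.
    return ["find options", "compare options", f"complete: {goal}"]

def execute(step):
    # Toy executor: pretend each step succeeds and report it.
    return f"done: {step}"

def run_agent(goal):
    log = []
    for step in plan(goal):        # the agent initiates each action itself
        log.append(execute(step))  # no user confirmation between steps
    return log

for entry in run_agent("book a flight"):
    print(entry)
```

Even in this toy form, the misalignment risk is visible: everything between goal and completion happens without the user in the loop.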
command-based
Command-based AI adds an intelligent, deterministic layer to familiar GUIs / clear commands (“Summarize this,” “Remove background”) executed for precision, speed, and control.
Direct actions via structured commands
e.g. productivity tools
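The command pattern above — short, named commands mapped to deterministic actions — can be sketched as a simple registry. The command names and handlers are illustrative, not from any particular product:

```python
# Sketch of a command-based AI layer: named commands map to
# deterministic handlers, giving precision and predictability on top
# of a familiar GUI. Command names and handlers are illustrative.

COMMANDS = {}

def command(name):
    """Register a handler under a command name."""
    def register(fn):
        COMMANDS[name] = fn
        return fn
    return register

@command("summarize")
def summarize(text):
    # Trivial deterministic stand-in for a model-backed summarizer.
    return text.split(".")[0] + "."

@command("uppercase")
def uppercase(text):
    return text.upper()

def run(cmd, payload):
    if cmd not in COMMANDS:
        # Explicit fallback path: the weak spot this paradigm must design for.
        raise KeyError(f"Unknown command: {cmd}")
    return COMMANDS[cmd](payload)

print(run("summarize", "First sentence. Second sentence."))  # First sentence.
```

The registry shape is what makes the layer deterministic: the same command always routes to the same handler, even if the handler itself calls a model.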
co-creation & generative
AI as a creative partner: generating, editing, and refining alongside the human. Defined by iterative prompt > output > modification loops where prompting is ongoing, not one-shot.
Iterative prompt > output > refine cycles
e.g. writing, design, media
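The prompt > output > modification loop can be sketched as an artifact that persists across turns, with each instruction refining it rather than regenerating from scratch. `apply_edit` is a trivial stand-in for a generative model:

```python
# Sketch of a co-creation loop: the draft persists across turns and
# each prompt refines it. Prompting is ongoing, not one-shot.
# `apply_edit` is a trivial stand-in for a generative model call.

def apply_edit(draft, instruction):
    # Placeholder: record each instruction as an applied revision.
    return f"{draft} [{instruction}]"

class CoCreationSession:
    def __init__(self, initial_prompt):
        self.draft = apply_edit("", initial_prompt).strip()
        self.revisions = [initial_prompt]

    def refine(self, instruction):
        # Each call edits the existing draft instead of starting over.
        self.draft = apply_edit(self.draft, instruction)
        self.revisions.append(instruction)
        return self.draft

s = CoCreationSession("draft a tagline")
s.refine("make it shorter")
print(s.refine("warmer tone"))
```

Keeping the revision history is the interesting part for designers: it is what makes authorship traceable when the boundary between human and model contributions blurs.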
ambient & contextual
Embodied, context-aware, environmental AI. Subtle, background intelligence that adapts to user behavior, mood, environment, or presence, often without direct interaction. Often manifested in robotics and AR/VR.
Context-aware, adaptive AI in the background
e.g. robotics, AR/VR, smart homes
generative ui
Generative UI spins up just-in-time, context-aware ephemeral apps / assembled from intent and data, then dissolving the moment the job is done.
Dynamic, just-in-time interfaces
e.g. adaptive dashboards, workflow UIs
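One way to picture “assembled from intent and data” is a function that emits a throwaway, declarative UI spec. The spec schema below is invented for illustration; a real system would render it and discard it when the task completes:

```python
# Sketch of generative UI: a just-in-time interface is assembled as a
# declarative spec from the user's intent plus the data at hand, used
# once, then discarded. The spec schema is invented for illustration.

def build_ui(intent, records):
    """Assemble an ephemeral UI spec from an intent and its data."""
    # Derive the interface's structure from the data itself.
    columns = sorted({key for row in records for key in row})
    return {
        "title": intent,
        "components": [
            {"type": "table", "columns": columns, "rows": records},
            {"type": "button", "label": "Done"},  # dismissal dissolves the UI
        ],
    }

data = [{"task": "review PR", "due": "Fri"},
        {"task": "ship demo", "due": "Mon"}]
ui = build_ui("Today's workload", data)
print([c["type"] for c in ui["components"]])  # ['table', 'button']
```

Because the spec is data, nothing about it needs to be designed ahead of time — which is both the appeal and the quality-control challenge of the paradigm.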
ai paradigm comparison
conversational
Examples: Chatbots, voice, turn-taking UI
User role: Ask / Explore
Input: Multi-turn input (chat or voice)
Behavior: Reactive & contextual
Autonomy: Assistive
Strengths: Guidance, access, dialogue
Risks: Hallucinations, slow UX

agentic & assistive
Examples: Copilots, planners, agents
User role: Delegate / Offload
Input: Goal input (task or intention)
Behavior: Plans, executes, adapts
Autonomy: Semi- to fully-autonomous
Strengths: Multi-step tasks, execution
Risks: Misalignment, control loss

command-based
Examples: Buttons, filters, short prompts
User role: Instruct / Execute
Input: One-shot commands / UI actions
Behavior: Deterministic + AI-enhanced
Autonomy: Reactive
Strengths: Precision, productivity
Risks: Weak fallback UX, prompt ambiguity

co-creation & generative
Examples: AI image/text/video tools
User role: Create / Remix / Iterate
Input: Prompt > edit > refine loop
Behavior: Suggests, varies, evolves
Autonomy: Assistive
Strengths: Ideation, iteration, creativity
Risks: Fatigue, authorship blur

ambient & contextual
Examples: System-level, passive nudges
User role: Sense / Nudge / Monitor
Input: Minimal or no direct input
Behavior: Background + proactive
Autonomy: High
Strengths: Frictionless UX, awareness
Risks: Privacy, visibility, control
generative ui
currently creating itself
design principles
1. put humans first
2. keep people in control
3. open the black box
4. design for the mess
5. calibrate through honesty
6. make models learnable
7. support co-creation, not dictation
8. protect what matters
9. know what not to automate
10. design for everyone, with everyone
11. embrace the variability
12. provide context, not just answers
Here for the mechanics, not the metaphor?