
From Interface to Behavior: The New UX Engineering

Thursday, April 2, 2026 · Yelpin Sergey
Originally published on Towards AI.

Agentic UX is the next step in the evolution of interfaces. Services are learning to listen to the user, understand intent, and act on their own — moving beyond familiar buttons and forms. This article explores what agentic interaction is, what skills designers now need, how to design system behavior, what mistakes to avoid, and how to integrate the AX approach into your workflow.

Traditionally, a UX designer was responsible for the visual mechanics of interaction: where to place a button, how a user fills out a form, and in what order screens appear. The main goal was to make the path clear and manageable, so the user would not get lost, feel overloaded, or be left wondering what to do next. Designers built the rhythm of the interface: what appears on screen, when, and with what emphasis. They managed attention the way a director manages lighting and movement on stage.

Today, this work does not disappear — it is supplemented by a new focus: designing the behavior of agent-based systems. Where there used to be a button, there is now dialogue. Where there were forms, there are now intentions. The user no longer looks for what to click — they express an intent, and the system responds with an action. This is how Agentic UX (AX) takes shape: an interaction model where the primary object of design is not the screen, but the behavior of the system.

1. What Is an Agent in UX

An agent is not a chatbot with prewritten answers. It is a digital performer that understands user intent, clarifies details, and acts on its own. It doesn't wait for clicks — it collaborates. In the past, the user followed a path of "select → fill out → confirm"; now the agent performs these steps autonomously, asking only for what truly matters. An agent represents a new layer of UX, one where interaction is built not through buttons but through meaning and context.

An agent can exist inside an application, as part of a website, or as a standalone service. Its defining quality is that it drives the scenario rather than waiting for the user to initiate action.

Examples of existing agentic solutions:

- Work and productivity systems:
  - Microsoft Copilot — creates documents, emails, and summaries directly from chat, leveraging Microsoft Graph context and connected services (Outlook, Excel, Teams).
  - Google Duet AI — writes emails, builds presentations, and formats reports based on textual descriptions.
  - Notion AI / Agents (3.0) — add tasks, update databases, and execute multi-step workflows while preserving contextual memory.
- E-commerce and consumer services:
  - Amazon Rufus — a search assistant that answers questions like "What's a good gift for a 5-year-old?", analyzes reviews, and builds tailored recommendations.
  - Shopify Sidekick — a merchant assistant that analyzes a store, writes product descriptions, selects relevant items, and even configures necessary plugins.
  - Instacart "Ask Instacart" — helps users find groceries and adds them to the cart based on the meaning of the request.
- Design tools:
  - Figma AI / Figma Make — turns ideas into layouts, creating interface structures directly from text descriptions.
  - Photoshop Firefly — understands commands like "remove background" or "add light" and executes them automatically.
  - Canva Magic Studio — designs visuals and copy in a unified style based on the described task.
[Image: Figma AI "First Draft"]

- Development and coding:
  - GitHub Copilot Workspace — understands code, builds plans, fixes errors, and prepares pull requests.
  - Claude (Computer Use) — an agent with "screen and cursor" capabilities that can click and type directly within the interface.
  - OpenAI Operator — performs actions on web pages such as scrolling, filling out forms, and completing purchases, essentially "working on behalf of the user."
  - Netlify + ChatGPT — a "prompt-to-action" example: the agent receives a text description of a website and deploys a project on Netlify.

2. Agentic UX as the New Interaction Engineering

In agentic UX, designers are no longer creating interfaces in the traditional sense — they are constructing system behavior: how the agent understands a task, clarifies details, and responds with actions. Agentic UX is not about visual composition but about composing meanings and reactions. Where the "user journey" was once a path across screens, it is now a scenario of mutual understanding between human and system.

2.1 A New Object of Design — The Meaning Loop

UX transforms into a behavioral loop: intent → interpretation → action → feedback → new intent. Each turn of this loop can be designed, just as animation or interface logic were designed before. The designer's challenge is to preserve the natural flow so the agent doesn't seem "alien" or "smarter than necessary." (A code sketch of one turn of this loop appears after the quick guide below.)

For example: when a user says, "Book a table for tomorrow," the agent may clarify details like time, location, and preferences. But the designer decides where to stop clarifying — to keep the conversation natural and prevent it from becoming an interrogation.

→ In the end, the designer controls not the screens, but the level of initiative the system demonstrates.

2.2 Behavioral Directing

An agent should behave as if it is part of the user's context, not just a static interface. Agents now have tone, pauses, hesitation, and empathy — all of which become new tools for UX design. The UX designer is now a director of reactions: how the agent responds to an error, how it expresses uncertainty, how it shifts initiative back and forth.

In the past, an interface might display "404 error." Now, the agent says, "It seems that event doesn't exist. Would you like to create a new one?" This is no longer just text — it is an act of interaction, carefully planned in tone and delivery (sketched in code below).

3. Quick Guide: How to Design Agent Behavior

1. Define the point of intent (what the user wants).
2. Script the agent's reaction (what it does, what it clarifies).
3. Adjust initiative (when the agent takes over, when it returns control to the user). Each of these steps is sketched in code below.

[…]
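To make the meaning loop from Section 2.1 concrete, here is a minimal TypeScript sketch of a single turn: the agent interprets an intent, then either asks a clarifying question or acts. Everything in it (the type names, the booking slots, the maxQuestions budget) is an illustrative assumption, not a reference to any real product API.

```typescript
// One turn of the meaning loop: intent → interpretation → action.
// The user's answer then feeds back in as a new intent.

type Intent = { utterance: string; slots: Record<string, string> };
type Interpretation = { goal: string; missing: string[] };
type AgentAction = { kind: "ask" | "execute"; payload: string };

// A real system would use an NLU model; a stub keeps the sketch runnable.
function interpret(intent: Intent): Interpretation {
  const required = ["date", "time", "partySize"]; // assumed slots for a booking
  const missing = required.filter((slot) => !(slot in intent.slots));
  return { goal: "book_table", missing };
}

// The designer's lever: cap clarifying questions so the dialogue
// stays natural and never turns into an interrogation.
function decide(
  interp: Interpretation,
  questionsAsked: number,
  maxQuestions = 2
): AgentAction {
  if (interp.missing.length > 0 && questionsAsked < maxQuestions) {
    return { kind: "ask", payload: `What ${interp.missing[0]} works for you?` };
  }
  // Budget exhausted or nothing missing: act with sensible defaults.
  return { kind: "execute", payload: `Executing: ${interp.goal}` };
}

const turn1 = decide(
  interpret({ utterance: "Book a table for tomorrow", slots: { date: "tomorrow" } }),
  0
);
console.log(turn1); // { kind: "ask", payload: "What time works for you?" }
```

Note how the question budget encodes the designer's decision about "where to stop clarifying" directly in behavior rather than in screen copy.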
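The 404 example from Section 2.2 can likewise be expressed as a small reaction map: instead of surfacing a raw status code, the agent acknowledges the failure in a designed tone and offers a next step. The status codes below are standard HTTP, but the Reaction shape and all of the copy are assumptions for illustration.

```typescript
// Directing the agent's reaction to a system error: tone + message + offer.

type Reaction = {
  tone: "neutral" | "apologetic";
  message: string;
  offer?: string; // the agent hands initiative back with a suggestion
};

function reactToError(status: number, resource: string): Reaction {
  switch (status) {
    case 404:
      return {
        tone: "neutral",
        message: `It seems ${resource} doesn't exist.`,
        offer: "Would you like to create a new one?",
      };
    case 403:
      return {
        tone: "apologetic",
        message: `I don't have permission to open ${resource}.`,
        offer: "Should I request access for you?",
      };
    default:
      // Unknown failure: express uncertainty honestly instead of a stack trace.
      return {
        tone: "apologetic",
        message: "Something went wrong on my side.",
        offer: "Want me to try again?",
      };
  }
}

console.log(reactToError(404, "that event"));
// → { tone: "neutral", message: "It seems that event doesn't exist.", ... }
```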
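Finally, the three steps of the quick guide can be captured as a behavior spec that the team reviews before any screens exist. The AgentBehaviorSpec interface, its field names, and the initiative levels are hypothetical: one possible way to turn agent behavior into a concrete design artifact.

```typescript
// The three design steps captured as reviewable data, not screens.

type InitiativeLevel =
  | "suggest" // agent proposes, user decides
  | "ask-then-act" // agent clarifies, then acts
  | "act-then-report"; // agent acts autonomously, then reports back

interface AgentBehaviorSpec {
  intent: string; // step 1: the point of intent
  clarifies: string[]; // step 2: what the agent asks before acting
  actions: string[]; // step 2: what it does once details are known
  initiative: InitiativeLevel; // step 3: when it takes over vs. returns control
}

const bookTable: AgentBehaviorSpec = {
  intent: "Book a table for tomorrow",
  clarifies: ["time", "party size"],
  actions: ["search venues", "place reservation", "confirm with the user"],
  initiative: "ask-then-act",
};

console.log(bookTable.initiative); // "ask-then-act"
```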