From Chat to Chaos: How Linear Interfaces Are Failing Power Users

Escaping the Conversational Chaos of Modern AI

As a designer with over a decade in the trenches, I’ve seen countless technologies promise to change the world. Few have arrived with the force of Large Language Models (LLMs). They are, without a doubt, a monumental leap forward. Yet, as I spend my days using these tools, I’m struck by a strange paradox: while the underlying models are evolving at a breathtaking pace, the user interfaces we use to interact with them are stuck in a frustrating state of sameness.

We’ve been given rocket engines, but we’re all still sitting in the same, slightly uncomfortable, single-seat cockpit.


Almost every major LLM, from ChatGPT to Claude, defaults to the same core design pattern: the linear, chronological chat thread. It’s familiar, it’s simple, but for any task more complex than asking for a recipe, it begins to break down. This isn't a niche complaint; it's a fundamental usability flaw that's holding back the true potential of this technology for professionals who rely on it for deep, iterative work. After discussing this with dozens of colleagues in design, engineering, and product management, the consensus is clear: getting lost in a sea of answers is a universal pain point for anyone using LLMs for serious, multi-step research.

We are all drowning in our own curiosity, and the tools we use offer no life raft.


Defining the Problem: Conversational Chaos

My go-to tool for research is Perplexity. Its ability to synthesize information and provide cited sources is invaluable. However, my workflow inevitably devolves into what I call "conversational chaos." I’ll start with a broad query, which sparks a follow-up question, then another, and another. Soon, I’m twenty questions deep, and my thread has become a tangled, unnavigable mess. If I want to refer back to the context of question three to inform question twenty-one, I’m forced to embark on a frustrating journey of scrolling and scanning, trying to mentally piece together a conversation that has lost all structure.

Conversational chaos is the cognitive friction that arises when non-linear research is forced into a linear chat interface. The simple, chronological feed borrowed from messaging apps fails when deep research requires discovery, tangents, backtracking, and synthesis. LLMs are trained to retain context, but the UI doesn't help users keep track of their own mental map. As sessions grow, context gets lost, forcing the user to re-prompt and re-navigate, draining time and trust.

For tools like Perplexity, this is critical. Users on Reddit consistently report that the tool "can't handle long conversations," starts to "lose context after 2-3 conversations," and that long threads become "cumbersome and practically unmanageable." When a tool's core function is providing accurate, context-aware information, its inability to remember context undermines its entire value proposition.

Despite innovation, most tools still default to chronological chat. ChatGPT, Gemini, Perplexity, and Claude offer sidebars to switch between conversations, not within them. Workspaces like ChatGPT's Canvas and Claude's Artifacts are powerful, but too heavy for quick research flow. What's missing is a lightweight, intuitive way to navigate within a single session. The accessibility of chat made LLMs popular. But now, it's also their greatest design limitation.

Letting the Data Guide the Way

Before committing to a design, I dug into Perplexity's usage data. What I found was fascinating and directly informed the solution:

  • Referral Traffic: 3.4% mobile, 96.5% desktop

  • Platform Sessions: 63.5% mobile, 36.5% desktop

  • User Base: 22 million monthly active users

(Source: www.searchenginejournal.com)

The metrics are similar for other LLMs such as Claude, Gemini, and ChatGPT. This paradox reveals the modern researcher's workflow: research starts on desktop during focused work sessions, then continues on mobile for quick check-ins, follow-ups, and references throughout the day. Users aren't starting deep research on phones; they're maintaining continuity with threads initiated elsewhere. This cross-device reality makes conversational chaos exponentially worse.

Navigating a long thread is frustrating on a 27-inch monitor; on a 6-inch mobile screen, it becomes nearly impossible.

The solution must be continuity-first, providing seamless, state-synced experiences across devices. This isn't just a UX improvement; it's a strategic imperative for the platform's most valuable users.
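As a rough illustration of what "continuity-first" could mean in practice, here is a minimal TypeScript sketch of the session state a client might sync across devices. The shape, field names, and last-writer-wins merge are my own assumptions for the sake of the example, not any platform's actual API.

```typescript
// Hypothetical session state a continuity-first client could sync across
// devices: enough to restore both the thread and the user's place in it,
// without shipping the full message history on every sync.
interface SessionState {
  threadId: string;        // which conversation
  lastQueryIndex: number;  // which question the user last viewed
  scrollAnchorId: string;  // message id used to restore scroll position
  updatedAt: number;       // timestamp for conflict resolution
}

// Merge states from two devices: keep the most recently updated one
// (simple last-writer-wins, adequate for a single user's own session).
function mergeStates(a: SessionState, b: SessionState): SessionState {
  return a.updatedAt >= b.updatedAt ? a : b;
}
```

The point is not the specific fields but the principle: picking up a thread on a phone should restore not just the messages, but the user's position within them.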

The Design Journey: From Nested Nightmares to Modal Maps

My first instinct, as a designer, was to sketch out a solution. The most obvious idea was a nested chat history, something akin to Reddit threads or nested comments. You could branch off from any response, creating a tree-like structure.

But as I prototyped this, I quickly realized it was a usability trap. The solution was more complex than the problem it was trying to solve, and it failed for three critical reasons:

  • Cognitive Overload: Deeply nested structures create walls of overwhelming text that violate the principle of minimizing cognitive load.

  • Navigational Complexity: Permanent nested structures clutter interfaces with expand/collapse controls, especially problematic on mobile.

  • Accessibility Nightmare: Properly implementing accessible nested structures is incredibly difficult; non-semantic HTML makes them unusable for screen readers.

I needed something simpler. Something that didn't fundamentally alter the chat interface but provided an elegant layer of control on top of it. This led me to a different pattern: a Context Modal Window. A non-intrusive overlay that would provide a high-level map of the conversation, allowing users to quickly jump between key questions without losing their current place.

The concept is straightforward: a small, clickable icon next to the chat input field that launches a modal window. This window displays a clean, numbered list of every question you've asked in the current thread. Tapping on any question instantly scrolls the main chat window to that exact point in the conversation.
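To make the mechanics concrete, here is a minimal TypeScript sketch of the modal's core logic: extracting a numbered list of user prompts from the thread. The message shape, function names, and the scroll call are illustrative assumptions, not any product's real API.

```typescript
interface Message {
  id: string;                   // anchor id of the message in the page
  role: "user" | "assistant";
  text: string;
}

interface QueryEntry {
  n: number;      // 1-based position shown in the modal's numbered list
  label: string;  // truncated prompt text, kept short for scannability
  id: string;     // message id to scroll to when the entry is tapped
}

// Build the modal's numbered list from user prompts only
// (assistant responses are omitted for clarity).
function buildQueryIndex(messages: Message[], maxLabel = 80): QueryEntry[] {
  return messages
    .filter((m) => m.role === "user")
    .map((m, i) => ({
      n: i + 1,
      label:
        m.text.length > maxLabel ? m.text.slice(0, maxLabel - 1) + "…" : m.text,
      id: m.id,
    }));
}

// On tap, the browser does the rest:
//   document.getElementById(entry.id)?.scrollIntoView({ behavior: "smooth" });
```

Because the index is derived from the thread rather than stored alongside it, the modal stays correct no matter how long the conversation grows.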

This approach follows established modal design best practices:

  • User-Initiated: Only triggered by explicit user action (clicking "Asked Queries").

  • Single, Focused Purpose: Provides a scannable map of user queries for quick navigation.

  • Preserves Context: Overlay maintains visual connection to main chat.

  • Clear Escape Hatch: Multiple ways to dismiss (X button, ESC key, click outside).

The elegance lies in its ephemerality. Unlike nested lists that permanently complicate interfaces, the modal provides powerful functionality on-demand, then disappears completely. This embodies "progressive disclosure," keeping complexity hidden until explicitly requested.

The Proof of Concept

I created a proof-of-concept to demonstrate how this would feel in practice on both desktop and mobile.

Desktop Experience

  • Trigger: Subtle "Asked Queries" icon near conversation title.

  • Modal UI: Clean, chronological list of only user prompts (AI responses omitted for clarity).

  • Key Features: Clickable prompts instantly scroll to conversation points.

Mobile Experience

  • Trigger: Thumb-friendly placement in action bar.

  • Modal UI: Larger screen portion while maintaining visual connection to chat.

  • Key Features: Large, tappable targets optimized for vertical scrolling.

This lightweight solution acts as a "cognitive scaffold," offloading the mental burden from the user's brain onto the interface. It frees cognitive resources for higher-level tasks like synthesizing information and formulating critical questions. It's a minimal, data-informed enhancement that dramatically improves the user experience for a core professional workflow.

This isn't about reinventing the wheel. It's about adding power steering.

Beyond the Chat Box: The Rise of Node-Based Interfaces

While I believe the modal is the right-fit solution for existing chat interfaces, it's also worth noting that the industry is slowly exploring more radical departures from the linear thread. A fascinating and powerful alternative is the node-based interface.

Google's experimental platform, Opal, is a prime example. It allows users to build "mini-apps" using natural language, which it then translates into a visual workflow of connected nodes. Users can edit the logic either by continuing the conversation or by directly manipulating the nodes on the canvas. This hybrid approach gives users a visual map of the AI's "thinking" process, offering both high-level simplicity and granular control.

Similarly, open-source tools like ComfyUI, primarily used for image generation, have demonstrated the power of node-based workflows. They allow users to connect different models, prompts, and post-processing steps in a visual web, enabling a level of complexity and reusability that is simply impossible in a linear chat. These interfaces treat AI interaction not as a conversation, but as a system to be built and fine-tuned.
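The essential idea behind these tools is that a workflow is a graph, not a transcript. As a hypothetical sketch (not ComfyUI's or Opal's actual data model), a node-based workflow can be represented as typed nodes plus edges, with execution order resolved by a topological sort:

```typescript
// A minimal, hypothetical node graph: each node is a step (a prompt, a
// model call, or post-processing) and each edge wires one node's output
// into another node's input.
interface WorkflowNode {
  id: string;
  kind: "prompt" | "model" | "postprocess";
}

type Edge = [from: string, to: string];

// Resolve an execution order so every node runs after its inputs
// (a simple topological sort via Kahn's algorithm).
function executionOrder(nodes: WorkflowNode[], edges: Edge[]): string[] {
  const indegree = new Map(nodes.map((n) => [n.id, 0]));
  for (const [, to] of edges) indegree.set(to, (indegree.get(to) ?? 0) + 1);
  const queue = nodes.filter((n) => indegree.get(n.id) === 0).map((n) => n.id);
  const order: string[] = [];
  while (queue.length) {
    const id = queue.shift()!;
    order.push(id);
    for (const [from, to] of edges) {
      if (from === id) {
        indegree.set(to, indegree.get(to)! - 1);
        if (indegree.get(to) === 0) queue.push(to);
      }
    }
  }
  return order;
}
```

A linear chat is just the degenerate case of this graph: a single chain. The moment a user branches, backtracks, or reuses a step, the graph representation keeps working while the transcript falls apart.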

The Bigger Picture: Designing Human-AI Collaboration

Solving conversational chaos represents more than fixing a UX problem; it's a building block for next-generation intelligent systems. The future of AI interfaces isn't more sophisticated chatbots, but adaptive, context-aware systems that understand not just what we say, but what we're trying to achieve.

Current LUIs (Language User Interfaces) often function as "black boxes" where users provide input and receive output with little control over the process. This lack of control leads to frustration and a breakdown of trust when the AI fails. Features like navigation modals, along with source citations and explainability, grant users agency and foster true partnership rather than transactional interaction. The rise of node-based interfaces signals an even more profound shift. Both patterns point to the same conclusion: the future of AI interaction is not just about building smarter models; it's about designing more intelligent interfaces.

We are moving from a paradigm of "prompt and response" to one of "co-creation and collaboration." Our tools need to evolve to support this shift. They need to become our partners in exploration, not just our servants in answering. They need to provide us with maps to our own thoughts, allow for non-linear exploration, and give us the controls to steer with precision.

The Evolving Role of Product Designers: A Call to Action

As generative AI handles rote production tasks, designers' value shifts toward strategy, curation, and ethical stewardship. The real work becomes:

  • Identifying critical, unmet user needs through deep workflow analysis.

  • Synthesizing quantitative data to understand complex user behaviours.

  • Applying fundamental UX principles to evaluate interaction patterns.

  • Articulating visions for features that improve human-AI partnerships.

  • Prioritizing accessibility and inclusivity.

  • Translating UX friction into product strategy.

This exemplifies strategic product design: understanding business, market, and users to "build the right things at the right time."

In the rush to ship "AI features," many companies release technologically impressive but fundamentally unusable products, forgetting decades of human-computer interaction research. The biggest innovation opportunities aren't always in training new models; they're in the friction points of current user experiences.

The goal was never to create the most powerful AI; it's to create the most useful and usable one. As designers, developers, and product leaders, we must champion the human in the human-AI equation. The most valuable designers in the AI era will apply foundational principles to the novel challenges this technology presents.


  • What's Your Experience? How has the linear chat interface helped or hindered your own work with LLMs? Have you developed any personal workarounds for "conversational chaos"?

  • Evaluating the Solution: What are your thoughts on the proposed "Context Modal"? Can you see it fitting into your workflow, and are there any potential downsides or improvements you can envision?

  • The Future of Interfaces: Beyond chat and nodes, what other design patterns or metaphors do you think could shape the future of human-AI interaction?

  • The Designer's Role: How do you see the role of product and UX designers evolving in an AI-first world? What skills will be most critical in the next five years?