Cognition UI
An adaptive interface for large language models designed to humanize synthetic intelligence through fluid typography and streaming latency reduction.
Roles
- Product Design
- Prompt Engineering
- Frontend Architecture
Tech Stack
- React Server Components
- Tailwind CSS
- OpenAI API (Stream)
- Radix UI
The Friction
Interacting with large language models often feels robotic. The standard loading spinner creates anticipation anxiety, and raw text dumps overwhelm the user. The goal was to simulate a "thought process": a UI that feels like it is breathing rather than just processing.
Fluid Typography & Streaming
Token-by-Token Rendering
Instead of buffering chunks, we built a custom readable stream reader that paints characters the moment they arrive. This creates a hypnotic cadence that keeps the user engaged during generation.
Markdown Stability
Streaming Markdown often causes layout shifts as bold tags or code blocks close. We built a parser that predicts block closures and reserves vertical space, eliminating layout jank during generation.
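The closure-prediction idea can be sketched as a small string-level pass (a simplification of our parser, which reserves vertical space in the layout rather than mutating text): before each re-render, any block that is still open mid-stream gets a provisional closer, so the Markdown renderer always receives a balanced document. The function name `stabilize` and the fence/bold heuristics here are illustrative, not the production implementation.

```javascript
// Appends provisional closing tokens for blocks that are still open
// mid-stream, so partial Markdown renders without layout shifts.
function stabilize(partialMarkdown) {
  let out = partialMarkdown;
  // An odd number of ``` fences means a code block is still open.
  const fenceCount = (out.match(/```/g) || []).length;
  if (fenceCount % 2 === 1) out += "\n```";
  // An odd number of ** markers means a bold span is still open.
  const boldCount = (out.match(/\*\*/g) || []).length;
  if (boldCount % 2 === 1) out += "**";
  return out;
}
```

Because the provisional closers are recomputed on every chunk, they disappear naturally once the model emits the real closing tokens.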
```javascript
// Helpers: a promise-based sleep and a random delay in [min, max) ms.
const wait = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
const getRandomDelay = (min, max) => min + Math.random() * (max - min);

// Async generator that yields decoded text chunks as they stream in.
async function* streamResponse(reader) {
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // Yield decoded chunks for optimistic UI
    yield decoder.decode(value, { stream: true });
    // Artificial aesthetic delay for readability
    await wait(getRandomDelay(10, 30));
  }
}
```
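On the consuming side, the idea is to paint each yielded chunk immediately rather than batching. A minimal sketch, assuming any async iterable of text chunks and a target with a `textContent` property (the name `paintChunks` is hypothetical):

```javascript
// Pipes an async iterable of text chunks into a render target,
// appending each chunk as soon as it arrives (no buffering).
async function paintChunks(chunks, target) {
  for await (const chunk of chunks) {
    target.textContent += chunk;
  }
}
```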
"The interface should be invisible. The user isn't talking to a computer; they are exploring a thought space. The latency is the texture of that exploration."
Visual Modules
Semantic Search
Vector embedding visualization
Code Highlighting
Custom Prism.js implementation
```javascript
// AI logic handling
const init = () => {
  return Spark.connect();
};
```