UX Designer (embedded within the client’s in-house product team via agency collaboration)
Collaborated with product managers, AI engineers, UX researchers, UI designers, and agency stakeholders
I worked as a UX Designer embedded inside the product team of a leading productivity platform through an agency partnership. Our goal was to integrate AI-powered writing and editing features directly into the existing platform without disrupting users’ established workflows. The challenge was balancing innovation with usability: introducing AI without overwhelming users unfamiliar with AI tools or adding cognitive load to core tasks.

Early research with users, including students, entrepreneurs, and content creators, uncovered key pain points around trust, discoverability, and uncertainty about what AI could do. Our approach focused on designing intuitive, contextual entry points for AI that felt natural within users’ existing writing flows. We followed an iterative design process, continuously validating concepts with users while navigating constraints such as launch timelines, AI model limitations, and data privacy requirements.

Success was defined as high discoverability and adoption of AI features without increased user confusion or support burden. Metrics included AI feature adoption, task success rates, and qualitative satisfaction from usability testing.
We conducted interviews and surveys with over 400 users to understand mental models and expectations for AI-assisted writing. A key insight was that users wanted clear guidance on what AI could do rather than an open-ended prompt.

Based on these insights, we explored different ways to invoke AI, initially testing a persistent AI sidebar, which users found distracting. Through iterative usability testing, we arrived at a lightweight inline trigger and a global AI command palette, letting users summon AI help contextually without clutter. Across three rounds of testing, we refined the designs by adding suggested prompts, clearer undo options, and confidence indicators for AI output.

Each iteration improved user outcomes: discoverability of the AI trigger increased from 60 percent to 100 percent, and task success rates rose from 70 percent to 92 percent. Close collaboration with AI engineers ensured technical feasibility while preserving user experience priorities.
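To make the interaction model concrete, here is a minimal sketch of how the inline trigger and the global command palette might share one invocation model, with suggested prompts adapting to whether the user has text selected. All names (`AIEntryPoint`, `buildInvocation`, the prompt strings) are illustrative assumptions for this write-up, not the client’s actual implementation.

```typescript
// Hypothetical sketch: one invocation model shared by both AI entry points.
// Reflects the research insight that users wanted guided prompts rather
// than an open-ended text box.

type AIEntryPoint = "inline" | "palette";

interface AIInvocation {
  entryPoint: AIEntryPoint;
  selectionText: string;      // text the user highlighted, if any
  suggestedPrompts: string[]; // guided prompts shown instead of a blank box
}

// With a selection, offer editing actions on that text; without one,
// offer generative starters. Illustrative prompt sets only.
function buildInvocation(
  entryPoint: AIEntryPoint,
  selectionText = ""
): AIInvocation {
  const suggestedPrompts = selectionText
    ? ["Improve clarity", "Shorten", "Fix grammar"]
    : ["Draft an outline", "Brainstorm ideas", "Summarize this page"];
  return { entryPoint, selectionText, suggestedPrompts };
}
```

The design choice this sketch captures is that both entry points funnel into the same contextual state, so users get consistent guidance whether they invoke AI inline or from the palette.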
The final design delivered a seamless AI experience through contextual triggers and guided prompts, validated by measurable gains in discoverability, usability, and user confidence. Post-launch, the AI feature achieved over 50 percent adoption among beta users in the first month, with fewer support tickets than anticipated. Deliverables included a high-fidelity prototype, annotated interaction guidelines, a research findings report, and a tailored set of UX principles for AI interaction. This project reinforced my belief in progressive disclosure and iterative, evidence-driven design when introducing complex technologies into familiar user environments.