GPT-Powered Voice Assistant Persona
Year
2023
Company
Native Voice AI
OVERVIEW
Role: Sole UX Researcher
Time Frame: 1 Month
Methods: Focus Groups, Brainstorming Workshops, Survey, 1:1 Interviews, Concept Testing
PROBLEM
Traditional voice assistants like Siri, Alexa, and Google Assistant had limited scope and struggled with contextual, multi-turn conversations. Users were frustrated with Alexa's limitations, but this didn't necessarily mean they wanted a general-purpose assistant—instead, they sought an assistant that was actually helpful for their specific needs.
Key Research Questions:
1. Do users actually want a GPT-powered voice assistant?
2. What key features, use cases, and interactions should this assistant provide?
3. Should we develop persona-specific assistants or a general AI? If persona-based, which should be prioritized first?
4. How should tone, personality, and conversation style be designed for maximum user engagement?
5. What research-backed insights should shape product development and strategic decisions?
KEY FINDINGS
1. Users Were Dissatisfied with Alexa but Didn't Necessarily Want a General LLM Assistant
Users expressed frustration with Alexa’s lack of intelligence in conversations, but they also didn’t want an AI that tried to do everything.
A broad GPT assistant risked feeling too generic and overwhelming.
📌 "I like the idea of a more advanced assistant, but I wouldn't want one that tries to do too much. I need it to be great at what it's designed for."
2. Users Preferred Personas, but Not Too Many—a 'Team' of Assistants Was Appealing
Users preferred assistants that felt tailored to their specific needs. Personas created stronger emotional engagement and habitual usage patterns.
📌 "If there were too many assistants, it'd get overwhelming. I'd forget who does what." (P7)
3. Rachel (Lifestyle) and Alex (Research) Emerged as the Most Compelling Assistants
Alex requires more complex, multi-turn conversations, which surfaced a tradeoff between frequency of use (Rachel) and depth of excitement (Alex). Given Native Voice's existing partnerships with iHeartRadio & TuneIn, the research confirmed that Rachel was the best choice for V1 because:
✅ Leverages existing brand integrations (music, events, entertainment).
✅ Simpler AI scope—doesn’t require complex multi-turn conversations like Alex.
✅ Has strong engagement potential—used multiple times per day.
Alex should be built next, but it requires:
❌ More challenging brand partnerships (Britannica, Google Search).
❌ More advanced AI capabilities (deep contextual conversations).
4. Users Preferred a Conversational, Non-Robotic Tone
Participants responded best to assistants that balanced proactive suggestions with non-intrusive responses and spoke in natural, conversational language rather than robotic phrasing.
REFINED VOICE ASSISTANT PERSONAS
1️⃣ Rachel – Social & Lifestyle Concierge
📌 The best friend who always knows the hottest restaurants, events, and nightlife.
🔹 Top Use Cases: Restaurant reservations, event discovery, social planning.
🔹 Brand Integrations: iHeartRadio, TuneIn, OpenTable, Resy, Ticketmaster.
2️⃣ Alex – Knowledge-Packed Research Assistant
📌 The professor you always wished you had—insightful, reliable, and smart.
🔹 Top Use Cases: Deep research, fact-checking, learning new topics.
🔹 Brand Integrations: Britannica, AI News, National Geographic, Google Search.
Research Challenges & How I Addressed Them
Challenge #1: Overcoming Existing Mental Models
Problem: Users had difficulty imagining AI capabilities beyond their experience with Alexa/Siri.
Solution:
Created progressive disclosure exercises starting with familiar scenarios, then gradually introduced more advanced LLM capabilities.
Designed speculative "day in the life" scenarios to help users envision future interactions.
Used Wizard of Oz prototyping to simulate more complex interactions than currently available.
Challenge #2: Minimizing Confirmation Bias
Problem: Risk of designing questions that would confirm our hypotheses about persona-based assistants.
Solution:
Created balanced discussion guides that equally explored benefits of both general and specialized assistants.
Included devil's advocate questions that deliberately challenged our team's assumptions.
Had a second researcher review all materials for potential bias.
Conducted separate analysis sessions where we specifically looked for disconfirming evidence.
Challenge #3: Capturing Nuanced Conversational Preferences
Problem: Traditional usability methods weren't capturing conversational nuances.
Solution:
Developed a custom conversation scoring framework to evaluate tone, helpfulness, and naturalness.
Used comparative A/B testing with different conversational styles to identify preferences.
Implemented real-time reaction tracking during concept testing to capture immediate emotional responses.
Created voice persona cards that participants could sort and annotate with specific feedback.
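A scoring framework like the one above can be tallied very simply. The sketch below is illustrative only: the three dimensions (tone, helpfulness, naturalness) come from the study, but the 1–5 rubric scale, data structures, and example scores are assumptions, not the actual instrument used.

```python
from dataclasses import dataclass
from statistics import mean

# Rubric dimensions from the study; the 1-5 scale is an assumption.
DIMENSIONS = ("tone", "helpfulness", "naturalness")

@dataclass
class TurnScore:
    """One rater's scores for a single assistant turn."""
    tone: int
    helpfulness: int
    naturalness: int

def session_summary(scores: list[TurnScore]) -> dict[str, float]:
    """Average each dimension across all rated turns in a session."""
    return {
        dim: round(mean(getattr(s, dim) for s in scores), 2)
        for dim in DIMENSIONS
    }

# Hypothetical example: three rated turns from one concept-testing session.
rachel_turns = [
    TurnScore(tone=5, helpfulness=4, naturalness=4),
    TurnScore(tone=4, helpfulness=4, naturalness=5),
    TurnScore(tone=5, helpfulness=3, naturalness=4),
]
print(session_summary(rachel_turns))
# → {'tone': 4.67, 'helpfulness': 3.67, 'naturalness': 4.33}
```

Per-dimension averages like these made it easy to compare conversational styles side by side during analysis.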
CONTEXT
Native Voice had previously built brand-specific voice assistants for companies like iHeartRadio and TuneIn, using pre-LLM technologies such as intent-based entity matching (similar to traditional assistants like Siri and Alexa). These assistants handled structured commands but lacked true conversational depth.
With OpenAI's release of public API access to GPT, the company saw an opportunity to leverage LLMs to create a next-generation, conversational voice assistant. My role was to lead a foundational and generative research initiative to uncover:
How users would engage with a GPT-powered voice assistant in a multimodal experience.
What personas, tone, and use cases would make the assistant most compelling.
Whether users preferred a multipurpose LLM assistant or persona-specific assistants.
How research could shape the assistant's design, functionality, and go-to-market strategy.
This research was conducted before OpenAI launched native voice functionality for GPT, making it a first-mover exploration into user expectations for LLM-driven voice interactions.
RESEARCH APPROACH
To uncover actionable insights, I conducted mixed-methods research that combined qualitative and quantitative techniques, ensuring a holistic understanding of user preferences.
Participant Recruitment & Sampling Strategy
To ensure diverse representation, I recruited participants across:
Demographics: Age (18-65), gender (48% female, 44% male, 8% non-binary), income levels, urban/suburban/rural distribution.
Technology Adoption: Balanced mix of early adopters, mainstream users, and laggards.
Voice Assistant Experience: Included both power users and non-users to capture fresh perspectives.
Use Case Diversity: Recruited participants across lifestyle, research, fitness, and productivity domains to test different assistant personas.
1. Focus Groups & Brainstorming Workshops:
Two one-hour sessions with four participants per group.
Used Miro for digital whiteboarding to map user frustrations with current assistants and identify ideal features & personas.
Explored mental models around AI assistants—how users expect them to behave, engage, and provide value (tone, personality, and functionality).
2. Survey:
A survey of 100 participants validated findings from the focus groups.
The survey asked about multi-turn voice assistant use cases in areas such as mental health, fitness, and news.
Users selected their top preferences and provided detailed reasons for their choices.
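Analyzing this kind of "pick your top use cases" question is a straightforward tally. The sketch below is a minimal illustration, not the actual analysis pipeline: the response records and category labels are hypothetical stand-ins for the survey's real data.

```python
from collections import Counter

# Hypothetical responses: each participant picked their top
# multi-turn use cases (categories modeled on the survey's
# mental health / fitness / news / lifestyle areas).
responses = [
    ["lifestyle", "news"],
    ["fitness", "lifestyle"],
    ["lifestyle", "research"],
    ["news", "research"],
]

def top_use_cases(responses: list[list[str]], n: int = 3) -> list[tuple[str, int]]:
    """Tally every selection and return the n most-chosen use cases."""
    counts = Counter(pick for r in responses for pick in r)
    return counts.most_common(n)

print(top_use_cases(responses))
# → [('lifestyle', 3), ('news', 2), ('research', 2)]
```

Ranked counts like these, paired with the open-ended "why" responses, are what validated the focus-group themes quantitatively.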
DELIVERABLE
A one-page report summarizing the key findings, with charts visualizing the survey results.
IMPACT
I presented the findings at the beginning of an internal workshopping session with the Product and Design team. We then brainstormed four personas.
VOICE ASSISTANT PERSONA CREATION
Blair - Your No-Nonsense Personal Admin
Fast, efficient, and hyper-competent. Blair cuts through the noise to get things done—no fluff, just results.
Jay - Your Motivational Personal Trainer
Your go-to fitness coach, tracking meals, workouts, and progress while keeping you motivated every step of the way.
Alex - Your Knowledge-Packed Research Assistant
The professor you always wished you had—thoughtful, insightful, and always ready with deep dives and smart insights.
Rachel - Your Social & Lifestyle Concierge
Your fun-loving, in-the-know bestie who always has the best restaurant, event, and entertainment recommendations.
3. 1:1 Interviews & Concept Testing:
Seven participants took part in 60-minute concept testing sessions, where I presented the voice assistant concepts (a generic assistant plus the four distinct personas).
I gathered feedback on overall usefulness, excitement, and engagement with each concept.
Captured user feedback on conversational tone & personality.
Stakeholder Management & Cross-Functional Collaboration
The success of this research relied on effective collaboration across multiple teams. Here's how I approached stakeholder management:
Executive Alignment
Conducted early stakeholder interviews with the CEO, CTO, and Head of Product to understand business priorities and constraints.
Created a research brief with clear objectives that mapped directly to strategic business questions.
Established regular executive check-ins to share preliminary insights and maintain alignment.
Engineering & Design Partnership
Paired with lead engineers to understand technical capabilities and limitations before finalizing research scope.
Co-created concept prototypes with the design team to ensure research stimuli were technically feasible.
Facilitated collaborative analysis sessions where engineering and design stakeholders directly observed user feedback.
Developed a shared research repository where all teams could access and comment on findings.
Product Team Integration
Established a "research steering committee" with key product managers to ensure findings could translate to roadmap decisions.
Created modular research deliverables tailored to different stakeholder needs (executive summary, technical implications, design recommendations).
Facilitated a collaborative roadmapping workshop where cross-functional teams used research insights to prioritize features.
This collaborative approach ensured research findings had champions across the organization and directly influenced decision-making at multiple levels.
Strategic Impact: How Research Drove Product Decisions
1️⃣ Research Directly Shifted Product Strategy
🔹 Before Research: The team considered a general-purpose AI assistant.
🔹 After Research: The team pivoted to persona-based assistants, prioritizing Rachel (lifestyle) as the first launch.
2️⃣ Research-Informed Conversation Design & UX
🔹 Users preferred a balance between proactive suggestions and non-intrusive responses.
🔹 Refined tone: more conversational, less robotic.
🔹 Avoided "AI overload" by simplifying command structures to reduce cognitive load.
3️⃣ Research Validated Brand Partnerships
🔹 Rachel was the best first launch because Native Voice already had iHeartRadio & TuneIn partnerships.
🔹 Alex required more advanced integrations, making it a second-phase priority.
Learnings & Application to Prototype Development
💡 From Research to Design Requirements
Translated user verbatims into specific conversation design principles for each persona.
Created a "conversation playbook" that established guardrails for tone, proactivity levels, and turn-taking patterns.
Developed persona-specific success metrics that would eventually be used for future product evaluation.
💡 Prototype Validation
Created low-fidelity conversation simulations to validate key interaction patterns.
Tested different visual interface designs to understand how multimodal interactions enhanced the experience.
Established a measurement framework for future evaluation once the product moves beyond the prototype stage.
Final Roadmap & Next Steps
✅ Phase 1: Launch Rachel → Leverages existing brand partnerships & has high engagement potential.
📚 Phase 2: Expand to Alex → Higher cognitive load but valuable for power users.
📅 Phase 3: Explore Blair & Jay → Niche audiences, lower priority.
Conclusion
Instead of competing with OpenAI's inevitable multipurpose GPT assistant, Native Voice used UX research to define a unique, persona-driven approach. By focusing on Rachel first, we created a differentiated, engaging experience that users actually wanted.
🚀 Final Impact
✅ Shifted product strategy from general AI to persona-based assistants.
✅ Ensured user needs directly influenced voice design & functionality.
✅ Created a research-driven product roadmap, prioritizing high-impact personas.
✅ Established a foundation for measuring success in future product iterations.