Sky Glass TV Voice Control Design
1 July 2025
Designing Inclusive Voice Control for Shared Home Use

Summary (speed-read approved)
Sky’s voice control system wasn’t meeting the needs of real households with accessibility and personalisation requirements. Our team redesigned the voice setup flow to support multiple users with individual accessibility preferences, making the experience more inclusive, intuitive, and user-led. I led the accessibility research and design focus, ensuring user needs were addressed in every stage of the process.
The Challenge
As part of my recent Level 7 UX qualification with KCL, Sky acted as an industry client and commissioned teams of students to redesign the voice control system of their Sky Glass TVs.
They made it clear that accessibility was uppermost in their minds, and that they wanted something innovative: "blue sky thinking". Taking this on board, we developed the following concept. Sky Glass TV's voice interaction system was designed for single-user control, but in real homes, TVs are shared by families with vastly different needs. Our research showed that users were frustrated by inaccessible defaults, inconsistent voice control, limited customisation, and a lack of visual feedback. I raised this issue based on personal experience with members of my household who have mixed accessibility needs, and it was met with enthusiasm by both my team and Sky.
One of our hypothetical user scenarios based on real user interviews captured this clearly:
The Sharma Family
- Zayan (8) is sensitive to noise and light
- His mum struggles with hearing but avoids subtitles
- His granddad has arthritis and struggles with the remote

They constantly had to re-adjust the TV for each person: a frustrating, inaccessible experience.

Goals
- Create user-specific voice profiles that store individual accessibility preferences
- Reduce friction in shared households
- Improve clarity, trust, and control in the voice interaction setup
Constraints
- Interface had to remain simple, friendly, and voice-first
- Had to be usable without a touchscreen or physical input
- Voice data privacy was a major user concern
We had four weeks to complete the project, with the goal of presenting a mid-fidelity prototype of our ideas by the end of that timeframe.
Research
We used a combination of interviews, competitor research, and empathy mapping to uncover key pain points with current voice control systems.
Research Methods
- Discovery interviews with users aged 20–60. Participants came from diverse cultural and national backgrounds, including India, the UK, and other parts of Europe, and had a wide range of accessibility needs
- Empathy maps based on real quotes and behaviours
- Competitive analysis (Alexa, Siri, Samsung, Google TV)
- Problem definition and "How might we…" workshop
- Wizard of Oz testing to simulate voice interactions
Key Insights
- Users wanted fast, non-intrusive control, especially when multitasking
- Distrust was common due to lack of transparency and constant listening
- Accessibility settings were hard to find or confusing
- Voice feedback (beeps, icons) was inconsistent or missing
- Users wanted more personalisation beyond functional commands
Research Images
Strategy
We reframed the problem as:
How might we improve Sky Glass to adapt to different users’ needs — effortlessly, enjoyably, and without being overwhelming?
We prioritised features that offered user control, clarity, and adaptability, grounded in UX heuristics:
- Recognition over recall
- Visibility of system status
- User control and freedom
- Flexibility and efficiency of use
Design Process
We created a mid-fidelity prototype walking users through voice-based profile setup. The TV could recognise users and automatically adjust key settings.
Key features:
- Voice-activated wake-up ("Hello Sky")
- Profile creation flow with name, avatar, preferences
- Interaction tone selection: Friendly, Neutral, Adaptive
- Custom voice phrases (e.g. "Open the table" → "Open football scores")
- Smart accessibility setup: Visual, Audio, Subtitles, Sign Language, Mobility
- Shared Viewing Mode that blends the needs of all users
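To illustrate how per-user profiles and Shared Viewing Mode could fit together, here is a minimal Python sketch. The class names, fields, and merge rules below are my own assumptions for illustration only, not Sky's implementation: features are enabled if any active user needs them, and volume is capped at the most sensitive user's limit.

```python
from dataclasses import dataclass, field

@dataclass
class AccessibilityPrefs:
    subtitles: bool = False
    sign_language: bool = False
    max_volume: int = 100        # 0–100; lower for noise-sensitive users
    high_contrast: bool = False
    voice_guidance: bool = False

@dataclass
class VoiceProfile:
    name: str
    avatar: str
    tone: str = "Neutral"        # Friendly / Neutral / Adaptive
    custom_phrases: dict = field(default_factory=dict)
    prefs: AccessibilityPrefs = field(default_factory=AccessibilityPrefs)

def shared_viewing_mode(profiles):
    """Blend every active user's needs: enable a feature if anyone
    requires it, and cap volume at the most sensitive user's limit."""
    return AccessibilityPrefs(
        subtitles=any(p.prefs.subtitles for p in profiles),
        sign_language=any(p.prefs.sign_language for p in profiles),
        max_volume=min(p.prefs.max_volume for p in profiles),
        high_contrast=any(p.prefs.high_contrast for p in profiles),
        voice_guidance=any(p.prefs.voice_guidance for p in profiles),
    )

# Example: the Sharma family scenario from our research
zayan = VoiceProfile("Zayan", "Amira", prefs=AccessibilityPrefs(max_volume=40))
mum = VoiceProfile("Mum", "Keith", prefs=AccessibilityPrefs(subtitles=True))
granddad = VoiceProfile("Granddad", "Keith",
                        custom_phrases={"Open the table": "Open football scores"})

blended = shared_viewing_mode([zayan, mum, granddad])
```

In this sketch, the blended result turns subtitles on (for Mum) and caps volume at 40 (for Zayan), so nobody has to re-adjust the TV by hand.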
Initial Mid-fidelity Prototype Design
User Testing & Iteration
We tested using the Wizard of Oz method, with one interviewer asking the questions and the other acting as the voice control, responding to users in real time (it was quite a fun experience, with lots of laughs all round). We interviewed 8 people and gathered feedback from their interactions with our voice prototype.
What we learned:
- Text readability was impacted by background images
- Progress indicators were unclear or missed
- Abstract labels like "Control Sky Glass entirely with your voice" were confusing
- Avatar names like "Thunder" or "Rain" didn't make immediate sense to users
- Setting screens felt like reading a manual: too much at once
- Users wanted previews before enabling accessibility features
- Voice data privacy and control were serious concerns
How we improved it:
-
Softened backgrounds and improved contrast
-
Added step indicators
-
Split avatar/voice tone into separate steps, renamed avatars (e.g. Amira, Keith)
-
Added visible “Skip” button
-
Reduced categories from 6 to 5 and switched to horizontal carousel
-
Renamed “Voice Control” to “Voice Guidance” and added previews
-
Introduced sliders for screen reader volume/speed
Old vs new setting screen (reduction from 6 → 5 categories)
My Role & Contributions
- Led the design and definition of work to meet W3C accessibility standards
- Applied lived experience to advocate for practical, inclusive changes
- Led user interview prep, drawing on my background in psychology
- Led user interviews
- Co-led the final stakeholder presentation
- Helped adapt the team structure to better handle time zone conflicts
- Contributed to critiques, maintaining accessibility principles throughout
High-Fidelity Prototype
Impact and Takeaways (no, not the Deliveroo kind, sadly)
This project reinforced that inclusive design is never one-size-fits-all — it must adapt to diverse bodies, minds, and moments.
Designing voice-first for a shared device meant thinking beyond features — it meant designing for family dynamics, cognitive load, and emotional friction.
Next Steps
- Test the updated prototype with users who have specific accessibility needs
- Run co-design workshops with people with lived experience of disability
- Explore passive profile recognition (e.g., voice pattern detection)
Tools Used