XR · Spatial UX · EdTech · 2022

Virtual Reality Pronunciation Platform — Speaklah

Immersive pronunciation training designed to address the biggest challenges in learning tonal languages. From an overwhelming consonant matrix to an intuitive, spatial VR experience on Oculus Quest 2.

Company: Speaklah
Role: Product Designer
Year: 2022
Platform: VR · Oculus Quest 2
Team: Unity Dev · UX Researcher · Product Owner
Speaklah — VR Pronunciation Platform hero
3D: Mouth model for consonant articulation guidance
Real-time: Pronunciation feedback with visual comparison
MVP: Shipped from design to Oculus Quest 2 build

Transforming a language matrix into a spatial experience

At Speaklah, I led the design initiative for an immersive language learning tool tackling pronunciation hurdles in language acquisition. My role spanned User Experience, Interaction Design, Prototyping, Virtual Reality, and Product Design — collaborating with a team including a Unity Developer, UX Researcher, and Product Owner/Founder.

The existing Thai language learning matrix, while comprehensive, proved overwhelming and frustrating for new learners. My task was to distill this complexity into an intuitive, easy-to-use VR platform, enhancing the learning experience for learners navigating the intricacies of the Thai language.

Project overview — contribution, team, and challenge definition

Three pillars of design ownership

UX Strategy
  • Developed a robust UX strategy aligned with both user needs and overarching business goals
  • Defined key user personas and mapped user journeys to tailor the learning experience
VR Design
  • Leveraged immersive principles to create an engaging language learning environment
  • Optimized the UI for VR interaction, focusing on spatial awareness and navigational ease
Adaptability
  • Continuously adapted to evolving project requirements and feedback
  • Iteratively refined the design in alignment with user needs and business goals
Key responsibilities — UX strategy, VR design, and adaptability approach

Navigating from research to immersive reality

Design Process
Discovery & Design Thinking
Began with a comprehensive Design Thinking process, interviewing a seasoned polyglot consultant and running ideation sessions to build a deeper understanding of the language matrix, user expectations, and solution requirements.
Design Spike
Ideation & Collaborative Validation
Formulated a robust solution from the research insights, collaborating closely with the multidisciplinary team and ensuring alignment with business goals through iterative design spikes that emphasized utility and effectiveness.
Implementation
Build, Test & Refine in VR
Validated the solution thoroughly before full implementation, confirming its viability and effectiveness, and identified strategic opportunities to streamline the language matrix, reducing complexity for a better user experience.
Design journey — three-phase process from Discovery to Implementation

The Consonant Blueprint

One of the project's highlights was the creation of a groundbreaking Consonant Blueprint. This innovative tool meticulously breaks down the complexity of consonant articulation, illustrating multiple touchpoints and their corresponding activations across various areas such as the mouth, throat, airflow, nasal passages, and lips.

The blueprint is not merely a static representation but a dynamic timeline that visually guides the learner through the intricacies of consonant production — mapping Time, Audio, Glottis, Nasal airflow, Mouth airflow, Tongue, Lips, Jaw, Voice box, Haptics, and Visuals in a single unified view.
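
To make the idea concrete, here is a minimal sketch of how such an articulation timeline could be modeled in code. The channel names mirror the blueprint above; everything else (the types, the helper function, the sample values) is hypothetical and written in TypeScript for brevity, not Speaklah's actual Unity implementation.

```typescript
// Illustrative data model for a consonant articulation timeline.
// Channel names come from the blueprint; types and helper are assumptions.

type Channel =
  | "audio" | "glottis" | "nasalAirflow" | "mouthAirflow"
  | "tongue" | "lips" | "jaw" | "voiceBox" | "haptics" | "visuals";

interface Activation {
  channel: Channel;
  startMs: number;   // when this articulator engages
  endMs: number;     // when it releases
  intensity: number; // 0..1, could drive visual or haptic strength
}

interface ConsonantBlueprint {
  consonant: string; // the letter being practiced
  durationMs: number;
  activations: Activation[];
}

// Returns the channels active at a given playback time, so a 3D mouth
// model and haptics could be driven frame by frame during playback.
function activeChannels(bp: ConsonantBlueprint, timeMs: number): Channel[] {
  return bp.activations
    .filter(a => a.startMs <= timeMs && timeMs < a.endMs)
    .map(a => a.channel);
}
```

Scrubbing the timeline then reduces to calling `activeChannels` with the current playback time and lighting up the corresponding regions of the mouth model.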

Consonant Blueprint — activation timeline and 3D mouth model UI

Design principle: Every consonant becomes a multi-dimensional event. The blueprint lets learners see, hear, and feel (via haptics) what perfect articulation looks like — something no textbook or screen can replicate.

Built for spatial interfaces from the ground up

I led the definition of a robust design system, including a close examination of controller interactions optimized for Oculus Quest 2. I emphasized comprehensive documentation throughout, serving both as a reference for the defined system and as a collaboration aid for the team.

Design system — UI atoms, consonant map reference, controller tooltips, button interactions
UI Atoms & Components
  • Button variants (primary, secondary, ghost, disabled)
  • Info & tooltip components
  • Spatial audio indicators
  • Consonant Map Reference (High / Mid / Low class system)
VR-Specific Patterns
  • Controller tooltip system (Quest 2 mapped)
  • 6-state button interaction animation
  • Spatial navigation patterns
  • Haptic feedback guidelines
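
As an illustration of what a multi-state button interaction might look like under the hood, here is a minimal state-machine sketch. The six states and the event names are assumptions for the example; the actual states and transitions in Speaklah's design system are not documented here.

```typescript
// Hypothetical six states for a VR button and the controller events that
// move between them. The real design system's states may differ.
type ButtonState =
  | "idle" | "hover" | "pressed" | "selected" | "focused" | "disabled";

type ButtonEvent =
  | "pointerEnter" | "pointerExit" | "triggerDown" | "triggerUp"
  | "disable" | "enable";

const transitions: Record<ButtonState, Partial<Record<ButtonEvent, ButtonState>>> = {
  idle:     { pointerEnter: "hover", disable: "disabled" },
  hover:    { pointerExit: "idle", triggerDown: "pressed", disable: "disabled" },
  pressed:  { triggerUp: "selected", pointerExit: "idle" },
  selected: { pointerExit: "focused", triggerDown: "pressed" },
  focused:  { pointerEnter: "selected", disable: "disabled" },
  disabled: { enable: "idle" },
};

// Unknown events leave the state unchanged, which keeps the button
// predictable when controllers send events the current state ignores.
function next(state: ButtonState, event: ButtonEvent): ButtonState {
  return transitions[state][event] ?? state;
}
```

Modeling the interaction as an explicit table like this makes it easy to attach one animation per state and keep the Quest 2 controller mapping in a single place.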
VR testing — Figma to Shapes XR pipeline for early iteration in the virtual space

VR-first iteration: by moving from Figma to Shapes XR early, I could test and validate designs before development began. Working inside the virtual space enabled user testing within VR itself, surfacing insights for user-centric adjustments at the earliest stages.

What Speaklah uniquely delivers

Three core value propositions — Visualization, Real-Time Feedback, Immersive VR
Innovative Visualization
Empowered users by providing a specialized 3D mouth model to guide them in perfecting sound articulation, removing uncertainties and frustrations linked to conventional methods.
Real-Time Feedback & Comparison
Instant visual feedback on pronunciation efforts, offering a precise roadmap for improvement. Users achieve greater accuracy with a clear understanding of their progress.
Immersive VR Experience
Accelerated language fluency by creating a safe virtual reality environment for users to practice speaking. VR transforms language learning into an immersive and empowering experience.
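
As a rough sketch of the comparison idea behind the real-time feedback: sample the learner's articulation contour (pitch, airflow, or intensity over time), resample it against a reference, and turn the difference into a single score that can drive the visual comparison. Real pronunciation scoring would use proper speech analysis (e.g. pitch tracking or dynamic time warping); every name and the scoring rule below are purely illustrative.

```typescript
// Linearly resample a contour to a fixed number of points so two
// recordings of different lengths can be compared point by point.
function resample(samples: number[], length: number): number[] {
  return Array.from({ length }, (_, i) => {
    const t = (i / (length - 1)) * (samples.length - 1);
    const lo = Math.floor(t), hi = Math.ceil(t);
    return samples[lo] + (samples[hi] - samples[lo]) * (t - lo);
  });
}

// Toy similarity score in 0..1: normalized mean absolute difference
// between the reference and the learner's attempt. 1 means identical.
function similarity(reference: number[], attempt: number[], points = 32): number {
  const ref = resample(reference, points);
  const att = resample(attempt, points);
  const maxAbs = Math.max(...ref.map(Math.abs), ...att.map(Math.abs), 1e-9);
  const meanDiff =
    ref.reduce((sum, r, i) => sum + Math.abs(r - att[i]), 0) / points;
  return 1 - Math.min(meanDiff / maxAbs, 1);
}
```

The value of a score like this is that it can be rendered instantly as an overlaid pair of curves plus a progress number, which is the "precise roadmap for improvement" described above.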

Impact & results

The project successfully translated a complex, expert-level language learning system into an approachable spatial experience. Key outcomes from post-MVP validation:

40%: Improvement in task completion times observed in user testing sessions
35%: Increase in user satisfaction ratings vs. the original matrix-based approach
Reduced cognitive load: users navigated consonant articulation with significantly less overwhelm
MVP: Shipped and validated end-to-end, from design to a functional Oculus Quest 2 build