Virtual Reality Pronunciation Platform — Speaklah
Immersive pronunciation training designed to address the biggest challenges in learning tonal languages. From an overwhelming consonant matrix to an intuitive, spatial VR experience on Oculus Quest 2.
Transforming a language matrix into a spatial experience
At Speaklah, I led the design initiative for an immersive language learning tool tackling pronunciation hurdles in language acquisition. My role spanned user experience, interaction design, prototyping, virtual reality, and product design, and I collaborated with a team that included a Unity Developer, a UX Researcher, and the Product Owner/Founder.
The existing Thai language learning matrix, while comprehensive, proved overwhelming and frustrating for new learners. My task was to simplify this complexity, translating it into an intuitive, easy-to-use VR platform that enhances the learning experience for learners navigating the intricacies of the Thai language.
Three pillars of design ownership
- Strategy: Developed a robust UX strategy aligned with user needs and overarching business goals, defining key user personas and mapping user journeys to tailor the learning experience
- Immersion: Leveraged immersive design principles to create an engaging learning environment, optimizing the UI for VR interaction with a focus on spatial awareness and navigational ease
- Iteration: Continuously adapted to evolving project requirements and feedback, iteratively refining the design in alignment with user needs and business goals
Navigating from research to immersive reality
The Consonant Blueprint
One of the project's highlights was the creation of the Consonant Blueprint. This tool breaks down the complexity of consonant articulation, illustrating multiple touchpoints and their corresponding activations across the mouth, throat, airflow, nasal passages, and lips.
The blueprint is not merely a static representation but a dynamic timeline that visually guides the learner through the intricacies of consonant production — mapping Time, Audio, Glottis, Nasal airflow, Mouth airflow, Tongue, Lips, Jaw, Voice box, Haptics, and Visuals in a single unified view.
Design principle: Every consonant becomes a multi-dimensional event. The blueprint lets learners see, hear, and feel (via haptics) what perfect articulation looks like — something no textbook or screen can replicate.
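To make the idea concrete, the blueprint can be thought of as a set of parallel timeline tracks, one per articulation channel, each with timed activations. The sketch below is a hypothetical data model, not the team's actual implementation; the channel names follow the tracks listed above, while the class names, fields, and example values are assumptions for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical channel names, taken from the blueprint's tracks.
CHANNELS = [
    "audio", "glottis", "nasal_airflow", "mouth_airflow",
    "tongue", "lips", "jaw", "voice_box", "haptics", "visuals",
]

@dataclass
class Activation:
    channel: str      # one of CHANNELS
    start_ms: int     # onset within the consonant's timeline
    end_ms: int       # offset
    intensity: float  # 0.0-1.0, could drive visual/haptic strength

@dataclass
class ConsonantBlueprint:
    symbol: str       # the consonant being modeled
    duration_ms: int
    activations: list[Activation] = field(default_factory=list)

    def active_at(self, t_ms: int) -> dict[str, float]:
        """Return the channels active at time t, mapped to intensity."""
        return {a.channel: a.intensity
                for a in self.activations
                if a.start_ms <= t_ms < a.end_ms}

# Illustrative example: a voiced nasal consonant.
bp = ConsonantBlueprint("n", 220, [
    Activation("glottis", 0, 220, 1.0),
    Activation("nasal_airflow", 20, 200, 0.8),
    Activation("tongue", 0, 180, 0.9),
])
print(bp.active_at(100))
# {'glottis': 1.0, 'nasal_airflow': 0.8, 'tongue': 0.9}
```

A model like this lets the VR scene sample `active_at` every frame and drive each channel's visuals and haptics from a single unified timeline, mirroring the "dynamic timeline" described above.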
Built for spatial interfaces from the ground up
I took the lead in defining a robust design system, including a close examination of controller interactions optimized for the Oculus Quest 2. I emphasized comprehensive documentation throughout, which served as a reference for the defined system and facilitated collaboration among team members.
- Button variants (primary, secondary, ghost, disabled)
- Info & tooltip components
- Spatial audio indicators
- Consonant Map Reference (High / Mid / Low class system)
- Controller tooltip system (Quest 2 mapped)
- 6-state button interaction animation
- Spatial navigation patterns
- Haptic feedback guidelines
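The 6-state button interaction above can be sketched as a small state machine. The state and event names below are assumptions (the original six states are not enumerated in this document); the point is the shape of the system: controller-ray events drive transitions, and a disabled button ignores input.

```python
# Hypothetical sketch of a 6-state VR button state machine.
STATES = {"default", "hovered", "pressed", "released", "selected", "disabled"}

# (current_state, event) -> next_state
TRANSITIONS = {
    ("default", "ray_enter"): "hovered",
    ("hovered", "ray_exit"): "default",
    ("hovered", "trigger_down"): "pressed",
    ("pressed", "trigger_up"): "released",
    ("released", "anim_done"): "selected",   # release animation completes
    ("selected", "ray_exit"): "default",
}

def step(state: str, event: str) -> str:
    """Advance the button one event; unknown events keep the current
    state, and a disabled button ignores all input."""
    if state == "disabled":
        return state
    return TRANSITIONS.get((state, event), state)

# Controller ray hovers the button, the trigger clicks, the
# selection animation plays out:
s = "default"
for e in ["ray_enter", "trigger_down", "trigger_up", "anim_done"]:
    s = step(s, e)
print(s)  # selected
```

Expressing the interaction as an explicit transition table keeps the button's animation states auditable, which is useful when each state also maps to a haptic pulse and a controller tooltip.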
VR-first iteration: By moving from Figma to Shapes XR early, I could test designs and ensure their quality before development began. Working inside the virtual space enabled user testing within VR itself, providing valuable insights for user-centric adjustments at the earliest stages.
What Speaklah uniquely delivers
Impact & results
The project successfully translated a complex, expert-level language learning system into an approachable spatial experience. Key outcomes, measured after MVP validation: