How can we access spatial computing without the headset?
Turn your smartwatch into a spatial controller
What’s happening?
At CES 2025, startup Doublepoint unveiled WowMouse, a free Apple Watch app that turns hand gestures into a remote control for Macs and smart home devices. The launch follows the company's successful Android app.
Why it matters
1. Technological Transition
We’re in a transition period between traditional screen interfaces and future spatial computing.
This approach could bridge that gap by turning the smartwatch you already wear into a spatial controller for your devices.
2. Adoption Benefits
While the industry is heavily focused on VR/AR hardware, user adoption of XR gadgets will take time.
This move lowers the barrier to adopting spatial-style interactions today while solving real problems, like controlling devices from a distance when touchscreens or voice commands aren't ideal.
3. Accessibility Advantages
The gesture-based approach expands smart home accessibility by offering non-verbal device control to users who require or prefer it.
How the Watch app works
Point at a device/target, and tap your fingers
Rotate your palm to adjust intensity, like dimming lights (a rough sketch of this mapping follows the list)
Get immediate feedback through your watch (visual, haptic, audio)
Customize sensitivity to match your gestures
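To make the rotate-to-adjust idea concrete, here is a minimal watchOS sketch in Swift that maps wrist roll (read from Core Motion) to a 0–100 intensity value and plays a haptic tick at the ends of the range. It illustrates the interaction pattern only, not Doublepoint's actual implementation; the class name, the 30 Hz update rate, and the ±90° rotation range are all assumptions.

```swift
import CoreMotion
import WatchKit

// Illustrative sketch only -- not Doublepoint's actual implementation.
// Maps palm rotation (wrist roll) to a 0-100 intensity value,
// e.g. for dimming a smart light, with haptic feedback at the range limits.
final class PalmRotationDimmer {
    private let motion = CMMotionManager()
    private var baselineRoll: Double?
    private var lastIntensity = -1

    /// Called with each new intensity value (0...100).
    var onIntensityChange: ((Int) -> Void)?

    func start() {
        guard motion.isDeviceMotionAvailable else { return }
        motion.deviceMotionUpdateInterval = 1.0 / 30.0  // 30 Hz is plenty for UI control
        motion.startDeviceMotionUpdates(to: .main) { [weak self] data, _ in
            guard let self, let data else { return }
            let roll = data.attitude.roll  // radians, rotation about the forearm axis

            // Treat the first sample (the "pointing" pose) as neutral.
            if self.baselineRoll == nil { self.baselineRoll = roll }
            let delta = roll - (self.baselineRoll ?? roll)

            // Map +/-90 degrees of palm rotation onto 0...100.
            let normalized = max(-1.0, min(1.0, delta / (Double.pi / 2)))
            let intensity = Int((normalized + 1) * 50)
            guard intensity != self.lastIntensity else { return }
            self.lastIntensity = intensity
            self.onIntensityChange?(intensity)

            // Immediate haptic feedback when the user hits either end of the range.
            if intensity == 0 || intensity == 100 {
                WKInterfaceDevice.current().play(.click)
            }
        }
    }

    func stop() { motion.stopDeviceMotionUpdates() }
}
```

A real controller would also need tap detection to select the target and a pairing layer to route the value to the right device, but the core loop is just continuous sensor-to-value mapping with instant feedback.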
Key insights for spatial interaction design
1. Natural Learning & Interaction
Start with the simplest gesture (pointing and tapping)
Add advanced features through natural extensions (rotation)
Build on familiar smartwatch interactions (Apple Watch's Double Tap)
2. Accessibility Focus
Supports both left- and right-handed use
Offers multiple feedback channels for different needs (visual, haptic, audio)
3. Error Prevention
Clear connection feedback
Different sensitivity modes to reduce false positives (see the sketch after this list)
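One way to think about a "sensitivity mode" is as a detection threshold plus a refractory window: a stricter threshold rejects accidental wrist movement at the cost of missing light taps. The Swift sketch below illustrates that trade-off; the enum, the specific g-force values, and the 0.3 s window are assumptions for illustration, not the app's actual parameters.

```swift
import Foundation

// Illustrative sketch -- one way a "sensitivity mode" could trade off
// missed taps against false positives (not the app's actual logic).
enum Sensitivity {
    case low, medium, high

    /// Minimum acceleration spike (in g) treated as an intentional tap.
    var tapThreshold: Double {
        switch self {
        case .low:    return 2.0  // strict: fewer false positives, more missed taps
        case .medium: return 1.5
        case .high:   return 1.0  // lenient: catches light taps, more false triggers
        }
    }

    /// Window after a detected tap during which further spikes are ignored.
    var refractoryInterval: TimeInterval { 0.3 }
}

struct TapDetector {
    var sensitivity: Sensitivity = .medium
    private var lastTap = Date.distantPast

    /// Feed the peak acceleration magnitude per sample window; returns true on a tap.
    mutating func process(peakAcceleration: Double, at time: Date = Date()) -> Bool {
        guard peakAcceleration >= sensitivity.tapThreshold,
              time.timeIntervalSince(lastTap) >= sensitivity.refractoryInterval
        else { return false }
        lastTap = time
        return true
    }
}
```

The same structure extends naturally to per-user calibration, which is effectively what "customize sensitivity to match your gestures" implies.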
Real-world limitations:
Gesture recognition accuracy (97%) still falls short of ideal user expectations.
Users must manually reconnect when switching between devices, which creates friction.
Worth thinking about
Imagine the spatial computing era arrives:
How might your current interface evolve into spatial interactions?
What existing user behaviors in your product naturally align with spatial interactions? (e.g., scrolling, zooming, navigation)