What if every gesture you made — a swipe, a tap, a flick — could instantly control your devices without touching a thing?
Photo source: Kickstarter
For decades, the way we interact with technology has been bound to physical interfaces — screens, remotes, keypads. Even as devices have evolved, the method of control has remained largely static. CenWatch introduces a new interaction model: one where the air itself becomes an interface, and your gestures shape how devices respond.
This refined wristband integrates LiDAR sensors to detect finger position with millimeter-level accuracy, while an onboard IMU interprets the wrist's movement through space. Together, they construct a virtual control surface in front of you — invisible, yet entirely responsive. Whether you're writing mid-air, navigating menus with swipes, or typing without a physical keyboard, CenWatch interprets your intention with precision. Pairing is simple, and once connected, devices respond as soon as your wrist lifts — then disconnect the moment it lowers. The system is platform-agnostic, functioning across Android, iOS, Windows, macOS, Linux, smart TVs, VR systems, and IFTTT-linked devices.
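To make the interaction model concrete, here is a minimal sketch of how a lift-to-connect, lower-to-disconnect gesture loop might look in software. Everything in it — the threshold values, the `Sample` fields, and the event names — is an illustrative assumption, not CenWatch's actual firmware or API; it only mirrors the behavior described above: the IMU decides whether the wrist is raised, and LiDAR fingertip motion within that active window is read as swipes.

```python
from dataclasses import dataclass

# Hypothetical thresholds: not from CenWatch, chosen only for illustration.
LIFT_PITCH_DEG = 30.0  # assumed wrist-pitch angle that counts as "lifted"
SWIPE_MM = 40.0        # assumed fingertip travel needed to register a swipe

@dataclass
class Sample:
    pitch_deg: float    # wrist pitch reported by the IMU
    finger_x_mm: float  # horizontal fingertip position from the LiDAR array

def interpret(samples):
    """Turn a stream of sensor samples into connect/disconnect/swipe events."""
    events = []
    active = False      # True while the wrist is lifted and a device is paired
    start_x = None      # fingertip position when the current swipe was armed
    for s in samples:
        lifted = s.pitch_deg >= LIFT_PITCH_DEG
        if lifted and not active:
            active, start_x = True, s.finger_x_mm
            events.append("connect")        # wrist lifts: device responds
        elif not lifted and active:
            active, start_x = False, None
            events.append("disconnect")     # wrist lowers: link drops
        elif active:
            dx = s.finger_x_mm - start_x
            if abs(dx) >= SWIPE_MM:
                events.append("swipe_right" if dx > 0 else "swipe_left")
                start_x = s.finger_x_mm     # re-arm for the next swipe
    return events

stream = [
    Sample(5, 0),    # wrist down: idle
    Sample(45, 0),   # wrist lifts: connect
    Sample(45, 25),  # fingertip drifts, below swipe threshold
    Sample(45, 50),  # threshold crossed: swipe right
    Sample(10, 50),  # wrist lowers: disconnect
]
print(interpret(stream))  # → ['connect', 'swipe_right', 'disconnect']
```

A real implementation would also debounce the pitch threshold and filter LiDAR noise, but the state machine — idle, active, event — is the core of a screenless control surface like this.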
Beyond interface innovation, CenWatch addresses structural limitations in modern tech ecosystems. For AR glasses, it acts not only as a gesture-based controller but as a processing center, lightening the hardware load and extending battery performance. In smart environments, it replaces the need for remotes or voice inputs entirely. A twist of the wrist moves between devices. A subtle gesture executes a command. With a signal range of nearly 300 feet, CenWatch expands interaction from a confined surface to open space — fluid, continuous, and screenless.