Gestures Everywhere is a multimodal framework that supports ubiquitous computing.
Our framework aggregates real-time data from a wide range of heterogeneous sensors and provides an abstraction layer through which other ubiquitous applications can request information about an environment or a specific individual.
The Gestures Everywhere framework supports both low-level spatio-temporal properties, such as presence, count, orientation, location, and identity, and higher-level descriptors, including movement classification, social clustering, and gesture recognition.
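To make the abstraction layer concrete, here is a minimal sketch of how an application might request such properties. The GEClient class, server address, JSON message format, and property names are all hypothetical illustrations of the request/response pattern, not the framework's actual API.

```python
# Hypothetical client sketch: queries a Gestures Everywhere-style server
# for properties of an environment zone or a specific individual.
import json
import socket


class GEClient:
    """Toy client that sends one JSON query per connection (assumed protocol)."""

    def __init__(self, host="localhost", port=5000):
        self.addr = (host, port)

    def query(self, property_name, target=None):
        """Request a property (e.g. 'presence', 'count', 'gesture') for a target."""
        request = {"property": property_name, "target": target}
        with socket.create_connection(self.addr) as sock:
            sock.sendall(json.dumps(request).encode() + b"\n")
            response = sock.makefile().readline()
        return json.loads(response)


if __name__ == "__main__":
    client = GEClient()
    # Low-level spatio-temporal property: how many people are near a display?
    print(client.query("count", target="display-3"))
    # Higher-level descriptor: the most recent gesture from a tracked user.
    print(client.query("gesture", target="user-42"))
```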
This video shows an example of the Gestures Everywhere framework being used to track a user as they walk around the 5th floor of the MIT Media Lab.
The four purple circles denote the locations of the Glass Infrastructure displays; a circle is filled when presence is detected at that location. The filled green circles represent the estimated location of a user. If the likelihood of a user's location drops below a specific uncertainty threshold, their location is no longer displayed (hence the user's location in the video is hidden as they walk down the long corridor, until they reach the displays at the end of the corridor).
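The display rule described above can be sketched in a few lines. The threshold value and the TrackedUser structure below are illustrative assumptions; the video does not specify the actual threshold or data types used.

```python
# Minimal sketch: only draw a user's estimated location when the tracker's
# likelihood exceeds an uncertainty threshold (threshold value assumed).
from dataclasses import dataclass

UNCERTAINTY_THRESHOLD = 0.6  # assumed value, not taken from the framework


@dataclass
class TrackedUser:
    user_id: str
    x: float           # estimated position in floor-plan coordinates
    y: float
    likelihood: float  # tracker's confidence in this estimate, in [0, 1]


def visible_users(users):
    """Return only the users confident enough to draw as filled green circles."""
    return [u for u in users if u.likelihood >= UNCERTAINTY_THRESHOLD]


if __name__ == "__main__":
    users = [
        TrackedUser("alice", 12.0, 4.5, 0.92),  # drawn
        TrackedUser("bob", 30.1, 8.2, 0.35),    # hidden, e.g. mid-corridor
    ]
    for u in visible_users(users):
        print(f"draw {u.user_id} at ({u.x}, {u.y})")
```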