Enabling Expressive User Interaction in a Multimodal Interface for Object Selection in Virtual 3D Environments
Mentor: Dr. Sriganesh Madhvanath, Senior Research Scientist, HP Labs India
Duration: July 2011-June 2012
Technologies: Stanford NLP, Blender, Microsoft Kinect v1, Microsoft Speech API, C++, Java
We developed a system that enables interaction with a 3D scene using speech and visual gestures. The system uses speech as a filter on the pointing gesture, alleviating the shortcomings of unimodal interfaces for 3D environments. It also supports referring to objects using speech alone: references can be made through a distinguishing property of the object, its spatial location in the scene, or its spatial relation to another object.
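The speech-as-filter idea can be sketched as follows: the pointing gesture yields a set of candidate objects (those inside the pointing cone), and the spoken attribute then narrows that set to the intended target. This is a minimal, hypothetical illustration; the class and field names (`SceneObject`, `filterBySpeech`) are assumptions, not the system's actual API.

```java
import java.util.ArrayList;
import java.util.List;

public class MultimodalSelector {
    // Hypothetical scene object carrying one distinguishing property (color).
    static class SceneObject {
        final String name;
        final String color;
        SceneObject(String name, String color) { this.name = name; this.color = color; }
    }

    // Filter the pointing-gesture candidates by a spoken attribute,
    // e.g. the user points at a cluster and says "the blue one".
    static List<SceneObject> filterBySpeech(List<SceneObject> candidates, String spokenColor) {
        List<SceneObject> matches = new ArrayList<>();
        for (SceneObject obj : candidates) {
            if (obj.color.equalsIgnoreCase(spokenColor)) {
                matches.add(obj);
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        // The pointing cone alone is ambiguous: it covers three objects.
        List<SceneObject> pointedAt = List.of(
            new SceneObject("cube1", "red"),
            new SceneObject("sphere1", "blue"),
            new SceneObject("cube2", "red"));

        // The spoken attribute "blue" disambiguates to a single target.
        List<SceneObject> target = filterBySpeech(pointedAt, "blue");
        System.out.println(target.size() + " " + target.get(0).name);
    }
}
```

In the actual system the candidate set would come from Kinect skeletal tracking and the attribute from the speech recognizer, but the fusion step reduces to this kind of filtering.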
The project was accepted as a demo presentation at the 14th ACM International Conference on Multimodal Interaction (ICMI '12), and the accompanying paper was published in the conference proceedings. [paper]