Category Archives: interfaces

2019 SG:D Immersive Analytics @ PIXEL

This talk is for anyone interested in data analysis, data exploration, immersion, or using data to solve problems in situ and in real time, rather than after the event and away from the sources of the data. It is about Immersive Analytics, an emerging field of research and development which seeks a deeper, immersive engagement with data and its analysis in virtual or augmented reality.

13th June, 06:00 PM-08:30 PM, PIXEL Singapore

What is it about?
This talk introduces and discusses many examples of immersion, data analysis and hence immersive analytics.
The first thing to understand is that there are many meanings of the term “immersive”, alongside different approaches to analytics. There are two primary facets to the term immersive analytics. The first, and more literal, aspect is to be immersed or submerged in the data and analytic task. This gives rise to an examination of the range of human senses, modalities and technologies which might allow one’s senses to be fully immersed. The second facet is the provision of computational analysis methods which facilitate a deep mental involvement with the task and data. Smooth interaction with the data and analytic task might allow people to concentrate and focus their attention, letting them enter a “flow state” which affords the depth of thought required to be fully immersed.


2019 Seminar: Object Recognition in HCI with Radar, Vision and Touch

In April 2019, I will deliver a new lecture on Object Recognition in HCI with Radar, Vision and Touch at the School of Computing, National University of Singapore.

To access the papers noted here, click on a paper’s name below to open the PDF directly. For the BibTeX and the official ACM copy, click on the ACM logo beside the paper name.

Abstract

The exploration of novel sensing to facilitate new interaction modalities is an active research topic in Human-Computer Interaction. Across the breadth of HCI we can see the development of new forms of interaction underpinned by the appropriation or adaptation of sensing techniques based on the measurement of sound, light, electric fields, radio waves, biosignals, etc. In this talk I will delve into three forms of sensing for object detection and interaction: radar, blurred images and touch.

RadarCat (UIST 2016, Interactions 2018, IMWUT 2018) is a small, versatile system for material and object classification which enables new forms of everyday proximate interaction with digital devices. RadarCat exploits the raw radar signals, which are unique when different materials and objects are placed on the sensor. Using machine learning techniques, these objects can be accurately recognized, along with an object’s thickness, its state (for example, a filled or empty mug) and different body parts. This gives rise to research and applications in context-aware computing, tangible interaction (with tokens and objects), industrial automation (e.g., recycling) and laboratory process control (e.g., traceability). AquaCat (MobileHCI 2017 workshop) is a low-cost radar-based system capable of discriminating between a range of liquids and powders. Further, in Solinteraction we explore radar as a platform for sensing tangible interaction: counting, ordering and identifying objects, and tracking their orientation, movement and distance. We detail the design space and practical use cases for such interaction, which allows us to identify a series of design patterns beyond static interaction, ones which are continuous and dynamic with radar.
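
As a rough illustration of the recognition step in a RadarCat-style pipeline, the sketch below trains a random-forest classifier on fixed-length feature vectors standing in for processed radar returns. The feature layout, class labels and data are synthetic assumptions for illustration, not the actual RadarCat features or training set.

```python
# Minimal sketch of the classification step in a RadarCat-style pipeline.
# The radar "features" and labels below are synthetic placeholders; a real
# system would extract descriptors from the raw radar signal for each capture.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

N_FEATURES = 64  # hypothetical length of a flattened radar signal descriptor
MATERIALS = ["empty mug", "full mug", "aluminium", "wood", "plastic"]

# Synthetic stand-in for labelled radar captures of objects placed on the sensor.
X = rng.normal(size=(500, N_FEATURES))
y = rng.integers(0, len(MATERIALS), size=500)
X += y[:, None] * 0.5  # give each class a crudely separable signature

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))
print("example prediction:", MATERIALS[clf.predict(X_test[:1])[0]])
```

The same train-then-classify structure extends to the thickness, state and body-part recognition mentioned above; only the labels change.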

Beyond radar, SpeCam (MobileHCI ’17) is a lightweight surface color and material sensing approach for mobile devices which uses only the front-facing camera and the display as a multi-spectral light source. We leverage a natural use of mobile devices (placing them face down) to detect the material underneath and therefore infer the location or placement of the device. SpeCam can then be used to support “discreet computing” with micro-interactions, avoiding the numerous distractions users face daily with today’s mobile devices. Our two-part study shows that SpeCam can i) recognize colors in the HSB space when they are 10 degrees apart near the 3 dominant colors and 4 degrees apart otherwise, and ii) recognize 30 types of surface materials with 99% accuracy. These findings are further supported by a spectroscopy study. Finally, we suggest a series of applications based on simple mobile micro-interactions suitable for using the phone when placed face down, working only with blurred images.
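
To make the sensing idea concrete, here is a minimal sketch of how a SpeCam-like fingerprint could be gathered: the display is flashed in a few known colours while the front camera records the blurred reflection, and the per-colour responses are concatenated into a feature vector matched against previously captured surfaces. The colour set, timings and nearest-neighbour matcher are my assumptions, not the published SpeCam implementation.

```python
# Sketch of a SpeCam-like capture loop: the display acts as a multi-spectral
# light source and the front camera records the blurred reflection of the
# surface the phone is lying on. All parameters here are illustrative.
import numpy as np
import cv2

ILLUMINATION = {              # display colours used as the "light source"
    "red":   (0, 0, 255),     # OpenCV uses BGR channel ordering
    "green": (0, 255, 0),
    "blue":  (255, 0, 0),
    "white": (255, 255, 255),
}

def capture_fingerprint(cam_index: int = 0) -> np.ndarray:
    """Show each illumination colour, record the mean camera response,
    and concatenate the responses into one surface fingerprint."""
    cam = cv2.VideoCapture(cam_index)
    features = []
    for bgr in ILLUMINATION.values():
        screen = np.full((480, 640, 3), bgr, dtype=np.uint8)
        cv2.imshow("illumination", screen)
        cv2.waitKey(200)                  # give the display and camera time to settle
        ok, frame = cam.read()
        if not ok:
            raise RuntimeError("camera read failed")
        features.append(frame.reshape(-1, 3).mean(axis=0))  # mean BGR response
    cam.release()
    cv2.destroyAllWindows()
    return np.concatenate(features)

def classify(fingerprint: np.ndarray, reference: dict) -> str:
    """Nearest-neighbour match against fingerprints of known surfaces."""
    return min(reference, key=lambda k: np.linalg.norm(reference[k] - fingerprint))
```

In use, one fingerprint per known surface would be captured first to build the reference dictionary; the accuracy reported in the paper comes from a proper classifier trained over many samples rather than this single nearest-neighbour match.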

Finally, with touch, I will show a sensing technique for detecting finger movements on the nose, using EOG sensors embedded in the frame of a pair of eyeglasses (ISWC 2017). Eyeglass wearers can use their fingers to exert different types of movement on the nose, such as flicking, pushing or rubbing. These subtle gestures support “discreet computing”: controlling a wearable computer without calling attention to the user in public. We present two user studies in which we test recognition accuracy for these movements. I will conclude this talk with some speculations about how touch, radar and vision processing might be used to realise “blended reality” interactions in AR and beyond.
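
For a flavour of how such nose gestures might be recognized, the sketch below windows a two-channel EOG signal, computes simple per-window statistics and cross-validates an SVM over three gesture classes. The sampling rate, window length, features and synthetic data are assumptions for illustration, not the actual pipeline from the ISWC 2017 study.

```python
# Illustrative sketch: classify nose gestures (flick / push / rub) from
# windows of a two-channel EOG signal. The data is synthetic and the
# features are deliberately simple; real recordings would replace them.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
FS = 100               # assumed sampling rate in Hz
WINDOW = 2 * FS        # 2-second gesture windows
GESTURES = ["flick", "push", "rub"]

def features(window: np.ndarray) -> np.ndarray:
    """Per-window statistics over the two EOG channels."""
    return np.concatenate([
        window.mean(axis=0),
        window.std(axis=0),
        np.abs(np.diff(window, axis=0)).sum(axis=0),  # overall signal activity
    ])

# Synthetic stand-in for labelled gesture windows.
X, y = [], []
for label, _gesture in enumerate(GESTURES):
    for _ in range(60):
        w = rng.normal(scale=1.0 + label, size=(WINDOW, 2))
        X.append(features(w))
        y.append(label)
X, y = np.array(X), np.array(y)

clf = SVC(kernel="rbf")
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```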

I will also use this talk to answer questions on the upcoming Blended Reality Summer School, May 13, 2019 to May 17, 2019 at the Keio-NUS CUTE Center, National University of Singapore. Applications for this will open soon. 
