Category Archives: MobileHCI

OBJECT INTERACTION IN AR

2019 Seminar: Object recognition in HCI with Radar, Vision and Touch.

In April 2019, I will deliver a new lecture on Object Recognition in HCI with Radar, Vision and Touch at the School of Computing, National University of Singapore.

To access the papers noted here, click on the paper names below for the PDF directly. For the BibTeX and the official ACM copy, click on the ACM logo beside the paper name.

Abstract

The exploration of novel sensing to facilitate new interaction modalities is an active research topic in Human-Computer Interaction. Across the breadth of HCI we can see the development of new forms of interaction underpinned by the appropriation or adaptation of sensing techniques based on the measurement of sound, light, electric fields, radio waves, biosignals, etc. In this talk I will delve into three forms of sensing for object detection and interaction: radar, blurred images and touch.

RadarCat (UIST 2016, Interactions 2018, IMWUT 2018) is a small, versatile system for material and object classification which enables new forms of everyday proximate interaction with digital devices. RadarCat exploits the raw radar signals, which are unique when different materials and objects are placed on the sensor. Using machine learning techniques, these objects can be accurately recognized. An object’s thickness, state (filled or empty mug) and different body parts can also be recognized. This gives rise to research and applications in context-aware computing, tangible interaction (with tokens and objects), industrial automation (e.g., recycling) and laboratory process control (e.g., traceability). AquaCat (MobileHCI 2017 workshop) is a low-cost radar-based system capable of discriminating between a range of liquids and powders. Further, in Solinteraction we explore two research questions around radar as a platform for sensing tangible interaction: counting, ordering and identifying objects, and tracking their orientation, movement and distance. We detail the design space and practical use-cases for such interaction, which allows us to identify a series of design patterns with radar that go beyond static interaction to continuous and dynamic interaction.
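
For readers who want a concrete feel for what this kind of recognition involves, here is a minimal sketch of training a classifier over radar-derived feature vectors. It is only an illustration under stated assumptions, not the RadarCat implementation: the placeholder features, the object labels and the choice of a random-forest classifier are all assumptions for the example.

```python
# Minimal sketch: classifying objects from radar-derived feature vectors.
# Assumes features have already been extracted from the raw radar signal
# (e.g., per-channel amplitude statistics); this is NOT the RadarCat pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Placeholder training data: 200 samples x 64 radar features,
# each labelled with the object that was placed on the sensor.
X = rng.normal(size=(200, 64))
y = rng.choice(["empty mug", "full mug", "phone", "steel plate"], size=200)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# Once trained, a new feature vector from the sensor can be classified.
clf.fit(X, y)
new_reading = rng.normal(size=(1, 64))
print("predicted object:", clf.predict(new_reading)[0])
```

With real radar features in place of the random placeholders, the same train/validate/predict loop is all that is needed to recognise objects placed on the sensor.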

Beyond radar, SpeCam (MobileHCI ’17) is a lightweight surface color and material sensing approach for mobile devices which uses only the front-facing camera and the display as a multi-spectral light source. We leverage the natural use of mobile devices (placing them face down) to detect the material underneath and therefore infer the location or placement of the device. SpeCam can then be used to support “discreet computing” with micro-interactions that avoid the numerous distractions users face daily with today’s mobile devices. Our two-part study shows that SpeCam can i) recognize colors in the HSB space that are 10 degrees apart near the three dominant colors and 4 degrees apart otherwise, and ii) recognize 30 types of surface materials with 99% accuracy. These findings are further supported by a spectroscopy study. Finally, we suggest a series of applications based on simple mobile micro-interactions, suitable for using the phone when it is placed face down, with blurred images.
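
To make the idea of the display as a multi-spectral light source more tangible, the sketch below shows one way a surface "fingerprint" could be assembled by flashing the screen in a few known colours, averaging the front camera's response under each, and matching the result against known materials with a nearest-neighbour classifier. The frame capture is mocked and every name here is an assumption for illustration; it is not the SpeCam implementation.

```python
# Minimal sketch of the SpeCam idea: flash the display in a few known colours,
# record the front camera's response under each, and use the concatenated
# responses as a multi-spectral fingerprint of the surface underneath.
# Frame capture is mocked; a real device would drive the screen and camera.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

ILLUMINATION_COLOURS = ["red", "green", "blue", "white"]

def capture_mean_rgb(colour: str) -> np.ndarray:
    """Stand-in for showing `colour` full-screen and averaging a camera frame."""
    return np.random.rand(3)  # mean R, G, B of the captured frame

def surface_fingerprint() -> np.ndarray:
    # One mean-RGB triple per illumination colour -> 12-D feature vector.
    return np.concatenate([capture_mean_rgb(c) for c in ILLUMINATION_COLOURS])

# Training: collect fingerprints for known surface materials (mocked here).
materials = ["wood", "fabric", "glass", "paper"]
X = np.stack([surface_fingerprint() for _ in range(40)])
y = np.repeat(materials, 10)

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print("predicted surface:", knn.predict(surface_fingerprint().reshape(1, -1))[0])
```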

Finally, with touch, we show a sensing technique for detecting finger movements on the nose, using EOG sensors embedded in the frame of a pair of eyeglasses (ISWC 2017). Eyeglass wearers can use their fingers to exert different types of movement on the nose, such as flicking, pushing or rubbing. These subtle gestures support “discreet computing”: they can be used to control a wearable computer without drawing attention to the user in public. We present two user studies in which we test recognition accuracy for these movements. I will conclude this talk with some speculations around how touch, radar and vision processing might be used to realise “blended reality” interactions in AR and beyond.

I will also use this talk to answer questions about the upcoming Blended Reality Summer School, May 13–17, 2019, at the Keio-NUS CUTE Center, National University of Singapore. Applications will open soon.


Oct: Report on MobileHCI 2014 and UIST 2014

MobileHCI 2014

MobileHCI 2014 was a single-track conference, so I saw all the papers presented. I’ve selected a few highlights that are of interest to me or those I work with. There were many impressive talks and papers which I haven’t noted here simply because they aren’t relevant to my day-to-day work. As I was the general chair for this conference, I hope you aren’t offended if I don’t list your paper here!

You can see some of the videos and images I took (or that others gave me) from the various talks and sessions of note here on Flickr.

All the papers are here and can be downloaded by anyone (for a year); after that you will need an ACM DL account. Please note, I’m providing two links for each paper. One is the permanent link to the ACM DL (which requires personal or institutional access to read) and the second is the link to the Open TOC page the ACM provides for the next year. If you click on the Open TOC link you then need to search for the paper yourself and click on it again! I cannot link directly to the Open TOC entries from this blog.

Papers

1. Was it worth the hassle?: ten years of mobile HCI research discussions on lab and field evaluations
This refers back to their 2004 paper, which received this year’s lasting impact award. As the authors noted, their 2004 paper was called “childish and perverse” at the time. This paper discusses the less controversial question of when and how to go into the field (not if). [Permanent Link] [Open TOC Link]

2. Toffee: enabling ad hoc, around-device interaction with acoustic time-of-arrival correlation
I suggest this paper as there was a set of around-device sensing papers this year (e.g., the GSM-based SideSwipe at UIST, or the camera-based work by Song et al. at UIST) and I found this one nicely written and well presented. [Permanent Link] [Open TOC Link]

3. Around-body interaction: sensing & interaction techniques for proprioception-enhanced input with mobile devices.
I expect this will be a highly cited paper. It presents a nicely put-together set of methods, well described and demonstrated in a convincing manner at MobileHCI. [Permanent Link] [Open TOC Link]

4. Contextual experience sampling of mobile application micro-usage.
I would recommend reading this if you are thinking of using the experience sampling method (ESM). A well-written paper; the conference presentation convinced me to read it in detail. [Permanent Link] [Open TOC Link]

5. Portallax: bringing 3D displays capabilities to handhelds
A nice hardware/software system from our friends in Bristol that turns a mobile device into a stereoscopic 3D-enabled device. [Permanent Link] [Open TOC Link]

6. An in-situ study of mobile phone notifications
Best paper award winner by one of our former SACHI speakers, with nice insights and a well-put-together paper. [Permanent Link] [Open TOC Link]

7. Texting while walking: an evaluation of mini-qwerty text input while on-the-go
A very nice presentation and a detailed paper. [Permanent Link] [Open TOC Link]

UIST 2014

This was a multi-track conference, so I saw only around half of the papers. You can see some of the videos or images I took from the various talks and sessions of note here on Flickr.

As with MobileHCI above, all the papers are here and can be downloaded by anyone for a year; after that you will need an ACM DL account. Again, I provide two links for each paper: the permanent ACM DL link (which requires personal or institutional access) and the Open TOC link provided for the next year, where you will need to search for the paper yourself. I cannot link directly to the Open TOC entries from this blog.

1. FlexSense: A Transparent Self-Sensing Deformable Surface
This was a really nice piece of work from industry and academia. The paper is great to read and should give you ideas for follow-on work. [Permanent Link] [Open TOC Link]

2. Graffiti Fur: Turning Your Carpet into a Computer Display
This was a little crazy but a fun concept: some simple, basic research which results in interesting sensing and actuation at a distance. It is a good example of why the UIST demo program is so important and popular, bringing the ideas trapped on paper into an interactive space for discussion with the authors. It will probably be a product soon enough! [Permanent Link] [Open TOC Link]

3. Deconstructing and Restyling D3 Visualizations
After Miguel’s transmogrification work in 2013 it’s nice to see others taking visualisations apart in novel ways. [Permanent Link] [Open TOC Link]

4. InterState: A Language and Environment for Expressing Interface Behavior
State machines brought to life in a visual programming language to express the behaviour of interfaces. [Permanent Link] [Open TOC Link]

5. ParaFrustum: Visualization Techniques for Guiding a User to a Constrained Set of Viewing Positions and Orientations
I enjoyed this talk and paper as it reminded me of Richard Webber’s work on graph layout viewpoints.  [Permanent Link] [Open TOC Link]

6. InterTwine: Creating Interapplication Information Scent to Support Coordinated Use of Software
This was a nice paper showing how context awareness can be brought up to the application level to support coordinated use of software. [Permanent Link] [Open TOC Link]

7. Sensing Techniques for Tablet+Stylus Interaction (best paper)
A fantastic paper (basically 2.5 papers in one).  [Permanent Link] [Open TOC Link]

8. RoomAlive and the Dyadic projected spatial augmented reality papers will be of interest to those looking at digital-physical interaction, uses of the Kinect, or whole-room interaction. I visited the Microsoft test home on campus a number of years ago, so painting the walls with projectors wasn’t that surprising to me. The innovation in the setup, configuration and interaction is very impressive in both RoomAlive and Dyadic. It will be interesting to watch how anyone takes such a multi-projector system from the lab to the home or workplace given the power, space, lighting and cost constraints. RoomAlive [Permanent Link] and Dyadic [Permanent Link]. [Open TOC Link]

The best talk I saw was Kitty: Sketching Dynamic and Interactive Illustrations, as the entire talk was given as a demo with the presentation material inside the tool itself.

MobileHCI 2014

Dr. Sara Diamond, President of the Ontario College of Art and Design University, and I are the general co-chairs for MobileHCI 2014, the 16th International Conference on Mobile Human-Computer Interaction, in Toronto, Canada.

MobileHCI 2013

I am one of the Associate Chairs for MobileHCI 2013, the 15th International Conference on Human-Computer Interaction with Mobile Devices and Services, which will be held in Munich, Germany, August 27–30, 2013.

“MobileHCI is the world’s leading conference in the field of Human Computer Interaction concerned with portable and personal devices and with the services to which they enable access. MobileHCI provides a multidisciplinary forum for academics, hardware and software developers, designers and practitioners to discuss the challenges and potential solutions for effective interaction with and through mobile devices, applications, and services.”

http://www.mobilehci2013.org/

Call for MobileHCI 2012 Tutorials

MobileHCI 2012 continues to build on the tradition of previous conferences with a high-quality tutorial program. We invite proposals for 1-, 2- or 3-hour tutorials on emerging and established areas of research and practice. Tutorials will be held on the first day of the conference and are expected to provide participants with new insights and skills relevant to the area.

A MobileHCI tutorial is an in-depth presentation of one or more state-of-the-art topics presented by researchers or practitioners within the field of Mobile HCI. The scope for tutorials is broad and includes topics such as new technologies, research approaches and methodologies, design practices, user/consumer insights, investigations into new services/applications/interfaces, and much more.

A tutorial should focus on its topic in detail and include references to the “must read” papers or materials within its domain. A participatory approach in which the tutorial participants actively engage in exercises is welcomed, though not required. In addition, we welcome proposals incorporating hands-on work where the outcome is a working prototype. The tutorial organisers will work with the main session organisers to provide two spots in the demo session to showcase the best prototypes that emerge from the tutorial program.

The expected audience will vary in terms of prior knowledge, but will largely consist of researchers, Ph.D. students, practitioners, and educators.

We encourage you to review the scope and nature of the previous tutorial program at  http://www.mobilehci2011.org/tutorials.

Submission Instructions:

  1. We may invite a small number of tutorials from Bay Area experts that we think will be particularly interesting to attendees. In order to avoid overlaps with those tutorials we suggest reviewing the 2012 Tutorials page (which we will update to reflect invited tutorials) before submitting.
  2. Remember that a MobileHCI 2012 tutorial should last between 1 and 3 hours.
  3. In your proposal include a brief biography of the presenter(s), the title of the tutorial, and a sufficiently detailed description of the tutorial (the intended topics, the depths to which you will cover them, and activities that attendees will engage in) to convey what you expect attendees to have learned at the end of the tutorial.
  4. Send a PDF version of your tutorial proposal directly to the Tutorial Chairs at tutorials@mobilehci2012.org 
  5. The Tutorial Chairs will evaluate all proposals and communicate acceptance decisions to the proposers.
  6. Accepted tutorial proposals will be included in the main conference proceedings.

Timeline:

  • Submission deadline:  May 4th, 2012 
  • Proposers notified:      June 11th, 2012

We look forward to your submissions!

2012 Tutorial Chairs

August 2011 Papers – UMAP 2011, MobileHCI 2011 and ASONAM 2011

I recently presented a paper co-authored with Mike Bennett at Stanford University entitled “Creating Personalized Digital Human Models Of Perception For Visual Analytics” at UMAP 2011 in Girona, Spain, on Thursday July 14th. 


You can see a video of the user modelling anthem below.

Umer Rashid and I co-authored a paper with Jarmo Kauko and Jonna Häkkilä at Nokia Research Center entitled “Proximal and Distal Selection of Widgets: Designing Distributed UI for Mobile Interaction with Large Display”.


It will be presented by Umer Rashid at MobileHCI 2011 in Stockholm, Sweden on Friday September 2nd. 

I also co-authored a paper with Michael Farrugia and Neil Hurley entitled “SNAP: Towards a validation of the Social Network Assembly Pipeline”, which was presented by Michael Farrugia at the International Conference on Advances in Social Networks Analysis and Mining in Kaohsiung City, Taiwan, on Monday July 25th.