Category Archives: ubicomp

OBJECT INTERACTION IN AR

2019 Seminar: Object Recognition in HCI with Radar, Vision and Touch.

In April 2019, I will deliver a new lecture on Object Recognition in HCI with Radar, Vision and Touch at the School of Computing, National University of Singapore.

To access the papers noted here, click on a paper's name below to download the PDF directly. For the BibTeX and the official ACM copy, click on the ACM logo beside the paper name.

Abstract

The exploration of novel sensing to facilitate new interaction modalities is an active research topic in Human-Computer Interaction. Across the breadth of HCI we can see the development of new forms of interaction underpinned by the appropriation or adaptation of sensing techniques based on the measurement of sound, light, electric fields, radio waves, biosignals, etc. In this talk I will delve into three forms of sensing for object detection and interaction: radar, blurred images and touch.

RadarCat (UIST 2016, Interactions 2018, IMWUT 2018) is a small, versatile system for material and object classification which enables new forms of everyday proximate interaction with digital devices. RadarCat exploits the raw radar signals that are unique when different materials and objects are placed on the sensor. By using machine learning techniques, these objects can be accurately recognized. An object's thickness, state (filled or empty mug) and different body parts can also be recognized. This gives rise to research and applications in context-aware computing, tangible interaction (with tokens and objects), industrial automation (e.g., recycling), and laboratory process control (e.g., traceability). AquaCat (MobileHCI 2017 workshop) is a low-cost radar-based system capable of discriminating between a range of liquids and powders. Further, in Solinteraction we explore radar as a platform for sensing tangible interaction: counting, ordering and identifying objects, and tracking their orientation, movement and distance. We detail the design space and practical use cases for such interaction, which allows us to identify a series of design patterns that move beyond static interaction to continuous and dynamic interaction with radar.
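RadarCat's actual features and models are not reproduced here; purely as a hypothetical sketch of the general idea (a classifier trained on per-placement radar feature vectors), the following stdlib-only example fits a nearest-centroid classifier on synthetic data. The materials, feature dimensions and signal model are all invented for illustration.

```python
import random

# Hypothetical setup: each object placed on the radar yields a fixed-length
# feature vector (e.g. summary statistics of the raw radar signal). The
# classes, dimensions and data below are invented for illustration only.
random.seed(1)
MATERIALS = ["empty mug", "filled mug", "steel plate", "plastic token"]
FEATURE_DIM = 8

def synthetic_signal(material_idx):
    """Fake radar feature vector: a per-material offset plus Gaussian noise."""
    return [material_idx * 2.0 + random.gauss(0.0, 0.3) for _ in range(FEATURE_DIM)]

# "Training": record a few labelled placements per material and average them
# into one centroid per class (a minimal stand-in for the learned models).
centroids = {}
for idx, name in enumerate(MATERIALS):
    samples = [synthetic_signal(idx) for _ in range(20)]
    centroids[name] = [sum(col) / len(col) for col in zip(*samples)]

def classify(features):
    """Assign a placed object to the class with the nearest centroid."""
    def dist(centroid):
        return sum((f - m) ** 2 for f, m in zip(features, centroid))
    return min(centroids, key=lambda name: dist(centroids[name]))

print(classify(synthetic_signal(1)))  # a noisy "filled mug" placement
```

On this synthetic data the classes are well separated, so a nearest-centroid rule suffices; the published work uses richer features and stronger classifiers.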

Beyond radar, SpeCam (MobileHCI '17) is a lightweight surface color and material sensing approach for mobile devices which uses only the front-facing camera and the display as a multi-spectral light source. We leverage the natural use of mobile devices (placing them face-down) to detect the material underneath and therefore infer the location or placement of the device. SpeCam can then be used to support "discreet computing" with micro-interactions, to avoid the numerous distractions that users face daily with today's mobile devices. Our two-part study shows that SpeCam can i) recognize colors in the HSB space that are 10 degrees apart near the three dominant colors and 4 degrees apart otherwise, and ii) recognize 30 types of surface materials with 99% accuracy. These findings are further supported by a spectroscopy study. Finally, we suggest a series of applications based on simple mobile micro-interactions suitable for using the phone when placed face-down, driven by blurred images.
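SpeCam's sensing pipeline is not reproduced here; as a rough sketch of just the color side of the task (telling apart hues that sit only a few degrees apart in HSB space), this example uses Python's standard colorsys module to compute hue angles and the angular distance between two sensed colors. The sample colors are illustrative, not from the paper.

```python
import colorsys

def hue_degrees(r, g, b):
    """Hue of an RGB color (components in 0..1) as an angle in degrees."""
    h, _s, _v = colorsys.rgb_to_hsv(r, g, b)
    return h * 360.0

def hue_apart(c1, c2):
    """Smallest angular distance between two colors' hues, in degrees."""
    d = abs(hue_degrees(*c1) - hue_degrees(*c2)) % 360.0
    return min(d, 360.0 - d)

red = (1.0, 0.0, 0.0)              # hue 0 degrees
slightly_orange = (1.0, 0.1, 0.0)  # hue 6 degrees: a "few degrees apart" pair
print(hue_apart(red, slightly_orange))
```

A recognizer working at the reported resolution would need to resolve hue differences of this size reliably under the display's multi-spectral illumination.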

Finally, with touch, we show a sensing technique for detecting finger movements on the nose, using EOG sensors embedded in the frame of a pair of eyeglasses (ISWC 2017). Eyeglasses wearers can use their fingers to perform different types of movement on the nose, such as flicking, pushing or rubbing. These subtle "discreet computing" gestures can be used to control a wearable computer without calling attention to the user in public. We present two user studies in which we test recognition accuracy for these movements. I will conclude this talk with some speculations on how touch, radar and vision processing might be used to realise "blended reality" interactions in AR and beyond.
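The ISWC paper's recognizer is not reproduced here; as a loose, hypothetical illustration of separating such gestures in a 1-D sensor window, the sketch below distinguishes a brief "flick" from an oscillatory "rub" using peak amplitude and zero-crossing counts. All thresholds and sample data are invented.

```python
def classify_gesture(window):
    """Classify a baseline-removed 1-D sensor window.

    Heuristic stand-in for a learned recognizer: a rub produces repeated
    back-and-forth deflections (many zero crossings), a flick a single
    brief one. Thresholds are illustrative only.
    """
    peak = max(abs(x) for x in window)
    crossings = sum(1 for a, b in zip(window, window[1:]) if a * b < 0)
    if peak < 0.2:
        return "no gesture"   # signal never left the noise floor
    if crossings >= 4:
        return "rub"          # sustained oscillatory motion
    return "flick"            # single brief deflection

flick = [0.0, 0.1, 0.9, 0.3, 0.0, 0.0]
rub = [0.5, -0.5, 0.6, -0.6, 0.5, -0.5]
print(classify_gesture(flick), classify_gesture(rub))
```

Real EOG signals would of course need filtering and per-user calibration before any such window-level decision.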

I will also use this talk to answer questions on the upcoming Blended Reality Summer School, May 13, 2019 to May 17, 2019 at the Keio-NUS CUTE Center, National University of Singapore. Applications for this will open soon. 


PerDis 2013

This week I am attending the Second International Symposium on Pervasive Displays at the Google campus in Mountain View, California. I am there to present a paper co-authored with Jakub Dostal and Per Ola Kristensson in SACHI, and to serve as a session chair on the second day of the conference.

The paper is:
Dostal, J., Kristensson, P.O. and Quigley, A. 2013. Multi-view proxemics: distance and position sensitive interaction. In Proceedings of the 2nd International Symposium on Pervasive Displays (PerDis 2013). ACM Press.

More details

UbiComp 2013

I am joining the Technical Program Committee of the 2013 ACM International Conference on Pervasive and Ubiquitous Computing (UbiComp 2013).

The UbiComp 2013 Program Chairs are Marc Langheinrich, John Canny, and Jun Rekimoto. They note that UbiComp 2013 is the first merged edition of the two most renowned conferences in the field: Pervasive and UbiComp. While it retains the "UbiComp" short name in recognition of the visionary work of Mark Weiser, its long name (and focus) reflects the dual history of the new event, i.e., it seeks to publish any work that one would previously have expected to find at either UbiComp or Pervasive. The conference will take place from September 8-12 in Zurich, Switzerland. I have previously served on a number of Pervasive and UbiComp technical program committees and look forward to serving on this first joint UbiComp TPC, which is now the premier forum for ubiquitous and pervasive computing research.
http://www.ubicomp.org/

Social TV and Second Screen Apps

Social television is "a general term for technology that supports communication and social interaction in either the context of watching television, or related to TV content" [1]. Each day, millions of Facebook updates, tweets or blog posts relate to television shows. Some of these comments relate to a particular TV show at a particular moment, and go stale very quickly. For example, during a live political debate a politician misquotes someone, makes an incorrect assertion or comments on a particular law. Quickly, the mass audience will post links to the law in question, post corrections, or post messages with links to past statements that contradict what is being said live on TV. Hashtags, lists, message boards or meta-pages rapidly emerge as hundreds, thousands or millions of people simultaneously watch events unfold on screen. In other cases, viewers of a particular show will post trivia or links to images, videos or webpages related to the characters, or the real actors, currently on screen. Generally, comments relate to a person, topic or location being discussed in a show, be it fictional or factual.

Social TV is "a growing force: the masses talking back through social media" [2]. "It's about allowing people to engage a little more than they have been able to in the past with what they're watching. One of the great prompters of conversation is what you're watching on the telly. In the past we sit in the lounge room and talk to the person sitting next to us; in the future it will become easier and easier to engage with people who are not in the same room," said ABC's manager of new media services, Chris Winter [3]. Television was once a clear example of same-time, same-place interaction, but this is now changing to same-time, different-place interaction with many others.

Social television systems can “integrate voice communication, social media, text chat, presence and context awareness, TV recommendations, ratings, or video-conferencing with the TV content either directly on the screen or by using ancillary devices.” [1]

I'm teaching an Advanced Interactive Technologies module at the University of St Andrews, and for our second assignment (a group project) we have eight groups taking on a user-centered interaction design process for the development of a Social TV and second screen application.

Students are completing a survey and analysis, requirements capture, a paper prototype, a low-fidelity mockup, and user testing of that mockup. Following this, we expect them to refine their mockups. Along with a report, each team will present its design and mockup in public.

For the purposes of this project student teams are going to focus on exploring the integration of social media both directly on the screen and on ancillary devices.

First, as a group they need to survey how social media is currently used by people watching live (real-time) or recorded TV shows, including news, documentaries, movies, fictional shows, political debates, sports, arts and reality TV. Looking at examples from the European, African, UK, US, Japanese, Chinese, Indian and Australian TV markets will inform their requirements and design.

Each project team is taking on the role of a user-centred interaction design team tasked with creating a second screen Social TV experience with social media for an imaginary client, "MyTVTube".

I look forward to posting links to some examples of what the teams produce here in time!

    1. Social Television
    2. A Social-Media Decoder
    3. Social revolution coming to Australian TV [Sydney Morning Herald, Feb 23, 2012]

    Pervasive 2012 – Doctoral Consortium



    The Pervasive 2012 doctoral consortium provides a collegial and supportive forum in which PhD students can present and defend their doctoral research-in-progress for constructive feedback and discussion. The consortium is guided by a panel of experienced researchers and practitioners with both academic and industrial experience. It offers students the valuable opportunity to receive high-quality feedback and fresh perspectives from recognized international experts in the field, and to engage with other senior doctoral students.

    Applicants should be far enough into their PhD research to have identified the salient issues and an appropriate research methodology, as well as achieved some results. Preference will be given to applicants who have completed some portion of the research but are still at a stage that permits them to incorporate feedback received at the consortium into their planned PhD research.

    Format

    The doctoral consortium will be a seminar-style event taking place the day before the main Pervasive 2012 sessions. Time will be allotted to each student for a brief research presentation, and for in-depth, constructive discussion amongst the panellists and other participants. In order to allow for sufficient depth of discussion, the number of accepted participants will be limited to ten.
    For the 2012 Pervasive doctoral consortium we aim to include a number of new features, including a panel session. The short "Ask a PhD" panel session is a free-form question-and-answer discussion in which the consortium panellists will share their advice and experiences on topics such as going on the job market, international career paths, academic versus industry career paths, post-docs versus permanent positions, job-offer negotiation, and other topics of relevance to PhD students. This session will serve as a capstone event to the consortium, allowing students to reflect on and consider important career issues together.

    Submission

    Submissions (of up to 5 pages) should be formatted according to the guidelines of Springer's LNCS format. A maximum of 4 pages should be devoted to the research summary, described below, and 1 page to the student's biographical sketch, also described below. The topic scope for submissions to the doctoral consortium is the same as that listed in the Pervasive 2012 call for papers. Submissions should consist of the following:
    1. Research summary describing the work in progress, including a 100-word abstract. Things to consider for inclusion in the research summary are:
      • the expected contribution to the field;
      • the original idea or thesis statement;
      • the problem domain and the specific problem addressed;
      • a brief overview of related work;
      • the methodological approach;
      • research carried out and results so far.

      All research summaries should also outline what work remains to be done for the dissertation and indicate the plan for completion.


    2. Student biographical sketch, including the names and affiliations of the research advisor(s), the date that the student began the PhD programme, and the expected date of completion.
    All submissions should be made using the PCS submission system.
    All submissions will be reviewed by the DC chairs and consortium panellists. If accepted, an applicant may be asked to make minor clarifications and edits to their research summary before the final camera-ready version is due. The accepted doctoral consortium submissions will be published in the adjunct proceedings of Pervasive 2012.

    Critical Dates

    Doctoral Consortium Chairs

    • Elaine M. Huang, University of Zurich, Switzerland

    • Aaron Quigley, University of St. Andrews, UK

    Nov 2009 UbiComp 2010

    I have been invited to and will serve on the Program Committee for the 2010 ACM Conference on Ubiquitous Computing, to be held in Copenhagen, Denmark, from September 26-29, 2010. The deadline for papers and notes is March 12, 2010. The Call for Papers for UbiComp 2010 can be found here.

    Ubiquitous Computing (bridging the digital-physical divide) is central to several of the emerging research projects within the Human Interface Technology Research Laboratory Australia (HITLAB AU).

    UbiComp 2010 is the 12th UbiComp conference and is one of the premier venues for presenting research in the design, development, deployment, evaluation and understanding of ubiquitous computing systems.

    “UbiComp is an interdisciplinary field of research and development that utilizes and integrates pervasive, wireless, embedded, wearable and/or mobile technologies to bridge the gaps between the digital and physical worlds. UbiComp 2010 will bring together top researchers and practitioners who are interested in both the technical and applied aspects of Ubiquitous Computing technologies, systems and applications. The Ubicomp 2010 program features keynotes, technical paper and notes sessions, specialized workshops, live demonstrations, posters, video presentations, and a Doctoral Colloquium.”

    Sept 2009 Ubiquitous Computing Fundamentals

    I have a chapter on UbiComp user interfaces in a new book, "Ubiquitous Computing Fundamentals", edited by John Krumm at Microsoft and published by Chapman & Hall/CRC, 1st edition (September 18, 2009), ISBN 978-1420093605. [ Amazon ]

    The user interface represents the point of contact between a computer system and a human, both in terms of input to the system and output from the system. There are many facets of a "Ubiquitous Computing" or ubicomp system, from the low-level sensor technologies in the environment, through the collection, management and processing of the context data, through to the middleware required to enable the dynamic composition of devices and services envisaged. These hardware, software, systems and services act as the computational edifice around which we need to build our Ubicomp User Interface, or UUI. The ability to provide natural inputs and outputs from a system, which can allow it to remain in the periphery, is hence the central challenge in UUI design.

    While this chapter surveys the current state of the art in interfaces to the user beyond the classical keyboard, screen and mouse, it is important to also acknowledge that UUIs represent a paradigm shift in human-computer interaction, with input and output technologies not yet envisaged. UUIs are built around a next-generation technological paradigm which in essence reshapes our relationship with our personal information, environment, artefacts and even our friends, family and colleagues. The challenge is not about providing the next generation mouse and keyboard, but instead making the collection of inputs and outputs operate in a fluid and seamless manner.

    Chapters
    1. Introduction to Ubiquitous Computing, Roy Want
    2. Ubiquitous Computing Systems, Jakob Bardram and Adrian Friday
    3. Privacy in Ubiquitous Computing, Marc Langheinrich
    4. Ubiquitous Computing Field Studies, A.J. Bernheim Brush
    5. Ethnography in Ubiquitous Computing, Alex S. Taylor
    6. From GUI to UUI: Interfaces for Ubiquitous Computing, Aaron Quigley
    7. Location in Ubiquitous Computing, Alexander Varshavsky and Shwetak Patel
    8. Context-Aware Computing, Anind K. Dey
    9. Sequential Sensor Processing, John Krumm

    Mar 2009 Keynote talk ruSMART 2009, St. Petersburg, Russia

    I've been invited to present a keynote talk at the 2nd Conference on Smart Spaces, ruSMART 2009, to be held September 15-16, 2009 in St. Petersburg, Russia. This event is co-located with the 9th International Conference on Next Generation Wired/Wireless Advanced Networking, NEW2AN 2009. The topic of my talk is ubiquitous computing user interfaces: next-generation effects and challenges.

    Keynote “Ubiquitous computing User Interfaces (UUI)”, Dr. Aaron J. Quigley (University College Dublin, Ireland)

    Abstract

    The user interface represents the point of contact between a computer system and a human, both in terms of input to the system and output from the system. Ubiquitous Computing or UbiComp consists of hardware, software, systems and services which act as the computational edifice around which we need to build our user interfaces to afford natural or “invisible” interaction styles. This is driven by the evolution from the notion of a computer as a single device, to the notion of a computing space comprising personal and peripheral computing elements and services all connected and communicating as required. This presentation discusses research and developments in the realisation of User Interfaces for UbiComp and in particular Smart Spaces. Examples are drawn from research and development groups around the world who are exploring mobile and embedded devices in almost every type of physical artefact including cars, toys, tools, homes, appliances, clothing and work surfaces.