Sept 29 – SydCHI for ACM VRST, ACM UIST and ACM UbiComp (IMWUT) paper presentations

On this day we welcomed SydCHI, the Sydney ACM SIGCHI Chapter, to Data61 for a series of practice talks by Rukshani, Owen, Jieshan and Don. The details of these talks follow.

Rukshani Somarathna

Exploring User Engagement in Immersive Virtual Reality Games through Multimodal Body Movements

User engagement in Virtual Reality (VR) games is crucial for creating immersive and captivating gaming experiences that meet the expectations of players. However, understanding and measuring engagement levels in VR games presents a challenge for game designers, as current methods, such as self-reports, may be limited in capturing the full extent of user engagement. Additionally, approaches based on biological signals to measure engagement in VR games present complications and challenges, including signal complexity, interpretation difficulties, and ethical concerns. This study explores body movements as a novel approach to measure user engagement in VR gaming. We employ E4, emteqPRO, and off-the-shelf IMUs to measure body movements from diverse participants engaged in multiple VR games. Further, we examine the simultaneous occurrence of player motivation and physiological responses to explore potential associations with body movements. Our findings suggest that body movements hold promise as a reliable and objective indicator of user engagement, offering game designers valuable insights for creating more engaging and immersive experiences.

29th ACM Symposium on Virtual Reality Software and Technology (VRST 2023), Christchurch, New Zealand.

Rukshani Somarathna, Don Samitha Elvitigala, Yijun Yan, Aaron J Quigley, and Gelareh Mohammadi. 2023. Exploring User Engagement in Immersive Virtual Reality Games through Multimodal Body Movements. In Proceedings of the 29th ACM Symposium on Virtual Reality Software and Technology (VRST ’23). Association for Computing Machinery, New York, NY, USA, Article 3, 1–8. https://doi.org/10.1145/3611659.3615687
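For readers curious how body-movement intensity might be turned into a number, here is a minimal sketch in Python. It is not the authors' pipeline: it assumes a 50 Hz accelerometer stream and one self-reported engagement score per 10-second window, computes a crude movement-energy feature, and checks its correlation with the reports.

```python
# Illustrative sketch only: not the authors' pipeline. It shows one simple
# way body-movement intensity could be derived from accelerometer windows
# and compared against self-reported engagement scores.
# Assumed inputs: "acc" is an (N, 3) array of accelerometer samples at 50 Hz,
# "engagement" holds one self-report score per 10-second window.
import numpy as np
from scipy.stats import pearsonr

FS = 50            # assumed sampling rate (Hz)
WINDOW_S = 10      # assumed window length (seconds)

def movement_energy(acc: np.ndarray, fs: int = FS, window_s: int = WINDOW_S) -> np.ndarray:
    """Mean frame-to-frame change in acceleration magnitude per window (a crude activity proxy)."""
    mag = np.linalg.norm(acc, axis=1)            # per-sample acceleration magnitude
    jerk = np.abs(np.diff(mag, prepend=mag[0]))  # frame-to-frame change
    win = fs * window_s
    n_windows = len(jerk) // win
    return jerk[: n_windows * win].reshape(n_windows, win).mean(axis=1)

# Toy demonstration with synthetic data.
rng = np.random.default_rng(0)
acc = rng.normal(0.0, 1.0, size=(FS * WINDOW_S * 20, 3))   # 20 windows of fake IMU data
engagement = rng.uniform(1, 7, size=20)                     # fake 7-point self-reports

energy = movement_energy(acc)
r, p = pearsonr(energy, engagement)
print(f"movement energy vs engagement: r={r:.2f}, p={p:.3f}")
```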

Yongquan Hu (Owen)

MicroCam: Leveraging Smartphone Microscope Camera for Context-Aware Contact Surface Sensing

The primary focus of this research is the discreet and subtle everyday contact interactions between mobile phones and their surrounding surfaces. Such interactions are anticipated to facilitate mobile context awareness, encompassing aspects such as dispensing medication updates, intelligently switching modes (e.g., silent mode), or initiating commands (e.g., deactivating an alarm). We introduce MicroCam, a contact-based sensing system that employs smartphone IMU data to detect the routine state of phone placement and utilizes a built-in microscope camera to capture intricate surface details. In particular, a natural dataset is collected to acquire authentic surface textures in situ for training and testing. Moreover, we optimize the deep neural network component of the algorithm, based on continual learning, to accurately discriminate between object categories (e.g., tables) and material constituents (e.g., wood). Experimental results highlight the superior accuracy, robustness and generalization of the proposed method.

ACM UbiComp/ISWC 2023, October 8–12, Cancún, Mexico

Yongquan Hu, Hui-Shyong Yeo, Mingyue Yuan, Haoran Fan, Don Samitha Elvitigala, Wen Hu, and Aaron Quigley. 2023. MicroCam: Leveraging Smartphone Microscope Camera for Context-Aware Contact Surface Sensing. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 7, 3, Article 98 (September 2023), 28 pages. https://doi.org/10.1145/3610921
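The following sketch illustrates the general contact-sensing idea rather than the MicroCam implementation: an assumed IMU-variance check for "phone resting on a surface", followed by a stand-in surface classifier (k-NN over colour histograms) in place of the paper's continually learned deep network. All thresholds and classes are assumptions.

```python
# Illustrative sketch only, not the MicroCam implementation. It pairs a
# trivial "phone is resting on a surface" check from IMU variance with a
# stand-in surface classifier (k-NN over colour histograms) instead of the
# paper's continually-learned deep network. Thresholds and classes are assumed.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def is_resting(acc_window: np.ndarray, threshold: float = 0.05) -> bool:
    """Assume the phone is lying on a surface when acceleration variance is tiny."""
    return float(np.var(np.linalg.norm(acc_window, axis=1))) < threshold

def colour_histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Very coarse colour/texture descriptor for an (H, W, 3) uint8 image."""
    hist, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return (hist / hist.sum()).ravel()

# Toy training data: two fake surface classes (e.g. wood vs. fabric).
rng = np.random.default_rng(1)
wood = [rng.integers(100, 180, size=(32, 32, 3), dtype=np.uint8) for _ in range(10)]
fabric = [rng.integers(0, 80, size=(32, 32, 3), dtype=np.uint8) for _ in range(10)]
X = np.array([colour_histogram(im) for im in wood + fabric])
y = ["wood"] * 10 + ["fabric"] * 10

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)

acc_window = rng.normal(0.0, 0.01, size=(100, 3))                  # nearly still IMU window
frame = rng.integers(100, 180, size=(32, 32, 3), dtype=np.uint8)   # fake microscope frame
if is_resting(acc_window):
    print("predicted surface:", clf.predict([colour_histogram(frame)])[0])
```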

Jieshan Chen

Unveiling the Tricks: Automated Detection of Dark Patterns in Mobile Applications

Mobile apps bring us many conveniences, such as online shopping and communication, but some use malicious designs called dark patterns to trick users into doing things that are not in their best interest. Much work has been done to summarise the taxonomy of these patterns, and some has tried to mitigate the problems through various techniques. However, these techniques are either time-consuming, not generalisable, or limited to specific patterns. To address these issues, we propose a knowledge-driven system that utilises computer vision and natural language pattern matching to automatically detect a wide range of dark patterns in mobile UIs. Our system removes the need to manually create rules for each new UI/app and covers more pattern types with superior performance. In detail, we integrated existing taxonomies into a single, consistent one, conducted a characteristic analysis, and distilled knowledge from real-world examples and the integrated taxonomy. Based on this, our system consists of two components: UI element detection and a knowledge-driven dark pattern checker. For evaluation, we utilise the Rico dataset and its semantic labelling to train and test each DL module in our system. We also contribute a new dark pattern dataset, which contains 4,999 benign UIs and 1,353 malicious UIs with 1,660 instances spanning 1,023 mobile apps. Our system achieves superior performance in detecting dark patterns, with an overall precision of 0.83, recall of 0.82, and F1 score of 0.82. Our ablation experiments demonstrate the validity and necessity of each module. A user study involving 58 participants further showed that our system significantly increases users' knowledge of dark patterns, raising the recall rate of detecting dark patterns from 18.5% to 57.8%. Our work is beneficial to end-users, app providers, and regulators, and can serve as a training tool for raising awareness of dark patterns.
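To make the knowledge-driven pattern-matching idea concrete, here is a minimal illustrative sketch, not the system from the paper: a hypothetical rule table of indicative phrases is checked against text extracted from UI elements, and a pre-ticked opt-in checkbox is treated as a preselection signal. The rules and element format are assumptions for illustration only.

```python
# Illustrative sketch only, not the system described in the paper. It shows
# the general idea of knowledge-driven text matching: a small rule table of
# phrases associated with known dark-pattern types is checked against the
# text extracted from UI elements. Rules and element format are assumptions.
import re
from dataclasses import dataclass

# Hypothetical rule table: pattern type -> indicative phrases.
RULES = {
    "confirmshaming": [r"no thanks,? i (hate|don'?t want) ", r"i prefer to pay full price"],
    "forced-continuity": [r"free trial.*(auto|automatically).*(renew|charge)"],
    "preselection": [r"(subscribe|sign me up).*newsletter"],
}

@dataclass
class UiElement:
    element_type: str   # e.g. "button", "checkbox", "text"
    text: str
    checked: bool = False

def detect_dark_patterns(elements: list[UiElement]) -> list[tuple[str, str]]:
    """Return (pattern type, offending text) pairs found among a screen's elements."""
    hits = []
    for el in elements:
        lowered = el.text.lower()
        for pattern_type, phrases in RULES.items():
            if any(re.search(p, lowered) for p in phrases):
                hits.append((pattern_type, el.text))
        # A pre-ticked opt-in checkbox is itself a signal, regardless of wording.
        if el.element_type == "checkbox" and el.checked:
            hits.append(("preselection", el.text))
    return hits

screen = [
    UiElement("button", "No thanks, I hate saving money"),
    UiElement("checkbox", "Subscribe to our newsletter", checked=True),
]
for pattern_type, text in detect_dark_patterns(screen):
    print(f"{pattern_type}: {text!r}")
```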

Dr. Don Samitha Elvitigala

RadarFoot: Fine-grain Ground Surface Context Awareness for Smart Shoes

Every day, billions of people use footwear for walking, running, or exercise. Of emerging interest is ‘smart footwear’, which helps users track gait, count steps or even analyse performance. However, such nascent footwear lacks fine-grain ground surface context awareness, which could allow it to adapt to the conditions and create usable functions and experiences. Hence, this research aims to recognize the walking surface using a radar sensor embedded in a shoe, enabling ground context-awareness. Using data collected from 23 participants in an in-the-wild setting, we developed several classification models. We show that our model can detect five common terrain types with an accuracy of 80.0% and a further ten terrain types with an accuracy of 66.3%, while moving. It can also detect gait motion types such as ‘walking’, ‘stepping up’, ‘stepping down’, and ‘still’ with an accuracy of 90%. Finally, we present potential use cases and insights for future work based on such ground-aware smart shoes.

The ACM Symposium on User Interface Software and Technology, UIST 2023, San Francisco, USA

Don Samitha Elvitigala, Yunfan Wang, Yongquan Hu, and Aaron J Quigley. 2023. RadarFoot: Fine-grain Ground Surface Context Awareness for Smart Shoes. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST ’23). Association for Computing Machinery, New York, NY, USA, Article 87, 1–13. https://doi.org/10.1145/3586183.3606738
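As a rough illustration of how surface classification from a sensor stream can be set up (not the RadarFoot pipeline), the sketch below windows a 1-D signal, takes spectral-magnitude features, and fits a standard classifier. The sampling rate, window size, feature choice, and terrain labels are all assumptions.

```python
# Illustrative sketch only, not the RadarFoot pipeline. It shows a generic
# recipe for surface classification from a 1-D sensor stream: slice the
# signal into windows, take spectral-magnitude features, and fit a standard
# classifier. Sampling rate, window size, and labels are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

FS = 1000        # assumed sensor sample rate (Hz)
WIN = 256        # samples per window

def spectral_features(signal: np.ndarray) -> np.ndarray:
    """Magnitude spectrum per window, a common feature for micro-motion signals."""
    n_windows = len(signal) // WIN
    windows = signal[: n_windows * WIN].reshape(n_windows, WIN)
    return np.abs(np.fft.rfft(windows, axis=1))

# Toy data: two fake terrains with different dominant frequencies.
rng = np.random.default_rng(2)
t = np.arange(FS * 10) / FS
grass = np.sin(2 * np.pi * 40 * t) + rng.normal(0, 0.5, t.size)
concrete = np.sin(2 * np.pi * 120 * t) + rng.normal(0, 0.5, t.size)

X = np.vstack([spectral_features(grass), spectral_features(concrete)])
y = ["grass"] * (len(X) // 2) + ["concrete"] * (len(X) // 2)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("toy accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```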