Category Archives: infovis

2019 SG:D Immersive Analytics @ PIXEL

This talk is for anyone interested in data analysis, data exploration, immersion or using data to solve problems in situ, in real time, rather than after the event and away from the sources of the data. This talk is about Immersive Analytics, an emerging field of research and development which seeks a deeper engagement with analysis and data, in an immersive sense, in virtual or augmented reality.

13th June, 06:00 PM-08:30 PM, PIXEL Singapore

What is it about?
This talk introduces and discusses many examples of immersion, data analysis and hence immersive analytics.
The first thing to understand is that there are many meanings of the term “immersive”, alongside different approaches to analytics. There are two primary facets to the term immersive analytics. The first, and more literal, aspect is being immersed or submerged in the data and the analytic task. This gives rise to the examination of the range of human senses, modalities and technologies which might allow one to have their various senses fully immersed. The second facet is the provision of computational analysis methods which facilitate a deep mental involvement with the task and data. Smooth interaction with the data and analytic task might allow people to concentrate and focus their attention, allowing them to enter a “flow state” which affords them the depth of thought required to be fully immersed.

VISSOFT 2018

2018 Keynote: VISSOFT, Madrid Spain

I will be a keynote speaker at the IEEE VISSOFT 2018 conference later this year. “The sixth IEEE Working Conference on Software Visualization (VISSOFT 2018) builds upon the success of the previous four editions of VISSOFT, which in turn followed after six editions of the IEEE International Workshop on Visualizing Software for Understanding and Analysis (VISSOFT) and five editions of the ACM Symposium on Software Visualization (SOFTVIS). Software visualization is a broad research area encompassing concepts, methods, tools, and techniques that assist in a range of software engineering and software development activities. Covered aspects include the development and evaluation of approaches for visually analyzing software and software systems, including their structure, execution behavior, and evolution.”

My first research paper was published in 1997 in the 1st Software Visualisation workshop SoftVis’97 in Australia, entitled “Visualizing a reverse engineered system structure with dynamic 3-D clustered graph drawings“. Two years later I edited the proceedings of SoftVis’99, in which I published a paper entitled “ProVEDA: A scheme for Progressive Visualization and Exploratory Data Analysis of clusters“. Later I completed my PhD, entitled “Large Scale Relational Information Visualization, Clustering, and Abstraction”, which included a case study in Software Visualisation.

The field has grown considerably in the intervening 20 years with many new techniques and methods to support software engineers in evolution, program comprehension, reverse engineering and fresh development. I am looking forward to delivering an address with some new perspectives for the Software Visualisation community.

Big Data InfoVis summer school

SACHI, along with other colleagues in Computer Science and across St Andrews, is organising a SICSA-supported “Big Data Information Visualisation” summer school in July of 2013. We are working on developing the program for this summer school, bringing together expertise in a number of areas. Over the weeks and months ahead we will be adding to this website as we confirm topics and speakers. We already have a number of colleagues locally dealing with big data who are willing to act as mentors and domain experts during the summer school.

June 2012, InfoVis for UbiComp data summer school reflections

In late May of 2012 I organised and delivered a week long workshop on Information Visualisation for UbiComp Data, as part of the UbiOulu summer school. On a scale from worst (1) to best (5) my workshop was scored 4.35 on average by 14 of the 23 participants from the workshop who responded to a survey request, so there is room for improvement. This blog post covers the background to the workshop, the schedule, the results of a survey and some discussion.   
Professor Timo Ojala opening the UbiOulu Summer School 2012 

Background to the workshop 

I was invited by Professor Timo Ojala to run this workshop eight months before travelling to Oulu in Finland. Even with the time to prepare, it remained a slightly daunting task which worried me (as such things often do!). There were 23 graduate students from around Europe attending to learn about InfoVis. When you consider they were collectively devoting half a person-year to something, you want to make sure you make it worth their while! I have taught modules and courses and given guest lectures about InfoVis on four different continents at this stage in my career. In addition I was a director of the Online Dublin Computer Science Summer School (ODCSSS) for four years, which had over 80 interns through the process. I also organised the SICSA MMI Summer School on Multimodal Systems for Digital Tourism in St Andrews in June of 2011, with over twenty students and a dozen lecturers. Nonetheless, this was the first time I had tackled this type of theory-and-practice experience.

Thankfully, as all educators do, I was able to get advice from my colleagues. In SACHI, Dr. Miguel Nacenta provided some very useful advice and helped moderate some of my slightly more “ambitious” ideas. Dr Adrian Friday from Lancaster University was an instructor at the UbiOulu summer school in 2011 and was able to offer me invaluable advice (and healthy warnings about the social aspects of the program). Thanks also to Jean-Daniel Fekete from Inria and Professor Sheelagh Carpendale from Calgary, who offered good advice based on their experiences with such events. Thanks to all these folks and to the many authors whose work I drew on in covering InfoVis for UbiComp data. This preparation and consultation paid off, as on a scale from worst (1) to best (5) the “content” of the workshop was scored 4.42 on average by the 14 of the 23 participants who responded to the survey request. The style and delivery of my workshop was rated 4.53 by 13 participants.


Some of the possible directions Information Visualisation might take you.



Schedule of workshop 

For those interested in organising such a week long activity the schedule for the workshop was as follows:

One month before the workshop: 10 InfoVis papers, from a range of areas, were sent out for reading.

Monday
9-12: 4th International Open Ubiquitous City Seminar (25 minute talks by instructors, Q&A with speaker panel)
12-13:30: Lunch break
13:30-14: Summer School kick off (auditorium)
14-16: Lecture: Setting the stage – 7 challenges with Visualising Ubiquitous Computing Data 
18-24: Get Together Party

Aaron testing the robustness of the UbiOulu displays during the UbiOulu Summer School
Tuesday 
10 – 12.00: Lecture: Setting the stage – 7 challenges with Visualising Ubiquitous Computing Data 
12.00 – 13.15: Lecture – Infovis – Data Types 
13.15 – 14.15: Lunch 
14.15 – 15.00: Lecture – Infovis – Data Types 
15.00 – 16:00 Project – Surveying own and UbiOulu data types    
16.00 – 17.00: Lecture – InfoVis toolkits 
17.00 – 18.00: Project: Problem Identification Workshop (Small group discussion) 
Wednesday
10.00 – 10.30 Design proposal presentations from all groups and workshop D groups  
10.30 – 11.30 Lecture – Information Visualisation – Graph Layout 
11.30 – 13.00: Project: Design Proposal Generation – paper prototypes, sketches/mockups (Small group workshop)
13.00 – 14.00: Lunch 
15.00 – 17.30: Project: System Decomposition, Co-Design and Task Planning (workshop) 
With lecturer, per group, technical Review of Design Sketches (Group discussion)
17.30 – 18.00: FastFoot session – Presentation of prototype plans (presentations)  
Infovis for UbiComp data summer school group session
Group work
Thursday 
10.00 – 13.00: Rapid Prototyping Session (team programming) 
13.00 – 14.00: Lunch
14.00 – 15.00: Early Prototype Review and Feedback (group meetings with lecturer) 
15.00 – 17.30: Rapid Prototyping Session (team programming)
17.30 – 18.00: FastFoot session – Presentation of prototypes (presentations)  
Friday 
10.00 – 13.00: Prototype Improvement Session (team programming) 
13.00 – 14.00: Lunch 
14.00 – 17.30: Prototyping and evaluation Session (teams)
17.30 – 18.00: FastFoot session – Presentation of prototypes (presentations)  
Saturday 
3hr exam 
Final group presentations from all groups 
Final social party, sauna, disco etc. 
Team Black presenting their final InfoVis for UbiComp data prototype

Survey Results

Following the workshop, Professor Timo Ojala surveyed all the UbiOulu participants and provided these survey results to the three workshop organisers. In developing this review of the survey, I’ve picked out what I feel is an honest sample of the positives and negatives from the free-form feedback.

Some of the things the students liked the most, relating to me, were “Expertise and attitude of the lecturer”, “the energy from Aaron”, “Great lecturer”, “Really great guy and a great lecturer”, “The lectures were top notch”, “I like the style of Aaron how he presents things”, “Aaron was very good at running the workshop” and “the skill of the professor”. Of course, as any lecturer knows, energy, engagement and motivation are a two-way street, so my excitement and interest were largely fuelled by the motivation, dedication and hard work of the students. Those surveyed also commented that the workshop itself had, “thought out predefined groups”, “interesting project work”, “good balance between overview and in-depth information”, “Really hands-on work and experience, I mean REALLY. Not just some crappy pre-made work, but a real problem to be solved”, “hands on work was nice”, “Workshop was challenging, and the things it taught are definitely going to be useful in the future”, “how Aaron could take a seemingly complicated topic and make it easy to understand and enjoyable to sit through over three hours in an old classroom. The lectures were worth all the travel expenses alone!” and the “topic was as excellent as it was described in the school invitation”.

Of course, there are things to improve, and I’m focussing my attention on these in planning for future InfoVis courses, modules or summer schools. Things to improve include, “deepening of formal models of graph theory”, “data that was more ready to use”, “show different levels of evaluation”, “share some info regarding the tools needed in the workshop beforehand”, “post workshop reading list”,  “More information of data parsing methods”, “Some examples using the workshop specific data would’ve been nice”, “data parsing methods would’ve been nice to discuss more about”, “What do you think about having multiple smaller “discovery” projects using a better packaged data set and applying different visualisations to research some given research questions?”, “I would definitely include one or two programming tutorials in the “reading package””, and “Oh, I did hate the exam, but that’s no biggie”. 
The students surveyed had some very interesting things to say when asked, what was the most valuable thing you learned in your workshop. These included, “Visualisation techniques for the exploration of huge data sets”, “How to think about information visualisation in a new way which I had never thought of before”, “That information visualisation is not trivial and encompasses a lot of issues which are not immediately obvious to the lay person”, “How to approach Info Vis, in particular the user centred design of the visualisation” and of course that “The focus of any visualisation is the user”. 


Discussion 

I found the entire week and experience refreshing, motivating and quite fun. Overall I was pleased with the structure of the workshop. There was enough time to cover some topics in detail and have some practical work, the local datasets were very useful, the FastFoot sessions worked really well in giving the teams a clear daily focus, and the final session showed just what could be achieved. I’d like to improve the reading list, the approach to handling data, the upfront “pre-workshop” prep, the group matching and the availability of a “client” or “domain expert” for the data.

Going from the introduction of a topic to students delivering a prototype system 6 days later is very ambitious. Given that the majority of university teaching is delivered over the course of 3 months rather than 6 days, it is worth reflecting on the long-term benefit each approach offers. The short, sharp delivery is, by its very nature, an introductory and high-level view of the subject, and the prototypes are necessarily simple and directed at identifiable problems. However, the entire period is devoted to a single topic; everyone has come from far and wide, with different educational backgrounds, to learn about InfoVis. This allows both the educator and the learner to maintain a singular focus, in quite a supportive and high-energy learning environment. This is supported by a fun social program (see below) where invaluable learning on the topic continues, sometimes into the wee hours of the morning! At this summer school everyone put their lives on hold to focus on just InfoVis of UbiComp data for 6 days. Those who came to learn about InfoVis received a broad introduction and some practical experience. Those looking to go deeper into the subject did, I hope, receive valuable direction and the confidence to explore the topic further in their own research.

For myself, I rediscovered how fun InfoVis can be, how incredibly powerful it can be, how messy real data is, how empowering unlocking data and information can be and how much a small, yet focussed group can achieve in a short amount of time. It also revived my interest in running an InfoVis summer school in Scotland supported by my local and international colleagues. There are many students and researchers across Scotland who might benefit from a week long summer school supported by SICSA. I look forward to developing this workshop further in the future.

I’m going to follow up this post with an overview of the papers I asked the students to read, the topics I covered, the three hour exam and some slides from the student projects. Below you can see me trying my hand as DJ during the final farewell party. In “modest brag” terms I did manage to fill the dance floor (albeit with crowd pleasing tunes), while I spent the time considering the usability of the multi-part digital-physical interface before me.

Trading Consequences blog post

As part of the launch of the Trading Consequences project site I have written the first blog post in which I emphasise that the question is key in this project. “To understand the consequences of our trading history, historians need to ask difficult, subtle, multifaceted and challenging questions. Questions which aren’t polluted by knowledge of the limitations of the methods and technologies we have today. These insightful questions won’t come from a focus on what the tools of today can support, what the analysis or visualisation methods can do or what data is available. ” see the full blog post here.

Jan 2012 – New Grants, Research Fellow and PhD Scholarships


Along with colleagues in the Universities of Edinburgh and York we have achieved grant success with JISC. Our project “Trading Consequences” (Universities of Edinburgh, York and St Andrews) will examine the economic and environmental consequences of commodity trading during the nineteenth century, using information extraction techniques to study large corpora of digitized documents through structured query and visualisation. There is a page on our research group’s website about this in more detail.

Along with Miguel Nacenta and colleagues from ADS and Historic Scotland we have been awarded a Smart Tourism grant named LADDIE, or Large Augmented Digital Displays for Interactive Experiences of Historic Sites. In addition, along with colleagues from MUSA in St Andrews and Interface3, we have been awarded a second Smart Tourism grant named SMART, or Scotland’s Museums Augmented Reality Tourism. There is a page on our research group’s website about this in more detail.

I’m now advertising for a research fellow to work with me for 3 years (and possibly beyond). The deadline for applications is 17th February 2012. We wish to recruit a Research Fellow in Human Computer Interaction to support a number of new and ongoing research projects in Ubiquitous User Interface development. Our research page has some more details, but the primary advertisement and details can be found here on the vacancies site.

Finally, I am actively recruiting PhD students. If you are interested in postgraduate research in the area of Human Computer Interaction then please visit the scholarship page on our research group site for further details and links.

October 2011 – Challenges in Information Visualisation

I gave a seminar in the School of Informatics in the University of Edinburgh on October 7th 2011 on the topic of Challenges in Information Visualisation.

Information Visualisation is a research area that focuses on the use of graphical techniques to present abstract data in an explicit form. Such static or dynamic presentations (pictures) help people form an understanding of the data and an internal model of it to reason with. Such pictures of data are an external artefact supporting decision making. While sharing many of the same goals as Scientific Visualisation, Human Computer Interaction, User Interface Design and Computer Graphics, Information Visualisation focuses on the visual presentation of data without a physical or geometric form.

As such it relies on research in mathematics, data mining, data structures, algorithms, graph drawing, human-computer interaction, cognitive psychology, semiotics, cartography, interactive graphics, imaging and visual design. In this talk Aaron will present a brief history of social-network analysis and visualisation, and introduce analysis and layout algorithms we have developed for visualising such data. Our recent analysis focuses on actor identification through network tuning and our Social Network Assembly Pipeline, SNAP, which operates on the premise of “social network inference”; we have studied it experimentally through the analysis of 10,000,000 record sets without explicit relations. Our visualisation work has focussed on large-scale node-link diagrams, small multiples, dynamic network displays and egocentric layouts. The talk concludes with a number of challenges and open research questions we face as researchers in using visualisation in an attempt to present dynamic data sources.
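The idea of “social network inference” from records without explicit relations, and of egocentric views onto the resulting network, can be sketched in a few lines of Python. This is an illustrative toy under my own assumptions, not the SNAP pipeline itself: the record structure, the function names and the simple co-occurrence weighting are all invented for the example.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical records: no explicit ties are defined, only
# co-occurrence of actors within each record.
records = [
    {"event": "e1", "actors": ["ann", "bob", "cat"]},
    {"event": "e2", "actors": ["ann", "bob"]},
    {"event": "e3", "actors": ["bob", "dan"]},
]

def infer_network(records):
    """Infer a weighted network: the weight of an edge between two
    actors is the number of records in which they co-occur."""
    weights = defaultdict(int)
    for rec in records:
        for a, b in combinations(sorted(set(rec["actors"])), 2):
            weights[(a, b)] += 1
    return dict(weights)

def ego_network(weights, ego):
    """Extract the egocentric view: all edges incident to `ego`."""
    return {e: w for e, w in weights.items() if ego in e}

net = infer_network(records)
print(net[("ann", "bob")])        # ann and bob co-occur in two records
print(ego_network(net, "dan"))    # dan's egocentric network
```

At scale, the same idea needs careful actor identification and thresholding of weak co-occurrences before any node-link or egocentric layout is attempted.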

Dec 2010 – Two Journal Papers: Visualisation and Usability

Along with Julie Doyle and Brian O’Mullane, we have had our paper “Usability by Proxy – Killing 2-N Birds with One Stone?” accepted to the Journal of Usability Studies. It is a controversial paper, and we look forward to its publication stimulating follow-on research and debate.

Abstract:

Usability testing is a critical part of the design process for applications, which can require many iterations of testing with, often-times, many different groups of users. As such, the cost of testing is typically significantly high. In this article we propose a new usability evaluation method (UEM) to address this problem, which we call Usability by Proxy. Usability by Proxy involves studying usability measures with a cohort at one level of expertise or ability to identify the expected values at the next level of expertise or ability. In this article, we begin the process of evaluating the effectiveness of this method through a usability study of the BioMOBIUS™ biomedical research platform, an application with intended usage by both biomedical engineers and clinicians. We ask whether testing usability with each specific user group is beneficial in identifying additional significant usability problems, or whether the costs in terms of time and resources outweigh these potential benefits.

Along with Michael Farrugia we have had our paper “Effective temporal graph layout: a comparative study of animation versus static display methods” accepted to the journal Information Visualization. Again, this is a paper which turns some conventional wisdom in dynamic display on its head, in a small-scale study followed up with a larger online study. We look forward to this paper stimulating follow-on work and the realisation of new forms of dynamic information display.

Abstract:

Graph drawing algorithms have classically addressed the layout of static graphs. However, the need to draw evolving or dynamic graphs has brought into question many of the assumptions, conventions and layout methods designed to date. For example, social scientists studying evolving social networks have created a demand for visual representations of graphs changing over time. Two common approaches to represent temporal information in graphs include animation of the network and use of static snapshots of the network at different points in time. Here we report on two experiments, one in a laboratory environment and another using an asynchronous remote web-based platform, Mechanical Turk, to compare the efficiency of animated displays versus static displays. Four tasks are studied with each visual representation: two characterise overview-level information presentation, and two characterise micro-level analytical tasks. The results of this study indicate that static representations are generally more effective, particularly in terms of time performance, when compared to fully animated movie representations of dynamic networks.
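The static-snapshot approach the abstract contrasts with animation can be sketched simply: bucket timestamped edges into consecutive time windows, and each bucket becomes one static frame in a small-multiples display. This is a minimal illustrative sketch, not the experimental system from the paper; the edge format and the fixed-width windowing are assumptions made for the example.

```python
# Timestamped edges of an evolving graph: (source, target, time).
edges = [("a", "b", 1), ("b", "c", 2), ("a", "c", 5), ("c", "d", 6)]

def snapshots(edges, window):
    """Bucket edges into consecutive fixed-width time windows.
    Each bucket is one static frame for a small-multiples display."""
    frames = {}
    for u, v, t in edges:
        frames.setdefault(t // window, []).append((u, v))
    return [frames[k] for k in sorted(frames)]

# With a window of 3 time units, the four edges fall into three frames.
for i, frame in enumerate(snapshots(edges, window=3)):
    print(i, frame)
```

A real system would also have to keep node positions stable across frames (preserving the viewer's mental map), which is where most of the layout difficulty lies.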

Dec 2009 Chapters in book on Mining and Analysing Social Networks

Along with two of my graduate students we have had two book chapters accepted in the upcoming book entitled “Mining and Analyzing Social Networks” which is part of the book series of studies in Computational Intelligence, Springer-Verlag, Heidelberg Germany, 2010. Social Network Analysis and Visualization will form an aspect of collaborative and emerging visualization research projects within the Human Interface Technology Research Laboratory Australia (HITLAB AU).

These chapters are entitled “Actor Identification in Implicit Relational Data” and “Perception of Online Social Networks” which are detailed below.

Actor Identification in Implicit Relational Data
Michael Farrugia and Aaron Quigley

Abstract
Large scale network data sets have become increasingly accessible to researchers. While computer networks, networks of webpages and biological networks are all important sources of data, it is the study of social networks that is driving many new research questions. Researchers are finding that the popularity of online social networking sites may produce large dynamic data sets of actor connectivity.

Sites such as Facebook have 250 million active users and LinkedIn 43 million active users. Such systems offer researchers potential access to rich large-scale networks for study. However, while data sets can be collected directly from sources that specifically define the actors and ties between those actors, there are many other data sources that do not have an explicit network structure defined. To transform such non-relational data into a relational format two facets must be identified – the actors and the ties between the actors. In this chapter we survey a range of techniques that can be employed to identify unique actors when inferring networks from non-explicit network data sets. We present our methods for unique node identification of social network actors in a business scenario where a unique node identifier is not available. We validate these methods through the study of a large-scale real-world case study of over 9 million records.
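To give a flavour of the unique-actor-identification problem when no unique node identifier exists, here is a deliberately simple sketch using Python's standard-library `difflib`: normalise name fields, then treat two records as the same actor when their normalised names are sufficiently similar. This is a toy stand-in, not the chapter's methods; the `normalise`/`same_actor` helpers and the 0.85 threshold are invented for the example, and real record linkage would combine many more fields and evidence.

```python
from difflib import SequenceMatcher

def normalise(name):
    """Canonicalise a raw name field: lowercase, strip punctuation,
    and sort the tokens so "Quigley, Aaron" == "Aaron Quigley"."""
    cleaned = "".join(c if c.isalnum() or c.isspace() else " "
                      for c in name.lower())
    return " ".join(sorted(cleaned.split()))

def same_actor(a, b, threshold=0.85):
    """Heuristic: two records refer to the same actor if their
    normalised names are similar enough."""
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio() >= threshold

print(same_actor("Quigley, Aaron", "Aaron Quigley"))    # True
print(same_actor("Aaron Quigley", "Michael Farrugia"))  # False
```

Over millions of records a naive pairwise comparison is infeasible, so practical systems first block records into candidate groups before any fuzzy matching.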

Perception of Online Social Networks
Travis Green and Aaron Quigley

Abstract.
This paper examines data derived from an application on Facebook.com that investigates the relations among members of their online social network. It confirms that online social networks are more often used to maintain weak connections but that a subset of users focus on strong connections, determines that connection intensity to both connected people predicts perceptual accuracy, and shows that intra-group connections are perceived more accurately. Surprisingly, a user's sex does not influence accuracy, and one's number of friends only mildly correlates with accuracy, indicating a flexible underlying cognitive structure. Users' reports of significantly increased numbers of weak connections indicate increased diversity of information flow to users. In addition the approach and dataset represent a candidate “ground truth” for other proximity metrics. Finally, implications in epidemiology, information transmission, network analysis, human behavior, economics, and neuroscience are summarized. Over a period of two weeks, 14,051 responses were gathered from 166 participants, approximately 80 per participant, which overlapped on 588 edges representing 1341 responses, approximately 10% of the total. Participants were primarily university-age students from English-speaking countries, and included 84 males and 82 females. Responses represent a random sampling of each participant's online connections, representing 953,969 possible connections, with the average participant having 483 friends. Offline research has indicated that people maintain approximately 8-10 strong connections from an average of 150-250 friends. These data indicate that people maintain online approximately 40 strong ties and 185 weak ties over an average of 483 friends. Average inter-group accuracy was below the guessing rate at 0.32, while accuracy on intra-group connections converged to the guessing rate, 0.5, as group size increased.

Aug 2009 Invited Trinity Long Room Hub Talk


On October 1st 2009 I will give an invited talk at the Trinity Long Room Hub entitled ‘Using Information Visualisation as an Analytical Tool’, from 1.00 p.m. to 2.30 p.m. in the IIIS Seminar Room, C.6002, 6th Floor, Arts Building, TCD.

The following is the working abstract for the talk.

Abstract:
A byproduct of the explosive growth in the use of computing technology is that organizations are generating, gathering, using and storing data at an increasing rate. Consider the amount of data a Government census collects, the amount of data Google gathers and uses, or the details of all the transactions eBay must handle on a daily basis. To make this concrete, the last US Census includes details of 304,059,724 people (US Census Bureau), with data on age, gender, ethnicity, household make up, home structure, income, farms, business and sales available. In July 2008 Google found 1 trillion (1,000,000,000,000) unique URLs on the web at once, and eBay handles in excess of 1 billion payments per year. While Google and eBay and indeed their customers gain value from the applications on offer, simply storing the raw data after the fact is of little value unless useful high-level information and hence knowledge can be derived from it. Many researchers and commercial organisations are facing similar tasks with large amounts of image data, video, geographic data, textual data or statistical data.

However, when trying to understand details about millions of customers, webpages or products, the amount of raw data makes the analysis task difficult. One approach to the problem is to convert the data into pictures and models that can be graphically displayed. The intuition behind the use of such graphics is that human beings are inherently skilled at understanding data in visual forms. We refer to the use of computer graphics to visually represent and convey the meaning of abstract information as “Information Visualisation”.

This talk will outline how various types of information are modelled, managed, mined and hence visually presented on screen for exploration. Several large-scale data and information visualisation methods will be described and discussed, along with the 7 key challenges we face as researchers and developers in using visualisation in an attempt to present information. These 7 key challenges are: Empowerment, Connection, Volume, Heterogeneity, Audience, Dynamism and Discovery.