Title
Crowd-sourced and Remote User Studies for Quality of Experience and Usability Research
Organizers
Babak Naderi and Matthias Hirth
Motivation and objectives
Laboratory studies are an established and essential tool for Quality of Experience (QoE) and User Experience (UX) research. However, they require well-equipped test rooms and personnel to supervise the test participants and are therefore often cost- and time-intensive. Furthermore, the number of test candidates is often limited by the sparse laboratory space and the need for the test takers' physical presence in the test environment. In the past months, the pandemic has made it even more challenging to conduct laboratory studies by increasing the organizational overhead and limiting the pool of potential participants. Two possibilities to overcome this situation are crowdsourcing and remote user studies. Crowdsourcing has been successfully used for QoE and UX research in the past years. Researchers have developed best practices to quickly collect a large number of subjective ratings from a diverse set of participants and have applied the crowdsourcing approach in many domains of QoE research. The diversity of the crowdsourcing workers enables researching cultural effects, influencing factors generated by different end-user devices, and the impact of different surrounding environments, which can hardly be assessed in a traditional laboratory setting. However, these opportunities come at a cost. Experimenters have only limited control over the test settings and the environmental conditions in which a study takes place. Additionally, remote crowdsourcing studies can be error-prone, as the test participants are not under the direct supervision of the test conductor.
Another possibility that has not drawn much attention in the past years is supervised or unsupervised individual remote test procedures. They can be viewed as a hybrid of crowdsourcing and traditional laboratory environments. While the tests are still conducted online, the test takers are not anonymous but pre-registered participants who might even be guided via a chat or video conferencing system. Such an approach can benefit from the broader reach of an online study while diminishing the challenges of a completely anonymous and unsupervised setting.
To fully utilize the advantages and benefits of crowdsourcing, several challenges need to be addressed. In particular, technical monitoring methods can help to gain a better understanding of the current hardware and environmental settings in which a test takes place. Here, recent developments and the increasing availability of Internet of Things, smart metering, and wearable devices open up new opportunities to obtain insights into the users' surroundings and the users themselves, and thereby to identify currently hidden influence factors. At the same time, the privacy of the crowd workers needs to be kept in mind. Another not yet fully solved research challenge is the reproducibility of subjective crowdsourcing studies. The results of repeated crowdsourced QoE studies sometimes differ significantly due to the users' diversity and their unknown devices and surroundings. This calls for new methodologies and test procedures that enable consistent results across multiple studies and crowds. Finally, automation and well-defined workflows can help to reduce operational errors and misinterpretations by experimenters. These findings can also be applied to remote user studies to better understand the surroundings of the non-anonymous test takers. However, remote user studies raise additional challenges that differ from the crowdsourcing setting. In particular, an appropriate way to supervise the remote users has to be found, including a communication channel and means to observe the test itself.
Both crowdsourced tests and remote user tests can be applied in a broad range of settings. Starting with simple assessments of standard-definition image and video quality, the complexity of the tasks and workflows can be increased even to support the evaluation of 3D and virtual reality content. This, however, also increases the complexity of the task interfaces with which the workers and test takers have to cope. Considering the short amount of time the test takers have to familiarize themselves with the task interface, the usability and the design of those interfaces play an important role. Still, to the best of our knowledge, the usability and the actual user experience of crowd workers have not been addressed yet.
In this context, the aim of the special session is twofold. On the one hand, we want to foster contributions following the traditional way of optimizing and designing crowdsourced subjective studies for Quality of Experience and User Experience research. This includes novel methodologies for quality assurance and replicability, new fields of application such as assessing the QoE of IoT devices in a crowdsourced fashion, and the use of new technologies such as wearables to collect additional environmental and user signals. On the other hand, we want to raise awareness of and foster research in a new direction, the Quality of Experience and User Experience of crowdsourcing workers. Crowdsourcing has matured in academic and business use, and much effort is put into a (cost-)efficient and quality-optimized design of the tasks. However, little to no effort is made to improve the working experience of the workers. The special session also seeks to encourage researchers to exchange their experiences with remote user studies involving non-anonymous test takers, to discuss how best practices from crowdsourcing studies can be applied in this context, and to identify the new challenges that arise.
The topics of interest include, but are not limited to, the following:
- Crowdsourcing for subjective studies
  - Novel applications
  - Limitations of current crowdsourcing systems
  - Quality control mechanisms and reliability metrics
  - Large-scale crowdsourcing studies and diversity of participants
  - Reproducibility of results and cross-platform studies
  - Assessment and impact of hidden influence factors
  - Bias estimation and bias reduction
  - Automation and workflows
  - Standardization of crowdsourcing test methods
- Usability and User Experience of crowdsourcing tasks
  - Optimization of task interfaces and task workflows
  - Relation to result quality and worker motivation
  - Enhancing workers' UX (e.g., by means of gamification of tasks)
  - Quality of complex crowdsourcing workflows (e.g., combination of AI and crowds)
- Interconnection of crowdsourcing and lab-based tests
  - Studies comparing results from lab and crowdsourcing
  - Adaptations of established lab-test standards to the crowdsourcing environment
- Remote user studies
  - Supervised remote user studies
  - Remote studies with non-anonymous users
  - (Crowdsourcing) best practices for remote user studies