Exploring Voice Assistant Interactions for People who Stammer
June 2020 – present
This project investigates how people who stammer engage with commercially available voice assistants, to understand the design limitations that currently impede successful interactions and whether solutions can be found that go beyond creating more datasets and speech models.
A recent study, run with the help of STAMMA to recruit participants, followed people who stammer over three weeks as they set up and interacted with Google Nest devices. Findings indicate that people who stammer may draw on strategies they use when interacting with other people to circumvent recognition errors. Participants also reported varying success depending on how their stammer was manifesting at particular times of day, and an aversion to interacting with devices when other people are around.
Full findings will be available soon. A CUI 2020 paper provided an overview of the project.
This project was funded by the CHERISH Digital Economy Centre.
Trust in Always Listening devices: Knowledge And Theory in Imbued Voice Experiences (TALKATIVE)
September 2019 – present
This project works on understanding how the concept of trust manifests in voice assistant interactions. Using existing trust models as a foundation, the project interviewed 11 experts in the fields of human-computer interaction, speech technology, robotics, and human factors to synthesise specialist knowledge on current and future voice assistant interactions.
The findings examined the relationship between existing models of trust and voice assistant interactions, and identified knowledge that researchers and designers can leverage to create an environment in which a trustworthy relationship has the potential to develop.
This project was funded by a Mozilla Research Grant (RH 2019) entitled “Creating a trustworthy model for always-listening voice interfaces”.
Interactional Variation Online (IVO)
August 2021 – present
The IVO project aims to understand virtual workplace communication through multimodal analysis of people’s interactions using videoconferencing software (e.g. Zoom, Microsoft Teams). In doing so, awareness-raising artefacts and training materials will be developed to identify successes and failures in this emerging area of discourse.
Additionally, the project will enable future research by developing appropriate technical protocols for capturing and analysing multimodal data, so that standardised methods can be reused and repurposed by researchers in other disciplines.
This project is funded by UKRI-AHRC (Arts and Humanities Research Council) and the IRC (Irish Research Council) under the ‘UK-Ireland Collaboration in the Digital Humanities Research Grants Call’ (grant numbers AH/W001608/1 and IRC/W001608/1).
Swansea Cyber Clinic
August 2021 – present
The Swansea Cyber Clinic explores the extent to which cyber and hybrid crime victim services are adequate in a ‘Digital Society’ and aims to develop a ‘Cyber Clinic’ prototype, offering a blend of face-to-face and digital support, to both increase and research individuals’ resilience to victimisation.
This project brings in expertise from Law and Computer Science, and is supported by South Wales Police and Swansea Council for Voluntary Service (SCVS). It is funded by Swansea University’s Morgan Advanced Studies Institute (MASI).
The Cognitive Effects of Speech Interface Synthesis (COGSIS)
July 2017 – September 2019
The COGSIS project examined how speech synthesis design choices impact people’s partner models and user experiences, how these choices interact with context, and how they affect people’s own language production.
Project outputs include a review of prior speech work in HCI, a CHI 2019 Honourable Mention paper on the differences between HCI and human conversation, and a book chapter on uncanny valley effects in computer speech.
This project also led to the founding of the Conversational User Interface (CUI) conference series. The inaugural event, held in Dublin in 2019, has since grown into an established conference with related workshops at numerous other venues.
This research was funded by a New Horizons grant from the Irish Research Council entitled “The COGSIS Project: Cognitive effects of Speech Interface Synthesis” (Grant R17339).
Exploring Vague Language Use and Voice Variation in Human-Agent Interaction
My PhD thesis investigated the use of vague language and politeness in a voice-based computer instructor for conducting assembly tasks. Vague language and politeness are linguistic phenomena related to people’s management of social interactions; they are often used to mitigate the imposition that communication such as instruction giving can place on listeners.
Linguistic strategies drawing on these phenomena were built into voice-based agents, creating versions with and without vague language and politeness, and with different styles of voice (synthetic, and human via a voice actor). Findings showed that agents using vague language and politeness in their instructions were perceived as more likeable and sociable when paired with a human voice. With synthetic voices, participants felt a strong disparity between the humanness of the language and the machineness of the voices.
Outputs from my PhD include a journal article summarising the second study.
My PhD was funded by an EPSRC grant entitled Human-Agent Collectives: From Foundations to Applications [ORCHID] (grant reference EP/I011587/1).