PAIR @ CHI 2021

People + AI Research @ Google
6 min read · May 14, 2021

Illustration by Shannon May for Google

This week, teams across Google shared their research at CHI — an annual conference where researchers and practitioners discuss the latest in interactive technology. CHI 2021’s theme was “Making Waves, Combining Strengths,” and that nicely encompasses the research we shared. These papers look at everything from AI’s role in creative endeavors, like composing music, to why we need to pay attention to the human endeavor of data work and the cascades that follow when it is neglected. Take a look at these five papers, their big takeaways, and why you should read them.

“Everyone wants to do the model work, not the data work”: Data Cascades in High-Stakes AI

Authors: Nithya Sambasivan, Shivani Kapania, Hannah Highfill, Diana Akrong, Praveen Paritosh, Lora Aroyo

This paper won ‘Best of CHI 2021.’ Read the full paper.

Figure: Representation of data cascades in high-stakes AI

Who should read this? Anyone working on ML systems: developers, researchers, annotators, data scientists.

What do you want people to learn from this? Data is one of the most critical aspects of AI systems. Yet it is also the most undervalued — especially relative to how model development is celebrated in AI. We document and introduce data cascades: “compounding events causing negative, downstream effects from data issues that result in technical debt over time.” Data cascades often originate upstream, during data definition and collection, and can have serious consequences downstream, in model deployment and inference. The cascades are messy, opaque, and long-term, causing negative impacts on the AI development process, including harm to beneficiary communities, burnout of relationships, discarded datasets, and costly iterations. This paper dives into where these data cascades originate, the larger problems they point to, and what we can do to focus on the goodness of data and data work.

Why were you all interested in doing this research? Despite its infrastructural, technical side, data work is a fundamentally human endeavor. People are involved in every step of the ML data pipeline: data collectors, annotators, and developers who create, label, clean, and train on the data. And yet the human labor in data is largely ignored by AI structures and institutions. These human aspects make ML data an excellent object of inquiry for Human-Computer Interaction. Our hope is to understand the human-data practices in current AI development and to chart a path toward more responsible AI systems.

Onboarding Materials as Boundary Objects for Developing AI Assistants

Authors: Carrie Cai, Samantha Winter, Dave Steiner, Lauren Wilcox, Michael Terry

This paper won “Best Case Study of CHI 2021.” Read the full paper.

Who should read this? Product and project managers, UXers, and ML developers.

What’s the biggest takeaway from this case study? A “boundary object” is something that helps a cross-functional team communicate more effectively with one another, despite differences in domain expertise. For example, a storyboard is a boundary object that helps all members of a product team (e.g., software developers, UX researchers, designers, product managers) understand the proposed product and how users would interact with it.

We found that AI onboarding materials served as a useful boundary object to help a cross-functional AI team understand the needs of users. (Onboarding materials train users on how to use a system.)

When did your team see the benefits of onboarding materials? In our case, we created onboarding materials to teach doctors how to use an AI Assistant that helps grade prostate cancer. In this design process, we discovered the crucial role that onboarding materials play not only for the users, but for the entire team. Developing the onboarding materials yielded new insights into what information end-users needed to use the AI effectively in their work. This information also helped inform and prioritize the team’s efforts, from modeling to UX design.

Breakdowns and Breakthroughs: Observing the Flexibility and Limitations of Computational Tools for Musicians During the Pandemic

Authors: Carrie Cai, Michelle Carney, Nida Zada, Michael Terry

Read the full paper.

Who should read this? People interested in the future of remote work.

What’s the biggest takeaway from researching how musicians collaborated during the pandemic? We surveyed and interviewed musicians at the start of the COVID-19 pandemic to understand how it affected them. As might be expected, working from home made it challenging for musicians to learn, practice, and perform. One thing we noticed was that musicians couldn’t take their existing tools, like recording software, and adapt them to a remote, collaborative environment. The take-home point is that despite the existence of fantastic tools like video chat and shared documents (e.g., Google Docs), there are still many tools we use daily that cannot be easily adapted for remote collaboration.

Why were you all interested in doing this research? Magenta and PAIR study how AI can be used for creative pursuits, like making music. As part of this research, we conducted a study to understand how AI could assist musicians. The study took place during the COVID-19 pandemic, so we expanded it to learn how the pandemic affected musicians’ music-making practices.

AI as Social Glue: Uncovering the Roles of Deep Generative AI during Social Music Composition

Authors: Mia Suh, Emily Youngblom, Michael Terry, Carrie Cai

This paper won a “Best of CHI 2021” Honorable Mention. Read the full paper.

Who should read this? People who are designing and building AI-powered collaborative systems for creative work (like music, writing, or painting) — such as researchers, product and project managers, and ML developers.

What do you want people to learn from this? What’s the biggest takeaway? Although there’s been a lot of recent research on human-AI collaboration, less is known about how AI could affect human-human collaboration. We studied what happens to social dynamics when pairs of people compose music together, with and without the help of a generative AI (an AI that can generate music).

We found that AI can act as a “social glue” in co-creative activities — for example, AI helped human collaborators maintain forward momentum and cordiality in moments of creative tension or disagreement. It also helped strangers establish initial common ground, and served as a psychological safety net. Despite increasing the ease of collaboration, however, AI assistance may reduce the depth of human-human collaboration. Rather than grappling with each other’s ideas, users often offloaded that creative work to the AI. Users sometimes indicated that they felt more like joint “curators” or “producers” of art than like the “composers” themselves. Researchers, designers, and practitioners should carefully consider these tradeoffs between ease of collaboration and depth of collaboration when building AI-powered systems.

Why were you all interested in doing this research? Partnering with AI for creative work is still challenging, and little attention has been paid to collaboration between multiple humans and AI in creative work. We therefore conducted a study to understand what role AI could play in human-human collaboration, in the context of music creation.

Towards a Truly “Brilliant” AI-powered Clinical Decision Support System: Lessons from a Study of Rural Clinics in China

Authors: Dakuo Wang, Liuping Wang, Zhan Zhang, Ding Wang, Haiyi Zhu, Yvonne Gao, Xiangmin Fan, Feng Tian

Read the full paper.

Who should read this? People who are designing and building AI-powered healthcare systems for hospitals and clinics. This paper also makes good reading for clinicians, especially those already interacting with AI systems in their daily work, as it helps them see the systems they use from the designers’ perspective.

What do you want people to learn from this? First, as system designers, we really ought to understand, respect, and be considerate of the existing workflows and practices people follow. This has been shown in studies over the past 30 years, and yet again we saw the same pain points arising from incompatibility between the AI system and existing work practices. Second, users are often remarkably receptive and open-minded. The clinicians who participated in the study were open-minded about working with imperfect systems, and they were resourceful in making those systems work for them. Most importantly, they offered new ideas for how the system could be better deployed to support their work in rural clinics in China.

Anything else you want people to know about the paper? The context of the study matters a great deal, and it is also what makes this paper interesting: the study was conducted in rural clinics in China. The learnings are applicable not only to rural China, but to other NBU (Next Billion Users) settings too. Anyone familiar with healthcare systems in, say, India would see the resemblance between the two settings in the challenges of delivering care.


People + AI Research (PAIR) is a multidisciplinary team at Google that explores the human side of AI.