February 2024 | Vol. 25, No. 1

Human-AI Partnerships in Child Welfare: Algorithmic Decision Support

A paper from the 2022 CHI Conference on Human Factors in Computing Systems examines algorithmic decision support (ADS) tools based on artificial intelligence (AI) within the context of child welfare. The researchers interviewed child maltreatment hotline workers to understand their current practices and challenges in working with one such tool, the Allegheny Family Screening Tool (AFST). The authors note that while ADS tools have the potential to promote more equitable decision outcomes, AI-based judgments carry their own biases and limitations. In an effective human-AI partnership, therefore, each party can compensate for the other's limitations and build on the other's strengths. However, the authors write that little is known about what specific factors support or hinder such a partnership in real-world practice.

In "Improving Human-AI Partnerships in Child Welfare: Understanding Worker Practices, Challenges, and Desires for Algorithmic Decision Support," researchers explored the following questions:

  • How do workers decide whether, when, and how much to rely on algorithmic recommendations?
  • What limitations and future design opportunities do workers perceive for the AFST or future ADS tools?

Through the interviews, the researchers found that workers compensated for some of the AFST's limitations by drawing on their own contextual, qualitative knowledge about the cases they were working on, information that is not captured in the administrative data the AFST model uses.

The researchers also found that most workers knew very little about how the AFST worked, what data it used, or how they could work with the tool more effectively. Workers were intentionally given limited information about the model to prevent them from "gaming the system." In response, workers developed their own beliefs about how the model worked and adjusted their reliance on the tool accordingly. This lack of transparency also limited workers' trust in the model and left them feeling unable to work with it effectively.

Access the full article and the presentation video for more information on how workers used and viewed the AFST, as well as design implications for creating a more supportive and effective human-AI decision-making partnership for child welfare professionals.