Justice Dept Reportedly Interested in A.I. Tool That May Be Violating ADA by Targeting Parents with Disabilities

By B.N. Frank

Numerous reports have confirmed that artificial intelligence (A.I.) technology can be biased as well as inaccurate (see 1, 2, 3, 4). There’s even an A.I. “Hall of Shame”! Nevertheless, despite warnings and lawsuits, the use of A.I. by businesses, government agencies, and police departments remains popular. So news that the Justice Department is reportedly interested in a Pennsylvania county’s use of an A.I. algorithm that may discriminate against parents with disabilities, in violation of the Americans with Disabilities Act (ADA), should come as some relief.

From Ars Technica:


AI tool used to spot child abuse allegedly targets parents with disabilities

Pennsylvania county’s child welfare algorithm has inspired tools in other states.

Ashley Belanger

Since 2016, social workers in a Pennsylvania county have relied on an algorithm to help them determine which child welfare calls warrant further investigation. Now, the Justice Department is reportedly scrutinizing the controversial family-screening tool over concerns that using the algorithm may violate the Americans with Disabilities Act by allegedly discriminating against families with disabilities, including families with mental health issues, the Associated Press reported.

Three anonymous sources broke their confidentiality agreements with the Justice Department, confirming to AP that civil rights attorneys have been fielding complaints since last fall and have grown increasingly concerned about alleged biases built into the Allegheny County Family Screening Tool. While the full scope of the Justice Department’s alleged scrutiny is currently unknown, the Civil Rights Division is seemingly interested in learning more about how using the data-driven tool could potentially be hardening historical systemic biases against people with disabilities.

The county describes its predictive risk modeling tool as a preferred resource for reducing human error, giving social workers the benefit of the algorithm’s rapid analysis of “hundreds of data elements for each person involved in an allegation of child maltreatment.” That includes “data points tied to disabilities in children, parents, and other members of local households,” Allegheny County told AP. Those data points contribute to an overall risk score that helps determine if a child should be removed from their home.

Although the county told AP that social workers can override the tool’s recommendations and that the algorithm has been updated “several times” to remove disability-related data points, critics worry that the screening tool may still be automating discrimination. This is particularly concerning because the Pennsylvania algorithm has inspired similar tools used in California and Colorado, AP reported. Oregon stopped using its family-screening tool over similar concerns that its algorithm may be exacerbating racial biases in its child welfare data.

The Justice Department has not yet commented on its alleged interest in the tool, but AP reported that the department’s scrutiny could possibly turn a moral argument against using child welfare algorithms into a legal argument.

A University of Minnesota expert on child welfare and disabilities, Traci LaLiberte, told AP that it’s unusual for the Justice Department to get involved with child welfare issues. “It really has to rise to the level of pretty significant concern to dedicate time and get involved,” LaLiberte told AP.

Ars could not immediately reach developers of the algorithm or the Allegheny County Department of Human Services for comment, but a county spokesperson, Mark Bertolet, told AP that the agency was unaware of the Justice Department’s interest in its screening tool.

Problems with predicting child maltreatment

On its website, Allegheny County said that the family-screening tool was developed in 2016 to “enhance our child welfare call screening decision making process with the singular goal of improving child safety.” That year, the county reported that prior to using the algorithm, human error led child protective services to investigate 48 percent of the lowest-risk cases, while overlooking 27 percent of the highest-risk cases. A 2016 external ethical analysis supported the county’s use of the algorithm as an “inevitably imperfect” but comparatively more accurate and transparent method for assessing risk than relying on clinical judgment alone.

“We reasoned that by using technology to gather and weigh all available pertinent information we could improve the basis for these critical decisions and reduce variability in staff decision-making,” the county said on its website, promising to continue to refine the model as more analysis of the tool was conducted.

Although the county told AP that risk scores alone never trigger investigations, the county website still says that “when the score is at the highest levels, meeting the threshold for ‘mandatory screen in,’ the allegations in a call must be investigated.” Because disability-related data points contribute to that score, critics suggest that families with disabilities are more likely to be targeted for investigations.
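The mechanics reported above — many data elements weighted into a single risk score, with a “mandatory screen in” cutoff that forces an investigation — can be sketched in a few lines of Python. The snippet below is purely illustrative: the feature names, weights, and threshold are invented for demonstration and are not drawn from the Allegheny County model. It only shows why any disability-linked data point that carries weight in a score like this also raises the odds of crossing the screen-in threshold.

# Hypothetical sketch only -- not the Allegheny County Family Screening Tool.
# Feature names, weights, and the cutoff below are invented for illustration.

MANDATORY_SCREEN_IN_CUTOFF = 18.0  # assumed threshold on an arbitrary scale

# Invented weights over made-up data elements; a real predictive risk model
# is trained on historical records rather than hand-coded like this.
FEATURE_WEIGHTS = {
    "prior_referrals": 3.0,
    "household_size": 0.5,
    "parent_behavioral_health_flag": 6.0,  # the kind of disability-linked
    "child_disability_flag": 4.0,          # data point critics object to
}

def risk_score(household: dict) -> float:
    """Sum weighted data elements into one overall score (illustrative)."""
    return sum(weight * float(household.get(name, 0))
               for name, weight in FEATURE_WEIGHTS.items())

def must_screen_in(household: dict) -> bool:
    """True when the score meets the assumed 'mandatory screen in' cutoff."""
    return risk_score(household) >= MANDATORY_SCREEN_IN_CUTOFF

# Two otherwise identical households: only the disability-linked flags differ,
# yet one crosses the cutoff and the other does not.
without_flags = {"prior_referrals": 3, "household_size": 4}
with_flags = dict(without_flags,
                  parent_behavioral_health_flag=1, child_disability_flag=1)
print(must_screen_in(without_flags))  # False (score 11.0)
print(must_screen_in(with_flags))     # True  (score 21.0)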

The same year that the family-screening tool was introduced, the Christopher & Dana Reeve Foundation and the National Council on Disability released a toolkit to help parents with disabilities know their rights when fighting in the courts over child welfare concerns.

“For many of the 4.1 million parents with disabilities in the United States, courts have decided they aren’t good parents just because they have disabilities,” the organization wrote in the toolkit’s introduction. “In fact, as of 2016, 35 states still said that if you had a disability, you could lose your right to be a parent, even if you didn’t hurt or ignore your child.”

Allegheny County told AP that “it should come as no surprise that parents with disabilities… may also have a need for additional supports and services.” Neither the county’s ethical analysis nor its FAQ directly discusses how the tool could be disadvantaging these families, though.

Ars could not reach LaLiberte for additional comment, but she told AP that her research has also shown that parents with disabilities are already disproportionately targeted by the child welfare system. She suggested that incorporating disability-related data points into the algorithm is seemingly inappropriate because it directs social workers to consider “characteristics people can’t change,” instead of exclusively assessing problematic behavior.


Activist Post reports regularly about A.I. and other privacy-invasive and unsafe technologies.  For more information, visit our archives.

Image: Christopher and Dana Reeve Foundation
