By B.N. Frank
Artificial Intelligence (A.I.) is NOT fail-safe – hence the creation of an A.I. “Hall of Shame,” which has a backlog of submissions still waiting to be processed. In fact, complaints about A.I. algorithm inaccuracies and misuse are reported fairly often (see 1, 2, 3, 4).
Warnings about A.I. and robots replacing human jobs are also reported fairly often (see 1, 2, 3), and Amazon has been criticized for using this technology to fire flex workers. Even though a 2019 survey revealed that 82% of Americans believed A.I. was more harmful than helpful, one agency wants to know how much Americans trust it. Maybe that’s because President Biden recently committed to incorporating more A.I. into our lives. Of course, “Without trust…adoption of AI will slow or halt.”
More from Wired:
This Agency Wants to Figure Out Exactly How Much You Trust AI
The National Institute of Standards and Technology measures how many photons pass through a chicken. Now, it wants to quantify transparency around algorithms.
The National Institute of Standards and Technology (NIST) is a federal agency best known for measuring things like time or the number of photons that pass through a chicken. Now NIST wants to put a number on a person’s trust in artificial intelligence.
Trust is part of how we judge the potential for danger, and it’s an important factor in the adoption of AI. As AI takes on more and more complicated tasks, officials at NIST say, trust is an essential part of the evolving relationship between people and machines.
In a research paper, the authors of the proposed method for quantifying user trust in AI say they want to help businesses and developers who deploy AI systems make informed decisions and identify areas where people don’t trust AI. NIST views the AI initiative as an extension of its more traditional work establishing trust in measurement systems. Public comment is being accepted until July 30.
Brian Stanton is coauthor of the paper and a NIST cognitive psychologist who focuses on AI system trustworthiness. Without trust, Stanton says, adoption of AI will slow or halt. He says many factors may affect a person’s trust in AI, such as their exposure to science fiction or the presence of AI skeptics among friends and family.
NIST is a part of the US Department of Commerce that has grown in prominence in the age of artificial intelligence. Under an executive order by former president Trump, NIST in 2019 released a plan for engaging with private industry to create standards for the use of AI. In January, Congress directed NIST to create a framework for trustworthy AI to guide use of the technology. One problem area: Studies by academics and NIST itself have found that some facial-recognition systems misidentify Asian and Black people 100 times more often than white people.
The trust initiative comes amid increased government scrutiny of AI. The Office of Management and Budget has said acceptance and adoption of AI will depend on public trust. Mentions of AI in Congress are increasing, and historic antitrust cases continue against tech giants including Amazon, Facebook, and Google. In April, the Federal Trade Commission told businesses to tell the truth about AI they use and not exaggerate what’s possible. “Hold yourself accountable—or be ready for the FTC to do it for you,” the statement said.
NIST wants to measure trust in AI in two ways. A user trust potential score is meant to measure things about a person using an AI system, including their age, gender, cultural beliefs, and experience with other AI systems. The second score, the perceived system trustworthiness score, will cover more technical factors, such as whether an outdated user interface makes people call AI into doubt. The proposed system score assigns weights to nine characteristics, such as accuracy and explainability. Which factors play into trusting AI, and how characteristics like reliability and security should be weighted, are still being determined.
An AI system used by doctors to diagnose disease should be more accurate than one recommending music.
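NIST has not published a final formula, but the weighted-characteristic idea described above can be sketched in code. Everything below is an illustrative assumption, not NIST’s actual method: the article names only accuracy and explainability among the nine characteristics, and the weights and ratings are invented for demonstration.

```python
# Illustrative sketch of a weighted trustworthiness score.
# NIST has not published a final formula; the characteristics, weights,
# and ratings here are invented for demonstration only.

def trust_score(ratings: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-characteristic ratings on a 0-1 scale."""
    total = sum(weights[c] for c in ratings)
    return sum(ratings[c] * weights[c] for c in ratings) / total

# Hypothetical ratings for two systems (0 = untrusted, 1 = fully trusted).
diagnosis_ai = {"accuracy": 0.99, "explainability": 0.80}
music_ai = {"accuracy": 0.70, "explainability": 0.40}

# A high-stakes medical use might weight accuracy far more heavily
# than a casual music recommender would.
medical_weights = {"accuracy": 0.8, "explainability": 0.2}
casual_weights = {"accuracy": 0.5, "explainability": 0.5}

print(round(trust_score(diagnosis_ai, medical_weights), 3))
print(round(trust_score(music_ai, casual_weights), 3))
```

This mirrors the closing point above: a weighting scheme can encode the demand that a diagnosis system clear a much higher accuracy bar than a music recommender.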
Activist Post reports regularly about A.I. and other unsafe technology. For more information, visit our archives.