We Don’t Need a New Framework for AI Ethics
We Already Have the Doctrine of Fully Informed Consent
In June 2025, the newly mitred Pope Leo announced his support for the Rome Call for AI Ethics, issued by the Vatican in 2020. The call received support in 2023 from representatives of Islam and Judaism, and in July 2024, representatives from 20 global regions also signed on.
Will this document, with its six protective principles, allow us to happily run headlong into an AI future?
Ready or not, here we go, AI.gov.
Ever since ChatGPT appeared in late 2022, the phrase “AI Ethics” has peppered headlines, giving us the false impression that competent experts are working on the question. In the months that followed, millions of Internet users test-drove a generative AI (also known as a large language model, or LLM) for the first time. Many were terrified by how well the new and improved chatbot could mimic human behavior, and some, egged on by hype, began to worry that AI could soon take control of civilization somehow.
People began to ask for AI to be fitted with guardrails to prevent it from behaving unethically.
Most discussions of ethics are framed in hopelessly vague terms, such as the need to pursue the “common good” and preserve “human dignity.” The Rome Call for AI Ethics is plagued with the same vagueness. While it proposes a “framework” for the “good of humanity and the environment,” it actually calls for an intensification of control over the world population by an “expert” class using AI as a tool to do so. Working toward the “common good” has been a cover story used by dictators throughout history.
All institutions, even religions, are run by humans and sometimes humans just get things wrong. I’m not criticizing Catholicism itself when I note that of the six goals enumerated in the Rome Call framework, three are impossible, two are undesirable, and one is based on a falsehood.
1. Transparency. The Vatican calls for AI tools to be required to reveal the “logic behind the algorithms used to make decisions.” However, LLMs do not apply explicit logic or hand-written decision rules the way conventional software does. LLMs are essentially text-prediction engines that imitate the sound of human speech. Most of the time the imitation sounds reasonable; sometimes it doesn’t. Being transparent about the logic of AI is impossible because there isn’t any. LLMs also intensify the faults of the systems they are built on, i.e., “deep learning” neural networks whose connections become weighted (biased) according to the specific patterns they’ve been trained on. More on this below in #4.
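As a rough illustration of why there is no “logic” to disclose, here is a minimal sketch in deliberately simplified Python. The phrase, the word list, and the counts are invented for the example; a real LLM operates over billions of learned weights, but the principle is the same: the output is a statistical continuation of the text, not a chain of reasoning that could be printed out for inspection.

```python
import random

# Hypothetical counts of which word followed the phrase "the applicant is"
# in some training text. A real model learns billions of such statistical
# regularities; none of them is an explicit rule or a step of logic.
next_word_counts = {"qualified": 60, "unqualified": 30, "overqualified": 10}

def predict_next_word(counts):
    words = list(counts)
    weights = [counts[w] for w in words]
    # Sample the continuation in proportion to how often it appeared in
    # training; there is no reasoning step to expose or audit.
    return random.choices(words, weights=weights, k=1)[0]

print("the applicant is", predict_next_word(next_word_counts))
```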
2. Inclusion. The Vatican argues that we must make sure everyone receives an equally good (or bad) education using AI tools, invoking the “no child left behind” policy as an example worth following. Here the Vatican is not reading the room: most people are aware that such programs in the U.S. led straight to standardized curricula and to treating teachers like trained monkeys, precipitating a massive decline in the performance of U.S. school children. So-called “inclusion” results in lowering standards for all and making everyone conform.
3. Responsibility. The group that the Vatican supposes should have the responsibility of deciding what is “ethical” for everybody will be “political decision-makers, UN system agencies and other intergovernmental organizations, researchers, the world of academia and representatives of non-governmental organizations.” It is, of course, undesirable to make a so-called elite group “responsible” for choosing for everybody else, especially since, as per usual, they will not be held accountable if their decisions result in unspeakable harms. This isn’t “responsibility” at all. The right word is “control.”
4. Impartiality. Neural network AI is biased by nature. LLMs are stereotyping on steroids. (See my article, “AI, Stereotyping on Steroids and Alan Turing’s Biological Turn.”) The output of any neural net is determined by the weights, i.e., the biases, learned in its so-called “hidden layers.” For instance, a neural net could categorize an individual as a likely suspect in a crime based on arbitrary details about the person, such as whether or not he graduated from college, lived with his father, or watched a certain Netflix series. Impartiality is impossible to achieve with any neural network or generative AI.
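To make the point concrete, here is a minimal sketch with invented features and invented weights. Nothing about the weights reflects relevance or fairness; they stand in for whatever correlations a network happened to absorb from a skewed training set, and yet they fully determine the verdict.

```python
# Hypothetical inputs describing one person (1 = yes, 0 = no).
person = {
    "graduated_college": 0,
    "lived_with_father": 0,
    "watched_series_x": 1,
}

# Invented weights standing in for what a network might have absorbed
# from its training data. They encode correlation, not relevance.
learned_weights = {
    "graduated_college": -0.8,
    "lived_with_father": -0.5,
    "watched_series_x": 0.9,
}

# The "decision" is nothing more than a weighted sum of arbitrary traits.
suspicion_score = sum(learned_weights[k] * v for k, v in person.items())
print("flagged as likely suspect:", suspicion_score > 0.5)
```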
5. Reliability. LLMs are not accurate, reliable, or truthful by nature. They can happen to be correct, but nothing in their design guarantees it. True reliability is impossible to achieve with AI, which is not like a calculator. People may assume that all knowledge can be programmed into a calculating machine the way arithmetic can, but this is not so.
6. Security and Privacy. There is currently no security or privacy online. When the Vatican says security and privacy can be safeguarded with AI tools, it is either not telling the truth or it is hallucinating. Perhaps an LLM wrote up the Rome Call. The goal of AI technology is Total Informational Control™. The only way to protect the privacy (and consequently the security) of individuals is to treat privacy as a fundamental constitutional right. (See my article, “How to Escape the Panopticon.”)
“Ethical” Means Consent Is Given
Although it didn’t make it into the six principles of AI ethics, there is one goal worth pursuing that is mentioned in the Vatican’s Rome Call: “Each person must be aware when he or she is interacting with a machine.” Proper labeling is the first step toward the principle of fully informed consent.
Pope Leo reiterated Pope Francis’ infallible opinion that AI is just a tool, not an agent. AI is not conscious or literally intelligent. It doesn’t make decisions, per se. Because it is just a tool, the potential harms can be controlled by how it is used. Since the technology is relatively new, children should be excluded from the experiment. Allowing kids to use smartphones has already proved devastating. Phones plus more AI is sure to be worse.
Children cannot consent.
The new pope is also rightly “concerned for children and young people, and the possible consequences of the use of AI on their intellectual and neurological development. Our youth must be helped, and not hindered, in their journey towards maturity and true responsibility.”
I agree.
Here’s my AI Ethics framework:
- No data may be used to train AI without the fully informed consent of the person who produced that data or about whom the data was recorded (see the sketch after this list).
- The user must have full control over the algorithm that filters what he/she sees online.
- The user has the responsibility of deciding what content is reliable.
- The user has full control over who may see his/her content.
- Children should not have unfettered access to the Internet or other digital programs. Special phones may be needed for underage persons.
- It is the parents or guardians who must decide which programs their children may see or use, and with whom they may communicate. Strangers should not have access to children for private conversations.
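The first principle above is the one that would bite hardest in practice. As a purely hypothetical sketch (the record fields and the `has_consent` helper are invented for illustration, not drawn from any company’s pipeline), honoring it would mean dropping every record that lacks an explicit, informed opt-in before a single model weight is updated:

```python
# Hypothetical records gathered for model training.
candidate_records = [
    {"author_id": "user_001", "text": "a blog post", "consent": True},
    {"author_id": "user_002", "text": "a private email", "consent": False},
    {"author_id": "user_003", "text": "a forum comment", "consent": None},
]

def has_consent(record):
    # Only an explicit, affirmative opt-in counts; silence or a missing
    # flag is treated as refusal, not as permission.
    return record.get("consent") is True

training_set = [r for r in candidate_records if has_consent(r)]
print(f"kept {len(training_set)} of {len(candidate_records)} records")
```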
My ethics guidelines would kill the profits of AI companies, and they would necessitate the end of most governmental programs as they are currently run. AI companies would have to retrain all their LLMs on public data, or pay for and obtain releases from everyone whose data they used. All ID-linked citizen data profiles would have to be destroyed, including the files that DOGE tapped into and sent to Palantir.
The government would have to end any programs and policies that require citizens to disclose potentially sensitive information. Government would have to find ways of serving the people without violating their privacy and putting them at risk.
AI, the All-Seeing Nudger
AI development is on track to do away with fully informed consent. It is being deployed precisely to get people to accept being nudged into going along with the consensus. It is being deployed to give people the illusion of freedom, like video gamers who are presented with options for different story lines when in reality there is only a finite number of pathways, and all of them lead to the same corral.
Father Paolo Benanti, technology ethics advisor to Pope Francis, has recently weighed in on AI nudging. I suspect that Benanti may have been one of the authors of the Rome Call, so it is worth knowing what he thinks about this issue. He gets it right when he observes [emphasis added],
…one of the core elements of human dignity—is the ability to self-determine our trajectory in life. I think that’s the core element, for example, in the Declaration of Independence…
In that direction, we could have a problem … Every time a streaming platform suggests what you can watch next … that interaction between human beings and machines can produce behavior … that could interfere with our quality of life and pursuit of happiness. This is something that needs to be discussed.
…This is why we have to ask ourselves: do we need something like a cognitive right regarding this? That you are in a relationship with a machine that has the tendency to influence your behavior.”
But having identified the problem clearly, Benanti then uselessly argues that although bad nudging is bad, good nudging is good.
A nudge is not consent. It’s an itty-bitty bit of coercion.
Benanti goes on,
Then you can accept it [good nudging]: ‘I have diabetes, I need something that helps me stick to insulin. Let’s go.’ It’s the same thing that happens with a smartwatch when you have to close the rings. The machine is pushing you to have healthy behavior, and we accept it…”
If we have the right to the “ability to self-determine our trajectory in life,” this means we should be more fully informed of different alternatives instead of being shown only a few and nudged toward one of them.
Benanti continues,
When you’re 65, you’re probably taking three different drugs per day. When you reach 68 to 70, you probably have one chronic disease. Chronic diseases depend on how well you stick to therapy. Think about the debate around insulin and diabetes. If you forget to take your medication, your quality of life deteriorates significantly. Imagine using this system to help people stick to their therapy. Is that bad? No, of course not. Or think about using it in the workplace to enhance workplace safety. Is that bad? No, of course not.”
And there we have it. The Vatican AI Ethics advisor’s advice is, Do what you are told by the experts. Do not question the inevitability that you will need to be on three different drugs at age 65. Do not question what caused the chronic disease you developed by 68 or 70. Do not wonder whether or not you would be healthier without your “therapy.”
Is it “bad” to nudge people to do the right thing? Yes, if it discourages them from thinking for themselves.
Seeking Informed Consent Without Providing Information Is Deception
Currently, too many AI ethicists pursue an Uber-Plan to be imposed on all. This is to come at the ethics question from the wrong end of the animal. At the heart of ethics is individual choice and the doctrine of free and fully informed consent.
After the Nuremberg trials, we began to apply this doctrine to protect subjects in scientific research. Unfortunately, as someone familiar with the matter has observed, the regulation has been much watered down of late, for example through the “overuse of expedited review procedures for certain kinds of research involving no more than ‘minimal’ risk, and for minor changes in formerly approved research.” This allows problems not yet uncovered in previous research to propagate freely into new research.
The U.S. Department of Health and Human Services (HHS) sets out informed consent regulations in its Common Rule, which seems to me to be shot through with holes. It does not spell out the procedures for determining “risks.” It mentions some nonsense about the possibility of waiving consent if there is “public benefit.”
It does not specify the consequences for the researchers who either do not fully disclose the risks or who do not make the effort to investigate possible risks. In some cases, it may be impossible to fully inform research subjects if the potential hazards of the experiment are unknown. It does not mention the need to inform subjects about what recourse they may have if harmed.
Sadly, informed consent has become little more than a formality that protects the research institution from liability. This watered-down rule was in effect during the Covid-19 vaccine rollout. Signing a consent form now functions more as a waiver of rights, and to “fully” inform someone now means padding out the crucial information with so much legal jargon that it runs to multiple pages of micro-sized font.
What bitter irony it is when the tool designed to protect us has been weaponized against us.
A Big AI Experiment on the Population
Society has already been exposed to algorithms, AI chatbots, and deep fakes, which are just an elaboration of the marketing and propaganda techniques that have been used to manipulate people since the dawn of man. But we do not yet understand the further potential harms of the new technology, which may be overloading our natural capacity to separate truth from lies.
We are all subjects in a global experiment.
Consequently, the public needs to be able to opt out, to withhold consent to be part of this experiment. Certainly, parents need to be able to control what programs, games, and apps their children have access to. Requiring everyone to have an Internet ID with proof of age is not how we protect children. “Protecting children” is another cover story for authoritarians to completely eliminate privacy online. (See Libre Solutions on the Internet ID question.)
I was disturbed but not surprised to see Roblox, an online gaming platform aimed at children, listed on AI.gov as one of the providers of educational AI programs. In 2024, as reported on NBC, an adult woman named Tara Alexis Sykes used Roblox to instruct a 10-year-old girl living in a foster home on how to kill an infant in the household. The girl made two attempts, and the baby was seriously injured.
As reported by SFGate, multiple lawsuits have been filed against Roblox for acting as a means for pedophiles to groom young children and get them to meet in person to rape them.
Do not count on government or experts or religious leaders to protect your children. That’s your responsibility. No new framework can protect your child the way that you can.
In general, ethics frameworks tend to give the institutions that impose them a plausible out, an excuse to say, We were following the standard guidelines, so we aren’t responsible for the harms. Imagine if every platform, game, or program came with a short warning that the product or service in question might be linked to neurological harms leading to psychosis, depression, or aggression, or might help facilitate fraud or sexual abuse. The label might read:
None of the information on this platform has been verified and many of the actors on this platform might be chatbots propagating incorrect information or might be predators. Everything you do on this platform is being recorded for possible future use against you and you currently have no control over the situation because society is already well advanced on the path to dystopia. Not appropriate for people under 18, and neurological harms might be a significant risk for people in their early 20s, during which period crucial brain development occurs.
Labeling is the first step toward AI Ethics.