By B.N. Frank
Research has indicated that kids’ use of and exposure to screens and social media can be more harmful than helpful. Meanwhile, significant biases, inaccuracies, privacy violations, and vulnerabilities have been reported in the artificial intelligence (A.I.) algorithms embedded in social media platforms and other software (see 1, 2, 3, 4). Recently, a law was proposed in Minnesota that would try to protect children who use social media from being targeted and exploited via algorithms.
From Ars Technica:
Proposed law in Minnesota would ban algorithms to protect the children
Bill approved by House committee requires disabling algorithms for kids under 18.
Minnesota state lawmakers are trying to prohibit social media platforms from using algorithms to recommend content to anyone under age 18. The bill was approved Tuesday by the House Commerce Finance and Policy Committee in a 15-1 vote. The potential state law goes next to the House Judiciary Finance and Civil Law Committee, which has put it on the docket for a hearing on March 22.
The algorithm ban applies to platforms with at least 1 million account holders and says those companies would be “prohibited from using a social media algorithm to target user-created content at an account holder under the age of 18.” There are exemptions for content created by federal, state, or local governments and by public or private schools.
“This bill prohibits a social media platform like Facebook, Instagram, YouTube, WhatsApp, TikTok, and others, from using algorithms to target children with specific types of content,” the bill summary says. “The bill would require anyone operating a social media platform with more than one million users to require that algorithm functions be turned off for accounts owned by anyone under the age of 18.” Social media companies would be “liable for damages and a civil penalty of $1,000 for each violation.”
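As described, the bill’s rule turns on two conditions: platform size (roughly one million account holders, per the summary) and account-holder age (under 18), with a $1,000 civil penalty per violation. A minimal sketch of that gating logic, with all names and thresholds illustrative rather than drawn from the bill’s actual statutory text:

```python
# Hypothetical sketch of the gating rule described in the bill summary:
# platforms with at least 1 million account holders must disable algorithmic
# targeting for account holders under 18. Names and structure are invented
# for illustration; the bill's statutory language may differ.

PLATFORM_THRESHOLD = 1_000_000  # account holders
AGE_OF_MAJORITY = 18
PENALTY_PER_VIOLATION = 1_000   # dollars, civil penalty per the summary

def must_disable_algorithm(platform_accounts: int, user_age: int) -> bool:
    """True when the described rule would require algorithmic targeting off."""
    return platform_accounts >= PLATFORM_THRESHOLD and user_age < AGE_OF_MAJORITY

def liability(violations: int) -> int:
    """Civil penalty: $1,000 for each violation."""
    return violations * PENALTY_PER_VIOLATION
```

The sketch also makes NetChoice’s later objection concrete: to evaluate `user_age`, a platform would first have to collect or verify it.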
Tech-industry lobbyists say the bill would violate the First Amendment, prevent companies from recommending useful content, and require them to collect more data on the ages and locations of users.
“Too many kids are struggling”
Rep. Kristin Robbins (R-Maple Grove) sponsored the bill, saying that “too many kids are struggling, or worse, dying,” according to an article about the committee vote on the Minnesota House website. Robbins said she was moved to act by Wall Street Journal articles about TikTok’s algorithm steering minors toward videos about sex, drugs, and eating disorders. Robbins “believes [the bill] could be a model for the rest of the country,” the article said. Online platforms’ use of algorithms has also come under fire in Congress.
Tech industry lobbying group NetChoice told Minnesota lawmakers that the bill is “well-intentioned” but “undermines parental choice, removes the access to beneficial technologies from young people, and is a clear violation of the First Amendment.” NetChoice members include Facebook, Google, TikTok, Twitter, and other tech companies.
“Clear violation of the First Amendment”
Florida and Texas laws that regulate social media in other ways have been blocked by federal judges who found that the laws violate the companies’ First Amendment rights to moderate user-submitted content. Florida’s law would have made it illegal for large social media sites like Facebook and Twitter to ban politicians, while the Texas law tries to ban “censorship.” Tech-industry groups won preliminary injunctions blocking both of them.
Democrats have a majority in the Minnesota House while Republicans have a majority in the state Senate. Commerce Finance and Policy Committee Chair Rep. Zack Stephenson (D-Coon Rapids) “was unimpressed by the argument that algorithms shouldn’t be barred because they are a vehicle companies use to send healthy, age-appropriate content. He compared it to saying you can’t ban cigarettes because it would prevent young people from the benefits of cigarette filters,” the article on the House website said.
“These companies are doing immense damage to our communities, to our children,” he said. “I am very determined to take some action before it is too late.”
NetChoice argued that the First Amendment case against the bill is strong, writing:
In Sorrell v. IMS, the Supreme Court ruled that information is speech and that a Vermont law could not prohibit the creation and dissemination of information including the selling of data to a database. Even more relevant here, multiple court cases have held that the distribution of speech, including by algorithms such as those used by search engines, are protected by the First Amendment. This proposal would result in the government restraining the distribution of speech by platforms and Minnesotans access to information. Thus, HF 3724 will be deemed by courts as a violation of the First Amendment.
Bill defines social media broadly
NetChoice further argued that the bill’s impact would go well beyond sites like Facebook and YouTube. On the book review site GoodReads, “young people would be unable to receive recommendations done by algorithms that guide them to books based on their previous interests and reviews by similar readers,” the group said.
The bill’s broad definition of social media could also prevent newspapers from “recommend[ing] further related news stories by algorithm to a student doing research if comments are attached,” NetChoice said. The bill defines “social media platform” as “an electronic medium, including a browser-based or application-based interactive computer service, telephone network, or data network, that allows users to create, share, and view user-created content.”
The bill would force online platforms to “collect more information about users under 18” because “a company would have to know the age of the user and that they were located in Minnesota to then disable any algorithmic recommendations. In order to do so, information that might not otherwise be collected regarding age and location would be needed,” NetChoice said.
The Chamber of Progress, a lobby group for tech companies including the major social networks, also spoke out against the bill. Group CEO Adam Kovacevich said that “online platforms use algorithms to prioritize healthy content and deprioritize unhealthy content so the bill would make the situation worse for teenagers,” according to an article by The Center Square. He said YouTube Kids uses algorithms and manual curation to surface content appropriate for children, and that Twitter’s algorithms help users find relevant content.
Algorithms make online services useful
TechDirt’s Mike Masnick slammed the bill in an article titled, “Minnesota pushing bill that says websites can no longer be useful for teenagers.”
“I get that for computer illiterate people the word ‘algorithm’ is scary,” Masnick wrote. “And that there’s some ridiculous belief among people who don’t know any better that recommendation algorithms are like mind control, but the point of an algorithm is… to recommend content. That is, to make a social media (or other kind of service) useful. Without it, you just get an undifferentiated mass of content, and that’s not very useful.”
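Masnick’s point is essentially about ranking: without a recommendation algorithm, a feed is just everything in reverse-chronological order, with no notion of what a given user finds useful. A toy contrast (the posts and the scoring rule are entirely made up for illustration):

```python
# Toy contrast between an unranked (chronological) feed and a ranked one.
# Posts and the "relevance" heuristic are invented for illustration only.

posts = [
    {"id": 1, "topic": "cooking", "timestamp": 100},
    {"id": 2, "topic": "chess",   "timestamp": 200},
    {"id": 3, "topic": "cooking", "timestamp": 300},
]

def chronological_feed(posts):
    """No algorithm: newest first, regardless of user interest."""
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

def recommended_feed(posts, interests):
    """A crude recommendation: rank by interest match, then recency."""
    return sorted(posts,
                  key=lambda p: (p["topic"] in interests, p["timestamp"]),
                  reverse=True)

print([p["id"] for p in chronological_feed(posts)])           # [3, 2, 1]
print([p["id"] for p in recommended_feed(posts, {"chess"})])  # [2, 3, 1]
```

The chess post jumps to the top only in the ranked feed; the chronological feed treats every post identically, which is the “undifferentiated mass” Masnick describes.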
For many years, the American Academy of Pediatrics (AAP) and other health experts have warned about children’s vulnerability to screens as well as to wireless sources. Additionally, for many years, American tech insiders (aka “Silicon Valley Parents”) have taken considerable measures to limit their own kids’ use of and exposure to screens, including sending them to private low-tech schools. Nevertheless, the proposed and actual use of screens and wireless devices by American kids in public schools only seems to be increasing (see 1, 2, 3, 4, 5). It doesn’t make sense, does it?
Activist Post reports regularly about screens, social media, and unsafe technology. For more information, visit our archives and the following websites:
- Wireless Information Network
- Electromagnetic Radiation Safety
- Environmental Health Trust
- Physicians for Safe Technology