Another Day, Another Facebook Scandal — Company Data Mined New Users’ Email Contacts

By Aaron Kesel

Facebook has another scandal; this time it has been revealed that the company’s password verification feature for new users collected contact data from users’ email accounts without their consent.

Business Insider revealed that Facebook had “harvested the email contacts of 1.5 million users without their knowledge or consent when they opened their accounts.” A security researcher known as e-sushi questioned why Facebook was asking for email passwords when new users signed up for the platform. Business Insider then discovered that if you did enter your email password, a message popped up saying it was “importing” your contacts, without asking for permission first.

Facebook has since admitted to the practice, stating that it “unintentionally” uploaded the address books of 1.5 million users without consent, and further stated it will delete the collected data and reach out to those affected.

“Last month we stopped offering email password verification as an option for people verifying their account when signing up for Facebook for the first time. When we looked into the steps people were going through to verify their accounts we found that in some cases people’s email contacts were also unintentionally uploaded to Facebook when they created their account,” the spokesperson said in a statement.

Many may remember this as the platform’s “People You May Know” feature. While fools thought this was Facebook’s algorithm scanning their friends for potential people they might know, in reality Facebook was doing this by data mining the email contacts of users on its platform.

The issue was first noticed in early April when The Daily Beast reported that Facebook was requiring new users to enter their email passwords to verify their accounts. The feature was disguised as a convenience that let Facebook auto-verify new user accounts and, according to the social media giant, was only introduced in 2016.

Facebook states that this feature was accidental, but I find that hard to believe from a company with so many questionable ethical practices around its user data. Since my explosive deep dive into Facebook’s history of privacy violations, there have already been several new scandals.

[RELATED: Deep Dive: FTC Negotiating Multi-Billion Dollar Fine For Facebook’s Privacy Scandals; Violating 2011 Accord]

It’s now even been revealed that Facebook will “deboost” posts, something all of us knew was happening and which is more commonly referred to in the industry as “shadow banning.” However, the claim holds more water coming from a former employee, as Project Veritas reported last month, than from mere speculation, even speculation backed by statistical analysis of posts and accounts.

The information describes how Facebook engineers plan and go about policing political speech on the platform.

Screenshots from a Facebook workstation show the specific technical actions taken against political figures, as well as “[e]xisting strategies” taken to combat political speech.

Further, Facebook just recently updated a blog post from last month, apparently thinking no one would notice, NBC Wave News reported.

A Facebook Newsroom post, by security VP Pedro Canahuati, initially stated the company stored millions of its users’ passwords in plain text for years, left accessible to its employees.

Canahuati had said in March that the company would begin to notify hundreds of millions of Facebook Lite users, tens of millions of other Facebook users, and tens of thousands of users of Facebook-owned Instagram. That Instagram number has since expanded to millions.

“Since this post was published, we discovered additional logs of Instagram passwords being stored in a readable format,” said the company in Thursday’s blog update. “We now estimate that this issue impacted millions of Instagram users. We will be notifying these users as we did the others.”

Facebook is being accused by the Federal Trade Commission (FTC) of privacy violations and is in the midst of negotiating a multi-billion dollar fine that would settle the agency’s investigation into the social media giant’s privacy practices.

These latest revelations of data mining new users’ email contacts, storing passwords in clear text, and censorship come after a series of privacy scandals, such as Cambridge Analytica, that may have put the personal information of its users at risk, as well as the numerous times the company has been caught spying on its users or slipping up on its overall security.

The FTC’s probe of Facebook began in March of last year in response to big social’s entanglement with Cambridge Analytica, a political consultancy connected, through a U.S. shell-company subsidiary, to the UK defense contractor SCL Group (Strategic Communication Laboratories). Cambridge Analytica improperly accessed data on 87 million of the social site’s users and used it for campaign targeting on behalf of U.S. President Donald Trump through his former adviser Steve Bannon. According to reports, Facebook knew for an entire three years that Cambridge Analytica was abusing and misusing user data but did absolutely nothing.

[READ: The TRUTH About The Cambridge Analytica Scandal Is Bigger Than Just Facebook #MyDataMyChoice]

The FTC’s investigation centers on whether Facebook’s conduct, and its failure to protect users since then, breaches a 2011 accord that Facebook brokered with the FTC to improve its privacy practices. Facebook has stated it did not breach that accord, despite evidence to the contrary showing that the social media giant sold user data to third parties and may have even been recording users’ private messages with its Messenger app.

Although Facebook contends that it didn’t use the data, it openly admits that it scans all user data sent and received on the Messenger app and will review the text you send if something is flagged, as Activist Post reported last year.

Facebook was also hit by a bug in December of last year that gave app developers access to private user photos, including those shared on Marketplace or Facebook Stories and even unposted pictures, an absolute privacy nightmare. The Facebook blog states that “some third-party apps may have had access to a broader set of photos than usual for 12 days between September 13 to September 25, 2018.” However, who’s to say the bug wasn’t preexisting for quite some time and this disclosure is just to save face for the company?

Facebook was also caught giving tech giants access to user data for years, so the problem isn’t limited to Cambridge Analytica and numerous other analytics companies.

The New York Times reported a bombshell in December of last year detailing the secret relationships Facebook had with tech companies including Amazon, Microsoft, Spotify, and Yahoo, just to name a few. The Times report was backed by 50 former employees of the company and its partners, as well as documents on the deals.

The official corporate partnerships with Facebook totaled more than 150 companies, and The Times notes that the oldest deal dates back to 2010, one year prior to Facebook’s brokered deal with the FTC over its privacy practices. One has to wonder whether the social giant disclosed these types of deals to the FTC a year later when it was under scrutiny; more than likely not.

“For years, Facebook gave some of the world’s largest technology companies more intrusive access to users’ personal data than it has disclosed, effectively exempting those business partners from its usual privacy rules, according to internal records and interviews,” The Times wrote.

“The special arrangements are detailed in hundreds of pages of Facebook documents obtained by The New York Times. The records, generated in 2017 by the company’s internal system for tracking partnerships, provide the most complete picture yet of the social network’s data-sharing practices. They also underscore how personal data has become the most prized commodity of the digital age, traded on a vast scale by some of the most powerful companies in Silicon Valley and beyond,” The Times added.

The New York Times goes on to detail the level of access that a few companies were given to users’ profiles, and it’s quite shocking, including the ability to read and delete messages, as the Huffington Post highlighted.

Again, from The New York Times:

Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent, the records show, and gave Netflix and Spotify the ability to read Facebook users’ private messages.

The social network permitted Amazon to obtain users’ names and contact information through their friends, and it let Yahoo view streams of friends’ posts as recently as this summer, despite public statements that it had stopped that type of sharing years earlier.

Besides the 150 tech companies, Facebook gave some 60 device makers — including Apple, Amazon, BlackBerry, Microsoft, and Samsung — special access to Facebook data, according to another report by The Times. This special access allowed a reporter using an older BlackBerry device to view private details of Facebook users despite their privacy settings, a shocking finding.

To be clear, Facebook never asked for each specific user’s consent to send their personal data to these other companies. Facebook claims that it didn’t need user consent because it considered these companies “service providers” and “integration partners” acting in the interests of the social network. I am no lawyer, but it seems bluntly obvious that Facebook violated its 2011 agreement with the FTC.

In other words, the company used a loophole and said that users who logged into the aforementioned services were giving their consent to their partners.

It’s not known how far back the FTC is looking into Facebook’s privacy violations, but it may come as a surprise to readers that even before 2011 Facebook was embroiled in data scandal after data scandal. To refresh the reader’s memory: in 2010, Facebook got caught giving advertisers its users’ names, ages, hometowns, and occupations when they simply clicked an ad, Business Insider reported.

One year prior, in 2009, protests ensued against Facebook when the company decided to change its data retention policy for its users, ABC reported.

Two years before that, in 2007, Facebook faced another scandal with its forced Beacon advertising software. Beacon would share users’ online shopping activity and the websites a Facebook user was visiting, so long as they were logged in.

It turns out that Beacon was tracking people’s Web activities on other websites, outside the popular social networking site, PC World reported. Facebook was then sued in 2008 for violating the federal wiretap law when it began monitoring and publishing what Facebook users were doing on sites participating in Beacon.

Mark Zuckerberg himself even apologized for Beacon, explaining his thought process behind the system — of course, leaving out that it was profiting from this data. Nonetheless, Facebook finally allowed users to opt out of the system a month later.

“We were excited about Beacon because we believe a lot of information people want to share isn’t on Facebook, and if we found the right balance, Beacon would give people an easy and controlled way to share more of that information with their friends,” Zuckerberg wrote.

In 2009, Facebook finally shut down Beacon to settle the class action lawsuit against the social-networking site and donated $9.5 million to a foundation dedicated to exploring issues around online privacy and security, The Telegraph reported.

Facebook said in 2009 that it learned a great deal from Beacon, but then continued these types of deceptive marketing practices, selling its users’ data years later. So how much did the company actually learn? Not much: the company started partnering with data broker firms in 2013, after the 2011 FTC ordeal. For those unfamiliar, data brokers earn money by selling your consumer habits and monitoring your online and offline spending. Facebook’s partnerships allow it to measure the correlation between the ads you see on Facebook and the purchases you make in-store, and to determine whether you’re actually buying the things you’re seeing digitally while using Facebook.

“We learned a great deal from the Beacon experience,” said Barry Schnitt, a spokesman for Facebook. “For one, it underscored how critical it is to provide extensive user control over how information is shared. We also learned how to effectively communicate changes that we make to the user experience.”

There are two more stories, among the biggest, that Mark Zuckerberg surely wishes were not forever archived on the Internet. The first is from 2004, when he was 19. When Mark was in college he used his newly created website, then called “TheFacebook.com,” to hack into the email accounts of two Harvard Crimson journalists who were critical of him. The two journalists were allegedly working on a story claiming that Zuckerberg stole the idea for Facebook from Cameron Winklevoss, Tyler Winklevoss, and Divya Narendra, who later founded a similar site, ConnectU (HarvardConnection), Business Insider reported.

The second story is the most revealing and comes from a 2009 interview in which Zuckerberg is grilled by a BBC reporter, an exchange that NSA whistleblower Edward Snowden highlighted in a tweet last year. The interview took place two years after the Beacon advertising software launched on the platform in 2007, and one year before Facebook handed its advertisers its users’ names, ages, hometowns, and occupations when they simply clicked on ads, selling their information, something Zuckerberg told the BBC reporter he would never do.

One year later, in 2010, a story in The New York Times revealed that Mark Zuckerberg had flip-flopped on individual privacy, under the headline “Facebook’s Zuckerberg Says The Age of Privacy Is Over.”

Then the Zuck Zucked us by changing everyone’s default privacy settings.

In February of last year, a German court echoed earlier rulings, finding that Facebook breaches data protection rules with privacy settings that “over-share” by default and by requiring its users to give their real names.

The judges found that at least five different default privacy settings for Facebook were illegal, including sharing location data with its chat partners WhatsApp and Instagram or making user profiles available to external search engines, allowing anyone to search and find information on a person.

Facebook’s partners and subsidiaries collect data to enable what’s known as “hyper-targeted advertising” on its users.

Additionally, the court ruled that “eight paragraphs of Facebook’s terms of service were invalid.” One of the most significant of these requires people to use their real names on the social network, which the court deemed illegal.

In 2015, a study by the Belgian privacy commission concluded that Facebook’s use of user data violated privacy and data protection laws in the EU, The Guardian reported.

All of this occurred despite a 2014 court decision in which the judge ruled that Facebook must face a class action lawsuit accusing it of violating its users’ privacy by scanning the content of messages they send to other users for advertising purposes, something users were reminded of again last year.

A U.S. court in 2017 dismissed nationwide litigation accusing Facebook of tracking users’ Internet activity even after they logged out of the social media website. It was dismissed despite Facebook having previously admitted that it would start using data from users’ Web browsing history to serve targeted advertisements, along with data from the apps and websites users visited.

Although Facebook has had a plethora of scandals in its past, recently it feels like its downfall began with the emergence of a whistleblower, Christopher Wylie, thanks to the Cambridge Analytica scandal. Wylie appeared before a committee of British MPs, delivering bombshell testimony noting that Facebook has the ability to spy on all of its users in their homes and offices, something many people miss.

“There’s been various speculation about the fact that Facebook can, through the Facebook app on your smartphone, listen in to what people are talking about and discussing and using that to prioritize the advertising as well,” committee chairman Damian Collins said. “Other people would say, no, they don’t think it’s possible. It’s just that the Facebook system is just so good at predicting what you’re interested in that it can guess.”

“On a comment about using audio and processing audio, you can use it for, my understanding generally of how companies use it… not just Facebook, but generally other apps that pull audio, is for environmental context,” Wylie said. “So if, for example, you have a television playing versus if you’re in a busy place with a lot of people talking versus a work environment.” He clarified, “It’s not to say they’re listening to what you’re saying. It’s not natural language processing. That would be hard to scale. But to understand the environmental context of where you are to improve the contextual value of the ad itself” is possible.

Facebook itself stated in a 2016 blog post that:

Facebook does not use your phone’s microphone to inform ads or to change what you see in News Feed. Some recent articles have suggested that we must be listening to people’s conversations in order to show them relevant ads. This is not true. We show ads based on people’s interests and other profile information – not what you’re talking out loud about.

We only access your microphone if you have given our app permission and if you are actively using a specific feature that requires audio. This might include recording a video or using an optional feature we introduced two years ago to include music or other audio in your status updates.

Although Facebook claims it does not listen in on conversations, the catch here is that Facebook does have access to your phone’s microphone, since granting permission to access your microphone is a requirement to download the site’s mobile app, thus giving the company the ability to access your phone’s mic at any time.

The app itself can listen to audio and collect audio information from users. According to Facebook, the two aren’t combined and no audio data is stored or correlated with advertising, but after all these other lies, one has to wonder.

Facebook admits it has a public feature, started in 2014, that will try to recognize any audio in the background, like music or TV. However, it only works while you’re entering a status update, and only if you’ve opted in. So don’t worry, they have required consent for everything else!

Forbes has also reported on the potential that Facebook was using its users’ audio information to target them with ads.

This is not the first time Facebook has been accused of listening to conversations using smartphone microphones. Reddit user NewHoustonian started a discussion last year about whether the Facebook app was listening to conversations for advertising purposes. NewHoustonian opened the discussion with a post — which has since been removed — about how he suspected the Facebook app was listening to him because he started seeing pest control ads after talking to his girlfriend about killing a cockroach. That Reddit thread now has over 1,700 comments regarding Facebook listening to conversations, and several of those comments describe similar experiences.

Kelli Burns, a mass communication professor at the University of South Florida, believes Facebook may be using the audio it gathers not simply to help users, but to listen in on discussions and serve them relevant advertising. Burns ran an experiment, talking about cat food with her phone out and then loading Facebook; to her surprise, she saw cat food ads, The Independent reported.

Last year, Vice reported another bizarre story about Facebook using a phone’s microphone to listen in on users. The author wrote that they were talking about Japan with a friend, then subsequently received ads for flights to Tokyo.

A couple years ago, something strange happened. A friend and I were sitting at a bar, iPhones in pockets, discussing our recent trips in Japan and how we’d like to go back. The very next day, we both received pop-up ads on Facebook about cheap return flights to Tokyo. It seemed like just a spooky coincidence, but then everyone seems to have a story about their smartphone listening to them.

The author then decides to do a series of tests, to see if they are being spied on.

Twice a day for five days, I tried saying a bunch of phrases that could theoretically be used as triggers. Phrases like I’m thinking about going back to uni and I need some cheap shirts for work. Then I carefully monitored the sponsored posts on Facebook for any changes.

To the writer’s horror, results came back overnight.

The changes came literally overnight. Suddenly I was being told mid-semester courses at various universities, and how certain brands were offering cheap clothing. A private conversation with a friend about how I’d run out of data led to an ad about cheap 20 GB data plans. And although they were all good deals, the whole thing was eye-opening and utterly terrifying.

The scandals still didn’t end for Facebook before the year was out. In December, 250 pages of emails and documents released by the British Parliament as part of its own investigation showed conversations between Facebook and an app developer called Six4Three, which developed Pikinis, an app that allowed people to find Facebook users’ bathing suit photos. The emails and documents had been ordered sealed by a California court until the UK lawmakers got hold of them; it’s highly likely that during the discovery phase of the lawsuit, emails and documents unrelated to Pikinis were scooped up as well. The emails purported to show the social giant offering major advertisers special access to user data, deals many people view as a contradiction of Facebook’s promise not to sell user data, according to Wired. Facebook, however, has stated the emails lack context and maintains that it never sold its user data, despite past scandals showing that it did.

Of course, this is also coming from a company that ran psychological experiments on at least 700,000 of its users in 2012. The experiment was revealed in a scientific paper published in the Proceedings of the National Academy of Sciences. The test hid “a small percentage” of emotional words from people’s news feeds, without their knowledge, to see what effect that had on the statuses they then posted and the “likes” and reactions they gave.

“This was part of ongoing research companies do to test different products, and that was what it was; it was poorly communicated,” said Sheryl Sandberg, Facebook’s chief operating officer, while in New Delhi. “And for that communication we apologize. We never meant to upset you.”

Facebook has even, in the past, studied messages that its users typed but decided not to post for whatever reason. The study, by a Facebook data analyst, looked at the habits of 3.9 million English-speaking Facebook users to analyze how different users “self-censor” on Facebook, measuring the frequency of deleted messages and status posts. Facebook stated that it studied this because it “loses value from the lack of content generation.”

Hilariously enough, Facebook’s first president, Sean Parker, accused the social media giant of exploiting human “vulnerability” using psychology in 2017, when he revealed the company’s secrets.

Besides being a social validation feedback loop, Facebook has demonstrated itself to be an echo chamber by labeling people under political labels, as The New York Times reported.

The big social giant also hired a full roster of liberal, left-leaning fact checkers and has begun limiting the reach of sites like Activist Post, labeling alternative media, opinions, and editorials as “fake news.” And now it is set to target anti-vaccine information.

In fact, other former Facebook employees have confessed to the abhorrent censorship of conservative news and views. The nail in the coffin was actually placed in 2015 when Facebook admitted that they were censoring posts and comments about political corruption and content that some countries like Turkey and China don’t feel is appropriate for their citizens. Facebook is not new to censorship, and this will likely continue.

If all of that’s not enough, it was also revealed last year that Facebook has filed several patents with the U.S. Patent and Trademark Office for technology intended to predict your location by using your historical location data — and others’ — to determine where you will go next or when you will be offline, in order to feed you FB content, Activist Post reported.
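
As a rough illustration of how little data such a prediction needs, here is a minimal sketch in Python. It assumes nothing about Facebook’s actual patented method; the place names and check-in history are made up for illustration, and the model is just a simple transition count over past visits.

    from collections import defaultdict, Counter

    # Illustrative sketch only -- not Facebook's patented method.
    # It shows how a bare list of past check-ins is enough to guess
    # where someone is likely to go next.

    def build_transitions(visits):
        """visits: chronological list of place labels, e.g. ["home", "work", ...]"""
        transitions = defaultdict(Counter)
        for current, nxt in zip(visits, visits[1:]):
            transitions[current][nxt] += 1
        return transitions

    def predict_next(transitions, current_place):
        """Return the most frequently observed next place, or None if unseen."""
        if current_place not in transitions:
            return None
        return transitions[current_place].most_common(1)[0][0]

    # Hypothetical check-in history:
    history = ["home", "work", "gym", "home", "work", "cafe", "home", "work", "gym"]
    model = build_transitions(history)
    print(predict_next(model, "work"))  # prints "gym", the most common next stop

Even this toy model becomes predictive quickly; per the reporting above, the actual filings go further by folding in other users’ location histories and timing to guess when you will be offline.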

Turns out that the recently arrested WikiLeaks founder Julian Assange was right: Facebook is “the most appalling spy machine that has ever been invented.” Or, as a CBS report written in 2011 stated, “Social Media Is a Tool of the CIA. Seriously.”

It’s not entirely your fault (although you’re responsible for your own data online) — in 2012, Carnegie Mellon researchers determined that it would take the average American 76 work days to read all the privacy policies they agreed to, so who knows how long it would take the average person to read all the privacy policies they agree to daily now.

That brings us back to the present day, with the FTC’s looming decision hanging over the social giant’s head, which could see it pay a record multi-billion dollar fine. Facebook’s 2011 consent decree says the company could be fined as much as $16,000 per day for “each violation,” or up to $40,000 per affected user, if it is found to have broken its 20-year agreement with the FTC. The truth is that no one knows how the FTC will calculate the fine, but it’s reported to be historic.
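
No one outside the agency knows which of those formulas, if any, the FTC will apply, but a rough back-of-envelope sketch shows why the estimates run so high. The Python below is purely illustrative: the days-in-violation figure is a hypothetical placeholder, and the 87 million figure is the Cambridge Analytica number cited earlier, not an FTC finding.

    # Back-of-envelope only; the FTC's actual fine methodology is unknown.
    PER_DAY_PER_VIOLATION = 16_000   # consent decree cap: $16,000 per violation per day
    PER_AFFECTED_USER = 40_000       # cited cap of roughly $40,000 per affected user

    days_in_violation = 365          # hypothetical: one continuous violation for a year
    affected_users = 87_000_000      # Cambridge Analytica figure cited earlier

    print(f"Per-day theory:  ${PER_DAY_PER_VIOLATION * days_in_violation:,}")  # $5,840,000
    print(f"Per-user theory: ${PER_AFFECTED_USER * affected_users:,}")         # $3,480,000,000,000

Depending on whether violations are counted per day, per user, or per incident, the theoretical exposure swings from millions to trillions of dollars, which is exactly why published estimates settle for the vaguer “multi-billion.”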

Although it could be argued this is a way for the government to begin to regulate social media, it’s indisputable that Facebook has exploited its user base for profit for several years, as this article documents. And not only its users: it came out last year that Facebook also allegedly lied to its advertisers about how well its video ads were performing and inflated its user count for several years. Plaintiffs argue that Facebook knew about the miscalculated metrics as far back as January 2015, Bloomberg reported.

Facebook says the allegations are “without merit” and that it told its advertisers about the issue, WSJ reported.

“We told our customers about the error when we discovered it—and updated our help center to explain the issue,” a spokeswoman said.

In this writer’s opinion, there is only one way to go from here for Facebook and its stock — and that’s plunging down to the concrete pavement.

It seems like Internet users for some reason forget about these massive scandals and just continue using Facebook. I would say it’s high time to “#DeleteFacebook” and join a number of growing alternative social media networks like SoMee.Social, Gab.ai, Minds.com, and Steemit.com, where it’s even possible for you, the reader and content producer, to get paid for your comments and contributions to the platforms thanks to cryptocurrency.

Even a pre-existing option, Twitter, is better than Facebook. Jack Dorsey’s platform may have hypersensitive admins, but at least there haven’t been as many privacy violations as at Facebook. There have been some, but not nearly as many. On the bright side, Dorsey doesn’t seem to have a patent to spy on your current location, keep track of your location data, and predict where you are going next — Facebook does.

Let us move forward into a future of networks that don’t run in tandem with the U.S. government and other governments, and that are not fueled by greed and the sale of harvested user data; instead, completely decentralized, people-powered, incentive-based networks for sharing the data you choose to share, where creators are rewarded rather than snubbed for the value they bring to these social platforms.

Keep all these privacy violations in mind when questioning whether Facebook intentionally or unintentionally data mined email contact data while asking for new users’ email passwords (a practice that is not only sketchy as hell but a huge privacy concern).

Aaron Kesel writes for Activist Post. Support us at Patreon. Follow us on Minds, Steemit, SoMee, BitChute, Facebook and Twitter. Ready for solutions? Subscribe to our premium newsletter Counter Markets.

Image credit: Anthony Freda Art



