Smartworld – Identity Ecosystem – Part 2: Identity Profiling

Julie Beal, Contributor
Activist Post

Even as the corporate giants position themselves for control of our identities, the minute details of our lives are already being scrutinised and analysed and completely and utterly bastardised. They can never truly know who we are, for we are forever changing, and self-understanding is a lifelong process.

Inside knowledge has always been sought by the power-hungry; it’s an age-old thing. Knowledge is power. So our profiles are built, and updated, and said to constitute who or what we are. Our personal data is used to feed super-brain computers in an attempt to control and enslave and profit from us.

They think they have our identities. But what can they do with what they’ve got? Well, we all know that companies are collecting information about our habits and selling it to advertisers. This data forms a crude sketch of our identity (I am more than my ID profile!), as it is often erroneous and, above all, incomplete. However, surveillance techniques are becoming more and more sophisticated and widespread, as are the methods being employed to analyse and share the data being compiled. These techniques constitute the profiling of our identities on a global scale: by government and law enforcement, businesses, and academics.

Data-mining

Every time you go on the Internet you are creating a trail of data – crumbs of personal information which combine to become highly valuable. Each crumb is worth about two-fifths of a cent, and is collected and sold to advertisers. Information can be gathered in a number of ways: cookies, for instance, can track your every move as you click through webpages, and apps can read your contact list and even your location.
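To make the mechanism concrete, here is a minimal sketch (the events and field names are invented for illustration) of how a tracker keyed to a cookie ID can stitch visits to unrelated sites into a single profile:

```python
from collections import defaultdict

# Each event is what a third-party tracker might see when its cookie
# rides along on a page request: a stable cookie ID plus page context.
events = [
    {"cookie_id": "abc123", "site": "news.example", "topic": "politics"},
    {"cookie_id": "abc123", "site": "shop.example", "topic": "running shoes"},
    {"cookie_id": "abc123", "site": "forum.example", "topic": "diabetes"},
]

profiles = defaultdict(lambda: defaultdict(int))
for e in events:
    # The cookie ID is the glue: it links visits to unrelated sites
    # into one accumulating interest profile.
    profiles[e["cookie_id"]][e["topic"]] += 1

print(dict(profiles["abc123"]))
# {'politics': 1, 'running shoes': 1, 'diabetes': 1}
```

Each crumb is trivial on its own; it is the join across sites, over months, that makes the profile valuable.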

Other personal data, such as credit card transaction details, also offers insight into a person’s identity. This personal data is being bought and sold for marketing purposes, and by anyone else who wants to try to predict people’s future behaviour – many businesses use profiling on current and potential employees.

Even insurance companies are mining our personal data in order to “analyze risk”. Kevin Pledge, the boss of Insight Decision Solutions, an underwriting-technology consultancy based near Toronto, is developing a system that will analyse social media profiles to obtain a fuller picture of clients. He predicts that eventually insurance firms will analyse grocery purchases for further identity data – someone who smokes or eats junk food would have a higher premium to pay.
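Pledge’s prediction is easy to imagine in code. A toy sketch of premium loading from purchase data – the categories and weights below are invented purely for illustration:

```python
# Hypothetical risk weights per purchase category (illustrative only).
RISK_WEIGHTS = {"tobacco": 0.40, "junk_food": 0.15, "gym_membership": -0.10}

def premium_multiplier(purchases):
    """Scale a base premium by 'risk' inferred from shopping habits."""
    loading = sum(RISK_WEIGHTS.get(item, 0.0) for item in purchases)
    return max(1.0 + loading, 0.5)  # never discount below half the base

print(premium_multiplier(["tobacco", "junk_food"]))  # 1.55
print(premium_multiplier(["gym_membership"]))        # 0.9
```

The unsettling part is not the arithmetic, which is trivial, but the data feed behind it.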

Joel Stein, a journalist writing for Time, asked several data-mining companies to tell him what profile they had recorded for him. He found that not only were many of the details incorrect, but that some of them listed him as Jewish – a company called Intellidyn actually described him as a “highly assimilated” Jew. Noting the lack of regulation in the data-mining industry, Stein quotes Senator John Kerry, who had argued,

There’s no code of conduct. There’s no standard. There’s nothing that safeguards privacy and establishes rules of the road.

A multitude of businesses are able to offer low-cost or ‘free’ services to internet users because they profit from the data they gather, which can be used to target revenue-earning advertising and services at those same users:

This information may be about purchasing preferences, hobbies, geographical location, friends and family, political affiliation, entertainment interests and so on. In short, any and all components of a biographical identity may be discernible.

CIA Director David Petraeus, at a summit for In-Q-Tel, the CIA’s venture capital firm, praised the Internet of Things and quipped that the abundance of household spying devices now available will change “our notions of identity and secrecy.”

In recent years there has been a discernible surge in tailored advertising on the Internet, but Stein doesn’t feel threatened by it, reasoning that it’s just an impersonal algorithm. He does, however, admit to feeling a little ‘creeped out’ by this:

Right after I e-mailed a friend in Texas that I might be coming to town, a suggestion for a restaurant in Houston popped up as a one-line all-text ad above my Gmail inbox.

A huge amount of information can be gleaned from a mobile phone alone – contextual and behavioural data, for instance. This data can be enriched by including profiles of an individual’s associates – especially when contact with these people is regular.

A study by MIT analysed mobile phone use within a cohesive group (the students at MIT), enabling the researchers to understand relationships within the group from factors such as proximity to others and patterns of phone use.

The data can even be used to predict future movements.
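Movement prediction of this kind can be remarkably simple. A minimal sketch (the location pings are invented) using a first-order Markov model: count where a person usually goes next from each place, then predict the most common transition:

```python
from collections import Counter, defaultdict

# A week of hypothetical location pings for one phone.
trail = ["home", "work", "gym", "home", "work", "gym", "home", "work"]

# Count transitions: from each place, where does this person go next?
transitions = defaultdict(Counter)
for here, there in zip(trail, trail[1:]):
    transitions[here][there] += 1

def predict_next(location):
    """Most likely next location, given the current one."""
    return transitions[location].most_common(1)[0][0]

print(predict_next("work"))  # 'gym' – the habitual pattern
```

Human routines are regular enough that even this crude model is often right; richer models just add time of day, day of week, and the movements of associates.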

In 2009, DARPA ran an experiment called the Network Challenge, which involved releasing ten red weather balloons (8 ft across) and offering a $40,000 prize to the first group to find and report the locations of all the balloons.

The Human Dynamics Lab at MIT decided to offer a sophisticated incentive structure for their group, to encourage them to recruit more people. This was highly successful: in the two days leading up to the challenge, they managed to recruit another 100,000 people to help the team.

Hundreds of these recruits had sensors placed around their necks and tracking software installed on their mobile phones, which recorded all their movements, who they interacted with, the particular way they moved their bodies, and properties of their voices. From this huge amount of data, the researchers were able to figure out who the real manager of the group was, which member of the group was the most productive, and even which person tended to dominate conversations.
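Inferring roles from this kind of data is largely a matter of counting. A minimal sketch (the interaction logs are made up) of how one might flag the person who dominates conversations:

```python
from collections import Counter

# Hypothetical sensor log: (speaker, seconds_spoken) per meeting segment.
segments = [("alice", 120), ("bob", 30), ("alice", 95),
            ("carol", 40), ("alice", 110), ("bob", 25)]

talk_time = Counter()
for speaker, seconds in segments:
    talk_time[speaker] += seconds

# Whoever holds the floor longest 'dominates conversations'.
dominant, seconds = talk_time.most_common(1)[0]
print(dominant, seconds)  # alice 325
```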

Universities are now training students in the art of data mining analytics.

Learning itself is even being analysed, and profiling is suggested as a way to improve the learning experience. This has already been implemented in Arizona, where students are continuously monitored as they learn: personal data, including Facebook activity, is tracked and analysed, and a computer algorithm decides which courses they should take and how their learning should be structured.

On Phorm

The most intrusive data mining company seems to be Phorm, which has been widely rebuked for its collection of personal information on a huge scale. The company had to retreat from a scandal over three of the UK’s largest ISPs (Virgin Media, BT and TalkTalk) selling people’s private browsing histories. A website campaigning against ‘deep packet inspection’ has documented the company’s abuse of the Open Identity Exchange:

Phorm’s latest method of involuntary eavesdropping and surveillance involves intercepting and hijacking image requests, and redirecting them to Phorm’s OIX domain. Once redirected to the OIX domain, Phorm can then read the unique OIX user ID cookie to identify the subscriber. At this point Phorm have access to the subscriber’s IP address, OIX UID cookies, date/time of access, referring page content (& cookies) … giving them enough information to profile the subscriber. (Dec 2011)
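The mechanism described above is ordinary web plumbing. A bare sketch (using Python’s Flask; the cookie name follows the OIX UID mentioned in the quote, but the endpoint and fields are hypothetical) of a redirect-then-read-cookie endpoint of the kind the campaigners describe:

```python
# Sketch of the redirect-then-read-cookie pattern described above.
from flask import Flask, request

app = Flask(__name__)

@app.route("/pixel.gif")
def pixel():
    # The browser was redirected here, so it sends any cookie previously
    # set for this domain – enough to tie the request to a known profile.
    uid = request.cookies.get("OIX_UID", "unknown")
    referer = request.headers.get("Referer", "")
    print(f"profile hit: uid={uid} page={referer} ip={request.remote_addr}")
    return ("", 204)  # serve nothing; the logging was the point
```

The subscriber sees a broken or invisible image; the tracker sees a timestamped, IP-tagged record of the page they were reading.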

Pre-crime detection

Data obtained from consumer behaviour analysis can be put to use by a variety of other groups, such as academics and researchers, but the most troubling use of the data comes from those who are meant to protect us: the profiling of individuals to predict crime has become highly sophisticated.

Jesus Mena, in his book Investigative Data Mining for Security and Criminal Detection, notes:

The same data mining technologies that have been used by marketers to provide ‘personalization’ – which is the exact placement of the right offer to the right person at the right time – can be used for providing the right inquiry to the right perpetrator at the right time: before they commit the crime. Investigative data mining is the visualization, organization, sorting, clustering, segmenting and prediction of criminal behavior using data attributes such as age, previous arrests, modus operandi, type of building, household income, time of day, geo code, countries visited, housing type, auto make, length of residency, type of license, utility usage, IP address, VISA type, number of children, place of birth, average usage of ATM card, number of credit cards, etc. – the data points can run into the hundreds. Pre-crime is the interactive process of predicting criminal behavior by mining this vast array of data using several artificial intelligence technologies, including:

Link Analysis – for creating graphical networks to view criminal associations and interactions.

Intelligent Agents – for retrieving, monitoring, organizing and acting on case related information.

Text Mining – for searching through gigabytes of documents in search of concepts and key words.

Neural Networks – for recognizing the patterns of criminal behavior and anticipating criminal activity.

Machine Learning Algorithms – for extracting rules and graphical maps of criminal behavior and perpetrator profiles.
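The first item on Mena’s list, link analysis, is little more than graph-building. A minimal sketch (the names and records are invented) using Python’s networkx library:

```python
import networkx as nx

# Build an association graph from hypothetical co-occurrence records:
# shared addresses, phone calls, joint arrests, and so on.
G = nx.Graph()
G.add_edges_from([
    ("suspect_a", "suspect_b"),  # called each other
    ("suspect_b", "suspect_c"),  # shared an address
    ("suspect_c", "fence_x"),    # joint arrest record
])

# Link analysis questions become graph queries: how is A tied to X?
print(nx.shortest_path(G, "suspect_a", "fence_x"))
# ['suspect_a', 'suspect_b', 'suspect_c', 'fence_x']
```

Once the data attributes Mena lists are loaded as nodes and edges, ‘associations and interactions’ are just paths and clusters in the graph.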

So there are computers that can ‘think’ like humans, ‘understanding’ natural language and associations, because they are modelled on the human brain. They can cope with huge amounts of data, which is constantly fed in so that the program can perform analytics ‘in real-time’. They can also ‘learn’, and thus update their ‘behaviour’ or decisions. These neural network models use powerful ‘fuzzy logic’ and genetic algorithms which can “rapid tune the model based on historical data as well as adaptive feedback from the model itself.”

This is also known as complexity modeling, as used by the DOD. One of the reasons for this development is the sheer volume of data now available: there has been such a huge explosion of information that the amount of data being generated can only be effectively analysed (or ‘managed’) by a computer, and control of this data is dominated by corporate giants, especially IBM.

90% of data in the world today has been created in the last 2 years alone!

Mankind barely noticed when the concept of massively organized information quietly emerged to become a means of social control, a weapon of war, and a roadmap for group destruction … Hitler and his hatred of the Jews was the ironic driving force behind this intellectual turning point. But his quest was greatly enhanced and energized by the ingenuity and craving for profit of a single American company and its legendary, autocratic chairman. That company was International Business Machines, and its chairman was Thomas J. Watson. (Edwin Black, IBM and the Holocaust, 2001)

IBM is still in the business of information control and identity classification, with the aid of a supercomputer called Watson, which performs predictive analytics on a global scale in order to ‘improve’ financial markets and healthcare systems, and to help police predict crime. It uses natural language analytics to make complex decisions.

The video below was posted by IBM and boasts of the reduction in crime rates since police in the US began using IBM’s predictive analytics. This technology is the most significant development when it comes to surveillance: you could have a hundred cameras watching your every move, but without someone to watch and interpret the footage, the data is useless. It’s like having the perfect evidence but no way to find it. IBM has solved this problem by sorting and organising the data so that it can be understood, enabling the software to analyse all incoming data and make predictions.

IBM is a key player in the global identity ecosystem, and is evidence that identity management is about much more than verification: analytics intrudes into our lives, feeding supercomputers with our most intimate details. The company even has identity management software which assesses details of an individual’s relationships to give a fuller picture of their identity, and to facilitate identity analytics.

Surveillance efforts have been ramped up in the greedy scramble for data – the success of AI analytics increases as more data is fed into the computer for it to learn from and, well, develop. The more it learns, the better it gets. Not only that, but data has become so big that only a computer can handle it. The system is modeled on human intelligence: it needs a good all-round view to really get the picture. Even variables such as weather patterns, and ‘sentiment data’ culled from Twitter feeds, are factored into analyses, along with any other correlated information.
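Folding ‘sentiment data’ into an analysis can be as crude as keyword counting. A toy sketch (the word lists and tweets are invented) of the kind of signal that might be fed in as one more variable:

```python
# Toy lexicon-based sentiment scoring, as might feed a larger model.
POSITIVE = {"calm", "happy", "safe"}
NEGATIVE = {"angry", "riot", "afraid"}

def sentiment(text):
    """Positive minus negative keyword hits: a crude mood score."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

tweets = ["crowd seems angry tonight", "happy and safe downtown"]
scores = [sentiment(t) for t in tweets]
print(scores)       # [-1, 2]
print(sum(scores))  # net 'mood' of the feed: 1
```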

Trapwire is currently creating a real stir, with reports claiming the surveillance program uses techniques more accurate than facial recognition technology. It has been in use for several years now (e.g. in 2005 and 2009), along with a variety of other data-gathering and sharing systems, many of which will now be running far more advanced software – especially given the capabilities of IBM’s suite of i2 products, which combine “rich analytical and visualization capabilities with real-time information sharing, providing an end-to-end solution that assists analysts and intelligence teams through all phases of the intelligence lifecycle.”

IBM is helping police to predict crime in a number of ways, from sharing its analytics tools with other software developers to tailored programs like CRUSH (Crime Reduction Using Statistical History; the program began in 2010 under the name ‘Blue CRUSH’) and COPLINK, one of the biggest data-sharing programs in the world, which is able to “seamlessly” aggregate surveillance data from cameras, individuals, and police departments, and then perform deep analytics capable of “revealing relationships up to eight levels deep among people, places and things.” It enables police officers on the beat to search for an individual’s profile using only the minimal information observable during an ‘incident’, helping them to “generate more actionable leads”.
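“Relationships up to eight levels deep” describes nothing more exotic than a bounded graph traversal. A minimal sketch (the entities and links are hypothetical) of how such a query works:

```python
from collections import deque

# Hypothetical links between entities: people, phones, addresses, cars.
links = {
    "person_a": ["phone_1", "address_9"],
    "phone_1":  ["person_b"],
    "person_b": ["car_7"],
    "address_9": ["person_c"],
}

def related(start, max_depth=8):
    """Breadth-first search out to max_depth hops from a starting entity."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue
        for nxt in links.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return seen - {start}

print(related("person_a"))  # everything reachable within eight hops
```

Eight hops across a database of millions of records is how an ‘incident’ involving one person pulls in their friends, their friends’ cars, and the addresses of people they have never met.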

Police departments can use IBM’s Analyst’s Notebook and iBase together with COPLINK, which “further enriches the capacity to not only generate leads in solving crimes, but also use existing data in making non-obvious connections between people, places and other entities, including mobile devices, phone records and vehicles.” COPLINK can also be used in conjunction with IBM’s Intelligent Operations Center for Smarter Cities, which aims to streamline local administration of services using complex modeling.

ECM Universe’s Rapid Content Analytics (RCA) for Law Enforcement was officially launched this year at GovSec – the Government Security Conference & Expo. It will provide content analytics, eDiscovery and document management solutions on the IBM FileNet platform – previously available only to ‘elite agencies’ – to all federal, community and campus police agencies. IBM’s Natural Language Processing Engine generates ‘threat alerts’; the data can be used to predict crime, and to find statements which can be used as evidence in an investigation. The software can gather almost all the data it needs – name, address, date of birth, etc. – and combine it with social media data, although the company claims not to be able to access geolocation information unless it is shared through social media. In the future, the software will include criminal psychological profiles to aid in surveillance.

RCA for Law Enforcement can do data mining on any textual data source; for example, storage media obtained from mobile phones, iPads, notebooks and other storage devices.

Various analytic views allow investigators to rapidly identify patterns and relationships, and to perform investigative discovery on large amounts of data.

Scott Raimis of ECM described the use of this technology as a ‘just in case’ approach: “Think of it just as a street surveillance camera gathering evidence in the event that it’s needed.”

AISight® (pronounced ‘eyesight’) connects CCTV to an artificial neural network to enable advanced analytics aimed at countering crime. Its maker, Behavioral Recognition Systems, Inc. (BRS Labs), invented what it calls the world’s first reason-based video surveillance behavior recognition software – a computer program which can learn and reason much like humans do. A security guard studying CCTV footage will pick up on ‘unusual behaviour’ (an anomaly) and take action, in the hope of preventing a crime taking place; failing that, the guard might at least ensure evidence of the crime is available. BRS Labs’ artificial intelligence technology, which serves as the foundation for its AISight® 3.0 video surveillance platform, takes the place of the security guard, because it can adapt and learn behaviour patterns in complex environments and alert the relevant agents. According to the Daily Mail,

In its latest project BRS Labs is to install its devices on the transport system in San Francisco, which includes buses, trams and subways. The company says it will put them in 12 stations with up to 22 cameras in each, bringing the total number to 288. The cameras will be able to track up to 150 people at a time in real time and will gradually build up a ‘memory’ of suspicious behaviour to work out what is suspicious.

Other monitoring systems use cameras which each have to be programmed individually for a designated task, such as what kind of anomalies to look out for, but they often raise false alarms. AISight® is unique because its software learns for itself what constitutes an anomaly – it does not need to be told. After watching for a while, it ‘knows’. It stores what it learns as memory: it can, for example, recognise a person it has seen before by their body mass and gait, and understand that they are not a threat. The cameras can also be programmed to react to specific individuals or events.
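The ‘learn what is normal, flag everything else’ approach is the opposite of rule-writing, and simple to sketch. A drastically simplified illustration (the feature values are invented) of per-camera baselining:

```python
import statistics

# Hypothetical per-frame feature from one camera, e.g. amount of motion
# in the scene, accumulated during a 'learning' period.
normal_observations = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1]

mean = statistics.mean(normal_observations)
stdev = statistics.stdev(normal_observations)

def is_anomaly(value, k=3.0):
    """Flag anything more than k standard deviations from learned normal."""
    return abs(value - mean) > k * stdev

print(is_anomaly(4.0))  # False – an ordinary scene
print(is_anomaly(9.5))  # True  – something the camera has never 'seen'
```

Real systems model far richer features, but the principle is the same: deviation from a learned baseline, rather than a hand-written rule, triggers the alert.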

BRS Labs has partnered with numerous security and ICT firms (IBM, SAIC, UNISYS and Raytheon among them) and won an award for AISight® at London’s Counter Terror Expo in May. The danger is that AISight could be used in combination with ‘identity screening’, whereby scanners detecting ID chips, personal devices and biometrics could supplement the learning of the AISight program. BRS Labs claims AISight is already being used by law enforcement agencies, and by the military (including successful deployment in a “challenging beachfront setting”).

The most disconcerting element of this capability is what it enables police to do with the information they gain – the video produced by BRS Labs to promote AISight points out that just “a fraction of 1%” of the world’s security forces actually monitor their cameras: the video on their home page affirms, “there are too many to watch, and the human attention span is too short to be reliable. Video analytics was created years ago to address this problem but has fallen short of expectations and needs…”. So basically, for all who thought we were being watched, it’s only now that the ability for Big Bro to ‘see’ us has finally come to fruition.

The system empowers every single camera in the network with a truly staggering power. Talk about artificial intelligence and the all-seeing eye … each camera is able to learn and develop, and to understand what it is seeing. Like an ant colony, the cameras join together to form a formidable network of watchers.

The video goes on to describe how the systems currently in place are time-consuming and inaccurate because they use “rule-based logic”, meaning that a rule must be created for each action or object that the user wants to catch: no rule, no alarm. This rule-based technique has created three very severe problems, which until now have never been addressed…

  1. The cost of installation – a user, engineer, or computer programmer must define and set up each of these rules for each individual camera. This can take hours to days per camera.
  2. Maintenance costs – you cannot move the camera or significantly change the layout of the camera view. If the camera moves, even slightly, or the objects and background the camera is focused on change, the installation process must be repeated for the affected cameras.
  3. (The most important) real-world situations cause lots of false and missed alarms. Small things such as a shadow moving over a tripwire, the glare of a sunbeam, or a headlight from an automobile can cause these systems to generate hundreds of false alarms per day, so real alarms are missed because operators are busy with false ones.

Thus, AISight differs radically from current surveillance camera networks, which simply supply footage; its ability to reason means that AISight can boast of delivering “actionable intelligence”. It can transform an organisation’s existing camera network into a many-eyed, all-knowing beast.

The Transportation Security Administration (TSA) has recently expanded the Pre✓™ program, which screens the identity of passengers who opt in to the service, and is now available at several airports across America, with more to come. This might appeal to the public, who have been in uproar at the number of patdowns they have had to endure (even children), and who object to going through body scanners: by screening the identity of travelers, the TSA can cut down on these checks substantially, targeting only those who are flagged by the system. Eventually, when most people have enrolled as Trusted Travelers, the people waiting in the queue to be scanned will automatically be viewed with suspicion.

So, just how much do they know? In an interview with Doug Wyllie, PoliceOne Editor in Chief, IBM’s Director of Public Safety, Mark Cleverly, insists that police forces are only able to predict ‘types’ of crime, and the time and place they will probably occur. Wyllie questions Cleverly on the future of the technology, wondering how far it will be scaled for fighting crime:

Taking Cleverly’s statement about possible interest from America’s intelligence community one step further, a Watson-like supercomputer can easily be used by state, local, and federal law enforcement entities to analyze ‘open source’ information such as social networking sites and other areas of the Internet where known offenders tend to share details of their wrongdoing.

At the very least, you could apply a Watson-like solution to triage non-emergency calls to the dispatch center — essentially creating an automated, intelligent, sorting mechanism based on a series of data points that are collectable, measurable, and understandable and taken in sum point to specific situations or scenarios.

‘Let me make the caveat that this is all very early thinking — there’s nothing out there now that does that — but there’s enough capability in the Watson-like arena that you can see some of these things being possible,’ Cleverly said.
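Even at this ‘early thinking’ stage, the triage idea Cleverly describes is easy to caricature in code. A toy sketch (the keywords and priority levels are invented) of automated sorting of non-emergency calls:

```python
# Toy keyword-driven triage for non-emergency call transcripts.
PRIORITIES = [
    ("noise complaint", 3),
    ("abandoned vehicle", 3),
    ("suspicious person", 2),
    ("break-in reported earlier", 1),
]

def triage(transcript):
    """Return the highest-priority (lowest number) matching category."""
    matches = [(p, kw) for kw, p in PRIORITIES if kw in transcript.lower()]
    return min(matches) if matches else (4, "unclassified")

print(triage("Caller says there was a break-in reported earlier today"))
# (1, 'break-in reported earlier')
```

A ‘Watson-like’ system would replace the keyword list with natural language analytics, but the sorting mechanism – data points in, dispatch priority out – is the same.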

It would seem the ‘Watson-like arena’ refers to all the other types of information being collected (and analysed) by BRS Labs, Geoeye, GRIP, COPLINK, RCA for Law Enforcement, Trapwire, etc. – is it the aim of IBM to enable Watson to take them all on? Could it one day combine all of the data into one database for Watson to analyse, after it has ‘learnt’ about all the individual ‘nodes’ and modeled them using a program like that at Purdue? “One mainframe to rule ‘em all” indeed.

But we’re not there yet: the point has been made (see this, for example) that – for all the surveillance and pre-crime profiling and analytics – Watson failed to spot Holmes, the man accused of the ‘Dark Knight’ Colorado shootings. Though the data generated by false flags may well not be entered into the system!

Gotta be good

When people know they are being watched, it creates an effect much like that of the panopticon described by Foucault: they start to police themselves, and regulate their own behaviour according to the requirements of those in charge. Surveillance, and even news of surveillance techniques, can play a significant role in social control; people will try to ‘be good to get by’, or they can go one stage further and establish their identity as someone actively seeking to make society ‘better’. Numerous initiatives claim that people can become good little citizens by spying on their neighbours (such as the ‘See Something, Say Something’ program in America).

Another resource debuting at GovSec is Spypedia, “the world’s most comprehensive and informative resource on spies and terrorists”. The video below makes a head-scratching claim, repeated at the end:

There is a meta-narrative to history and espionage, ‘a story about a story’ – SPYPEDIA gives you that story…. History is repeating itself. You are living through it.

Spypedia is hosted by the Centre for Counterintelligence and Security Studies (CICENTRE) and constitutes an online encyclopaedia of information on spies and terrorists. The site can be used by businesses looking to counter economic espionage, as well as by hobbyists and patriots who take an interest, sanctioning surveillance and keeping the whole terrorist yarn alive. A large part of the website is devoted to kids and educators, with games, a handbook of ‘spying words’, and even a Spy-in-Training Program!

The resource for educators on 9/11 lists specific activities for students, together with the expected outcomes for a student’s ‘understanding’. The activities involve analysing supposed ‘intelligence’ put before President Bush, and asking students to imagine what action they would have taken in his position. This encourages the students to, a) believe they are being presented with facts, b) form emotional responses to the data, c) exercise higher analytical faculties, d) form beliefs about 9/11 ‘truthers’, e) feel that more ‘intelligence’ should be gathered to make the nation secure, and f) consign all activists and dissenters to the modern genre of ‘radicals’.

The resource also claims that intelligence agencies were not able to share information efficiently with other agencies, implying this could be remedied not only by increasing the amount of intelligence collected (of which identity is a crucial part), but also by creating, essentially, a huge database of profiles and ‘facts’ which can be used for pre-crime analytics. It even goes so far as to propose that students should, “Design and implement an education campaign in which they provide accurate information about the perpetrators if they notice a trend toward misunderstanding or doubt on the part of the public” in regards to 9/11.

It would appear that the introduction of 9/11 ‘education’ into the school curriculum is about much more than adding credibility to the official version of that day; it may herald a new attack on ‘conspiracy theorists’, and the labeling of activists like Julian Assange as terrorists. Infowars reported on this issue, citing several cases where public figures and the media have discredited those with non-mainstream views.

Encouraging students to spy and report on ‘radicals’ takes on a whole new meaning when the definition of radical in America has broadened to include:

  • Americans who believe their “way of life” is under attack;
  • Americans who are “fiercely nationalistic (as opposed to universal and international in orientation)”;
  • People who consider themselves “anti-global” (presumably those who are wary of the loss of American sovereignty);
  • Americans who are “suspicious of centralized federal authority”;
  • Americans who are “reverent of individual liberty”;
  • People who “believe in conspiracy theories that involve grave threat to national sovereignty and/or personal liberty.”

This was documented by Paul Joseph Watson and highlights the sinister turn profiling has taken.

When I was young, the newsreaders spoke about freedom fighters; now dissenters are labelled extremists or terrorists. We can even ‘be radicalised’ – a newspeak way of implying it can be ‘done to us’, even against our will; that those who think for themselves, who question the status quo, are ‘crazies’ to be instantly dismissed.

The Spy Museum hosted a presentation entitled ‘9/11, False Flags, and Black Ops’, with speakers David Frum, Jonathan Kay, and Webster Tarpley. In this YouTube video, Jonathan Kay, author of Among the Truthers: A Journey Through America’s Growing Conspiracist Underground, remarks,

Once they see the world through distrustful eyes, and they think they’re being lied to, that colours their perception of all subsequent events … Real life conspiracies tend to be small and grubby and primarily about money and sex.

Webster Tarpley was then invited to give “his version of reality”, and was gently but resolutely mocked by the other speakers. It would seem he was being used as an example of ‘one of them’, not to be taken seriously. The message being, domestic dissenters are to be humoured. They just have “theories” about “nefarious plots”; a less harmful sub-set of radicals.

An older resource document for educators, ‘The Enemy Within’, characterises radicalism as “early 20th Century anarchism”, and suggests one of the solutions to the threat of terrorism could be to, “adopt profiling as standard practice at airports, transportation centers, and other key public spaces”. The ultimate aim of the exercise is to figure out how to identify ‘the enemy’.

In the UK, several offences are being re-labeled as ‘hate crimes’. The offenders are the enemy within; the scourge of society because they ‘go too far’. And now, a ‘hate crime app’ has been invented, which will encourage citizen-spies to report ‘suspicious’ (out of the ordinary!) behaviour to the authorities. If someone has information on an incident which might constitute, or lead to, a hate crime, they can upload geolocation data, as well as audio or video data, using smartphone capabilities.

No-one likes a snitcher. Or a nosey parker. But with the media hype, social values are being changed.

Fostering the managed transition from a world of private sovereign individuals to one of global digital citizen do-gooders, UNESCO and the Rockefeller Foundation have helped support the ‘Global Cyber Ambassadors for Peace’ (GCAP) initiative, launched in 2011. The twisted communitarian ethos being foisted upon us from all angles is, after all, virtual and electronic.

This is the age of ‘do the right thing’ – how to behave has been decided. We are the enemy who must be controlled. Dissent is a thought crime, and hate crimes mean jail time.

Do not deviate from the norm.

Maintain the average.

Do the right thing.

… Or rather,

Remember there is no norm.

Averaging is just maths.

The ‘right’ thing to do is a daily personal choice which makes us human and free. The context is infinitely complex.

Those choices are our identity.

Read Part 1 HERE

This article first appeared at Get Mind Smart

Julie Beal is a UK-based independent researcher who has been studying the globalist agenda for more than 20 years. Please visit her website, Get Mind Smart, for a wide range of information about Agenda 21, Communitarianism, Ethics, Bioscience, and much more.
