The Potential Problems With Artificial Intelligence

By Brian Berletic

Artificial intelligence (AI) is, simply put, intelligence exhibited by software and machines. Intelligence itself could be defined as the ability to learn and solve problems. In nature, evolution has endowed many species with intelligence, and human beings in particular with a relatively formidable ability to learn and solve problems.

IBM’s Watson can be posed questions in natural human language, which it answers by “reading” amassed knowledge such as encyclopedias.

Human intelligence has allowed our species to diverge from evolutionary and natural environmental constraints, giving us mastery, for better or worse, over the planet and all other life upon it. We have done this through technology, which ranges from the simplest forms of tool-making to the most complex machines we use to ply the seas, skies, and even outer space.

Our natural, human intelligence has given rise to exponential technological progress; and, amid that progress, we have begun to create through computer science an artificial intelligence that, unconfined by natural evolution and biological limitations, is able to advance exponentially faster than our own intelligence has developed.

An example of where AI is today is IBM’s Watson computer. It was able to answer questions posed in natural language using four terabytes’ worth of content stored in memory, and to win a game of Jeopardy, an American TV quiz show. That might not sound like a big deal; but, essentially, it was posed a question and was able to learn enough to answer it, which some could argue is the essence of intelligence.

As systems surpass Watson and their ability to answer questions improves, it won’t be long until they are answering questions we ourselves cannot answer. At that point, AI will surpass human abilities, and that is where the real danger lies. But danger might arise even before then.

Human-Human Disparity

An AI that is even marginally intelligent and useful, even if it is not on or above a human level, can still benefit those who created it, giving them advantages over those who lack access to it. Intelligent software is already being developed and deployed around the world by corporations to procure and leverage vast amounts of information that no individual human being could. There are AI systems that trade stocks for investment companies, generating profits by subtly manipulating markets at speeds no human trader or investor could match.

This gives rise to a technologically enabled socioeconomic disparity that simply working harder or smarter cannot even out. If these forms of AI are not open-source or accessible to at least a reasonably large population, the possibility of unwarranted power accumulating in the hands of those who do possess them increases greatly.

There is also the increasing possibility of human beings directly augmenting their own intelligence with peripheral or integrated artificial intelligence. It may sound far-fetched, but remember that technology is expanding exponentially and that human-machine interfaces are increasingly showing up in tech headlines around the world.

AI already “plays” the stocks, and often wins more than humans.

The famous US defense research agency, DARPA, has already developed memory implants intended to expand human memory, while biomedical researchers have created implants that allow animals and people to control robotic appendages with their brains alone. This means that augmented or integrated intelligence and physical strength are not only possible, but likely inevitable.

Human-human disparity means that, at a certain point, when this technology is sufficiently mature, someone or some group somewhere will exist with intelligence unmatched by unaltered human beings. They will become a technologically conceived, divergent ‘species’ with capabilities we could no more match or even understand than a chimpanzee could when confronted with human intellect and technology.

Remember that what may seem cumbersome today was impossible yesterday and, like the Internet in its early days, will soon give way to something ubiquitous and virtually seamless within our daily existence.

And because technology progresses exponentially, an unfair advantage will become an inconceivable and unbridgeable chasm so quickly that there would be nothing we could do to reverse the disparity once it manifests itself.

Machine-Human Disparity

But what about a machine that is itself capable of learning and that eventually matches or exceeds human intelligence? Likewise, it will exhibit motivations and behaviors, and possess capabilities, that we not only could not match, but likely would be incapable of understanding.

A machine with an IQ of 500 or 1,000, versus humans who would not generally exceed 100-145, would be to us as we are to a single-celled organism. What would it do? How would it perceive us? Would it even notice or bother with us at all? Would the fact that we were able to create it, and are thus capable of creating an equal that could threaten it, warrant from its perspective the complete eradication of humanity?

A Future with Superintelligence

Could the likelihood of AI of this kind arising explain why, with our expanding knowledge of the galaxy, we have failed to find other intelligent species to communicate with? Could it be that the Fermi Paradox is solved by considering the possibility that when your intelligence exceeds natural constraints, you are no longer interested in (or even capable of) communicating with biological lifeforms? Or even in existing on the same plane of existence as them?

Could the reason we find no intelligent life in the galaxy be that civilizations like ourselves exist for such a short time before transcending into something entirely disinterested in communicating with other biological species?

These questions are posed speculatively because we simply have no way of knowing. From what we understand of nature on Earth, organisms have an inherent drive to dominate their environment and to ensure their own survival above that of all others. Whether such “laws of nature” are universal (holding beyond Earth, if life exists elsewhere) is one question. Whether they could carry over from biological life into machines possessing superior artificial intelligence is another.

And even if such instincts do carry over into artificial intelligence, how they manifest would still be unpredictable.

Organisms with limited intelligence pursue self-preservation in a very selfish, short-sighted, and predictable manner. Under unnatural or adverse conditions, these instincts can even become counterproductive to self-preservation (overpopulation, resource depletion, etc.). Human beings are capable of the former, but also of the latter, more rational means of ensuring self-preservation. We can consider our inherent instincts and choose willfully either to override them when they become a detriment, or to innovate and create the ideal conditions under which those instincts would once again be conducive to our self-preservation. Would AI do something similar?

Not Just Another Invention, Maybe the Last Invention

Clearly, AI is not just another invention. It is the invention of a new form of existence, exceeding the fundamental intellectual parameters of those who have made it. Nothing like it has ever been done before by humanity and no suitable analogy exists with which to compare it.

Once we create something superior to ourselves, the act would be irreversible. Our “putting it into a cage” would be about as likely as a mouse putting a human in one. There would be no defense against it if it turned out to be hostile, and there is no way to predict how it will turn out. At the same time, it is inevitable, because human curiosity and innovation ensure that eventually, no matter what is done to try to prevent it, someone, somewhere will create an artificial intelligence beyond our own.

We can only hope that, when the day comes, whatever we create will be uninterested in us and our plane of existence, and will move on to something more fitting for its expanding abilities. We can also only hope that the transition from human-level intelligence to superintelligence happens quickly and without the dystopian scenarios depicted in science fiction.

Futurist Ray Kurzweil is optimistic about technology, believing that it will lead to an increasingly decentralized world. He predicts that AI on par with human intelligence could arrive as early as 2029. However, no human, however intelligent, can predict what will happen after AI exceeds human intelligence.

As for AI within a spectrum comparable to human intelligence, we must make sure it is not monopolized by any one person or group. The temptation to abuse such power would be no different from existing forms of disparity already clearly being exploited and abused. Some may argue otherwise, but the concern stands: even as we endeavor to create a new form of intelligence, we still do not fully understand our own, nor the motivations and mechanisms that influence and direct it.

No matter what, the days when AI was a far-off topic of science fiction are over. With some experts predicting human-level AI as early as 2029, and with AI already helping some amass power, wealth, and influence today, AI is now a matter of immediate concern that affects everyone.

Brian Berletic writes for ProgressTH.org.


7 Comments on "The Potential Problems With Artificial Intelligence"

  1. Remember: the military and PTB are 25-100 years ahead in technology. In 1995, DARPA released the internet to the public, yet withheld computing speeds then which we only now have use of.
    The PTB are using AI in terms of self-replicating nano-bots, neuron-synapse replacers, nano-assemblers for mind control, nano-food engineering, and checkmate control of humanity.

    Here is a look at how advanced AI technology is, broken down into 9 chapters in the show notes. Bottom line: we are all now synthetic beings in Phase 1 mind control, with beta testing in Phase 2 using randomly and not-so-randomly selected people to take over their minds and souls using AI technology, supercomputers, geoengineering, HAARP and nanotechnology… truth be told.
    https://www.youtube.com/watch?v=NOuMcx8p-7Y

  2. I don’t think the problem is “intelligence” as it was defined in this article. The problem would be the lack of morality.

  3. This is rubbish en masse; a machine can’t be intelligent. Reciting crap (Common Core) and stacking it (web-botting) based upon verbal syntaxes doesn’t tell me jack, and this isn’t intelligence; it’s just ordinary programming. The bucket of transistors will always do what it’s been told to do, period.
    The sole difference, as far as I can judge, is the amount of info this program is able to handle.
    You know, Moore’s law.
    And the quantum babbling about comps is even worse.

    It’s virtually impossible with the present level of programming to make that bucket go outside its core, which is based upon human verbal syntaxes; that says nothing about how this intelligence is derived, other than what I wrote above.

    I know the problem is still fundamental and has something to do with how we think, and even there they are clueless, and I am not impressed.

    peace

    • I seriously suggest you type “deep learning” into YouTube and watch a few demonstrations and talks on what they are doing with it. The AIs use neural nets and are capable of learning. Your summary accurately describes the state of AI until just very recently.
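[Editor’s note: the kind of learning the reply above describes, in contrast to “ordinary programming,” can be illustrated with a minimal sketch. This is a single artificial neuron (a perceptron) in plain Python, not code from any of the systems mentioned; the AND function is never written into the program, only shown to it as examples.]

```python
# A single neuron "learns" the AND function from examples alone.
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for one neuron from labeled ((x1, x2), target) pairs."""
    w = [0.0, 0.0]   # connection weights, initially meaningless
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # The update rule below, not a hand-written truth table,
            # is where the final behavior comes from.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The logical AND function, presented only as examples:
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

Modern deep learning stacks many such units and trains them the same general way: behavior emerges from data, not from explicit instructions.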

  4. Since when has evolution been a dogma and not a theory?

    Somebody with an IQ of 130 has an ‘unfair’ advantage over someone with an IQ of 70.

    Should he go for a lobotomy to even things out?

    Does this author happen to think that there is ‘equality’ in this world?

    As one poster here states: it’s a matter of morals.

    • Unless you sit at the table of a Fortune 500 corporation, you’re going to be a victim of your own love of disparity. No, someone shouldn’t go for a lobotomy because they are smarter, but should people who are less educated not try to become more educated? Should we not try to help people who are at a disadvantage? Would you let a guy in a wheelchair struggle with a curb because it’s just natural that things aren’t equal?

      It is a matter of morals… try getting some.

  5. I used to experiment with AI as a computer programmer in the 1980s. The basic idea is to write programs that can modify themselves based on future inputs. Such a self-modifying program then has the ability to “learn” from those inputs; hence, artificial intelligence. As the prescient sci-fi writer Isaac Asimov stated, robots must be programmed not to harm humans. However, DARPA is now developing Terminator-type mechanical warriors whose basic purpose is to kill humans. Hopefully, these robots are properly programmed to recognize friend from foe and not turn on their human leaders. However, the problem with AI software is that the programs are self-modifying based on mechanical sensory inputs, and with microprocessors that work in the megacycle range, the modifications can happen quite quickly. Herein lies the danger. Computers have no human-type ethics or compassion, and the original programs could rapidly change to where the Terminator robots could conceivably turn on their masters.
