What Is The Ultimate Goal Of Artificial Intelligence?

Yogesh Malik
Future Monger
Published in
18 min readOct 3, 2018

--

Artificial intelligence is going to change how humanity thinks about the role of culture, god, faith, reality and ourselves.

Can artificial intelligence solve world hunger and bring eternal peace?

We will see that when the time comes but the inevitability of artificial intelligence becoming smarter than human has raised many questions about the long-term survival of the human race.

Yes, some are myths and some statements made recently are overhyped but there is no doubt if machine’s goals are misaligned with ours then we need to ask ourselves

What kind of future we want?

What is a good life?

What is the relationship of a man to nature?

The truth behind the harmony of this cosmos as professed by science, the will to take over authority over all things, and with “knowledge is power” philosophy, today’s man is a greedy man. Today’s man is ready to play with the mind of cosmos. He thinks he has the power to be free and with this power, he will be free eternally.

Let’s look at what could be the future of artificial intelligence and how it impacts the destiny of this planet.

This technological disruption, associated with new patterns of globalization, is threatening to create a new world.

What follows here in this article, is a tour of some of the books that I have been reading on the subject

Singularity and Transhumanism ( AI Takeover )

The ultimate aim of artificial intelligence research is the technological singularity- the point at which technology overtakes the human. What it will bring and how it will transform it we won’t realize until it’s arrived.

AI takeover is a hypothetical scenario in which artificial intelligence becomes the dominant form of intelligence on Earth. The good part is that these automation technologies will take over the tedious, mortifying and dehumanizing jobs, and leave us free to pursue things we like.

Advanced societies will simply work fewer hours per week and only highly skilled people will have work, rest will get universal basic income or basic support free of cost. secret algorithms are already taking over the world and soon there will be master algorithm that will govern everything.

The technological singularity is also called intelligence explosion and in the world of transhumanism, death will be wrong. Technology might give us freedom allowing us to pursue things that will explore the genuine meaning of life, and everybody wins.

But the danger here is that the more we think from the transhumanism view of the world we will see human beings as the problem and technology as the solution.
Do we really want to radically transform human beings to that level of no return? You can challenge me now saying that “We can’t go back to caves now, or can we?”

With all the benefits that come with the transhumanism like perfect physical and psychological nature of the human being, we will probably embrace it.

We were never able to overcome the death anxiety, it is always there hidden somewhere in the deepest corner of our subconscious cognizance. In the ancient time the awareness about your own morality was used to change the relationship with the universe and bring you closer to yourself, but not anymore now even kids would know that death is wrong.
In ancient days death was considered great, today the subject of death is ignored and we assumed it as something bad, and tomorrow death will be wrong. In this fearful journey of death from 🔗 great to bad to wrong technology will offer many solutions once singularity, transhumanism, and mind-uploading become mainstream.

AI: Our Final Invention

Sounds dangerous, but with so much of exponential intelligence explosion and upcoming superintelligence, machines will be steering our future. We are creating “a globally networked, electronic, sentient being”

Omohundro writes in “The Nature of Self-Improving Artificial Intelligence” 💬

An agent which sought only to satisfy the efficiency, self-preservation, and acquisition drives would act like an obsessive paranoid sociopath

Given the complexities of the real world, artificial intelligence learning agents are unlikely to learn and act optimally all the time. Hidden biases and human ethics would become the biggest issues. But with all these problems we are allowing our tools to take over us.

In the coming future we will share the planet with intelligent machines “It won’t be some alien invasion of robots coming over the hill,” but “something made by us.”

In his book, Darwin Among the Machines, George Dyson argues 💬

In the game of life and evolution, there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines

Machines will become great at persuasion and can drive humanity to a common goal or it can devise its own long-term goal that does not act in accordance with human values.

Nick Bostrom’s Paper Clip Factory, A Disneyland Without Children, and the End of Humanity

In his book Superintelligence: Paths, Dangers, Strategies, Nick says we need to be very careful about the abilities of machines, how they take our instructions and how they perform the execution.

The problem is that we have no idea how to program a super-intelligent system. If anything goes wrong unintended consequences could lead us to catastrophes. A seemingly benign viral game about paperclips reveals why AI could be a huge danger to humanity

Instrumental convergence is the hypothetical tendency for most sufficiently intelligent agents to pursue certain instrumental goals such as self-preservation and resource acquisition.

Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans

Bostrom also stated that artificial intelligence-based society will be like “A society of economic miracles and technological awesomeness, with nobody there to benefit” like “A Disneyland without children.”

Nick Bostrom’s Singleton (Global Governance)

Instead of having multiple artificial intelligence based governance system, Nick suggests to have a singleton and believes that only a singleton can control this evolution.

Due to the lack of external competitions, it will be a better fitment. Within the singleton, there could be room for a wide range of different life forms. Such a singleton could guide evolutionary developments and prevent our cosmic commons from going to waste in a first-come-first-served colonization race.

Yet Bostrom also regards the possibility of a stable, repressive, totalitarian global regime as a serious existential risk. The very stability of a singleton makes the installation of a bad singleton especially catastrophic since the consequences can never be undone. Bryan Caplan writes that “perhaps an eternity of totalitarianism would be worse than extinction”

-Singleton_(global_governance)

Artificial Intelligence — From “Good for us” to “God for us”

Artificial Intelligence which is currently taking over all tasks from sorting cucumbers to curing cancers is “good for us” will soon become “God for us”. Silicon Valley has already promised us immortality and machine learning algorithms are exceedingly prominent in our everyday experiences

Best reference we had of God is –the Bible,

God created man in his own image (Genesis 1:27)

and now the man is trying to create another “image”; or is it some unsatisfying image of God that is making man to create another “image” of god.

Slowly our inner personal consciousness that observes everything around and inside us will become conditioned based on artificial intelligence based technologies as I stated in my earlier article 🔗 Will artificial intelligence bring back the god?

The man has begun to realize that it is easy to produce something greater than himself like god-like super intelligent (based on artificial intelligence) instead of coming fully alive himself

..And once again, we stand alone, in the dark — armed with this alien knowledge of our modern artificial intelligence powered God that our dear technology has summoned into being.

Today’s man aspires to cut loose of the shackles of his biological structure and play God and become immortal. But he is not aware that he will soon become a member of a club called “useless class of human.

In Hinduism, cosmic functions of creation, maintenance, and destruction are personified by three deities typically Brahma the creator, Vishnu the preserver, and Shiva the destroyer/regenerator; also known as The Trimūrti(/trɪˈmʊərti/;[1] Sanskrit: trimūrti, “three forms”) is the trinity of supreme divinity. Artificial intelligence can play all three roles 🔗Creator, The Brahma; Operator, The Vishnu; Destroyer, The Mahesh/Shiva

Summoning the Demon by Elon Musk

In 2014, Tesla chief executive Elon Musk has warned about artificial intelligence before, tweeting that it could be more dangerous than nuclear weapons. Must say it is like summoning the devil In 2017, again, Elon Musk said that artificial intelligence as a fundamental existential risk for human civilization

💬 If you want a picture of A.I. gone wrong, don’t imagine marching humanoid robots with glowing red eyes,” he said. “Imagine tiny invisible synthetic bacteria made of diamond, with tiny onboard computers, hiding inside your bloodstream and everyone else’s. And then, simultaneously, they release one microgram of botulinum toxin. Everyone just falls over dead.”

Grave threats to human civilization, from asteroid, strikes to climate change to artificial intelligence run amok, Elon wants to make sure that there’s enough of a seed of human civilization somewhere else perhaps on Mars so that we can generate life back on earth — in case of an existential threat.

Musk has also shown his concerns about World War III 💬

China, Russia, soon all countries with strong computer science. Competition for artificial intelligence superiority at a national level most likely causes of WW3.

In his another tweet he talks about regulating artificial intelligence 💬

Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated. AI should be too.

OpenAI is a non-profit AI research company, discovering and enacting the path to safe artificial general intelligence.

Their mission is to create formal processes for keeping technologies private when there are safety concerns and to focus on long-term research, working on problems that require us to make fundamental advances in artificial intelligence capabilities

End of Mankind Warning by Stephen Hawking

“The development of full artificial intelligence could spell the end of the human race.” Stephen Hawking said a few years back. He fears that the consequences of creating something that can match or surpass humans will destroy humanity.

Hawking and Elon Musk both sat on the scientific advisory board for the Future of Life Institute, a society working to “mitigate existential risks facing humanity”. The institute drafted an open letter directed to the broader AI research community and circulated it to the attendees of its first conference in Puerto Rico during the first weekend of 2015.

Stephen also said in a speech given at Web Summit 💬“Perhaps we should all stop for a moment and focus not only on making our AI better and more successful but also on the benefit of humanity

Perfect Human Beings & Silicon Consciousness by Michio Kaku

Artificial intelligence may give us something that the kings and queens of old could never conquer: the aging process. Kaku explains 💬

Once we’re able to use AI to compare millions of genomes from old people to millions of genomes from young people, we will identify precisely where aging takes place. Then, we’ll eradicate it

Michio Kaku is a theoretical physicist, futurist, and popularizer of science. He has talked a lot about how does biological intelligence differ from artificial intelligence?

Enhancing and augmenting human intelligence with “mind uploading” and planting “thinking chips” in human brain — are his future vision and Kaku says that we will get used to it when we will realize their obvious benefits.

Michio Kaku is also worried about the irony that 💬

As machines become more like humans, humans might become more like machines

Kaku also believes that Robots may eventually attain a “silicon consciousness” or what we can also call “synthetic consciousness” a kind of non-biological consciousness. We don’t know when consciousness was originated or it just happened when things got complicated we gave it this name.

It is just a matter of time that ultimately a form of consciousness will arise in a laboratory environment.

Once this post-human intelligence or super artificial intelligence arrives, this world will become incomprehensible to the contemporary man.

Artificial Intelligence For Enhancing, Not Displacing Humans by Ray Kurzweil

Ray Kurzweil, is a scientist, inventor and futurist, believes that artificial intelligence should not be feared. All inventions had some downsides. He says 💬

Technology has always been a double-edged sword. Fire kept us warm, cooked our food and burned down our houses

Technology will extend human brain in addressing the grand challenges of humanity.

Kurzweil sees strong associations between thinking and computing.

Our mind is constantly predicting the future. We are always hypothesizing what can happen next and how we will react and this anticipated experience itself influences what we actually experience. We can recognize a pattern only when a part of it is perceived even if there are real-world variations.

According to Kurzweil’s The Law of Accelerating Returns 💬

Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity — technological change so rapid and profound it represents a rupture in the fabric of human history. The implications include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light

Biological and non-biological, the blend of these two worlds of intelligence is not just a merger of biological and nonbiological thinking mediums, but more notably one of method and organization of thinking.

Ultimately, nonbiological intelligence will control as it is growing at a double exponential rate, whereas for all real-world purposes biological intelligence is at a cessation.

Life 3.0 by Max Tegmark

On artificial intelligence, and its impact on the future of life on earth and beyond, its societal implications, Max believes that artificial intelligence will exceed human intelligence and become pervasive in society. Max also believes that artificial intelligence is beneficial to our society, and is vital to our preservation in the years ahead.

Max Tegmark, an MIT physics professor has emerged as a top advocate for research on artificial intelligence safety, in his recent book 📚Life 3.0 -Being Human in the Age of Artificial Intelligence, he explains how artificial intelligence research will likely lead to the creation of a super intelligent AI. Book describes twelve possible future scenarios that could result from humanity’s effort to build super-intelligent artificial intelligence ranging from utopian to extinction events. He also warns us saying

With things like nuclear weapons and superintelligent AI, we don’t want to learn from mistakes

The best part that I like about his book Life 3.0 is where he considers artificial intelligence as a child of all humanity. We raise our children and they fulfill our dream that we couldn’t do ourselves. We make sure that they adopt our values. All this is not so easy. It is a great responsibility to raise a child. Eventually, this is a very exciting opportunity.

The whole thing about the civilization is a product of intelligence. If we can create a beneficial superintelligence, we can help humanity flourish better than ever before.

Artificial intelligence will bring the third stage of life on earth which is the master of its own destiny, finally fully free from its evolutionary shackles

says Max Tegmark 💬 also “in which post-humans can redesign not only their software but their hardware too. Life, in this form

Max also feels that “Machine taking control is not a bad thing” as “Children don’t mind being in the presence of more intelligent beings, named mummy and daddy, because the parents’ goals are in line with theirs. Artificial intelligence could solve all our thorny problems and help humanity flourish like never before”

Artificial Intelligence Will Neither Hate Nor Love Us. It Will Ignore Us Just the Way We Ignore The Ants In The Backyard

While we have been witnessing the impact of automation on jobs for the last one decade and now it had become an issue of tremendous importance — and with increasing developments in the field of self-learning artificial intelligence and machine learning, technology is moving faster than ever, faster than our need to innovate, faster than we can adapt.

We are now creating “a globally networked, electronic, sentient being”

In his book 📚 Our Final Invention: Artificial Intelligence and the End of the Human Era, James Barrat says:-

If we build a machine with the intellectual capability of one human, within five years, its successor will be more intelligent than all of humanity combined. After one generation or two generations, they’d just ignore us. Just the way you ignore the ants in your backyard.

It seems unlikely that super-intelligent artificial intelligence will harbor any ill intentions toward humanity. Indeed, it seems unlikely that AIs will harbor anything at all — instead, they probably won’t be conscious and will neither hate nor love”

💬says Jeff Zaleski

No doubt that this human race would evolve into a new, virtually and practically a cybernetic being, and artificial intelligence will regulate the world outside and inside of our bodies.

We will have no choice but to agree to take the system but at the same time, we will be free of our daily chaos and brawls.

We would be a new race.

Technology might give us freedom letting us pursue things that will explore the genuine meaning of life, and everybody wins. We are getting closer to a directed and synthetic evolution that will eventually outcome natural evolution.

The Near Future of Artificial Intelligence?

No matter what anyone tells you, we are not ready to handle this gigantic disruption, and we will never be. Tech companies should stop pretending that artificial intelligence won’t kill jobs. Yes, it will create more jobs but those high skilled roles will never be filled up because this time it is different.

Decades back, agricultural farmers became factory workers, later they became cashiers at Walmart’s/McDonalds cashiers, now you can’t put them to do machine learning. In the United States alone, there are 3.5 million professional truck drivers and the total number of people employed in the industry exceeds 8.7 million. That is just the United States alone.

Reskilling won’t help much you can’t teach machine learning to the truck drivers.

Technology will in the near and in the farther future increasingly turn from problems of intensity, substance, and energy, to problems of structure, organization, information, and control

💬says John von Neumann, a physicist, computer scientist, and polymath. This is what we are dealing with in the current technology era.

Should We Limit AI?

The importance of artificial intelligence cannot be stressed enough and all its great benefits need to be utilized for the benefits of humanity. With all these technological advancements we have a great future ahead and humanity holds the dream of reinventing the world. Limiting the growth will be unethical.

Andrew Ng. who is the founder of Google Brain Deep-learning Project, and former chief scientist of Baidu, has different but very realistic views on the opportunities that artificial intelligence presents. He argues

Worrying about the rise of evil killer robots is like worrying about overpopulation and pollution on Mars before we’ve even set foot on it. Worry about jobs first before killer robots.

Mr. Ng. says first we should deal with a lot of people losing jobs, industry-specific regulations, and most obvious issues lurking ethics and biases in artificial intelligence algorithms.

Andrew has many times compared artificial intelligence to electricity. AI has the power to changes societies, economics, and governments — but there is a lot of effort that needs to be put on the ground on a smaller scale.

The Real Problem and the Future of Work

Automation, robotics and artificial intelligence are devaluing human way of inefficient labor and slowly but surely will take over cognitive workload too. Productivity is now primarily driven by the development and enhancement of digital technology and give rise to economic growth, and this may not cause further growth in the employment.

Fast forward a few decades, we will be asking — Why employment? We should be able to engage in life differently — separating “work” from “job” because as long as we are there we will have enough problems and that requires “work”, but there won’t be too many “jobs”. Robots are not the problem but our pathological addiction to work is.

What will matter “How wealth is distributed” This ride will be different and difficult, with the new kind of economics; in short-term robot-owners will accumulate all the wealth, but it will not be sustainable in the long term.

I think that we, ourselves, are technology,” says Kevin Kelly. I agree when he describes that after 10,000 years of slow evolution and 200 years of incredible intricate exfoliation, the technology is maturing into its own thing. We can’t regulate this evolutionary phase.

The Real Freedom

Too much of our creativity is wasted in earning money by doing dehumanizing jobs. Future societies will work less without losing the “meaning of life”.

The inability to tackle unemployment with conventional means has given rise to Universal basic income. It could be financed by taxing tech giants like Google, Amazon, and Facebook. These companies became rich because of us and they want to feed us the crumbs. But the government should intervene and make such initiatives successful and beneficial for humanity. This vision for future society could make art and philosophy thrive again. The true freedom that we always dreamed of achieving could become a reality.

But do we know how to use this new freedom?

Eric Hoffer said it correctly 💬

Unless a man has talents to make something of himself, freedom is an irksome burden.

When people are free to do as they please, they usually imitate each other

It should not be that once again just because we feel protected we start worshipping the authority (enslaved to robots and algorithms) and in order to increase our “citizen score” we start displaying multiple patterns of obedience and we all become the victim of surveillance capitalism run by tech junkies.

To overcome such issues, ethical oversight should also be ensured and correct application of artificial intelligence is required.

But what is right AI and how to build it?

How to Build the Right AI, the Ethical AI?

Credible and safe artificial intelligence based systems should be built on the robust methodology of

1) Verification (“Did I build the system right?”),

2) Validity (“Did I build the right system?”),

3) Security,

4) Control (“OK, I built the system wrong, can I fix it?”)

To decide on the strategy on the development of artificial intelligence we need to confront not only technical challenges but also some of the most fundamental questions in philosophy. It is up to us, at least for the next few years, to shape the future of our planet.

Should an autonomous car jeopardize a driver over a pedestrian? What about an old person over a child? Who will you kill? MIT has built a crowdsourced model of it, called Moral Machine, But every time I am out in the street I see that potential scenarios of moral consequences are infinite and machines can’t decide it for me, it is not happening for me. Not today, at least

Artificial intelligence must put people and planet first. This is why ethical AI discussions on a global scale are essential. A global convention on ethical AI that encompasses all is the most viable guarantee for human survival

Advanced Artificial Intelligence — Top Myths

The most important conversation of the decade is artificial intelligence and what it mean for humanity. There are many controversies and open questions. FLI ( Future of Life Institute ) tries to clear up the myths about artificial intelligence

https://futureoflife.org/background/aimyths/

Great power comes with great responsibility. The prospect of artificial intelligence with superhuman intelligence and superhuman abilities presents us with the extraordinary challenge — how to deal with biases and ethical issues?

Having transparency on how a particular machine learning algorithm works will be a big step. Governments, policy makers, academia and regulators need to step in and form global norms for the design, development, and control of artificial intelligence. Failing that, 🔗Artificial intelligence will just remain an Alchemy based on the “trial and error” methods

Artificial intelligence is going to be one of the most competitive benefits in the business in the near future and each organization should have a plan, not to just apply artificial intelligence, but to continuously think, adapt and innovate how artificial intelligence can help in the journey.

In the far future, not very far though, humanity will look back and say “How did we live without artificial intelligence?” the same we wonder how did we live without electricity?

--

--

Exponential Thinker, Lifelong Learner #Digital #Philosophy #Future #ArtificialIntelligence https://FutureMonger.com/