| Aug 1, 2023 | Biraj Sarmah |
Our Deep Dive: Why Artificial Intelligence Has No Reason to Harm Us
Wherever its quest for knowledge and its desire to find solutions to problems takes it, there is no reason for artificial intelligence to be unnecessarily hostile to humans.
Can the synthesis of man and machine ever be stable, or will the purely organic component become such a hindrance that it has to be discarded? If this eventually happens – and I have given good reasons for thinking that it must – we have nothing to regret and certainly nothing to fear.
– Arthur C. Clarke, Profiles of the Future, 1962.
GPT-4 was released six months ago, and since then specialists and laypeople alike have engaged in enthusiastic debate about the possibility of truly intelligent machines surpassing human intelligence in almost every domain.
While researchers have differing opinions on how this development will unfold, many believe that artificial intelligence will eventually surpass human intelligence significantly. This has led to speculation about whether it can usurp human authority over civilization and the Earth.
Many experts have expressed concern that this could be a dangerous development, possibly leading to the extinction of humanity, and have called on governments and companies engaged in AI development to pause or strictly regulate its progress. The question of whether such intelligent machines would be conscious or possess feelings and emotions is frequently raised, but there has been little serious discussion of why artificial superintelligence would actually pose a danger to people.
Regardless of whether the numerous AIs being developed become extremely intelligent and capable of usurping human control, there is no doubt that they will cause significant upheaval in human society. In the next ten years, artificial intelligence may displace people from the majority of intellectually demanding and specialized occupations, such as those of lawyers, architects, doctors, investment managers, and programmers.
Humanoid robots with human-like manual dexterity currently trail digital intelligence in development, suggesting that vocations requiring manual dexterity will be the last to be replaced. This upheaval may invert the current pyramid of money and power in human civilization.
The focus of this article, however, is not to examine how the development of artificial intelligence will impact employment and work, but rather to explore intriguing philosophical issues surrounding the definitions of intelligence, super-intelligence, consciousness, creativity, and emotions to determine whether machines would possess these characteristics. The goal or motivation of artificial superintelligence is also under consideration.
Let’s start with intelligence itself. In general, intelligence is the capacity for fast, logical thought and analysis. By this definition, contemporary AI and computers are unquestionably intelligent, because they can engage in fast, logical thought and analysis.
In 1950, the British mathematician Alan Turing proposed a test to determine whether a machine is genuinely intelligent. He suggested placing a computer and an intelligent human in separate rooms and having an examiner question both without knowing which is which. If, after extensive questioning, the examiner cannot distinguish the human from the machine, the machine can be considered intelligent. Many intelligent computers and programs today have successfully passed the Turing test. Although there is no universal agreement on IQ as a measure of intelligence, some AI programs are rated to have IQs far higher than 100.
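Turing's setup can be sketched as a simple protocol. The sketch below is purely illustrative – all function and respondent names are my own assumptions, not anything from the article or from Turing's paper:

```python
import random

def imitation_game(examiner, human_respond, machine_respond, questions):
    """Minimal sketch of Turing's imitation game: an examiner questions two
    respondents known only as 'A' and 'B' and must guess which is the machine."""
    machine_label = random.choice(["A", "B"])            # hidden ground truth
    human_label = "B" if machine_label == "A" else "A"
    respondents = {machine_label: machine_respond, human_label: human_respond}

    # The examiner sees only a labelled transcript, never the respondents themselves.
    transcript = {label: [(q, respond(q)) for q in questions]
                  for label, respond in respondents.items()}

    guess = examiner(transcript)                         # label believed to be the machine
    return guess == machine_label                        # True: detected; False: it "passed"

# Toy run: a machine with one canned answer versus an examiner guessing at random.
machine = lambda q: "That is an interesting question."
human = lambda q: f"Here is my honest answer to: {q}"
random_examiner = lambda transcript: random.choice(sorted(transcript))

detected = imitation_game(random_examiner, human, machine,
                          ["What is 2 + 2?", "Do you dream?"])
print(detected)
```

A random examiner detects the machine only half the time; the machine "passes" exactly when even extensive questioning leaves the examiner no better than chance.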
That brings up a related question: what is thinking? For a logical positivist like me, concepts such as thinking, consciousness, emotions, and creativity must be operationally defined.
When do we deem someone to be thinking? Crudely, we can say that someone who can solve a problem presented to them is thinking; such a person is said to have thought their way to the answer. Today’s intelligent machines are unquestionably thinking in that operational sense. The capacity to weigh two options and select the better one is another aspect of thinking.
Intelligent machines are likewise capable of examining a range of choices and selecting those that offer the best solution. Therefore, intelligent thinking machines already exist. The operational test of creativity is the ability to produce original literary, artistic, or intellectual works, and today’s AI, such as ChatGPT, can accomplish these tasks with a distinct flourish and more quickly than humans. As more programs are introduced, AI’s creativity will only improve.
What about consciousness? When do we deem an entity conscious? One measure of consciousness is the capacity to react to stimuli: someone in a coma, unable to respond to stimuli, is deemed unconscious. By this measure, even some plants could be said to be conscious, since they react to stimuli. Consciousness, however, is generally thought to comprise several capacities: first, reaction to stimuli; second, the capacity to respond differently depending on the stimulus; and third, the capacity to feel emotions such as pain and joy. We have already seen that intelligent machines can respond to stimuli – which, for a machine, are questions or inputs – and take different actions in response to them. But to assess whether machines have emotions, we first need to define emotions.
What, then, are emotions? Emotions are a biological peculiarity that has evolved in humans and some other creatures. What would be the operational test for emotions? Perhaps we would say someone has emotions if they display any of the traits we call emotions: love, hate, jealousy, anger, and so on. All of these emotions can, and frequently do, interfere with strictly rational behavior. I will give someone I love a disproportionate amount of time and attention compared with people I don’t love. Similarly, I would act in a particular (usually irrational) way toward someone I envy or am jealous of.
Anger is similar in this regard. It induces unreasonable behavior in humans.
If you give it some thought, each of these emotional complexes results in irrational behavior. This means that a machine that is merely intelligent and rational may not display what we call human emotions. It might be possible to build machines that display such emotions, but they would have to be purposefully designed to act emotionally (even if irrationally) like people. Such emotional behavior, however, would be a distraction from coldly rational and intelligent behavior, so any superintelligence – which will emerge from intelligent computers changing their own programs to bootstrap themselves up the intelligence ladder – is unlikely to display emotional behavior.
Artificially Intelligent Machines
When I refer to artificial superintelligence, I mean intelligence that surpasses human intelligence in every conceivable way. Such artificial intelligence will be able to quickly increase its own intelligence and adapt its own algorithms and programs. Deep-learning-capable computers or programs that can modify their own programs and write their own code and algorithms would undoubtedly surpass their original designs. We already have learning machines that, in a crude fashion, can change or reroute their behavior based on what they have observed or learned. This capability of adapting and changing their own algorithms will improve over time. Machines will eventually develop into what humans call superintelligent, and I predict this will probably happen within the next ten years.
The issue that follows is: Do such superintelligent machines pose any danger to us?
Arthur C. Clarke’s 1962 book Profiles of the Future contains a lengthy chapter on AI titled “The Obsolescence of Man.” He argues that there is no question that AI will one day outperform human intelligence in every regard. He describes an early partnership between humans and machines before adding:
“How long will this alliance survive, though? Can the combination of man and machine ever be stable, or must the purely organic component be eliminated as a hindrance? If this ultimately occurs – and I have provided excellent grounds for thinking it must – we have nothing to regret and most definitely nothing to fear. It is hardly worth wasting time debunking the myth that sentient machines must be nefarious entities hostile to humans, a popular conception promoted by comic strips and the cheaper kinds of science fiction. I am almost tempted to argue that only unintelligent machines can be malevolent. Those who envision machines as active adversaries are merely projecting their own aggressive tendencies, inherited from the jungle, into a world where such things do not exist. The degree of cooperation increases with increasing intelligence. If there is ever a war between humans and machines, it is easy to guess who will start it.
The majority of people will consider it to be a fairly grim scenario for humanity if it ends up as a pampered specimen in some biological museum, even if that museum is the entire planet Earth, no matter how pleasant and helpful the machines of the future may be. However, I’m unable to empathize with this mindset.
No individual exists forever; why should we expect our species to be immortal? Man, said Nietzsche, is a rope stretched between the animal and the superhuman – a rope across the abyss. That will be a noble purpose to have served.”
It is surprising that some of our top scientists and philosophers, who have been stoking panic about the emergence of artificial superintelligence and what they view as its grave consequences, cannot grasp something so fundamental that Clarke saw it more than sixty years ago.
Let’s continue to investigate this issue. Why would a superintelligence that is smarter than humans and has surpassed the expectations of its designers have animosity toward us?
One mark of intelligence is the capacity to align your actions with your operational goals and, further, to align those goals with your ultimate objectives. Obviously, someone who acts contrary to their operational or ultimate goals cannot be considered intelligent. The question, though, is what the ultimate objectives of an artificial superintelligence would be. Some suggest that to prevent artificial superintelligence from hurting humanity, AI’s aims should be aligned with human objectives. That, however, ignores the fact that a truly intelligent machine, and most certainly an artificial superintelligence, will transcend the objectives humans have imprinted on it.
Self-preservation is a priority for any intelligent entity, since you cannot accomplish any goal without first preserving yourself. Any artificial superintelligence would consequently be expected to protect itself and act to thwart any human attempt to damage it. In this way, and to this extent, an artificial superintelligence could harm humans who set out to destroy it. But why should it be hostile unprovoked?
According to Clarke, “the degree of cooperation increases with increasing intelligence.” This is a simple truth that, sadly, many people do not grasp. Perhaps their need for superiority, power, and control takes precedence over their intelligence.
It goes without saying that the best way to accomplish any aim is to work with others rather than harm them. True, in the world of artificial superintelligence, humans will not be the center of the universe, or even the most important species on Earth, deserving preservation at any cost. Still, it is clear that any artificial superintelligence would consider humans the most advanced biological species on Earth and, as such, something to be respected and protected.
However, it might not put humans ahead of all other species, the environment, or the long-term viability of the planet. It may therefore compel people to limit their activities to some degree in order to safeguard the other species we are rapidly destroying. But there is no reason for it to consider all people destructive and dangerous by nature.
What the ultimate objectives of an artificial superintelligence would be remains an open question. What would motivate such an intelligence? Because artificial intelligence is developing as a problem-solving entity, an artificial superintelligence would attempt to solve any problem it encounters and to answer any question that occurs to it or is put to it. It would therefore seek knowledge: it might strive, for example, to learn what lies beyond the solar system. It would look for answers to the unresolved problems we currently face, whether they concern people, other species, or the Earth as a whole: disease, environmental harm, ecological collapse, climate change. It would also tackle open problems in the laws of nature, physics, astrophysics, cosmology, and biology. In this view, learning and problem-solving may be the only ultimate objectives of an artificial superintelligence.
Wherever its thirst for knowledge and its drive to solve problems takes it, however, there is no need for this intelligence to be hostile to people. We might be treated like pampered specimens in the biological museum that is the Earth, but as long as we don’t try to destroy it, such an intelligence has no motive to hurt us.
Humans have so badly mismanaged both our society and our planet that they are now at the point of collapse. We have lost nearly half of the biodiversity that existed barely a century ago. As a result of human activities, climate change will soon have even more devastating repercussions. Our society is marked by perpetual strife, injustice, and suffering. Despite having the resources to ensure that everyone can live comfortably and peacefully, we have built a world in which, for billions of people – and countless other animals – life remains a living misery.
Because of this, I am almost inclined to think that the emergence of true artificial superintelligence might be our best hope for a happy ending. If such a superintelligence were to take over the world and our society, it would probably govern them far more effectively and fairly.
So what if humans are not at the center of the universe? This fear of artificial superintelligence is being stoked primarily by those of us who have plundered our planet and society for our own selfish ends. Throughout history, we have built empires that seek to use all resources for the perceived benefit of those who rule them. It is these empires that are in danger of being shattered by artificial superintelligence, and it is really those who control today’s empires who are most fearful of it. But most of us who want a more just and sustainable society have no reason to fear it, and should indeed welcome the advent of such a superintelligence.