Will computers eventually be smarter than humans?
Published: 24 May 2020
Everyone is talking about artificial intelligence (AI) – in the media, at conferences and in product brochures. Yet the technology is still in its infancy. Applications that would have been dismissed as science fiction not long ago could become reality within a few years. With its specialty materials, the Electronics business sector of Merck KGaA, Darmstadt, Germany, is contributing to the development of AI.
From theory to practice
Since the mid-20th century, scientists have been fascinated by the idea of machines that are capable of independent thinking and learning. In an article published in 1950, British mathematician Alan Turing raised the question of whether a machine might someday achieve the same level of intelligence as a human being. In 1969, the Stanford Research Institute introduced the first robot that was able to move about and respond to commands with the help of cameras and sensors. And in 1997, a computer program defeated a reigning world chess champion for the first time when IBM’s Deep Blue beat Garry Kasparov 3½–2½.
Since then, advances in AI have accelerated considerably. It has already found its way into numerous aspects of our day-to-day lives, including text, image and speech recognition. Many of you are probably regular users of translation tools, own a smartphone with a facial recognition feature or communicate with chatbots or intelligent voice assistants such as Siri and Alexa. Personalized online advertising is also based on AI, as is Google’s algorithm for determining which search results appear at the top of the screen. The first semi-autonomous vehicles are already on the roads. Thanks to a steady increase in available data, continuously improving algorithms and increasingly efficient computers, AI has become more and more powerful in recent decades.
A specialist rather than a generalist
However, even advanced systems such as voice assistants, autonomous vehicles and robots still have a long way to go to compete with the human brain. Intelligence is often defined as the ability to achieve goals in a wide range of environments. Today’s AI applications, however, always specialize in a given task. They solve problems based on rules established specifically for that task. So although a chess program may be able to continually optimize its game strategy, it would not be capable of driving a car.
Humans are creative, curious and endowed with social skills, all of which continue to set us apart from even the most intelligent computer. This is why experts in the field, unlike the marketing departments of many companies, make a distinction between weak (or narrow) and strong (or general) AI.
The next evolutionary stage of AI
Today’s AI technologies are all categorized as weak AI – which is not to diminish what the technology has achieved. In many areas, weak AI has already surpassed the capabilities of human beings. Strong AI is differentiated by the ability to transfer knowledge and skills from one environment to another and to make decisions in a variety of contexts, even unfamiliar ones. By definition, strong AI is capable of acting on its own and adapting flexibly to many different problems. It is also able to interact proactively with other machines and with human beings. A virtual assistant with strong AI would be able to predict our needs without first receiving instructions.
An essential feature of strong AI is the ability to learn independently, which is familiar to us today mainly in the context of machine learning. Machine learning requires not only a sufficient amount of data and problems to solve, but also – and most importantly – specialized algorithms that can recognize relevant patterns within the data. These algorithms must be dynamic and capable of learning – in other words, they must be able to adapt continually to changing conditions. In addition, the AI must be able to apply the right algorithm to the problem at hand. These are, in a sense, precisely the skills that the human brain acquires over the course of our lives. For us, it takes 18 years to reach an acceptable level of maturity, at least as defined by law. It is only by making appropriate use of self-optimizing algorithms capable of learning and interacting that computers can make predictions or decisions without being explicitly programmed to do so. AI requires not only powerful algorithms, but also the knowledge and experience – accumulated as data – to determine which of them is optimal for solving a given problem. So far, however, researchers have not succeeded in developing strong AI with such self-optimizing capabilities. Advances in the field of machine learning are therefore essential for making the transition from weak to strong AI. Most scientists agree that this transition is possible; for me personally, the only question is when.
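To make the idea of learning from data rather than being explicitly programmed a little more concrete, the following is a minimal sketch in Python (not part of the original article; the hidden rule, the data and all parameter values are purely illustrative). The program is shown only example inputs and outputs and repeatedly adjusts its two parameters to reduce its own prediction error, until it has effectively discovered the rule on its own.

import numpy as np

# Illustrative example: recover the hidden rule y = 3x + 2 from noisy samples.
# The rule itself is never written into the learning code; the algorithm only
# adjusts its parameters (w, b) to shrink its prediction error on the data.
rng = np.random.default_rng(seed=0)
x = rng.uniform(-1.0, 1.0, size=200)            # example inputs
y = 3.0 * x + 2.0 + rng.normal(0.0, 0.1, 200)   # observed outputs, with noise

w, b = 0.0, 0.0        # start with no knowledge of the pattern
learning_rate = 0.1

for step in range(500):
    prediction = w * x + b
    error = prediction - y
    # Gradient of the mean squared error with respect to w and b
    grad_w = 2.0 * np.mean(error * x)
    grad_b = 2.0 * np.mean(error)
    # Self-correction: nudge the parameters in the direction that reduces the error
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned rule: y = {w:.2f} * x + {b:.2f}")    # close to y = 3x + 2
print(f"prediction for x = 0.5: {w * 0.5 + b:.2f}")  # close to 3.5

Today's narrow AI applications rest on the same basic principle, only with vastly more parameters and data; what they still lack is the ability to choose and adapt such algorithms across entirely different kinds of problems.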
Between science fiction and science
Raymond Kurzweil, an American author and Director of Engineering at Google, made a much-cited prediction that computers would have human-level intelligence by 2030. A crucial hurdle in achieving that goal is the development of sufficiently powerful computers – and this is where our company comes in. Conventional computer architectures are already reaching the limits of their capabilities. Accordingly, research institutions and companies all over the world are working on entirely new computing technologies, such as quantum computers and neuromorphic computers, that could bring AI to the next level. We are developing cutting-edge specialty materials for precisely these technologies.
Yet some scientists are convinced not only that AI will catch up with human beings, but that after computers reach our level of intelligence, they will soon surpass us. After all, they would immediately have enormous advantages when it comes to certain resources – including memory, the ability to multitask and a knowledge base that theoretically includes all of the information the Internet has to offer. Because of these advantages, computers would be able to build far more in-depth decision-making heuristics and statistics than the human brain can. In other words, our brains are limited by biology, whereas the computing systems of the future are not subject to such limitations, at least in theory. Google’s Kurzweil anticipates that we will arrive at the point that futurologists and science fiction writers refer to as the singularity by the year 2045. Experts disagree, however, about whether the stage of development known as superintelligence is achievable and whether AI will eventually acquire consciousness.
Understandably, many people are following these developments with some apprehension. Whether or not something akin to superintelligence can or will be achieved, the advances in AI that have already been made, or that can be expected in the foreseeable future, will surely lead to one of the greatest societal changes in human history.
It will be critically important to ensure that these developments are handled in a way that benefits everyone and to deal responsibly with the relevant new technologies. This also means establishing strict ethical standards and creating globally accepted models for managing these technologies. This view is echoed in the 2020 Digitalization Monitor recently published by the consulting firm BearingPoint: in the survey, 62% of executives said it is important or very important to consider the ethical implications of AI. And this is appropriate, because ultimately, intelligent machines should serve the needs of human beings, and not the other way around.