
Why A.I. will never rule the world

Call it the Skynet hypothesis, artificial general intelligence or the advent of the singularity – for years AI experts and non-experts alike have feared (and, for a small group, celebrated) the idea that artificial intelligence could one day become smarter than humans.

According to the theory, advances in AI — especially the type of machine learning that can take in new information and rewrite its code accordingly — will eventually catch up with the wetware of the biological brain. In this interpretation of events, every AI advance, from IBM's Jeopardy!-winning Watson to the huge AI language model GPT-3, brings humanity one step closer to an existential threat. We are literally building our own successors.

Except that it never will — at least according to the authors of the new book Why Machines Will Never Rule the World: Artificial Intelligence Without Fear.

Co-authors Barry Smith, Professor of Philosophy at the University at Buffalo, and Jobst Landgrebe, founder of the German AI company Cognotekt, argue that human intelligence will not be overtaken by "an immortal dictator" anytime soon — or ever. They told Digital Trends their reasons why.

Digital Trends (DT): How did this topic get on your radar?

Jobst Landgrebe (JL): I am a doctor and biochemist by training. When I started my career, I did experiments that yielded a lot of data. I started studying mathematics to interpret this data and saw how difficult it is to model biological systems using mathematics. There was always a mismatch between the mathematical methods and the biological data.

In my mid-thirties, I left academia and became a business consultant and entrepreneur working in artificial intelligence software systems. I tried to build AI systems to mimic what humans can do. I realized I was running into the same problem I had in biology years earlier.

Customers said to me, ‘why don’t you build chatbots?’ I said, ‘because they won’t work; we cannot properly model this type of system.’ That ultimately led me to write this book.

Professor Barry Smith (BS): I thought it was a very interesting problem. I had long suspected similar issues with AI, but I had never thought them through. Initially, we wrote a paper called 'Making Artificial Intelligence Meaningful Again.' (This was in the Trump era.) It was about why neural networks fail at language modeling. Then we decided to expand the paper into a book that explores this topic in greater depth.

DT: Your book expresses skepticism about how neural networks, which are critical to modern deep learning, mimic the human brain. They are approximations, rather than accurate models of how the biological brain works. But do you accept the core premise that it is possible that, if we understood the brain in enough detail, it could be replicated artificially – and that this would give rise to intelligence or feeling?


JL: The name “neural network” is a complete misnomer. The neural networks we have today, even the most advanced ones, have nothing to do with the way the brain works. The view that the brain is a series of interconnected nodes in the way neural networks are built is completely naive.

If you look at even the most primitive bacterial cell, we still don't understand how it works. We understand some aspects of it, but we don't have a model of how it works — let alone a model of a neuron, which is much more complicated, or of billions of interconnected neurons. I believe it is scientifically impossible to understand how the brain works. We can only understand and deal with certain aspects of it. We don't have a full understanding of how the brain works, and we won't get one.

If we had a perfect understanding of how every molecule of the brain works, we could probably replicate it. That would mean putting everything into mathematical equations. Then you could replicate it using a computer. The problem is that we can't write those equations down.
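Landgrebe's point that the name "neural network" is a misnomer can be made concrete: an artificial "neuron" is nothing more than a weighted sum passed through a fixed mathematical function, and a "network" is just a composition of such functions. A minimal sketch in Python (illustrative only — the weights here are arbitrary, not from any real model):

```python
import math

def neuron(inputs, weights, bias):
    """An artificial 'neuron': a weighted sum passed through a
    fixed nonlinearity (tanh). Pure arithmetic — no biology."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return math.tanh(total)

def tiny_network(x):
    """Two hidden 'neurons' feeding one output neuron: the whole
    'network' is just a composition of simple math functions."""
    h1 = neuron(x, [0.5, -0.2], 0.1)   # arbitrary illustrative weights
    h2 = neuron(x, [-0.3, 0.8], 0.0)
    return neuron([h1, h2], [1.0, 1.0], -0.5)

print(tiny_network([1.0, 2.0]))
```

Every deep-learning system, however large, is built from exactly this kind of arithmetic — which is why the authors frame the question as one about the limits of mathematical modeling rather than about silicon brains.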


BS: Many of the most interesting things in the world happen at a level that we cannot approach. We just don't have, and probably never will have, the imaging equipment to capture most of what's going on at the very fine levels of the brain.

This means that, for example, we do not know what is responsible for consciousness. There are in fact a series of quite interesting philosophical problems that, by the method we follow, will always be unsolvable – and so we just have to ignore them.

Another is the freedom of the will. We are big believers in the idea that humans have wills; we can have intentions, goals, and so on. But we don't know whether it's free will. That is an issue related to the physics of the brain. As far as the evidence available to us goes, computers cannot have wills.

DT: The book’s subtitle is “Artificial Intelligence Without Fear.” What is the specific fear you are referring to?

BS: That was provoked by the literature on the singularity, which I know you know: Nick Bostrom, David Chalmers, Elon Musk, and the like. When we spoke to our colleagues in the real world, it became clear to us that there was indeed a certain fear among the population that AI would eventually take over and change the world to the detriment of humans.


We have quite a bit in the book on Bostrom-type arguments. The core argument against them is that if a machine cannot have a will, it cannot have ill will. Without ill will, there is nothing to be afraid of. Now, of course, we can still be afraid of machines, just as we can be afraid of weapons.

But that's because the machines are run by people with bad intentions. In that case, it's not the AI that's bad; it's the people who build and program the AI.

DT: Why does this idea of the singularity or artificial general intelligence interest people so much? Whether they fear it or are fascinated by it, there is something about this idea that appeals to people on a broad level.

JL: There is an idea that arose at the beginning of the 19th century and was then declared by Nietzsche at the end of that century: that God is dead. Since the elites of our society are no longer Christians, they needed a replacement. Max Stirner, who like Karl Marx was a student of Hegel, wrote a book about this and said, "I am my own god."

If you are God, you also want to be a creator. If you could create a super intelligence, then you are like God. I think it has to do with the hyper-narcissistic tendencies in our culture. We don’t talk about this in the book, but that explains to me why this idea is so appealing in our time when there is no transcendent entity to return to.


DT: Interesting. So, following that thought, the idea is that creating AI — or aiming to create AI — is a narcissistic act. In that case, the concept that these creations would somehow become more powerful than us is a nightmarish twist on that: the child that kills the parent.

JL: Pretty much, yes.

DT: What would be the final outcome of your book for you if everyone was convinced by your arguments? What would that mean for the future of AI development?

JL: It’s a very good question. I can tell you exactly what I think would happen – and will happen. I think in the medium term people will accept our arguments, and this will lead to better applied mathematics.


All the great mathematicians and physicists were fully aware of the limitations of what they could achieve mathematically. Being aware of this, they focused only on certain problems. If you know the limitations well, then you go looking for these problems and solve them. This is how Einstein found the equations for Brownian motion, how he came up with his theories of relativity, and how Planck solved blackbody radiation and thus initiated the quantum theory of matter. They all had a good instinct for which math problems can be solved and which cannot.

If people take in the message of our book, we think they will be able to develop better systems, because they will focus on what is really achievable — and stop wasting money and effort on approaches that cannot work.

BS: I think part of the message is already getting through, not because of what we say, but because of the experiences people have when they give large amounts of money to AI projects and the projects then fail. I assume you know about the Joint Artificial Intelligence Center. I can't remember the exact amount, but I think it was about $10 billion that they gave to a famous contractor. In the end, they got nothing from it. They canceled the contract.

(Editor's note: JAIC, a unit of the United States Armed Forces, aimed to accelerate the "delivery and adoption of AI to achieve mission impact at scale." In June of this year, it was merged with two other offices, and JAIC ceased to exist as its own entity.)

DT: What do you think is the most compelling argument you make in the book?

BS: Every AI system is mathematical in nature. Since we cannot mathematically model consciousness, will, or intelligence, these cannot be imitated by machines. Therefore, machines will not become intelligent, let alone superintelligent.

JL: The structure of our brains allows only limited models of nature. In physics, we choose a subset of reality that fits our mathematical modeling capabilities. This is how Newton, Maxwell, Einstein, and Schrödinger came up with their famous and beautiful models. But these can only describe or predict a small number of systems. Our best models are the ones we use to develop technology. We are unable to build a complete mathematical model of living nature.

This interview has been edited for length and clarity.
