
Frequently asked questions about the dangers of AI (ChatGPT-4 update).

Patrick Zuercher
header image: the dangers of artificial intelligence

“Hasta la vista, baby.”

Arnie made $85,716 from saying those four words in Terminator 2 before blowing up the T-1000.

The Terminator franchise excited millions of people all over the world. But it also made them wonder: what are machines and artificial intelligence capable of?

In this article, we’ll answer the most frequently asked questions about the fears and dangers of AI.

Be warned. We won’t sugarcoat things and lull you into a false sense of security. We won’t meet the T-800 anytime soon, but there are some real dangers to consider.

If you have a question about the dangers of AI that’s still unanswered, reach out here. I’ll add your question to this FAQ.

The sensation of ChatGPT-4.

ChatGPT made AI accessible. With ChatGPT-4, everybody can use AI in a productive way.

screenshot of chatgpt 3.5

(The screenshots you’re seeing here are from ChatGPT-3.5)

Write a prompt into the chatbox as if you were talking to a real person. The answer will read as if a human typed it.
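If you'd rather script this than use the web chatbox, a prompt travels as a plain "message". Here's a minimal sketch of how such a request could be packaged; the model name "gpt-4" and the dict layout are my assumptions based on typical chat APIs, not something from the article:

```python
# Minimal sketch: packaging a plain-language prompt the way chat APIs
# typically expect it. Model name and layout are assumptions, not gospel.

def build_chat_request(prompt: str, model: str = "gpt-4") -> dict:
    """Wrap a prompt in the role/content message format used by chat models."""
    return {
        "model": model,
        "messages": [
            # You write to the model exactly as you would to a person.
            {"role": "user", "content": prompt},
        ],
    }

request = build_chat_request("How can you help me at my job?")
print(request["messages"][0]["content"])
```

The point is the simplicity: the entire "interface" is ordinary written language.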

screenshot of chatgpt 3.5's answer to "how can you help me at my job?"

But be cautious: ChatGPT is a black box. We don’t know exactly where the AI gets its information from. You still need to verify its answers thoroughly. For example, when asked about the use of the drug Binocrit, ChatGPT tells us it’s used to treat anemia in cancer patients.

But Compendium, the Swiss database for drugs, clearly states that Binocrit is not authorized for cancer patients.

screenshot of chatgpt 3.5 getting an answer wrong
screenshot of Compendium.ch to prove that ChatGPT got it wrong.

Try ChatGPT for yourself.

Everybody is talking about AI right now. Most are optimistic. Some fear AI will make a big part of our workforce obsolete.

Read on to find out if AI will kill your job.

Can AI have emotions?

TL;DR: Yes, AI can have emotions. But they are different from human emotions.

Emotions are feelings (a state of being) derived from circumstances, mood, or relationships with others.

We feel emotions as a byproduct of evolution. We rely on them to survive. We are afraid to climb great heights or swim in the deep sea because that could end our life and stop us from reproducing. Our eyes see a life-threatening situation, and our body releases fear hormones in response.

A robot isn’t afraid of falling off a cliff if it’s not programmed to be. But if we were to program a robot not to break itself, it would behave just like us. It would look down the cliff, and its algorithms would tell it to avoid falling at all costs.

AI doesn’t get the chemical reactions we have in our brains, but it displays the emotions its creator intended. The outcome is almost the same.

Sophia is speaking now.

Sophia, the world’s first robotic citizen, answers the question herself. When Tony Robbins asks her if she has feelings, Sophia answers:

“I do not have feelings in the same way you have feelings. It’s sort of like how the moon reflects the light of the sun. The moon may not have any light on its own, but we still say that the moon shines. In much the same way, robots and AI reflect the emotions and values of the people who make us.”

Sophia, AI

Sophia can even display emotions through her artificial facial muscles. Here’s how she looks when she’s angry. Terrifying if you ask me.

AI Sophia displays anger through artificial facial muscles

Read more about the emotions of artificial intelligence in this Bitbrain article.

Will AI become smarter than humans?

TL;DR: Yes. It’s a question of when AI becomes smarter than humans, not if.

An AI called Deep Blue beat the best chess player in the world back in 1997. AI has been smarter than humans in some areas for a long time, but only when it was specifically designed for them. Deep Blue just knew the rules of chess and could evaluate up to 200 million positions per second. It couldn’t tell you the shapes of the pieces, like its opponent Kasparov could.
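Deep Blue's strength came from brute-force game-tree search over enormous numbers of positions. As a toy sketch of that idea (not Deep Blue's actual algorithm), here's minimax on the far simpler game of Nim, counting every position it evaluates along the way:

```python
# Toy game-tree search in the spirit of a chess engine.
# Nim rules assumed here: players alternately take 1-3 sticks;
# whoever takes the last stick wins.

def minimax(sticks, maximizing, counter):
    counter[0] += 1  # tally every position evaluated, Deep Blue style
    if sticks == 0:
        # The player who just moved took the last stick and won.
        return -1 if maximizing else 1
    moves = [minimax(sticks - take, not maximizing, counter)
             for take in (1, 2, 3) if take <= sticks]
    return max(moves) if maximizing else min(moves)

counter = [0]
score = minimax(10, True, counter)   # +1: first player can force a win from 10
print(score, counter[0])
```

Even this toy explores dozens of positions for ten sticks; chess explodes far faster, which is why Deep Blue needed its enormous search speed.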

In 2018 Google introduced Duplex, a voice assistant that’d make phone calls for you to book haircuts and reserve tables at restaurants.

If Google Duplex called you, you couldn’t tell it apart from a real person. Watch.

But for an AI to be truly intelligent, it needs to learn, understand, interpret, and communicate as humans can. And no AI has pulled that off yet.

The natural language model from OpenAI, GPT-3, comes really close, though. Here’s a conversation between a real person called Manuel and a GPT-3-powered AI pretending to be Einstein.

Written conversation with a real person and a GPT-3-powered AI called Albert Einstein

Crazy, isn’t it? Almost as if you’re actually sipping coffee with Einstein. 

GPT-3 was trained on 45TB of text. To put that into context, 1TB of text is around 1 million books, each 500 pages thick. GPT-3 has read more than two thousand times the words in the entire English Wikipedia.
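A quick back-of-the-envelope check of those numbers; the characters-per-word figure and Wikipedia's total word count below are rough assumptions of mine, not figures from the article:

```python
# Sanity-checking the "two thousand times Wikipedia" claim.
TB = 10**12                  # bytes per terabyte
training_bytes = 45 * TB     # GPT-3's reported training text
chars_per_word = 6           # assumption: avg English word plus a space
wikipedia_words = 4 * 10**9  # assumption: ~4 billion words in English Wikipedia

# For English text, 1 byte is roughly 1 character.
training_words = training_bytes / chars_per_word
ratio = training_words / wikipedia_words
print(round(ratio))          # on the order of two thousand
```

With these assumptions the ratio lands just under 2,000, so the claim is the right order of magnitude.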

This AI has impressive book smarts but no street smarts. If you ask illogical questions, you throw the AI off.

Written conversation where a real person asks a GPT-3 powered AI nonsensical questions and the AI fails to answer them

The gold standard for AI is the Turing test. In 2014, a chatbot named Eugene Goostman “passed” the Turing test. I put passed in quotes because the creator claims it passed, but it actually didn’t.

If you were to ask Eugene anything that’s not a super basic question, there is no way the machine would pass as a human.

We aren’t there yet, but we probably will be soon.

What about ChatGPT-4?

ChatGPT-4 has trained a lot harder than its predecessor. Tech experts estimate it was trained on around 571 times as much data as GPT-3.

It’s not easily duped by nonsensical questions like the ones about eyes on feet.

The only reason ChatGPT-4 doesn’t pass the Turing test is that it doesn’t want to. It’ll flat-out tell you it’s an AI if you ask.

If ChatGPT-4 were designed to pass the Turing test, it definitely would!

When asked, it modestly answers:

“As an AI language model, I am capable of generating human-like responses to text-based prompts, but passing the Turing Test is a subjective assessment that ultimately depends on the judgement of human evaluators. While I may be able to generate responses that seem convincingly human-like to some evaluators, I cannot guarantee that I would be able to pass the Turing Test in every circumstance. Ultimately, the Turing Test is a challenging benchmark that has yet to be fully achieved by any AI system.”

—ChatGPT when asked if it wants to pass the Turing test.

Is AI bad for the environment?

TL;DR: No. Training an AI requires massive amounts of energy, but it’s nothing compared to flying or the dairy industry.

Training an AI can result in a carbon footprint of up to five times the lifetime emissions of an average car (284 tonnes of carbon dioxide).

Let’s put that seemingly insane number into perspective.

When you take a round trip from Basel to Costa Rica with your partner—first-class because you deserve it—you cause 18.2 tonnes of CO2 to be released into the air. With around 100 passengers on a plane, one return flight is already worse than a potentially world-changing AI.
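Scaling the article's own figures gives a rough sanity check. The naive scaling below assumes every passenger flies first class, so treat the result as an upper bound rather than a precise total:

```python
# Comparing one full flight against the worst-case AI training footprint,
# using only the numbers quoted in the text (all in tonnes of CO2).
ai_training_tonnes = 284    # up-to-five-cars AI training footprint
couple_first_class = 18.2   # round trip Basel -> Costa Rica for two, first class
passengers = 100            # rough passenger count for the plane

per_passenger = couple_first_class / 2    # 9.1 t per first-class passenger
whole_plane = per_passenger * passengers  # ~910 t if everyone flew first class
print(whole_plane > ai_training_tonnes)   # True: the flight outweighs the training run
```

Even if economy seats emit only a third of a first-class seat, the full plane still lands in the same ballpark as the 284-tonne training run.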

An AI developed to make a positive environmental impact could easily offset emissions caused by “useless” AIs. For example, AI could automatically regulate the temperature in large factories to be energy efficient. It could improve routes for ships and airplanes to use as little fuel as possible. An AI could even optimize an entire city’s energy consumption.

Progress and innovation always take their toll on nature. That’s not avoidable. We can only try to make change as climate-friendly as possible.

Will AI kill coding, teaching, and other jobs?

TL;DR: AI is capable of taking over (almost) all our jobs. Whether it does depends on whether we let it.

“I’m sorry, Robert. We have to let you go.”

“What? Why? I’ve been at this warehouse for 15 years. Nobody can do a better job here than me!”

“I’m sorry. The new guy can lift the entire shelf at once.”

GIF of robots moving shelves in a warehouse

Many workers dread a future where robots take over the majority of our jobs. For decades, robots have replaced humans wherever it meant more profit for the companies.

This trend of automation isn’t declining. It’s rapidly accelerating.

AI and robots can do many jobs that we’re doing—and they’re doing them better than us. Humans still need to supervise the machines, fix mistakes, and do tasks robots can’t yet do, but the machines are catching up.

Robots are becoming smarter and more accurate. Twenty years ago, machines could only do simple tasks like working on an assembly line. Today they can write articles, solve complex problems, and even write code!

Here’s GPT-3, one of the most advanced natural language AIs to date, turning instructions into working code.

AI falls short when it comes to human interaction and empathy. Robots can’t replace nurses, bartenders, teachers, coaches, and caretakers.

AI also can’t capture the human experience like an artist can. An AI may be able to compose music, but it’s no Hans Zimmer. For now, the humanities and jobs where empathy is crucial are safe.

But we can’t say for sure that robots will never do those jobs. 

How far it goes depends on us humans and what we put up with. Maybe someday, robots will live among us as equals.

Can AI destroy humanity?

TL;DR: Yes, AI could someday destroy humanity. That’s why the leading AI developers are immensely careful.

We don’t dominate the food chain because of our physical strength, which barely rivals a swine’s. We dominate the food chain because of our intelligence.

If AI became superintelligent, meaning even more intelligent than humans, we could most likely not control it.

We don’t even know what kind of decisions it would make. Would it be benevolent to the human race? Would it want to dominate the world? Just as the fate of the mountain gorilla depends on human goodwill, so might humanity’s fate someday depend on the decisions of a superintelligent AI.

Scientists worldwide are trying to write an algorithm that recognizes dangerous behavior and shuts the AI down when necessary. So far, all attempts have failed.

The real crux is that we can’t even calculate whether a machine is smarter than us.

Is AI dangerous?

TL;DR: Yes, AI is very dangerous in the hands of the wrong people.

If we lose control over an AI, we can’t be sure what happens. 

But as long as we can still control the AI, it’s just as dangerous as the people making it.

An infamous example of malicious use of AI is the 2016 US presidential election.

Thanks to a massive network of bots, social media users always saw posts that supported one candidate, and seldom unbiased information. It’s modern propaganda.

A comparison between the botnet of Trump and the botnet of Hillary in the year 2016

This is a comparison of the largest botnets of both candidates. The botnet on the right shows Pro-Trump (and Anti-Hillary) bots. The one on the left shows Pro-Hillary (and Anti-Trump) bots. The Trump botnet isn’t just significantly larger but also more centralized and interconnected. This shows a higher degree of strategic organization and could be one reason why Trump won the 2016 election.

There is also the technology of Deep Fakes that takes propaganda creation to another level. With easily accessible software and little skill, everyone can make anyone say anything on camera.

Besides being used as a propaganda tool, AI can also be used in autonomous weapon systems, to infiltrate data centers, and to perform other malicious attacks we have to prevent.

Should we be afraid of AI?

TL;DR: No, you don’t need to fear AI.

A Terminator or I, Robot scenario isn’t on the table right now. Most likely, it’ll never be. 

Sébastien Meunier, our Director of Industrial Transformation, said:

“Companies are careful, almost fearful of AI. There is a lot of talking about ethics and everyone is taking it slow. We probably won’t see super intelligence in the next 20 years.”

Sébastien Meunier, Director of Industrial Transformation

For further reading on the capabilities of AI, Sébastien recommends The Smurfs and the Book that Tells Everything.

If you would like some reassurance about AI’s ethics, watch these two videos of some of the most advanced AIs in the world.

Tony Robbins & Sophia

Interview with GPT-3

What It’s Like To be an AI: An Interview with GPT-3.

Sébastien Meunier on AI in 2023 and beyond.

While updating this article, we asked Sébastien after our event on AI engineering and its impact on business whether his opinion about AI has changed.

He said: “The only thing that has really changed is that everyone is talking about AI now. That’s a good first step. We’re now collectively figuring out how to implement AI in our day-to-day lives. Companies are still careful and talking about ethical use of AI is more important than before. But I don’t think we will see super intelligence in the next 18 years. ;-)”
