Professor Stuart Russell, founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, presents this year's Reith Lectures.
His four lectures, Living With Artificial Intelligence, address the dangers posed by intelligent machines – and propose a way forward.
Last month, he spoke to BBC News technology reporter Rory Cellan-Jones about what to expect.
How did you structure the lectures?
The first drafts I wrote were quite academic, with a heavy emphasis on the intellectual roots of AI and the various definitions of rationality and how they emerged over history, and things like that.
So I restructured them – and we have one lecture that introduces AI and its future prospects, for both good and ill.
Then we talk about weapons, and we talk about jobs.
And then, the fourth will be: “Well, here’s how we avoid losing control of AI systems in the future.”
Do you have a simple definition of what artificial intelligence is?
Yes: machines that perceive and act, and hopefully choose actions that will achieve their objectives.
All of these other things you hear about, like deep learning and so on, are all special cases of that.
But can’t a dishwasher fit that definition?
A thermostat senses and acts and, in a sense, has one very simple rule: "If the temperature is below this, turn on the heat.
"If the temperature is above this, turn off the heat."
So that is a trivial program, and one written entirely by a person, so there was no learning involved.
At the other end of the spectrum, you have self-driving cars, where the decision-making is much more complicated and a lot of learning was involved in reaching that level of decision-making.
But there is no hard and fast line.
We can't say anything below this level doesn't count as AI and anything above it does.
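The thermostat rule above can be written as a complete, hand-coded program in a few lines – which is the point being made: every behaviour is fixed by the programmer, with no learning involved. This is an illustrative sketch; the setpoint value is made up, not from the interview.

```python
# A thermostat as a trivial, fully hand-written rule: no learning involved.
# The 20-degree setpoint is an arbitrary illustrative value.

def thermostat(temperature_c: float, setpoint_c: float = 20.0) -> str:
    """Return 'heat_on' or 'heat_off' for a given temperature reading."""
    if temperature_c < setpoint_c:
        return "heat_on"   # below the setpoint: turn on the heat
    else:
        return "heat_off"  # at or above the setpoint: turn off the heat

print(thermostat(18.0))  # heat_on
print(thermostat(22.0))  # heat_off
```

A self-driving car sits at the opposite extreme: its decision rules are far too numerous and subtle to write out by hand like this, which is why learning is involved.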
And is it fair to say that there has been a lot of progress, especially over the last decade?
In object recognition, for example – which is one of the things we have been trying to do since the 1960s – we have gone from hopeless to superhuman, in some cases.
And in machine translation, similarly, we have gone from hopeless to, in some cases, very good.
So what is the destination of AI?
If you look at what the founders of the field said, their goal was general-purpose AI – which means not a program that is really good at playing Go, or a really good machine-translation program, but something that can do anything a human can do, and probably more besides, because machines have huge bandwidth and memory advantages over humans.
Say we need a new school.
The robots would show up.
Robot trucks, construction robots and construction-management software would know how to build it, could get the permits, could talk to the school district and the headteacher to get the design right – and a week later, you have a school.
And how far along are we on that journey?
I would say we are still a long way off.
Clearly, there is still much to be done.
And I think the biggest missing piece is the ability to make complex decisions.
So when you think about the example of building a school – how do you get from the goal of wanting a school, through all the negotiations, through all the construction? How do humans do that?
Fundamentally, humans are able to reason at many levels of abstraction.
So we might say: “Well, the first thing we need to find is where to put it. And how big should it be?”
We don't start out thinking, "Should I move my left foot first or my right foot?" We focus on the high-level decisions that need to be made.
You paint a picture of an AI field that has made great progress – but is not as far along as people think. And yet, are we in a dangerous situation?
I think so, yes.
There are two reasons to pay attention.
One is that even though our algorithms are nowhere close to general human capabilities, when you have billions of them running they can still have a very big effect on the world.
The other reason to worry is that it is entirely plausible – and many experts think it likely – that we will have general-purpose AI within our lifetimes, or the lifetimes of our children.
I think if general-purpose AI is created in the current context of superpower rivalry – you know, whoever rules AI rules the world, that kind of mindset – then I think the outcomes could be the worst possible.
Your second lecture is about the military uses of AI and the dangers involved. Why does that deserve a whole lecture?
Because I think it’s really important and it’s really urgent.
And the reason for the urgency is that the weapons we talked about six or seven years ago are now being manufactured and sold.
So, for example, in 2017 we made a film called Slaughterbots, featuring a quadcopter about 3in [8cm] across that carries an explosive charge and can kill people by getting close to them and detonating.
We first showed it at diplomatic meetings in Geneva. And I remember the Russian ambassador basically sneering and sniffing: "Well, you know, this is just science fiction, we don't have to worry about these things for 25 or 30 years."
I replied that robotics colleagues had said no – they could put together a weapon like this in a few months, with a few graduate students.
And the following month – so about three weeks later – the Turkish manufacturer STM [Savunma Teknolojileri Mühendislik ve Ticaret AŞ] actually announced the Kargu drone, which is more or less a Slaughterbot.
What do you hope the reaction to these Reith Lectures will be?