AI and humans have the same problem that humans had with the western black rhinoceros: a mismatch of goals

October 22, 2021, by Kinok

An extremely interesting conversation between one of the most knowledgeable experts, Max Tegmark, and one of YouTube's best interviewers, Lex Fridman.

0:00 — Introduction

2:49 — Artificial intelligence and physics

16:07 — Can AI discover new laws of physics?

24:57 — AI safety

42:33 — Extinction of the human species


1:15:05 — Autonomous weapons

1:30:28 — The man who prevented nuclear war

1:40:36 — Искиш

1:54:14 — AI alignment

2:00:16 — Consciousness

2:09:20 — Richard Feynman

2:13:30 — Machine learning and computational physics

2:24:28 — AI and creativity

2:35:42 — Aliens

2:51:25 — Mortality

[00:02:08]

I believe that the algorithms governing our interactions on social networks already possess intelligence and power far superior to those of any individual person. Now is truly the time to think about this and to set the trajectory for how technology and humanity will interact in our society. I think the future of human civilization may well be at stake precisely because of this question of the role of artificial intelligence in our society.

[00:30:31]

What scares me to death is that we are simply going to keep building ever larger systems that we still do not understand, until they become as smart as people. Something could go wrong, right? That is why I think it is simply a reckless path. Unfortunately, if we really do succeed in creating artificial general intelligence this way, we will have created it without understanding how it works.

[00:33:37]

The alternative is the Intelligible Intelligence approach: building AI that we can actually understand.

[00:57:09]

It is possible that someone will manage to create a man-made pandemic that spreads as easily as COVID but kills, as the plague once did, a third of those infected. What do we do about that? …

We must be prepared for this and act instantly, as South Korea did with COVID: having learned its lesson from the earlier SARS epidemic, it lost only about 500 lives out of a population of 50 million.

[01:02:57]

Propaganda is not new, and the incentives to manipulate people are not new either. So what is new here? What is new is that machine learning has met propaganda. That is why everything has become so much worse. You know, some people like to blame particular individuals; in my liberal university bubble, many blame Donald Trump and say it was all his fault. I see it differently. I think Donald Trump simply turned out to be the first influential person of the machine-learning-algorithm era who possessed an extremely high level of skill at this manipulative game.

[01:03:36]

I do not want to scold them. I have many friends who work at these companies: good people who deployed machine learning algorithms simply to increase profits a little by maximizing the time people spend watching ads. They completely underestimated how effective their algorithms would be. It was, once again, a black box not amenable to human understanding, and so it took a long time to figure out why it was working, and also how harmful it was to society. Because, of course, machine learning discovered that the most effective way to glue your attention to that little advertising rectangle is to show you things that provoke strong emotions: anger, offense, and so on. And whether they are true or not does not actually matter.

[01:05:44]

Machine learning, and technology as a whole, is neither evil nor good. It is just a tool that you can use for good things or for bad. And as it happens, machine learning in the news business is mainly used by the big players, big tech, to manipulate people into watching as many ads as possible, which has had the unintended consequence of genuinely damaging our democracy and fragmenting it into filter bubbles.

[01:21:48]

The first is the complete destruction of democracy when our information flows are manipulated by machine learning. The second is the start of an arms race in lethal autonomous weapons.

[01:49:02]

He thinks on a longer horizon. He sees much more clearly than most people that the game of Russian roulette we keep playing with our nuclear weapons is a really bad, really reckless strategy, and that we just keep building these ever more powerful AI systems that we do not understand… He is such a positive visionary. The whole reason he issues these warnings is that he understands, better than most, the cost of failure: there is so much that is wonderful in the future that we, and our descendants, can enjoy if we do not screw it up. And it infuriates me when people try to paint him as some kind of technophobic Luddite. He simply understands the risk: we are creating incredibly competent systems, which means they will always achieve their goals, even when those goals conflict with ours…


Thanks for watching! Give it a like.