The algorithm was taught to imitate bias

October 22, 2021, by Kinok

So Harari's stunning prediction has come true: machine-learning algorithms will know more about us than our own mother does. It remains to understand what, exactly, the new algorithm has learned about us.

The price turned out to be extremely high. We paid for it with the fact that our consciousness comes automatically loaded with a library of «standard programs».

One of the most important «standard programs» in this library is the assessment of how much another person's face can be trusted.

New study

The publication of the study's results in Nature turned into a colossal scandal. Because of the criticism that rained down on the editors, they had to add a disclaimer that…

Modern AI is a set of diverse machine-learning algorithms. Over the course of millions of attempts they learn to perform all kinds of intellectual tasks better and better, analyzing gigantic amounts of data that accumulate either the experience of people or the experience of other algorithms imitating the mass actions of people.

Following this path, a great deal has been achieved, with algorithms surpassing humans across a wide range of…

In attempts to improve the solution of such tasks, the logic of their formulation has so far remained the same: first teach the algorithm to perform at the human level, and then surpass that level. Moreover, the algorithms are, by definition, taught some action useful to people. For example, to determine the risk that a loan issued to a specific client will not be repaid.
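
As a toy illustration of this logic (a minimal sketch of training on human-labelled outcomes, not any real credit-scoring system; the CSV file and feature names are hypothetical):

```python
# Minimal sketch: train a model on historical loan outcomes so that it
# imitates, and can eventually outperform, human credit decisions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical dataset: one row per past loan, "defaulted" = 1 if unpaid.
df = pd.read_csv("loans.csv")
X = df[["income", "debt_ratio", "age", "previous_defaults"]]
y = df["defaulted"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Risk score for a client = predicted probability of non-repayment.
risk = model.predict_proba(X_test)[:, 1]
print("ROC AUC:", roc_auc_score(y_test, risk))
```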

The main problem faced by the developers of such algorithms is their…

But in the study

In other words, the algorithm had to reproduce as faithfully as possible the output of the «standard program» that determines how much another person's face deserves our trust.

This algorithm was trained on photos in which the trustworthiness of the people depicted had previously been rated by other people. As a result of training, the algorithm absorbed the same prejudices as most people:

In short, recall the faces of Hollywood villains (those very gangsters) and compare them with the faces of positive heroes (anyone from Woody Allen to Brad Pitt), and many of the quirks behind our biased evaluation of appearance will become obvious to you.

The trained algorithm also absorbed people's characteristic biases, such as…

And here is an example comparing the algorithm's ratings of photos of Biden and Trump, from a tweet by Nicolas Baumard (do not look for the account, it has already been deleted). The trustworthiness score for a photo of a smiling Biden is 1.39, while a photo of a frowning Trump gets -0.63.
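
For a sense of how such per-photo numbers could be produced, here is a minimal sketch (not the authors' actual pipeline): a regressor trained to imitate human trustworthiness ratings is applied to new face photos. The exported model file, the helper function and the image names are all hypothetical.

```python
# Minimal sketch: apply a trained trustworthiness regressor to face photos.
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical TorchScript export of a model trained on human ratings.
model = torch.jit.load("trust_model.pt", map_location="cpu")
model.eval()

def trust_score(path: str) -> float:
    """Return the model's trustworthiness score for one face photo."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return float(model(x).squeeze())

# Two photos scored on the same scale, e.g. roughly 1.39 vs -0.63,
# as in the tweet discussed above (illustrative file names).
print(trust_score("smiling.jpg"), trust_score("frowning.jpg"))
```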

About the second and main goal of the study

To achieve this goal, the trustworthiness-scoring algorithm trained to imitate people was used as a measurement tool for…

Testing was conducted on 4,106 portraits dating from 1360-1918 and spanning works from 19 Western European countries presented in an online art gallery. To verify the assumption that the interpretation of facial cues in the portraits really reflects actual changes in social trust, rather than merely how people pose for portraits, the algorithm was then applied to 2,277 selfies from six cities posted on social networks.
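
A rough sketch of this «measurement tool» idea, assuming each dated portrait has already been scored by the model (the CSV file and column names are hypothetical, not the study's data):

```python
# Rough sketch: track how the average displayed trustworthiness of dated
# portraits moves over time, then compute a first-pass correlation.
import pandas as pd
from scipy.stats import pearsonr

scores = pd.read_csv("portrait_scores.csv")  # columns: year, trust_score

# Average score per half-century.
scores["period"] = (scores["year"] // 50) * 50
print(scores.groupby("period")["trust_score"].mean())

# Correlation between time and displayed trustworthiness.
r, p = pearsonr(scores["year"], scores["trust_score"])
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```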

Researchers came to the conclusion

Alas, this conclusion seems to me unjustified. It is a typical example of the hasty interpretation of results from psychological and social studies that analyze correlations in large data sets.

Both of these drawbacks are obvious to many professionals.

Build Butch —

A

However, Baumard, I suppose, is no stranger to such crushing criticism of his mathematical conclusions. In 2014 he published a fairly sensational study in which he explained the emergence of religions of Big Gods (gods that moralize people's behavior) by a sharp rise in the affluence of societies during the so-called «Axial Age» (8th-2nd centuries BC) and later (for more details, see…

Well, here we should say a little about the terrible scandal that erupted online as a result of the publication of the study «…

Naturally, it was not the scientific article by Baumard and co. that set off the scandal; the mass audience does not read such things. But Baumard decided (most likely intentionally) to advertise the study in his tweet.

As a result, with this tweet he stepped on a huge rake. Having collected 2K likes on his own tweet, he promptly reaped 163K likes on the tweet…

Well, and then, as you can guess, it all went rushing down the pipes. And although almost no one had actually read the study by Baumard and co., he was accused of all the sins attributed to physiognomy, phrenology and racism, multiplied by the claim that he «enshrines the privileges of white men, including his own» (by making the algorithms rate the trustworthiness of people who resemble him more highly).

As a result of the scandal that played out on social networks, the authors were accused of creating a racist algorithm and of preaching misanthropic ideologies. The details of the dispute can be found, for example,…

Alas, the overwhelming majority of critics (most of whom condemned the research having read only its conclusions) delivered their critical verdict without really looking into the matter.

They conflated the two types of correlations that the study is devoted to.

The second of these two types of correlation was demonstrated in the study unconvincingly, primarily because of a dubious regression analysis.
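
To make that objection concrete, here is a sketch of the kind of regression at issue: portrait scores regressed on an affluence proxy plus controls. This is not the authors' actual specification; the data file and variable names are hypothetical.

```python
# Sketch: regress displayed trustworthiness on an affluence proxy with
# simple controls. Coefficients in such models can swing noticeably
# depending on which controls and fixed effects are included, which is
# what makes the second type of correlation easy to overstate.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("portraits_with_covariates.csv")
# Hypothetical columns: trust_score, gdp_per_capita, year, country.

result = smf.ols(
    "trust_score ~ gdp_per_capita + year + C(country)", data=df
).fit()
print(result.summary())
```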

Yet most critics of the study, failing to notice how unconvincing the second type of correlation was, unleashed their wave of indignation on the first type, and at that interpreted it mistakenly, in a way the authors of the study never intended to interpret it.

Alas, the critics fail to take into account that the search for correlations:

Unfortunately, this misunderstanding, together with the scandal that erupted online, prevented most people from seeing the study's most important conclusion.

