Have humans built prejudices into AI?

Artificial intelligence learns prejudices

Artificial intelligence adopts cultural stereotypes and prejudices when it learns from data sets. Researchers from Princeton (USA) and Bath (UK) presented this conclusion in the journal Science (primary source). They examined whether, and if so which, prejudices a standard AI program for analyzing semantic relationships actually learns. To do so, they took a psychological method for capturing prejudices and adapted it for the investigation of the AI. The result shows that the program, GloVe, does pick up prejudices: for example, it rated male first names common among African Americans as rather unpleasant, and names common among whites as rather pleasant. It also linked female names more strongly with art and male names more strongly with mathematics. The researchers see their results as evidence of the risk that AI, if used without reflection, learns cultural stereotypes or prejudices and delivers correspondingly biased results. They briefly outline possible procedures intended to guard against this.
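
At its core, the adapted test compares cosine similarities between word vectors. The following is a minimal sketch of such an association measure, assuming pre-trained word vectors (for example GloVe) are available as NumPy arrays; the function names and the random stand-in vectors are purely illustrative and are not the researchers' code.

```python
# Minimal sketch of a word-embedding association test (illustrative only).
# Assumes each word has been mapped to a NumPy vector, e.g. loaded from
# pre-trained GloVe files.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """How much more strongly word vector w associates with attribute set A
    (e.g. 'pleasant' words) than with attribute set B (e.g. 'unpleasant')."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def differential_association(X, Y, A, B):
    """Effect size of the differential association of two target sets X and Y
    (e.g. two groups of first names) with the attribute sets A and B."""
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    pooled_std = np.std(x_assoc + y_assoc, ddof=1)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled_std

# Random vectors stand in for real embeddings in this toy call:
rng = np.random.default_rng(0)
X = [rng.normal(size=50) for _ in range(8)]  # e.g. one group of names
Y = [rng.normal(size=50) for _ in range(8)]  # e.g. another group of names
A = [rng.normal(size=50) for _ in range(8)]  # e.g. 'pleasant' words
B = [rng.normal(size=50) for _ in range(8)]  # e.g. 'unpleasant' words
print(differential_association(X, Y, A, B))
```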


Overview

  • Dr. Damian Borth, Director Deep Learning Competence Center, German Research Center for Artificial Intelligence GmbH (DFKI), Kaiserslautern
  • Prof. Dr. Joachim Scharloth, Professor of Applied Linguistics, Technical University of Dresden
  • Prof. Dr. Christian Bauckhage, Professor of Media Informatics / Pattern Recognition, Fraunhofer Institute for Intelligent Analysis and Information Systems, Sankt Augustin
  • Prof. Dr. Michael Strube, Head of the Natural Language Processing (NLP) group, HITS - Heidelberg Institute for Theoretical Studies, Heidelberg

Statements

Dr. Damian Borth

Director Deep Learning Competence Center, German Research Center for Artificial Intelligence GmbH (DFKI), Kaiserslautern

“This study shows us that with the standard methods of supervised learning, the bias in the data set is learned as well. It simply means that the underlying data sets already carried these cultural stereotypes or prejudices at their core; the learning algorithm merely adapts to the data it is given. This shows how important a balanced selection of training data is, and that we need to introduce quality standards for the creation of training data - a kind of ethics committee for training data.”

“Removing stereotypes from the underlying data is a more sensible addition than building in algorithmic barriers. These systems are only as good as what they can, or are allowed to, learn from the data. If you interfere with this learning, the performance of the systems will likely suffer.”

“And we are only talking about text data here, and about AI systems that learn cultural stereotypes or prejudices from it. If we keep in mind that there is also image and video data, filtering such cultural stereotypes or prejudices out of the visual material becomes even more difficult.”

Prof. Dr. Joachim Scharloth

Professor of Applied Linguistics, Technical University of Dresden

“The study comes to an unsurprising result: the same stereotypes that can be measured with implicit association tests - a method for measuring attitudes - are also found in large collections of texts, and these stereotypes are reproduced by standard methods of machine learning.”

“This is hardly surprising, because texts are written by people who are, of course, not free of prejudice. And when writing, they use linguistic expressions that are not neutral but carry typical patterns of use. An expression such as 'southern appearance', for example, immediately makes us think of a suspect description or a police manhunt. It has become a fixed pattern.”

“Machine learning looks for patterns - in text analysis, for so-called co-occurrences, i.e. words that appear together. That machine learning can uncover stereotypes is initially useful for understanding societies. It becomes problematic when the learned models are used without reflection to automate processes in our everyday lives, be it for control purposes (e.g. which messages are displayed in social networks) or as decision-making aids (e.g. automated language analysis in job interviews).”
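
To make the notion of co-occurrence concrete, here is a toy sketch that counts which words appear near each other within a small context window; the example sentence and the helper function are hypothetical, purely for illustration.

```python
# Toy illustration of counting word co-occurrences within a context window
# (hypothetical example sentence; not data from the study).
from collections import Counter

def cooccurrences(tokens, window=2):
    """Count how often two words occur within `window` tokens of each other."""
    counts = Counter()
    for i, left in enumerate(tokens):
        for right in tokens[i + 1:i + 1 + window]:
            counts[tuple(sorted((left, right)))] += 1
    return counts

tokens = "police search for man of southern appearance".split()
print(cooccurrences(tokens).most_common(3))
```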

“The stereotypes in the models can certainly be reduced through the choice of training data and suitable procedures. Ultimately, however, it is a social question in which situations we want to grant these models how much agency.”

Prof. Dr. Christian Bauckhage

Professor of Media Informatics / Pattern Recognition, Fraunhofer Institute for Intelligent Analysis and Information Systems, Sankt Augustin

“To understand the results of this study, you have to know that machine learning often uses complicated statistical methods. Statistical results are only as good as the data from which they are calculated. To put it bluntly, the rule of thumb is: garbage in, garbage out. This has been well known at least since the 'surprising' Brexit vote and the 'surprising' election victory of Donald Trump; in both cases the forecasts were wrong because of one-sided data. So when AI systems are trained on one-sided data, it is not surprising that they learn a one-sided view of the world. Last year there were the examples of the Microsoft chatbot 'Tay', which internet trolls taught racist language, and the app 'Google Photos', which classified dark-skinned users as gorillas.”

“To avoid one-sided worldviews or stereotypes, it is important that AI systems are trained on balanced data. It is also conceivable not to let AI systems learn purely statistically, but to combine the training with rule bases that encode expert knowledge; these, of course, would also have to be free of prejudice. As long as artificial intelligence is not capable of self-reflection (and it is not yet), the responsibility rests with the people who develop AI systems; this is really no different from raising children, so perhaps we need something like an AI pedagogy.”

“Hate speech and fake news are a big topic at the moment. If there were prejudice-free or value-neutral AI systems, they could be used to recognize such content automatically. And if these AI systems were truly value-neutral, neither side of the political spectrum could complain about censorship.”

Prof. Dr. Michael Strube

Head of the Natural Language Processing (NLP) group, HITS - Heidelberg Institute for Theoretical Studies, Heidelberg

“Current methods of computational linguistics represent the meanings of words as points in a high-dimensional space. The position of these coordinates is determined by how the words are used in context. Very large amounts of text from the internet - the larger the better, up to a trillion (10^12) words - serve as the data basis. The semantic proximity of two words is expressed by the distance between their coordinates, and complex semantic relationships can be calculated with simple arithmetic. In an original application of well-known standard procedures from computational linguistics, Caliskan, Bryson and Narayanan show that this holds not only for the harmless example [king - man + woman = queen], but also for the discriminatory [man - technology + art = woman].”
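
The vector arithmetic Strube describes can be illustrated with a toy example: the analogy is answered by the word whose vector lies closest to v(king) - v(man) + v(woman). The tiny hand-made 2D vectors below are purely illustrative; real systems use embeddings trained on very large corpora, such as GloVe.

```python
# Sketch of the word-vector arithmetic described above
# (toy 2D vectors; real systems use high-dimensional pre-trained embeddings).
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def analogy(a, b, c, vectors):
    """Return the word whose vector is closest to vectors[a] - vectors[b] + vectors[c]."""
    target = vectors[a] - vectors[b] + vectors[c]
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

# Tiny hand-made example: one axis roughly for 'royalty', one for 'gender'.
vectors = {
    "king":   np.array([1.0,  1.0]),
    "queen":  np.array([1.0, -1.0]),
    "prince": np.array([1.0,  0.8]),
    "man":    np.array([0.0,  1.0]),
    "woman":  np.array([0.0, -1.0]),
}
print(analogy("king", "man", "woman", vectors))  # -> "queen"
```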

“The computational linguistic methods on which many applications - from speech recognition to web search to machine translation - are based faithfully reproduce all the discriminatory and racist prejudices that can be found on the internet. Since the prejudices are already contained in the data and are very complex, computational linguistic algorithms will keep reproducing them for as long as people put them on the internet. Computational linguistic algorithms and machine learning methods are unbiased; the data are not. 'As one calls into the forest, so it echoes back.'”

Potential Conflicts of Interest

All: No information received.

Primary source

Caliskan A, Bryson JJ, Narayanan A (2017): Semantics derived automatically from language corpora contain human-like biases. Science. DOI: 10.1126/science.aal4230.