While artificial intelligence (AI) is becoming part of everyone's life, whether at the office, at school or at home, few people question where it comes from or what impact it may have on future generations. There is no shortage of blockbusters depicting apocalyptic scenes of humanoid robots taking over the world, yet none of them tackle the challenge machines might pose to our struggle toward a more equal and humanist society. Recent studies, however, suggest the danger is real: they highlight that AI tends to be sexist and racist, and that beyond the way robots act, the way they look and what they are created for also reflects and reinforces a stereotypical civilization.
The information-processing programs at the basis of AI use algorithms to connect pieces of information and extract the requested data. We are all acquainted with predictive search engines, which use the same technique to answer our questions. A simple test in Google may give you an idea of how close-minded algorithms can be: search for images of a CEO, a manager or a doctor and the white male is clearly prominent, while the roles of teacher and secretary, according to Google, seem reserved for the white, good-looking female, not rarely with quite sexualised features. When AI is built on data that reflects our own unequal and stereotypical society, you are essentially programming machines to learn our own biases.

While some researchers can no longer deny the problem and try to compensate for it, most actors in this highly competitive industry can be expected not to care about the consequences of their programs: doing so would cost time and money, and, above all, most of the engineers developing these robots (typically benefiting from socially dominant norms) do not feel concerned by biases that will de facto reinforce their own privileges. Moreover, as machines learn and adapt from their original coding, they become less transparent and predictable, which makes it difficult to keep track of the complex interactions between the original algorithms and the data. You thus lose control rather quickly over the output and the resulting actions of your revolutionary android. Robots cannot instinctively learn and follow ethical rules unless those rules are programmed; they do not question themselves or revise their "mental" constructions on the basis of personal experience. You cannot educate robots into changing their mentality, yet robots seem increasingly set to "educate" future generations of humans and progressively replace them in decision-making processes.
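The mechanism described above can be made concrete with a minimal sketch. The corpus, the `predict_occupation` helper and the pronoun–occupation pairs below are all hypothetical and purely illustrative: a toy "predictive" model that simply returns the most frequent association in its training data will faithfully reproduce whatever skew that data contains.

```python
from collections import Counter

# Hypothetical toy corpus mirroring a skewed source (illustration only).
corpus = [
    ("he", "doctor"), ("he", "doctor"), ("he", "engineer"),
    ("she", "nurse"), ("she", "secretary"), ("she", "nurse"),
]

def predict_occupation(pronoun, data):
    """Return the occupation most often paired with `pronoun` in `data`."""
    counts = Counter(occ for p, occ in data if p == pronoun)
    return counts.most_common(1)[0][0]

# The "model" has learned nothing but the bias of its training data.
print(predict_occupation("he", corpus))   # reflects the skew toward "doctor"
print(predict_occupation("she", corpus))  # reflects the skew toward "nurse"
```

Real systems are of course vastly more complex, but the principle is the same: no step in this pipeline questions the data, so the skew passes straight through to the output.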
If no attention is paid to the biases AI assimilates when it is fed stereotypical data, a vicious circle arises: human users receive the information, possibly without questioning it, and simply copy-paste it into the next generation of robots. When that happens, sexist ideas and misconceptions about all kinds of minorities, who are absent or far less present in the conception of AI, are maintained and, through potentially uncontrolled mechanisms, reinforced. While we strive for a more equal society, computer science may thus catch up with us by creating a new generation of decision-makers and masters which, instead of smoothing out stereotypes, enforces them.
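The vicious circle described here can be sketched as a feedback loop in which each generation of a system is trained on the previous generation's output. The `amplify` function and its 10% amplification factor are entirely hypothetical, chosen only to show how even a modest per-generation distortion compounds:

```python
def amplify(majority_share, rounds):
    """Feed a system's skewed output back in as training data.

    `majority_share` is the fraction of outputs favouring the majority
    class; each round the system over-represents it by a hypothetical
    10%, and that output becomes the next generation's training data.
    """
    for _ in range(rounds):
        majority_share = min(1.0, majority_share * 1.1)
    return majority_share

print(amplify(0.6, 5))  # a 60/40 skew grows past 95% in five generations
```

The numbers are invented, but the shape of the dynamic is the point: without an external correction step, the loop converges on total exclusion of the minority class.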
Besides its sexist way of thinking, our humanoid robot friend might well become more of an enemy to women than to men through the role he will be given. The potential competition from AI in the labour market is a well-documented issue, and the risk is clearly higher for women. We may expect that the first industries to successfully replace their human workforce with robotics, namely the service industries, will be those that mainly employ women and, in Western countries, especially women with foreign backgrounds, who already face higher risks of precariousness.
To make robots more acceptable, they need to be more human-like, and in our gendered societies this unfortunately means robots need to be gendered. Today, the typical gynoid (a humanoid robot gendered female) exhibits the most stereotypical characteristics of women imaginable. As if this weren't enough, a new generation of "hot lady robots" is now becoming a serious trend; just google it and you may well be surprised by the lack of creativity of our so-called computer-engineering geniuses. The integration of human characteristics into robotics gave rise to a research paradigm known as the Computers Are Social Actors (CASA) theory, which suggests that people react to computers and robots much as they interact with other social entities. Indeed, several studies have shown that female robots are generally expected to be compliant, to have communal personality traits, to be more focused on humans than on themselves and thus to be at the service of their owner. Male robots, on the other hand, are more generally seen as able to control their environment, sometimes with more power than human beings. People tend to trust male robots (even when this is limited to a male voice) more when it comes to technical support, as they are considered more intelligent and autonomous. It needs no further explanation to imagine the damaging effect of enforcing these stereotypical concepts within computer engineering. Unfortunately, this is exactly what seems to be happening: when developing service-oriented programs and robots, female features (physical traits, voice, …) are often preferred, while guard robots are typically represented through male features. Before we know it, the robot community will take us back to the '50s.
As daily interactions with humanoid robots are no longer fiction, and as we expect those interactions to shape the way humans gather and process information while becoming increasingly dependent on AI, it is time to give this issue more weight on the feminist agenda. Concretely, the ethical risks of programming AI should be addressed in the standard training modules of computer science and engineering. Women should be better represented in the field in order to ensure a gender balance among the creators of our future; the same applies to minorities and less privileged groups in our society. Laws and oversight bodies should be set up so that AI can be screened and corrected for sexist, racist and other biases. All these measures should be as self-evident as the need to raise awareness among the next generations so that they remain critical toward AI. The more humans support equality, the more humanistic RoboCop and his friends will be. Time is running short, however: the first humanoid robots are standing at the door and might well outpace our good intentions…
See: Stephanie Pappas, "Bad News: Artificial Intelligence Is Racist, Too", Live Science, April 13, 2017; and "Just like humans, artificial intelligence can be sexist and racist", Wired UK, http://www.wired.co.uk/article/machine-learning-bias-prejudice
Tay, B.T.C., Park, T., Jung, Y., Wong, A.H.Y. (2013). When Stereotypes Meet Robots: The Effect of Gender Stereotypes on People's Acceptance of a Security Robot. In: Harris, D. (ed.) Engineering Psychology and Cognitive Ergonomics. Understanding Human Cognition. EPCE 2013. Lecture Notes in Computer Science, vol. 8019. Springer, Berlin, Heidelberg.