We are in the process of using machine learning, which is just glorified statistics, to analyse more and more aspects of our society and our very beings. But what if we don't like the outcome? Self-reflection can be painful. Of course, we can always deny results we don't like, and a healthy dose of scepticism never hurts. We can argue that machine learning learns from human data and is thus bound to acquire human biases. But what if those biases echo the truth? Will humanity be comfortable looking into the mirror it just invented?
Comic transcript
Panel 1:
P and O are standing in front of a laptop that is connected to a GPU cluster.
P: ...and using this artificial neural network, it scans millions of posts from social networks to determine the most controversially discussed question in our society.
Panel 2:
Laptop: If you stack one lasagna onto another lasagna, is the result considered one lasagna or two lasagnas?
Panel 3:
O: Looks like your algorithm needs a bit more work.
P: I’m afraid the algorithm is not the problem here...