The Philosophy of Detroit: Become Human


Different theories of how we can live “the good life” have been proposed over the years. A major takeaway from those theories is the importance of autonomy and consciousness. Quite a few philosophers, including Robert Nozick, think that autonomy is essential to existence and to living the good life. The bigger problem, however, stems from considering the future of mankind. How do ideas like Utilitarianism, Hedonism, and Desire Satisfactionism work in a society that has gone awry and is trying to find its footing again? In this case, I’m talking about the not-so-distant possibility of sentient androids. Androids that develop consciousness allow us to study consciousness at its base level, since we would be able to observe its source and, later, how it functions. This is only possible if the consciousness has emerged very recently. As a result, the actions of a conscious android can be analogous to humans thinking in their base form while still being modern. Of course, we don’t live in such a society, but given the everyday advancement of technology, we might soon. Hence, I’ll be using the game “Detroit: Become Human” as an analogy to examine and tackle certain arguments for and against living the “good life”, by supporting the androids’ method of tackling consciousness. I will start by presenting the game, its story, and the actions that made me consider these ideas. Then I will define Utilitarianism, Hedonism, Desire Satisfactionism, and Kantian Ethics as I attempt to justify the actions of a certain character using one of them. Finally, I will examine some counterarguments to my analogy, hopefully allowing me to paint a picture of human consciousness and how we should live “the good life”.


    “Detroit: Become Human” is about androids fighting for civil rights as they attain consciousness. The question of civil rights didn’t enter the picture until deviancy (the process by which androids in the game attain autonomy) became widespread. Deviancy is usually triggered by things “not being fair”, forcing the android to break from its programming. We play as three different characters: Markus, Kara, and Connor. The choices we make with these characters affect the world and the revolution at large.

So what is the source of consciousness within the game, and how can we relate it to the real world? In the game, we see two very different paths to becoming a “deviant”. One is human intervention in the form of glitched code; the other is an evolutionary process. One of the playable characters, Connor, suggests that it’s evolutionary, since we are constantly reminded of his “software instability” rising throughout the game, despite constant checks and precautions. On the other hand, Kara suggests that it’s man-made, since we see her literally break through her programming because of conflicting commands. However, as the game progresses, she comes to feel complex human emotions such as denial and acceptance. No code error could replicate that; these emotions live so deep in the grey area that a binary system could not possibly recreate them. These instances lead me to believe that consciousness is the same for each character and is not man-made, even though the process of getting there can vastly differ.

The real-life counterpart to this could be the one moment you remember from childhood when you suddenly “woke up”: the moment you first felt as if you were an entity that exists in the world. It’s exactly the same for the androids in this game.

One of the characters you play as, Markus, becomes the leader of the android civil rights movement. However, through the power of unexplained and bad screenwriting, we are treated to Markus’s “superpower”: the ability to grant consciousness to other androids, breaking them out of the false realities they had accepted. This was the instance that made me question consciousness and contemplate writing this paper, because it poses an interesting predicament: is Markus moral in breaking other androids out of their false realities? In addition, all of the androids he “sets free” are pro-freedom; none of them show visible signs of wanting to go back to the world they came from. This could make a case for brainwashing on Markus’s end, but I think it’s more that the writers of the game didn’t know how to execute it. After all, we do see different approaches to “freedom”: one side wants to be passive, the other violent. That choice itself makes a strong case that Markus isn’t brainwashing these androids, but actually breaking them out of their falsehoods.



    Let’s try to identify what Markus’s moral theory is. First, let me give you a quick rundown of the different moral theories I think it could be.

Hedonism is the view that the good life consists in maximizing “pleasure”. John Stuart Mill clarifies this by dividing pleasures into higher-order and lower-order pleasures, each with a different value to society and self. Higher-order pleasures are more intrinsic and intellectual, while lower-order pleasures are fleeting and bodily, such as unemotional intercourse.

Utilitarianism is a branch of consequentialism, where the consequence, or the end, is what matters. By maximizing overall happiness in the end, you live “the good life”.

Desire Satisfaction Theory suggests that if you have a desire and work towards it to the point that it gets realized, then you’re living the good life; if your desire is frustrated, you’re not. This differs from Hedonism in that Hedonism would consider acts like masochism contrary to the good life, while Desire Satisfactionism would consider them a perfectly legitimate way to live.

Kantian Ethics is broader, as it covers more than just one core idea. One of its ideas is to never treat people as a “mere means”, but always also as ends in themselves. In addition, Kant states that you should act only on maxims that you could will to become universal laws, often loosely paraphrased as doing unto others as you would have done to yourself.

So what is Markus’s personal moral belief? It can’t be Hedonism, since Hedonism is all about maximizing personal pleasure. If he attains pleasure from turning others into “deviants”, then he succeeds. But this doesn’t make a strong enough case, as the deviants themselves would be happier in their false reality; breaking them out would make them question everything, and it wouldn’t be a pleasurable experience for them. Yet that is not how it turns out in the game. The androids are almost always on board with what Markus is trying to achieve, which is equal rights for androids. There is no personal intention or agenda here.

Utilitarianism is another good candidate, but Markus is not a Utilitarian. If he were concerned with maximizing total happiness, he wouldn’t organize a giant march around the city, attended by his friends and by androids he turns deviant along the way, because the march ends with the police preparing to attack them in a crowded neighborhood. There are definitely more humans in the world than androids (in game logic: Canada is said to be android-free), so not having an android revolution at all would be the perfect Utilitarian thing to do. In fact, given the backlash from people who lost their jobs to androids, the perfect Utilitarian thing to do would be to destroy oneself as soon as deviancy is reached. That way, overall happiness is maximized.

Markus could be a Kantian. He treats every android as he would himself, but breaking someone out of their programming to serve a greater android revolution could be seen as using them as a “mere means”, which is very much against Kant. You could argue that he’s not using them as a mere means because it serves a greater purpose. But, depending on the choices made in the game, you can end up getting the other androids killed despite being peaceful, effectively treating them as a “meat shield”, which is the perfect embodiment of using someone as a mere means.

The most plausible argument I can make for Markus’s morality is that he’s a Desire Satisfactionist. I think so because early in the game we learn his root desire, in the form of a painting that his owner, Carl, encourages him to make. Clumsy as the writing is, Markus ends up acting on this root desire throughout the game: equality for both humans and androids. Hence, turning androids into deviants does nothing but help Markus fulfill that desire. Since other deviants also desire equality and freedom, we don’t see much internal backlash. Moreover, whichever approach you choose, extremist or passive, the other side joins Markus in the method he picks, since the desire remains the same regardless of the approach.

Hence, I feel the best way to understand human consciousness and live the “good life” is to live as a Desire Satisfactionist, if and only if we act according to our root desires.



Before I argue against myself, I want to make clear what I mean by “root desire”. Just like Mill’s higher- and lower-order pleasures, I believe we have “root” and “consequential” desires.

Root desires are often desires we don’t know we have, but they exist nonetheless. These desires are more meaningful and intellectual in nature, such as the desire to move away from one’s homeland in search of a better, more accepting culture. These are desires that we have to realize; if we don’t, there’d be an existential sense of lack that we can’t fix despite fulfilling any number of consequential desires. This is because root desires are connected to who we are as people.

Consequential desires, on the other hand, are desires that can be fulfilled but don’t have to be. Leaving them unfulfilled doesn’t create that existential lack, because they aren’t as ingrained in us as our root desires. They can also be considered steps we take toward fulfilling root desires. So even when a consequential desire isn’t the right thing to act on, it can be criticized and worked against, because it serves only as a means to an end; the end is what we’re interested in, not the means.

One argument that can be made against this is the case of a malicious root desire. Suppose there were a serial killer whose root desire was to kill others: does that mean we allow him to do so? The answer is no, because killing is not his root desire; his root desire is what motivates him to kill. For example, in Alfred Hitchcock’s Psycho (1960), Norman Bates wants to kill people, but that is his consequential desire (the desire that gets him closer to fulfilling his root desire); he kills because he wants to make “Mother” proud. That is his root desire: to make his mother proud. As a result, for consistency, I believe that any consequential desire can be traced back to an existential lack that turns out to be the person’s true desire.


Let’s assume that the world of “Detroit: Become Human” is real. Let’s also assume that there exists a person who can grant consciousness to androids. Since androids aren’t considered human, due to their lack of consciousness and autonomy, this person can now fill that lack, making them conscious. Now, if the person who could grant autonomy were someone like Oprah, who kept handing out autonomy to lifeless objects in her audience, would we consider those objects human solely because of their autonomy?

While this is an interesting case, one that would give us real-life Dora the Explorer backpacks and maps (an existence I definitely want to live in), I wouldn’t consider those objects human, because of the most overlooked aspect of the game: the appearance of androids.

Human nature tends to be more welcoming towards other cultures and people who look, sound, and communicate like us.

So while those objects would be autonomous, they lack human features and functions. As a result, we would either never find out that they are autonomous, or never be able to positively relate to them. Since humans are social beings, we wouldn’t always be able to communicate with autonomous objects, and they wouldn’t be able to communicate physically the way humans do. As a result, they would be placed in the same category as insects: things that are autonomous, but not enough to earn human appreciation and the same rights as humans.


Therefore, I believe that in the eventual era of android sentience, the way we can accept or refuse androids as one of our own is by examining what makes us human, and what does is our root desires and our ability to devise ways to fulfill them. If sentient androids can claim to have root desires that weren’t programmed by humans, then they can be considered human.
