Nudgy Controls Part III: How the Last Guardian Turned Gameplay into Story

by Nathan Randall, Featured Author.

Introduction

In the first two parts of Nudgy Controls, I defined an important way that a game’s controls can preserve narrative consistency: through “nudges.” A nudge is an instance of player input X, which usually yields output Y, instead yielding output Z, where Y would potentially undermine narrative consistency and Z maintains it. In Part I, I defined exactly what a nudge is and discussed a variety of types of games that maintain narrative consistency through a lack of nudges. In Part II, I defined two different types of nudges: player aids and player hindrances. Player aids are instances in which the player is assisted in accomplishing tasks that she potentially could not accomplish without assistance. Player hindrances are instances in which the player’s actions are disrupted, forcing her to fail where she otherwise likely could have succeeded. All of these ideas are covered in depth in the previous two articles in the series, so I do not focus on them here. For the remainder of the article I will assume the reader is familiar with the previous two articles, so I would suggest reading those first if you have yet to do so.
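For readers who like to see the model stated as code, the definition condenses into a toy input handler. This is purely illustrative (no real game implements nudges this way as far as I know, and every name below is invented):

```python
# A runnable toy of the nudge definition: input X usually yields output Y;
# a nudge instead yields Z whenever Y would undermine narrative consistency.

def resolve_input(x, y_breaks_story):
    usual_output = {"X": "Y"}
    y = usual_output[x]
    if y_breaks_story:   # would the usual output undermine the story?
        return "Z"       # the nudge: a narrative-consistent substitute
    return y

print(resolve_input("X", y_breaks_story=False))  # -> Y (controls behave as expected)
print(resolve_input("X", y_breaks_story=True))   # -> Z (the nudge fires)
```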

In this article I consider the case of The Last Guardian, which pushes the idea of a nudge beyond what our current model can explain. The game is about a young boy (to whom I refer as “the boy” and “the avatar”) who wakes up in a mysterious place away from home and must escape with the help of a giant beast (Trico) whom he tames throughout the course of the story. Many reviewers, such as those at IGN and Game Informer, have claimed that the game suffers from a clunky control scheme, and that “platforming as the boy is occasionally spotty, but Trico’s inability to consistently follow your commands drags the experience down more than anything else.” [1]

It is true that the boy often hesitates in situations that surprise the player, leading to failure, and also that Trico is relatively difficult to control. However, I think these highly critical assessments of the game’s controls are misguided, since both the boy’s and Trico’s behavior can actually be explained by nudgy controls, once we add a few new ideas to the model. The nudgy behavior is a strength rather than a detractor because it establishes and reinforces the overall narrative. Criticizing The Last Guardian for having frustrating controls while praising its narrative does not make sense, because the frustrating controls help form and reinforce that narrative. In this article I explain how we can view the boy’s hesitancy as instances of nudges that are sometimes player hindrances and sometimes player aids. I will also show how the difficulty of directing Trico is the direct result of trying to control a character while many nudges are taking place. In the end we will see that control schemes should not be judged solely on how “tight” the controls are, but rather on how well the control scheme reinforces or even helps establish the narrative of the game.

The Boy’s Hesitancy

Let’s consider two aspects of the gameplay in The Last Guardian, and how we can make sense of them using nudgy controls. There are two particularly noticeable moments where an input X shifts some usual output Y to a different output Z instead. One occurs when the player attempts to give an input that would ordinarily make the avatar run over a ledge. In these moments, the avatar stops short at the edge: instead of the expected output of the avatar continuing to run and then running off the ledge, the output is shifted to the avatar stopping at the edge. Importantly, it’s not as if the avatar is incapable of falling. If the player makes the avatar jump off the edge rather than run, there is no invisible wall in the game engine that stops the avatar’s movement, and he will fall off the side.


The boy stops himself at a ledge.

The second bit of unexpected behavior occurs when the avatar is falling. Whenever the boy gets close to something stable he can grab, he reaches out to attempt to cease his fall, and succeeds so long as the object is within reach. The player is supposed to be able to stop the boy from doing this by holding a particular button, allowing him to instead just continue to fall.


The boy reaches out to grab a ledge as he falls.

But even while the player is holding the button down, the boy will often still grab things close to him while falling, especially if they are very close to him, or a part of Trico he can hold onto (an indication through gameplay of the boy’s trust and care for Trico). In this way, when the player is holding the relevant button, the usual output of continuing to fall is sometimes shifted to grabbing on to something to cease the fall.
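Taken together, the two behaviors imply decision logic along these lines. This is a speculative reconstruction for illustration only; the function names and conditions are mine, not Team Ico’s:

```python
# Speculative reconstruction of the boy's two nudges (illustrative only).

def on_running_at_ledge(jump_pressed):
    # Running usually yields "keep running," but at a ledge the boy stops
    # short. A deliberate jump is never intercepted: there is no invisible wall.
    return "jump_off_edge" if jump_pressed else "stop_at_edge"

def on_falling(fall_button_held, graspable_very_close, graspable_is_trico):
    # Falling usually yields "keep falling"; near something stable the boy
    # reaches out. Holding the fall button usually suppresses the grab, but
    # not when the object is very close or is a part of Trico.
    if graspable_very_close or graspable_is_trico:
        return "grab"  # the nudge overrides the player's held button
    return "keep_falling" if fall_button_held else "grab"

print(on_running_at_ledge(jump_pressed=False))                                 # stop_at_edge
print(on_falling(True, graspable_very_close=True, graspable_is_trico=False))  # grab
```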

But is the nudge of the boy staying away from ledges a player aid or a player hindrance? And what about the nudge of the boy breaking his fall? Upon reflection it becomes apparent that these behaviors sometimes act like player aids and sometimes act like player hindrances.

Initially, one might be tempted to declare that stopping at the edge of a platform is a player aid, since it would prevent an untimely death in the form of a lethal fall for the boy. But the answer is not so simple, as evidenced by the fact that many reviewers were frustrated by the nudges “messing them up” in some way. Game Informer in particular says that “the imprecise controls make the journey rough.” [2] For example, if the boy gets to a ledge right as the player attempts to jump, then the boy will stop his momentum entirely, disrupting the player and frequently leading to an accidental fall off the ledge as the player frantically adjusts her plan for the situation. Is this not an instance of a player hindrance?

Similarly, ceasing a fall while the player is attempting to prevent that action might initially seem to simply be a player hindrance, since the player did not want that action to occur. If there are many things for the boy to grab during his fall, dropping down can take quite a bit of time and effort if he grabs every ledge, which is potentially very bad for the player when there is some time-limited objective to complete. And if an enemy is approaching, then a delay in getting to the ground could lead to the enemy capturing the boy. So an instance of the boy breaking his fall when the player is trying to make him fall seems like it must certainly be a hindrance. But what if the player misjudged the distance? Then the boy grabbing a ledge before landing on the unforgiving ground could potentially save his life—certainly an example of a player aid. At times, the boy’s caution makes execution of the player’s goals more difficult, even though the same caution often prevents the player from making careless errors.

So it appears that at times these are player aids and at times they are player hindrances. In the rest of the analysis, I will refer to such nudges as mixed nudges. But I get ahead of myself, as there is still one more important aspect to consider before declaring that these are nudges. I must show that they preserve narrative consistency in some way. In order to do so I will introduce one more idea into our model, which I will term avatar perspective.

Avatar Perspective and Mixed Nudges

Just as the player has the capacity for perception, so too does the avatar within the fiction of a game. [3] The ability to perceive gives rise to a consistent way of viewing what is perceived that is unique to the individual, because every person has a unique set of perceptions. I will call these consistent ways of viewing perceptions “perspectives.” One aspect of a perspective is that someone with a given perspective will view certain things as belonging to the same category: certain things that are square-shaped, certain things that are scary or not scary, or certain actions that are moral or immoral. There is a nearly infinite number of possible categories, and exactly which items make up a particular category varies from individual to individual. Players and avatars all have the capacity for perception, and thus they all have a unique perspective, and thus unique ways of categorizing what they perceive. This includes the boy in The Last Guardian, whose actions in response to player input reveal various aspects of his perspective.

In general, the player’s and the avatar’s perspectives will not align with each other, simply because perception is unique to an individual. But the amount by which the perspectives differ is not consistent: the player and the avatar may have very similar perspectives, but they may also have incredibly different perspectives. The way in which perspectives differ is not consistent, either. The avatar may lack a moral compass and have no issue with the murder of children, even though most players view such an action as repugnant. It’s possible to have a player who is color blind and an avatar who is not. And lest you think that vast differences in player and avatar perspectives are uncommon, consider any game with a third-person camera, in which the visual perception of the player and the avatar differs greatly just because of an offset in camera placement within the game engine.

Differences in perceptions and ensuing perspectives between the player and the avatar can be crucial in analyzing mixed nudges. The relevant difference in perspective in The Last Guardian has to do with which sets of objects are viewed as being within the same category. There are many possible categories to consider. For instance, let’s consider the category of corgis that look the same to an individual. For the sake of the example let’s say that I am not familiar with corgis, and that you, the reader, are. In that case, most corgis will look alike to me, even though you’d be able to discriminate between the dogs with relative ease. A similar situation arises between the player and the avatar in The Last Guardian.


Above is how I see four corgis versus how you see four corgis. Notice that to me, all the dogs look the same, whereas to you, each dog looks at least slightly different.

Specifically, there are many situations that the avatar of The Last Guardian sees as belonging to the category of “situations that are dangerous for the boy” that the player does not see as belonging to that category. The avatar has very simple perceptive rules in this regard: all situations of falling and being close to a stable object to grab onto are dangerous and so demand the same response. Likewise, all situations of running toward a ledge are dangerous and so demand the same response. The player, in contrast, likely does not see all of these situations as belonging to the same category. Specifically, when the avatar is already close to the ground upon starting to fall, the player would not see this as a dangerous situation for the avatar, even though the avatar would see it as dangerous. And when the avatar is running toward a ledge and the player is preparing to make the avatar jump at the ledge, the player likely does not consider this situation to be as dangerous as the avatar considers it to be.
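To make the mismatch explicit, here is a toy formalization of the two category predicates. Nothing here comes from the game itself; it simply restates the paragraph above as code:

```python
# Toy formalization of the perspective mismatch described above.

def dangerous_for_boy(falling, near_graspable, running_at_ledge, near_ground):
    # The boy's broad rule: ALL falls near something graspable, and ALL runs
    # toward a ledge, count as dangerous. Height above the ground is ignored.
    return (falling and near_graspable) or running_at_ledge

def dangerous_for_player(falling, near_graspable, running_at_ledge, near_ground):
    # The player's finer-grained rule: a fall only looks dangerous when the
    # ground is far away; a run toward a ledge she plans to jump from does
    # not register as dangerous at all (simplified away here).
    return falling and near_graspable and not near_ground

# A short drop: dangerous in the boy's category, harmless in the player's.
situation = dict(falling=True, near_graspable=True,
                 running_at_ledge=False, near_ground=True)
print(dangerous_for_boy(**situation))     # True  -> the boy grabs (nudge fires)
print(dangerous_for_player(**situation))  # False -> the player feels hindered
```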

The existence of nudges in conjunction with avatar perspective ends up being surprisingly rich in its ability to endow a character in a narrative with clear desires. The consistent way that the avatar acts in response to situations she views as belonging to the same relevant category implies that there is some consistent desire that the avatar is acting upon. These desires form the basis of personality traits. The mixed nudges in The Last Guardian serve as clear examples of the creation of personality from avatar perspective.

The boy views a set of situations as equivalently dangerous. These situations are any in which he is running toward a ledge, and any situations in which he is falling and has something he can grab onto to cease his fall. From these situations we learn that the boy has a desire to avoid injury and death—a fairly sensible desire in general, but also one that makes a lot of sense for a young boy in the dangerous situations he finds himself in. Sometimes this desire is helpful for the boy in that he avoids dangerous situations, and other times the same desire leads to distraction and clumsiness that makes it harder to achieve his goals.

The boy climbing over a ledge.

The mixed nudges in The Last Guardian preserve the consistency of the boy being young and afraid. By having the nudges sometimes be player aids, the player can see that the nudges are not present to show that the boy is clumsy, and by having the nudges sometimes be player hindrances, the player learns that the aids do not arise out of training or a high degree of innate competence. Rather, the mixed nudges preserve the character of the boy as being someone trying not to hurt himself while doing dangerous things, but not always reading the situation correctly because he is young and inexperienced. His category of situations that are dangerous is too broad.

By taking into account avatar perspective, we can explain how what initially seem to be fairly clunky controls are actually instances of nudges that are sometimes player aids and sometimes player hindrances. These mixed nudges do a lot of work in preserving the consistency of the boy being young, afraid, and in a dangerous situation that he does not always navigate perfectly or elegantly, even with the help of a very experienced or skillful player; nor will he be goaded into reckless action by an incompetent or non-cooperative player. [4] This suggests that the reviews mentioned at the beginning of the article were misguided in criticizing The Last Guardian for the boy’s clunky control scheme, since the controls in fact make the character of the boy more vivid.

Player-Controlled Entities

Reviewers who criticized The Last Guardian spoke not only of difficulty controlling the boy, but of difficulty controlling Trico as well. Polygon reviewer Philip Kollar points out that Trico’s behavior “makes for a realistic depiction of my favorite house pet [a cat], but it’s terrible gameplay.” So at this point I will switch gears to discuss the other half of the duo featured in The Last Guardian. I disagree with Kollar’s claim that Trico’s behavior is terrible gameplay: the gameplay may be frustrating, but that does not make it terrible. The gameplay is actually highly effective at building the character of Trico. The difficulty of controlling Trico can be explained by the presence of a large number of mixed nudges in the actions of Trico that actually reinforce Trico’s character rather than detract from it.

Note that in order for this analysis to work we may need to consider nudges that apply to things the player has control over generally, rather than specifically avatars. While Trico is not necessarily an avatar, he is a character in the game over which the player has at least a degree of control.

Intuitively there is a distinction between avatars, defined roughly as the entity that the player controls as an entry point into a game, and entities in the game that the player controls through the avatar, which belong to a larger category of player-controlled entities. [5] While most players would likely disagree with the claim that Trico is the player’s avatar, he is definitely a player-controlled entity.

There are many games that have a character that is not necessarily an avatar, but is definitely controlled by the player through the intermediary of the avatar. Super Smash Brothers is one notable case, since it has two examples of playable “characters” that consist of multiple entities. One of these is the Ice Climbers: the player directly controls Popo, canonically the climber wearing blue; Nana, canonically the climber wearing pink, performs the same actions as Popo, but slightly delayed in time. The other is Rosalina and Luma, a space princess and a sentient, star-shaped creature that she commands, respectively. These two can move as a unit or separate themselves and perform the same actions while standing apart from each other.

Rosalina and Luma.

The Ice Climbers in action. The one in blue is Popo and the one in pink is Nana.

In the case of the Ice Climbers, what narratively justifies this gameplay is the tight bond of friendship and trust between the climbers. The two characters have climbed dangerous mountains together, and have presumably gotten to the point where they can communicate so quickly and effectively that it is as if they were reading each other’s minds, and so can coordinate actions in a way that initially seems to be impossible. In the case of Rosalina and Luma, Rosalina is casting spells on Luma that get him to take the same actions as Rosalina instantaneously.
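Nana’s delayed mimicry is, mechanically, just an input echo. A generic sketch of such a buffer might look like this (this is not Nintendo’s code, and the delay length is made up):

```python
from collections import deque

# Generic sketch of a delayed-echo partner: the partner replays the avatar's
# inputs a fixed number of frames later. The 6-frame delay is hypothetical.

DELAY_FRAMES = 6
popo_inputs = deque()

def frame_update(popo_input):
    popo_inputs.append(popo_input)
    nana_input = None
    if len(popo_inputs) > DELAY_FRAMES:
        nana_input = popo_inputs.popleft()  # Nana echoes an old input of Popo's
    return popo_input, nana_input

for frame in range(8):
    button = "jump" if frame % 2 == 0 else "attack"
    print(frame, frame_update(button))  # Nana's echo appears from frame 6 onward
```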

I will define a partnership as a unit of two player-controlled entities in which one is definitely an avatar of the player and the other is an entity being controlled by the player through that avatar. I will mostly not be focusing on the entity that is definitely an avatar (which I will just call the avatar), because we have already discussed that entity in detail in this series. Instead our attention will be on the other entity in the partnership (which I will call the partner). In general across the examples we will look at, the control players have over the avatar when also controlling the partner does not contain nudges. This is not necessarily a rule that must be followed, but examples of that sort would be very difficult to analyze, and so we will not be considering them in the scope of this article.

Within most game narratives, if a partnership exists, there is some dynamic relationship between the characters in the partnership. It turns out that this relationship can be defined and enforced by gameplay. This will prove to be a crucial idea when considering the example of Trico in The Last Guardian. So let’s consider more generally how gameplay can enforce various aspects about the relationship between the partners in a partnership. In this section we will consider two relational aspects in particular, both of which will be important in analyzing Trico’s behavior: how well an avatar and partner are able to communicate with each other, and whether a partner intends to cooperate with an avatar.

The gameplay for the Ice Climbers describes both of those relational aspects quite simply. The nearly simultaneous actions of the climbers show how these two characters can communicate quickly and effectively with ease. And since the climbers never act antagonistically toward each other, they clearly determined long ago that they intend to cooperate with each other.

The Ice Climbers are just one example, however. There is no reason that a partner needs to be able to communicate well with the avatar or intend to cooperate with the avatar. Both of these factors are at play in the example of Trico. Let’s consider two examples of partners that speak in important ways to how the avatar and partner in The Last Guardian do or do not communicate.

For our first example, let’s say that a developer would like to create a game with a partner who is a femme fatale. While she is incredibly sharp and picks up on everything that the player commands her to do, sometimes she acts mischievously based on a set of intentions that the player is unaware of. Through gameplay that has her usually be responsive to player input except in certain circumstances where she acts against player direction, the developer could maintain this sort of characterization very effectively in the narrative. So the extent to which a partner is responsive to player input can give insight into the level of cooperation between the avatar and the partner. Note again that this analysis only works if the relevant gameplay is not nudgy in terms of controlling the avatar as opposed to the partner.

One particular manifestation of the archetype of femme fatale is Kainé from Nier. She sometimes assists Nier, the titular character and player’s avatar, in various combat situations. It might surprise some people who have played the game, but it is in fact possible to give Kainé a small set of specific commands.


The menu screen for issuing commands to Kainé (1/2).


The menu screen for issuing commands to Kainé (2/2).

However, Kainé’s behavior does not change much when issued these commands, which is why few players use the feature at all. Even though she is clearly aware of the command issued to her, she apparently has no desire to heed the requests made of her, evidenced by the fact that she literally does not act upon them. This is all fitting to her character as a perpetually angry, foul-mouthed warrior.

Kainé killing a monster, but probably not listening to the player.

Now consider a game where the avatar’s partner is someone who is only slightly conversant in the language that the avatar speaks. In this case, that partner, who is player-controlled, is slow to respond to player input, or doesn’t respond at all, simply because the message cannot be efficiently communicated, if at all. Unlike the previous example, there is no malevolence or masking of intentions: the gameplay speaks specifically to the inability of these two characters to communicate with one another. A very frustrating example of this is Hey You, Pikachu, a 1998 game in which the player communicates with Pikachu on-screen, attempting (almost always unsuccessfully) to get Pikachu to perform a variety of actions.

Pikachu almost certainly misinterpreting the player’s input.

While Pikachu intuitively does not appear to be the player’s avatar, because the avatar is apparently the character from whose perspective we are seeing Pikachu, Pikachu certainly is controllable by the player. [6] [7] But the player usually has such difficulty communicating with Pikachu that it is as if Pikachu were not controllable at all. On the level of literary criticism, the issue with Hey You, Pikachu is that Pikachu is so difficult to communicate with that it appears as if he is actually very stupid, as opposed to simply being an animal. This shows the power of gameplay in characterizing a player-controlled entity.

Moving forward I will use these two examples of inter-partner communication to think about Trico’s response to the player’s actions through the intermediary of the avatar. The lack of ability to communicate generally, and the lack of intention to cooperate even when the message is understood, are important aspects of the relationship between the boy and his beast that the gameplay highlights and reinforces.

Trico’s Behavior

We now have the groundwork necessary to analyze how Trico’s behavior preserves narrative consistency in The Last Guardian. To see how this is the case, I will first define one of Trico’s behaviors in question. From there I will show how Trico’s behavior can be seen as mixed nudges, and that those mixed nudges arise from Trico’s perspective differing from the player’s in one of the two ways mentioned in the previous section: Trico either does not understand the message, or Trico has an intention that differs from the player’s.

One primary way of communicating with Trico is to give him a visual cue of where to move. As anyone who’s played The Last Guardian knows, getting Trico to actually do this is often a long and frustrating process, as he often does not notice what the player is asking him to do, does not understand, or just refuses to do it. This leads to a situation where the player input can yield a wide variety of responses from Trico, some of which help the player, some of which are neutral, and the rest of which hinder the player in some way.
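The spread of possible outcomes from a single command can be sketched as a chain of unreliable stages. This is a speculative toy model of my own; the real game’s logic is, of course, opaque:

```python
import random

# Speculative toy model: one pointing command, several possible outputs.
# Each stage of the communication channel can fail independently.

def trico_response(noticed, understood, willing):
    if not noticed:
        return "keeps sniffing around"             # hindrance: command never perceived
    if not understood:
        return "stares back at the boy"            # hindrance: message not decoded
    if not willing:
        return "wanders off to do something else"  # hindrance: competing desire
    return "performs the requested action"         # aid: communication succeeds

for _ in range(5):  # early-game odds (made-up probabilities)
    print(trico_response(noticed=random.random() < 0.6,
                         understood=random.random() < 0.5,
                         willing=random.random() < 0.7))
```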

In this way, we can see that the output-shifting required for a nudge exists: the player input can yield any of several outputs from Trico. I remind the reader that the gameplay for controlling the avatar in these circumstances of directing Trico is nudgeless, and so we do not need to worry about compound nudges. Since the nudges can hinder the player in some circumstances and help her in others, they are in fact mixed nudges. But what of the preservation of narrative consistency? What does this gameplay accomplish in terms of that?

Interpreting Trico’s Behavior

Since Trico is a sentient being, he, like the player and the avatar, has a unique perspective. The problem is that since Trico is a beast, his perspective frequently differs from that of the player, who is human. Trico’s larger size means that he looks at the navigation of physical space differently from the smaller human avatar. There are certain things out in the world that scare Trico, especially stained glass images of eyes, that do not have the same impact on the player or the avatar.

The stained glass eyes that frighten Trico.

And Trico is uncontrollably attracted to certain scents that do not seem to have any impact on the avatar. This is all evidence for Trico having a consistent perspective based on his non-human sense modalities.

The difficulty of communicating with Trico arises from the inherent difficulty of bridging the divide between avatar and partner in terms of language and species, such that the player can communicate what she wants to Trico through the avatar, and the player can understand what Trico needs in return. When the player gives a command to Trico, if he sees it and understands it, Trico then responds by performing the desired action, and we can view his behavior as a player aid. If Trico does not see the command or is unable to understand it, his lack of action ends up being a player hindrance. The mixed nudges present in this case preserve the narrative that Trico does not have an easy communication channel with the boy at the start of the game, and may not be able to understand what he is being asked to do. This is similar to the example of Pikachu from Hey You, Pikachu: he often literally does not understand the commands he is given, and thus cannot act upon them in a logical way. The mixed nudges further drive home this inability to communicate expediently.

Trico not understanding his commands is not the only source of nudges in his behavior, however. There are times when Trico understands what the player is asking him to do, but does not want to perform the action, similar to Kainé’s reactions to commands in Nier. One clear example of this is when the player asks Trico to jump into the water. It takes a while to goad Trico into jumping in the water in the first place, and he is quick to get out whenever given the chance. Apparently he does not like getting wet. These player hindrances—moments when Trico does not quickly perform an action even when he understands it, because he has different intentions and desires—preserve the narrative that Trico is a being with feelings and desires, as opposed to just a robot that processes inputs from the player and acts if he understands the command. The usual output of Trico performing the action when he understands it shifts to Trico (at least temporarily) not performing that action. Trico, like Kainé, thinks and feels for himself, and that comes out in the gameplay.

“Training” and the Disappearance of Hindrances and Mixed Nudges

Over the course of the game, the frequency of moments in which Trico stares dumbly back at the player lessens. The net impact of this is that as the game progresses, many mixed nudges get replaced by player aids, as commanding Trico to do certain tasks gets easier and easier. This change in the nature of the nudges in the game over time preserves the narrative that Trico is being trained and forming a bond of friendship with the boy. As these two characters work together more and more, it becomes easier to communicate quickly and effectively. The boy has taken on the role of an animal trainer and created a capacity for communication with an animal with whom most people are unable to communicate.
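Extending the earlier toy model with a single “bond” parameter captures this arc: as the bond grows, every stage of the channel becomes more reliable, so mixed nudges resolve as aids more and more often. The numbers here are invented purely for illustration:

```python
import random

# Illustrative only: compliance improves as the bond between boy and beast grows.

def trico_obeys(bond):
    noticed    = random.random() < 0.4 + 0.6 * bond
    understood = random.random() < 0.3 + 0.7 * bond
    willing    = random.random() < 0.5 + 0.5 * bond
    return noticed and understood and willing

for bond in (0.1, 0.5, 0.9):  # early, mid, and late game
    obeyed = sum(trico_obeys(bond) for _ in range(1000))
    print(f"bond={bond}: obeyed {obeyed / 10:.0f}% of 1000 commands")
```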

Some of the player hindrances start to disappear toward the end of the game as well. There is a moment in particular, when the boy is in danger of being captured by moving statues, where Trico overcomes his fear of the stained glass eyes and jumps in to destroy the statues and save the boy. As these hindrances disappear, they preserve the narrative that Trico cares for the boy and is willing to overcome fear and danger in order to save the boy, just as the boy overcomes his own fears and dangerous situations to save Trico. The existence of a vast number of mixed nudges early in the game that gradually turn into mostly player aids (or at least mixed nudges that are aids far more often than hindrances) over time displays the growing bond between these two characters. The game succeeds at displaying the birth of this friendship through the nudges in the gameplay as opposed to dialogue or cut-scenes, which are few and far between in the game.

Trico and the boy connecting with each other.

Responding to Critical Review

Game Informer complains that “Trico’s inability to consistently follow your commands drags the experience down more than anything else,” yet they also say that “The Last Guardian forges a connection between the player and Trico unlike anything else in gaming.” Now we can understand that Trico’s inability to consistently follow commands is actually a crucial part of how that special connection gets forged. While it is tempting to view the inconsistencies in the control scheme as factors that make The Last Guardian worse, the controls actually do work to develop the relationship between the boy and his beast. [8] The nudges present in the boy’s gameplay reinforce his status as a young child, and the nudges present in controlling Trico reinforce his status as a non-human creature. It is not as The Verge author Andrew Webster says: “Often [the controls] don’t work as they should, and you’ll need to push through some terribly frustrating moments to experience everything The Last Guardian has to offer.” Rather, the terribly frustrating moments are an essential part of what the game has to offer in creating the relationship between the boy and Trico.

Although it may initially be tempting to criticize a game for “clunky” controls, I hope that this analysis has shown that it’s worth pausing to consider what a game’s control scheme may be saying about the story of the game itself. While it is true that at times controlling the boy and Trico is difficult in surprising ways, these aspects of the gameplay carry weight in preserving the narrative consistency of the game. The mixed nudges present in controlling the boy drive home his attempt to be cautious, even though his youth sometimes leads him to misread situations. The wide variety of nudges present in controlling Trico drives home his status as a non-human animal, and the change in types of nudges over time shows how he forms a strong bond and ability to communicate with the boy. Kotaku reviewer Mike Fahey sums it up well by saying, “The unpredictable AI can make for some frustrating moments, but that frustration only enhances the illusion that this strange cat-beast is a living thing. I am not irritated with a video game. I am irritated with my large feathered friend.” [9] The game uses nudges in a way that is poignant and subtle to develop the relationship within the partnership that the game features.

Directions for Future Research

We’ve covered a lot of ground in these articles. Starting from defining nudgy gameplay and progressing through games that don’t need nudges to games with player aids and hindrances, and then on to games with mixed nudges based on avatar perspective, we’ve seen a wide variety of ways that games have handled the variable that is the player while preserving their narratives. My hope is that the reader uses this way of thinking to critically analyze the games that they play, including ones that I did not discuss in this article specifically, and that these articles can serve as a starting point for further analysis.

To that end, there are many topics I brought up in these articles that I did not have space or time to comment on to the degree they deserve. I think it pertinent to bring up a few of those topics and pose questions as a place to leave the reader at the end of this work. Hopefully one of these questions will spark a reader’s thinking and lead them to some way of explaining an aspect of the stories in video games that at this point remains elusive.

One topic that I hinted at but did not dive into for lack of space is the issue of the definition of ‘avatar’. While the term is frequently used among game fans and analysts alike, the word does not seem to have a consistent definition. So what exactly is the avatar? How does the avatar differ from other player-controlled entities? WaTF founder Aaron Suduiko has some foundational thoughts on these questions in the form of his senior thesis, which is an ontology of single-player video games. But other than that work, the question at this point has no clear answer.

Another open topic is multiplayer generally, something I discussed in Part I of this series in the context of multiplayer skill tournaments, and how games of that sort are better off remaining nudgeless. One challenge in writing that section was identifying exactly what the narrative of a multiplayer game is. Finding the narrative within a multiplayer game is not as easy as it might initially appear. Consider, for example, a group of six players cooperatively playing a Destiny mission. While there is a story presented by the game in terms of voice lines and cut scenes, there is also a narrative being woven within the conversation between the players, which need not actually bear any relation to the cut scenes and voice lines. Which of these is the dominant narrative? Or do they coexist? How do you analyze a narrative that has multiple agents influencing the narrative’s events? This is massively under-explored territory, even here on With a Terrible Fate.

Nudgy Controls Conclusion

Participatory storytelling has a unique challenge to handle: how does a storyteller convey a cohesive narrative to an audience that has a hand in the instantiation of that narrative? We can all imagine an audience member in some participatory theater who gets bored and rolls his eyes at a dramatic moment in the show, critically undermining the believability of the narrative being presented. This sort of challenge is a constant issue for writers of stories for games. How do you make sense of the role of the player in your story? What if the narrative requires skill on the part of the player that the player does not possess? What if your player is too skillful in a moment when failure is expected? What if your player’s desire is to try to break the narrative consistency of your game through their actions? In general, how do you handle the variable that is the player, who is importantly external to your game?

Sometimes the most effective technique is to nudge the player’s input toward a more narratively appropriate output in the controls themselves. We’ve seen how doing this can make a character appropriately badass regardless of player skill, and how it can be used to make vivid the critical condition of a dying character. But beyond that we’ve seen an even more subtle and fascinating capacity that these nudges possess. Nudgy controls can create and reinforce character traits and relationships, to the extent that a game like The Last Guardian needs little exposition other than just the gameplay itself.

It’s time to stop judging the control scheme of a game solely on how “tight” the controls are. Sometimes a game’s controls are difficult, or frustrating, or even too easy, in a way that reinforces the narrative of a game. Gameplay and narrative are inseparable. Let’s start judging control schemes based on how well they work with the narrative, rather than in the superficial ways we have been up until now.

Nathan Randall is a featured author at With a Terrible Fate. Check out his bio to learn more.


[1] Game Informer.

[2] Game Informer.

[3] For this section, I stipulate that the avatar is a sentient being, for sake of simplicity. This is not actually a requirement for the analysis to work, but it makes the argument easier to follow.

[4] While mixed nudges that arise from personality traits and perspectives, such as the ones described in the previous section, are deep and rich, this is not the only possible manifestation of mixed nudges. To see this, consider the following case. One could imagine a science fiction game in which the avatar has a “quantum fuse box” implanted into his brain. The device works in the following way: half of the time it is activated, it makes the avatar successful at whatever he attempts to do, aiding the player tremendously, and half of the time it forces the avatar to fail at whatever he attempts to do, hindering the player. The activation of the device occurs randomly, and the output of the device is random.

This hypothetical game definitely has nudges whenever the device is activated, in that any input on the part of the player is shifted, and the nudges preserve the narrative consistency of the existence and effectiveness of the quantum fuse box. But the nudges are player aids half of the time and player hindrances half the time, meaning that they are mixed nudges. So there is no requirement for mixed nudges to arise out of avatar perspective. Thanks to Aaron Suduiko for proposing the quantum fuse box example.

[5] Player-controlled entities and their subset, avatars, actually end up being incredibly rich and complicated territory to consider. All avatars are player-controlled entities, but it’s not clear where the dividing line between the categories is. What differentiates a player-controlled entity from an avatar? Are any of the individual units in a game like Halo Wars avatars? In a role-playing game in which the player controls an entire party of characters, is each character just a player-controlled entity, or an avatar as well? Are all of the characters avatars of the player? Is one character the player’s avatar and the rest just player-controlled entities? The answers here are not clear, and so for the most part I will leave these questions unanswered, as the answers are likely long and tangential to the topic at hand. This leaves open the possibility that player-controlled entities and avatars are in fact the same set of entities, making one of the two terms redundant. Intuitively this does not seem to be the case, as it seems that some things are avatars and others are simply player-controlled.

[6] I leave open the possibility that Pikachu is the player’s avatar, but common intuition from players is that while he is controllable by the player, he is not the player’s avatar.

[7] Note how even in first-person, in which we cannot see a manifestation of our character on-screen, we still think of the character from whose eyes we are seeing to be the “avatar.” There can be no figure on screen and yet we can refer felicitously to an avatar being present. This is odd and warrants further analysis.

[8] Of course, my analysis of nudges in The Last Guardian doesn’t excuse all of its control issues. I readily admit that controlling the camera in The Last Guardian is pointlessly difficult and that the game would have been better with tighter camera control.

[9] Mike Fahey, Kotaku.

The Real Hostage in the Zero Escape Series is You

by Kent Vainio, Featured Author.

*Warning: Spoilers to follow for the Zero Escape series!*

What if a video game could make you feel just as trapped as the characters in it? Take a moment to imagine what that would feel like: to be sitting comfortably on your couch in the real world and at the very same time feel trapped in a diabolical escape game with your life on the line. How would you react?

That’s exactly where the Zero Escape games come in, a series of visual novel masterpieces that accomplish all of the above and more. They engross, involve, and trap the player in their poignant and terrifying stories about humans trying to survive deadly escape games. I can personally attest to the fact that these are no ordinary visual novel experiences.

How are these games able to trap their players? The key idea is that of morphic resonance. If that term sounds to you like a highly fascinating but scientifically unproven theory of biological communication, then you are spot on. Rupert Sheldrake first coined the term in his 1981 book A New Science of Life. It is a pseudoscience concept describing, in Sheldrake’s words, “the idea of mysterious telepathy-type interconnections between organisms and of collective memories within species.” [1] According to this view, memories and experiences are stored in so-called morphic fields that surround us all the time, which can then transmit this information to other organisms of the same type.

You might wonder how this outlandish concept connects with visual novel video games. Well, the real magic of this idea is how it is used in combination with the games’ well-designed narrative structures and gameplay to create a vivid feeling of immersion in their fictional game worlds. In this article, I compare and contrast the depiction of morphic fields in the first two games of the Zero Escape series in the context of player-avatar interactions, with the ultimate aim of demonstrating just how effectively this concept is used to trap the player. This feeling of being trapped invites the player to consider the games’ pseudoscientific world as their own reality, which leads them to deeply question human psychology and the truly fascinating unknown depths of the subconscious mind. To accomplish this, I first analyze the narratives of both games and the ways in which they use the idea of morphic fields, followed by an analysis of these fields in the context of gameplay. I then tie these ideas together to show how the games can teach us about human nature and the incredible human mind.

Essential Background

The first game in the Zero Escape series, entitled Nine Hours, Nine Persons, Nine Doors (which I will simply call 999 for the sake of brevity), revolves around the nightmarish experience of protagonist [2] Junpei, who wakes up in a sinking ship, only to be forced to play the deadly “Nonary Game” with eight other participants in order to escape the failing vessel before they all drown. Each participant is given a bracelet with a number on it, and these numbers can then be added up to progress through a corresponding door with that sum on it, with the ultimate aim of escaping through the number 9 door. Along the way, the participants must solve challenging escape-the-room puzzles to advance through the ship, all the while contending with the uncertainty, fear, and malice of their fellow game players. Upon completing the game, the player learns that the current Nonary Game is actually a replication of a previous one that occurred prior to the events of 999. The original was instigated by a malevolent pharmaceutical company, Cradle Pharmaceutical, which endeavored to conduct further research into the idea of morphic fields. To accomplish this, Cradle took nine sibling pairs (the game tells us that siblings are said to have an extra-special affinity for communicating through the morphic field) and then forced them to play the same life-threatening Nonary Game that the player experiences in the first person, through Junpei’s eyes. By putting the siblings in mortal danger, Cradle hoped to draw out their morphic resonance powers, which, according to the game, become vastly more potent in the face of imminent danger. All but one of the children managed to escape the game, as a lone girl, Akane, was unable to solve a challenging Sudoku puzzle, which resulted in her untimely incineration. It turns out that the mastermind behind the current Nonary Game is none other than Akane herself, manipulating Junpei through the morphic field to help her stay alive in the past. Ultimately, this plan succeeds, Akane is revived, and the group escapes. [3]

Virtue’s Last Reward (which I will call VLR for convenience), is the direct sequel to 999 and once again incorporates a Nonary Game, this time involving a unique feature called the Ambidex Game—a game of betrayal, reminiscent of the prisoner’s dilemma—in which players can choose to either betray or ally with a partner to gain or lose points, with nine points necessary in total to leave the facility. Anyone who reaches zero points or fewer will be executed, and no participants can leave until someone wins the game. To make the situation even worse, all the participants have been unknowingly infected with a deadly virus called Radical-6, which slowly robs the host of their mental faculties, eventually turning them into an animalistic murderer and ultimately killing them. The plot’s backstory (slightly less relevant than 999’s for the specific purposes of the analysis in this article) involves a terrorist group trying to wipe out the human population by spreading Radical-6, resulting in the game’s protagonist, Sigma, one of humanity’s few survivors, constructing a life-threatening game that will allow his present consciousness to swap with his past consciousness from a previous timeline through morphic resonance. This is possible because, just like in 999, morphic resonance powers are increased when facing extreme danger. In this way, Sigma travels back in time to stop the outbreak of the virus using the knowledge he has accumulated about how to stop it. [4] This scheme is highly complex, and is graphically represented in the diagram below.


In this image, the bright blue arrows represent the flow of Sigma’s consciousness through time, with the aim of ending up at Point E to trigger an alternate future where most of humanity does not perish from Radical-6.
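Returning to the Ambidex Game itself: for readers who have not played VLR, its point logic is a compact prisoner’s-dilemma variant. The sketch below encodes the payoffs as I understand the game’s rules; the function name and structure are mine, not the game’s:

```python
# The Ambidex Game's payoff rules as I understand them from VLR
# (function name and structure invented for illustration).

def ambidex_round(a_choice, b_choice):
    if a_choice == "ally" and b_choice == "ally":
        return (2, 2)    # mutual trust: both gain ground toward 9 BP
    if a_choice == "betray" and b_choice == "ally":
        return (3, -2)   # betrayal pays, at the loyal partner's expense
    if a_choice == "ally" and b_choice == "betray":
        return (-2, 3)
    return (0, 0)        # mutual betrayal: nobody moves

# Reaching 9 BP opens the exit; falling to 0 or below means execution.
print(ambidex_round("ally", "betray"))  # (-2, 3)
```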

As can be seen from the plotlines of each game, both stories depend centrally on the concept of morphic resonance. However, the truly fascinating aspect of this series is the way in which the games manage to evoke a seemingly real sense of morphic resonance between the player in the real world and the protagonist in the game world through a combination of expertly designed gameplay and unique narrative structure. This conceptual bridge between the player and the protagonist allows the player to more deeply empathize with the plight of those partaking in the Nonary Game, and ultimately leads to a blurring of the distinction between the real world and the game world.

Morphic Resonance and Narrative

To show how morphic resonance connects the player and avatar, we first begin with an analysis of the narrative structures of 999 and VLR. By ‘narrative structure’, I mean the way in which a story is told—for example, as a single story told by a narrator, or as a web of interconnecting narrative branches that together form a cohesive story. The narratives of both games require using information from a previous playthrough to inform the success of a subsequent one, an action which is a direct analogue of transmitting knowledge through a morphic field from the past to the future. Although Junpei/Sigma are technically the ones performing the actions in the worlds of the games, it is the player themselves who is accumulating knowledge from successive playthroughs and then imparting that knowledge to the corresponding avatar in the game. Thus the player is effectively transmitting knowledge to their avatar through a morphic field of sorts, by controlling him on each playthrough.

999 has an infinitely looping narrative structure with 6 endings, one of which is the “true ending.” In 5 of these endings the protagonist dies some kind of horrific death, while the final ending—the “true ending”—involves escaping the Nonary Game and finding out all of the plot’s backstory, in essence making it the only real “ending” to the game, and one that can only be obtained by dying multiple times during separate playthroughs. In order to navigate their way through this narrative structure, the player must first play the game a few times to get a sense of the characters’ different personalities and the context of their situation, and then on subsequent playthroughs use that information to make choices resulting in varied narrative outcomes. For example, in one of the most common endings, Clover, a young girl who at first seems to be the most innocuous participant in the Nonary Game, brutally murders Junpei with an axe after starting to doubt his loyalty. Thereafter, the player learns not to go into a puzzle room in a group with Clover, which would inevitably end with them confronting her alone and being killed.


Clover walks away nonchalantly after cutting down Junpei with an axe.

In a similar fashion, the player might learn of characters that are hiding secrets or harboring information vital to the progression of the plot, and so on subsequent playthroughs they will choose to form exploration groups with these characters in order to advance their respective storylines. This process of trial and error, involving the accumulation of information across successive playthroughs, exactly mirrors communication through a morphic field between the player and avatar.

VLR utilizes a similar branching narrative structure, but one that relates more directly to the idea of morphic fields enabling the transfer of knowledge between parallel timelines. Although 999 does not really touch upon morphic resonance between parallel universes, VLR makes it abundantly clear that human consciousness is able to jump between universes in the stream of time, and it is this type of morphic resonance that is utilized by the protagonist, Sigma, to survive the Nonary Game and escape alive. Right from the start of the game, the player is presented with a screen full of branching timelines that diverge at every key decision point in the game.


This image is an example of what the player’s narrative flow chart might look like near the end of VLR. Grey boxes with white question marks represent yet-unseen parts of the game’s narrative. The “NOVEL” sections in blue are narrative choices that the player has made, and the sections in green with question marks are decisions that have already been resolved or could be resolved upon replaying them by gleaning information from other parallel timelines (with the ones in black representing yet-unresolved dilemmas). The character icons are endings for those specific characters. Finally, the skull icons indicate points at which the player died.

As in 999, the player must play through certain branches of the narrative, and then use the knowledge gleaned from these short playthroughs to advance other sections of the timeline. This is especially important in the case of the “story locks,” or black icons with question marks on them, which are key moments in which the protagonist faces impending doom—for example, being threatened by another character. The only way to move past these locks is to explore other timelines in the game and talk to other characters in order to find out the piece of information that will help the protagonist survive that specific event.
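Mechanically, a story lock behaves like a flag gate: a branch stays closed until the player carries a fact in from another timeline. A generic sketch follows (all names are illustrative, not drawn from the game’s data):

```python
# Generic sketch of a "story lock" as a knowledge-flag gate (names invented).

knowledge = set()   # facts the *player* carries across timelines

def explore_branch(branch):
    # Certain branches yield facts that are useless locally but vital elsewhere.
    if branch == "ally_route":
        knowledge.add("bomb_deactivation_code")

def story_lock():
    # The lock opens only with knowledge from a parallel branch, mirroring
    # morphic-resonance-style transfer between timelines.
    if "bomb_deactivation_code" in knowledge:
        return "lock opens: the protagonist survives"
    return "LOCKED: explore other timelines first"

print(story_lock())            # locked on a first visit
explore_branch("ally_route")
print(story_lock())            # open once the player carries the fact back
```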

This process is identical to the proposed use of morphic resonance in both Nonary Games, which is to transfer knowledge to the participants in times of extreme stress or need, when their ability to connect to and resonate with the morphic field is enhanced. In this case the player acts as the transmitter of information through the field, and the video-game protagonist is the receiver. By forcing the player to use information from parallel universes to advance the story, the game strictly imposes the paradigm of morphic fields on the player’s communication with the game’s protagonist. This morphic communication with the protagonist is made even more believable by the fact that the real world and the game world could be seen as existing in parallel dimensions, and thus information can be transferred between them in the way that the game describes.

Morphic Resonance and Gameplay

Aside from the narrative structures of each game, the gameplay mechanics further reinforce the idea that the player and protagonist communicate through morphic resonance. Both games involve extended puzzle-solving escape-room sequences, in which the player must interact with the environment around the protagonist to help them solve challenging problems. Once again, the player is using their real-life knowledge of problem-solving and mathematics to help the protagonist complete puzzles and escape, which we can understand as the player communicating ideas to the protagonist through a morphic field. Moreover, by virtue of being video game avatars, Junpei and Sigma literally can’t solve puzzles without the player’s knowledge and influence, and hence are directly reliant on the player transferring knowledge to them to progress through the game.

Perhaps the most poignant example of morphic communication between player and protagonist manifested in gameplay occurs at the end of 999, as the player helps Akane solve the Sudoku puzzle that resulted in her untimely death in the past. The player must save Akane by performing actions on the DS touch screen that are then transferred to the past, represented by Akane sitting by the puzzle on the top screen. However, the DS must also be rotated 180 degrees to view the puzzle right-side up.

Turning the DS upside-down to help Akane stay alive in the past.

This juxtaposition of the two screens on one device, as well as the physical rotation involved in solving the puzzle, very literally bring the action of the game into the player’s real-life surroundings, making them feel as if they are communicating with Akane through the morphic field. Although Junpei is the one most immediately transmitting the instructions to Akane within the game, it is the player who is helping Junpei and Akane escape, and thus the player who is acting as a transmitter of information through a morphic field between their world and Junpei’s world.

Unlike 999, VLR does not have any single experience like this one. However, the much more direct action of jumping through time at will to find out information about the Ambidex Game does create an equivalent sense of physical connection to the idea of morphic resonance.

Why Does Player-Avatar Morphic Resonance Matter?

Clearly the game developers have gone to great lengths to ensure that the players feel connected to the protagonists in each game through morphic fields, which invites the question of exactly why this is the case. On a superficial level, this greatly increases the enjoyment factor of both games and makes them much more engaging, fascinating experiences. Not only are the ideas of morphic fields and subconscious communication between parallel universes able to pique any player’s interest right from the get-go, but the fact that players feel like they are seemingly performing this type of communication in real life makes it a more visceral experience of surprise and discovery.

However, at a deeper level, mirroring the player’s experiences with those of the characters in the game also makes the player feel trapped in the game world. By virtue of being escape games, these titles not only aim to present an engaging experience, but also to put human behavior and psychology under the magnifying glass in situations of extreme stress, as in the Nonary Game. They therefore endeavor to convey much more profound messages about human nature, and by making the player feel trapped in the game and thereby empathize more with the characters’ struggles, both internal and external, they can get such messages across quite effectively. Despite displaying seemingly negative features of human behavior, such as the eager willingness to betray others to save one’s own life (highlighted by the Ambidex Game), or to endanger many people’s lives for the sake of research (as in the case of Cradle Pharmaceutical in 999), these games ultimately convey hope and optimism about the untapped potential of the human mind and subconscious. By having the main characters escape in the “true” ending of both games, staying loyal to the very last, the games convey a sense of hope that, no matter how bad the situation gets, human ingenuity will always pull through. In the case of this game series specifically, the unfathomable, untapped depths of the human subconscious are the saving grace of the day, making the player deeply excited and enthused that something like the morphic field might actually exist and might be watching over us, so to speak, in times of trouble.

Conclusion

Despite being a pseudoscientific concept with no real scientific backing, morphic resonance is successfully used to ignite the player’s imagination and to instill a sense of wonder about all of the undiscovered facets of the human mind. In this way, the developers effectively get across the excitement they themselves felt when reading about this fantastical concept for the first time. And who knows: maybe the next time you find yourself in a potentially life-threatening situation (which I of course hope does not happen!), humanity’s morphic field will save the day after all.


Kent Vainio is a featured author at With a Terrible Fate. Check out his bio to learn more.

Citations

[1] “Morphic Resonance – The Skeptic’s Dictionary – Skepdic.com.”

[2] In both 999 and VLR, the player’s avatar is the main protagonist of the story (either Junpei or Sigma respectively), so I use the terms avatar and protagonist interchangeably depending on the context.

[3] “Nine Hours, Nine Persons, Nine Doors.” Wikipedia.

[4] “Zero Escape: Virtue’s Last Reward.” Wikipedia.

 

Where are the Humans in NieR: Automata?

Introduction

Regulars of With a Terrible Fate know that Nier is near and dear to my heart because it was the game that first motivated me to write analytically about video games, even before my work on Majora’s Mask. You can therefore imagine how excited I was when the game was given a sequel, NieR: Automata. While I was initially worried that it would fall short of its predecessor, I think it’s safe to say that NieR: Automata ended up being even more philosophically rich than Nier. To that end, it’s time that NieR: Automata met With a Terrible Fate.

Regular readers also know that my analytic method with respect to video games typically focuses on clarifying the precise and often surprising relations in which the players of video games stand to the stories of video games. In that regard, this paper is no exception: I want to convince you that you, the player, are involved in the story of NieR: Automata in a surprising and illuminating way. But, for the sake of full transparency, I’ll start by warning you that, as you might imagine, this work contains LIBERAL SPOILERS for NieR: Automata, Nier, and Drakengard (starting in the next paragraph!). If you’re at all familiar with Nier and/or NieR: Automata, then you know that the stories of these games deeply depend on facts that are only revealed quite late in the game; so, if you haven’t yet played through the games, I wouldn’t recommend reading this yet.

With that in mind, let me offer a roadmap of the paper. I frame this analysis as the attempt to answer a seemingly simple question: “Where are the humans in NieR: Automata?” Now, if you’ve only just started the game, you’d probably say, “They’re on the moon, obviously”; if instead you’ve played through the whole game, you’d probably say that this is an ill-posed question because, at the time of NieR: Automata’s events, humans are long-extinct—there are only machine lifeforms, androids, animals, plants, and pods. Fair enough; however, it’s undeniable that the presence of humanity is virtually ubiquitous in the world of the game. Machine lifeforms slowly recover human culture and become sentient; androids identify themselves as sentient even before machine lifeforms do; by the end of Ending E, even the simple pods that accompany androids are beginning to exhibit “human” traits like compassion and attachment. So when I ask where the humans are in the game, what I’m really asking is what the origin is of all the specifically human properties that the various organisms in the game eventually instantiate—especially the property of sentience, or self-awareness. You might think the answer is simply that these human properties originated in the humans that went extinct long ago in the game’s world; after all, the game mentions that the human characteristics of the androids are the result of their human creators.

It’s this second response that I want to challenge: I think that the player, rather than the extinct humans of the game’s world, is the source of the sentience that emerges in androids, machine lifeforms, and pods throughout the course of the game. I begin by clarifying the scope of my thesis, in an effort to show that, so far as I can see, my claims don’t threaten what one might call “canonical” interpretations of the game’s story. Then, I use an analysis of player-avatar relations to argue that the player is the origin of sentience and humanity in Nier and NieR: Automata. This, I think, is a fairly easy thesis to endorse. After this is established, I argue for the significantly more controversial thesis that the fictional world of NieR: Automata is actually nothing more than a data structure; that is to say, it is true within the fiction of the game that the world is just a computer simulation being manipulated by a real player. Finally, I conclude by explaining why these theses matter for understanding NieR: Automata: the game’s metaphysics, I argue, establishes unexpected fictional ethical mandates that bind the player as they engage the game.

1. Preliminaries

In the past—especially in my initial, four-year-old work on Nier—I have sometimes failed to be sufficiently clear about the scope and level on which my analyses of video games have applied, which has led to some confusion about how my work ought to be evaluated in comparison with competing analyses or interpretations of the games in question. This is an especially acute danger when discussing NieR: Automata because there are myriad possible ways in which one could interpret “the game.” To name a few: are you analyzing the game as a stand-alone narrative, or as the third installment in the three-part narrative sequence of Drakengard, Nier, and Nier: Automata? Are you analyzing the game as a set of equally possible narratives with 26 different endings, or are you analyzing the single narrative and ending within the game that you take to constitute the “true” story? And so on. There’s no obvious reason to endorse any one such analytical approach over the others; what matters is being clear on precisely what your analytical approach is, so as to avoid having it confused with other approaches in the vicinity. By clarifying my own approach in this way, I aim to show why the claims it generates about the game are fairly compatible with a wide array of other plausible analyses of the game.

To see what my project is up to, we need to distinguish between what we might think of as two “levels” of analysis. Call the first level of analysis ‘Narrative-Event Analysis’ (or ‘NE Analysis’), and define it as follows.

NE Analysis: The analysis or interpretation of the various events of a narrative, and of how those events are interrelated.

This is what most video game theorists and art critics are up to: they take the events of a given story and try to make meaning out of those events in a particular way. When YouTube personalities analyze and explain the lore of the Dark Souls games, they’re engaged in NE Analysis; when you try to sort out where the story of The Legend of Zelda: Breath of the Wild fits into the larger set of Zelda timelines, you’re engaged in NE Analysis; when you’re explaining how on earth the Shadowlord of Nier logically fits into Ending E of Drakengard, you’re engaged in NE Analysis. This is the time-honored tradition of taking the various events of a story and seeing how they best cohere with one another to form one meaningful, comprehensible work of art.

The question of where Breath of the Wild fits into Zelda timelines is a question for NE Analysis.

Now consider an altogether different level of analysis. Call it ‘Narrative-Grounding Analysis’ (‘NG Analysis’), and define it as follows.

NG Analysis: The analysis or interpretation of the metaphysical foundation in virtue of which the various events of a narrative obtain, and how that metaphysical foundation relates to the events that it actualizes.

Put this way, NG Analysis might sound unfamiliar, but (1) I think we often ask ourselves NG-Analysis questions about stories, and (2) this is the exact sort of analysis I’ve been applying to video games for several years now on this site. When you ask yourself what makes the constant regeneration of the Chosen Undead in Dark Souls possible, you are engaging in NG Analysis; when someone explains what it is about the world of Zelda that makes time travel possible, they are engaging in NG Analysis; when I am analyzing what makes it possible for machine lifeforms and Replicants to become sentient in the world of Nier and NieR: Automata, I am engaging in NG Analysis. This is also the sort of analysis I was undertaking when I claimed that: the player is the source of moral reality in Majora’s Mask; the narrative of BioShock Infinite is a universal collapse event caused by the player; and the entirety of Bloodborne is a dream.

My analysis claiming that all of Bloodborne is a dream is an example of NG Analysis.

What’s crucial to notice about these two levels of analysis is that neither level, at least in any obvious way, makes claims about the other. Insofar as this is true, the video game theorist is licensed to engage in NE Analysis about a game without worrying about what the right NG Analysis of the game is, and vice versa. For instance, suppose that you’re trying to decide between two competing theories of what makes it possible for Link to travel through time in the Zelda games: according to one theory, this is made possible by the will of the Goddess Hylia; according to the other theory, it is made possible because Link is some special kind of entity that can freely move through time in a way that ordinary Hylians can’t.[1] These two theories are trying to explain the same narrative events, and they can’t both be right; that means that we have to choose between them if we want to have a correct understanding of the game’s story (assuming that one of the two theories is correct, as opposed to both being incorrect). However, neither of these theories is going to have anything to say about how time travel in the game works: they’re only going to say what makes time travel in the game possible. So consider a question like this: in The Legend of Zelda: Ocarina of Time, how do the actions of Adult Link affect the events that Young Link experiences seven years earlier? (Think of an example like the Spirit Temple, where Adult Link and Young Link are apparently “interacting” with each other across time.) This is a question about how the events of the game’s narrative relate to one another—which is to say, it’s a question for NE Analysis to resolve. Whether time travel is made possible by Hylia or by Link’s constitution isn’t going to have any direct bearing on how the events concerning Adult Link relate to the events concerning Young Link; to answer this question, we instead need an NE Analysis specific to those events (e.g., “time travel works by allowing Adult Link to rewrite events of the past”).

I’ve only aimed to show here that NE Analysis and NG Analysis are indeed separate levels of analysis: when you’re engaged in NG Analysis, you’re analyzing something fundamentally different from what you analyze in NE Analysis. In NG Analysis you analyze the metaphysical foundation of a story’s events, whereas in NE Analysis you analyze the events themselves. Why does this matter as a preliminary to my analysis of NieR: Automata? Well, there are many interesting questions about how the events of NieR: Automata relate to each other, to the events of Nier, and to the events of Drakengard. To name a few potential questions of this sort: where did the aliens in NieR: Automata come from? How, if at all, does White Chlorination Syndrome relate to the Black Scrawl? What effects does 2B’s consciousness have on A2, after A2 kills 2B? It should be clear by now, I hope, that these are all questions that NE Analysis is tasked with answering. And it bears mentioning that “canon interpretations” of a story—roughly, the “correct” interpretations of the story’s events, often deemed correct simply because the author says they’re correct—typically belong to NE Analysis as well. Canonical interpretations of narratives rarely have anything substantive to say about the metaphysical grounds of a narrative’s events.

Recall that what I’m interested in pursuing in this paper is a matter of NG Analysis: namely, the question of what it is in virtue of which apparently human properties obtain in the world of NieR: Automata. Given what I’ve said, it follows that my arguments in this paper won’t directly bear on “canon” issues of how to properly interpret the events of the game on the level of NE Analysis. Put differently: if you already have some favorite theory about how the events of the game are interrelated, my work here doesn’t necessarily pose a threat to that theory.[2] If, on the other hand, you have a favorite theory about the metaphysical grounds of the narrative’s events (and I frankly haven’t seen any such theories out there yet), then my theory is a competitor to that theory, and you’ll have to see which seems more plausible to you upon reflection.

2. Becoming Human

I’ve established the level on which I intend my analysis to operate: in this paper, we’re exploring the metaphysical foundations of NieR: Automata. In this section, I offer an argument to the conclusion that the humanity of the player is what metaphysically grounds an entity’s “becoming sentient” (i.e. being self-aware and instantiating human properties) in NieR: Automata and Nier. I’ll make this argument by first focusing on the nature of maso and Project Gestalt, and then by extending it to the nature of machine lifeforms, androids, and pods. This will directly lead us to the argument of the next section—that the world of NieR: Automata is a data structure.

The Giant/Queen-beast, from which the maso originates.

‘Maso’ is a substance, originating in the ending of Drakengard, that served as the impetus for Nier and (subsequently) NieR: Automata. Very roughly, in Ending E of Drakengard, the protagonists confront and destroy an otherworldly “Giant” (also known as the Queen-beast), which subsequently releases maso, an otherworldly, “multidimensional” particle. The maso induces ‘White Chlorination Syndrome’ (‘WCS’), a disease that forces a choice on humans: either form a pact to become the servant of a god from another world, or perish by turning into a statue made of salt. Humans were able to avoid this disease by using maso to develop “multidimensional technology” that separates their souls from their bodies until a time at which WCS has died off, at which point humans would reunite their souls with their bodies (this plan of defense against the disease was called ‘Project Gestalt’ and was central to the plot of Nier). However, the soulless bodies preserved for humans—entities called ‘Replicants’—ended up developing “a sense of self” (i.e. sentience). This advent of self-awareness in Replicants led to a corresponding loss of sentience in the separated souls of humans, called ‘Gestalts’—this loss of sentience was known as ‘relapsing’ and caused the Gestalts to turn into aggressive, animalistic creatures (known to Replicants as ‘shades’).

Replicant Nier and his daughter, Replicant Yonah.

The protagonist and avatar of Nier—technically named by the player, but called ‘Nier’ for convenience—is the Replicant corresponding to “the Original” Gestalt, someone whose data was central to the development and sustainability of Project Gestalt. The story of Nier (again, very roughly) follows Nier’s struggle to save his daughter—also a Replicant—from “the Shadowlord”—an entity that Nier sees as an enemy, but who is actually his own Gestalt (“the Original”) trying to reclaim his daughter’s Replicant. When Nier kills his Gestalt, he effectively derails Project Gestalt, which leads to the eventual extinction of humanity.

Replicant Nier killing the Shadowlord, his own Gestalt.

That’s a far-too-condensed reconstruction of what I take to be the key and relatively uncontroversial elements of the narrative that begins with Ending E of Drakengard and proceeds through the conclusion of Nier. The key points to notice for my purposes are: (1) maso is a multidimensional substance that binds humans to gods from other worlds, (2) the avatar of Nier is the Replicant that corresponds to Project Gestalt’s Original, and (3) sentience is more-or-less zero sum between a given Gestalt-Replicant pair: if the Replicant gains it, then the Gestalt starts down the road to losing it (and metaphysically, this seems reasonable: if a Gestalt-Replicant pair is supposed to be just one conscious entity, split into body and soul, presumably it would be able to sustain just one consciousness).

I think that, merely from the fairly uncontroversial facts I’ve highlighted about the story, a surprising but intuitive thesis about the source of sentience in Nier presents itself: namely, the player of the game is the source of sentience in Nier (the avatar) and other Replicants. Notice again that maso, according to Drakengard, is a substance that straddles dimensions and binds people to the gods of other worlds. It seems appropriate and explanatorily powerful to say that, as an avatar, Nier—again, the Replicant corresponding to the Original—is importantly bound to the player of the game, an extra-dimensional entity that determines Nier’s actions and choices throughout the game’s story. Given that we know maso renders humans the servants of gods, and Gestalt technology is derived from maso, we can explain Nier’s sentience by saying that he inherits it from the extra-dimensional entity to which he is bound: a sentient, human player.[3]

A potential objection: what about the fact that other Replicants gain sentience in Nier? These other Replicants are clearly not avatars, and so it can’t be the case that sentience in Nier is categorically derived from the player’s sentience.

My response: recall that Nier’s Gestalt has the special status of being “the Original” in Project Gestalt. To my knowledge, this status is never given a full and precise explanation, except to say that this Gestalt uniquely makes Project Gestalt possible, and that Project Gestalt is irreparably derailed when Nier kills his Gestalt. Given this special priority that Nier and his Gestalt have in the efficacy and progress of Project Gestalt, it strikes me as plausible to suppose that Nier’s sentience would play a causally decisive role in the emergent sentience of other Replicants. That is to say, Nier’s status as the Original’s Replicant makes it the case that his acquired sentience—which, again, he inherits from the player—subsequently induces sentience in other Replicants. So, even while the other Replicants don’t directly inherit sentience from the player, their sentience is still derived from the player, given the causally decisive role of Nier’s sentience.

Another potential objection: the player of a video game is a real person, not a fictional entity. So they simply couldn’t be part of the game’s narrative: real things can’t causally interact with fictional things in that way (e.g., I, a real person, can’t stop Tom Sawyer from painting a fence in the fiction of Mark Twain).

My reply: no doubt this is true, but fictions give real people fictional roles to play all the time. Think, for example, of second-person novels, which put the reader in the fictional role of whomever the narrative is addressing. And even though it might seem unintuitive or metaphysically unhappy to say that the player, from “another dimension,” is influencing the actions of Nier in his dimension, recall that the narrative already allows for this kind of influence even prior to my interpretation: again, WCS induces pacts between humans and gods from other worlds. So my analysis is metaphysically of a piece with the rest of Nier’s narrative.

I think my analysis is illuminating here because it links the rise of sentience in Replicants to Project Gestalt’s origins in maso; it also gives explanatory and metaphysical force to Nier’s status as an avatar (that is to say, Nier is an avatar because his maso-derived connection to the player allows the player to determine his actions). The analysis also strikes me as better than saying that Replicants “simply became sentient,” because, by linking sentience in Nier to the human playing the game, we are able to identify the sentience of Replicants as derived from a real source of sentience (i.e. an actual human).

So much for the good reasons to accept the player as the ground of sentience in Nier; the question now is, can we extend this metaphysical account of sentience to the world of NieR: Automata? Yes, but admittedly we can’t do it directly: given that YoRHa androids, machine lifeforms, and pods aren’t the direct products of Project Gestalt, we can’t simply say that maso technology allows the player to influence them all, and leave it at that. However, I think we can make an argument by inference-to-the-best-explanation that gives us good reason to believe that the metaphysical account we’ve given in Nier does extend to NieR: Automata; we’ll just have to invoke some data about machine cores and the player’s metaphysical relation to the game’s world.

2B and 9S with their black boxes, fashioned from recycled machine cores.

First, machine cores. These are the central components of the machine lifeforms that the YoRHa androids in NieR: Automata battle without end; it’s also revealed late in the game that these cores are recycled and used as “black boxes” to power YoRHa androids. There are two crucial upshots about these machine cores. The first upshot is that information archives provided in the game reveal that the cores are responsible for the structure of the consciousness of whatever entity they’re powering; we know this because the archives say that, in virtue of both machine lifeforms and YoRHa androids using machine cores, “it could be said that the consciousnesses of YoRHa units and machine lifeforms share the same structure.” The second upshot is that machine cores are well-suited to represent consciousness in entities that are designed to ultimately be destroyed. We know this because archives within the game report that “black boxes were installed [in androids] after determining that it would be inhumane to install standard AI in androids that are ultimately destined for disposal.” Given that YoRHa androids are apparently sentient, and their machine-core powered black box is responsible for the structure of the androids’ consciousness, it follows that we can understand what grounds sentience and humanity in NieR: Automata by understanding how machine cores are conducive to sentience and humanity.

Now, consider the question of what sort of metaphysical relation the player stands in to the world of NieR: Automata. What, in other words, does the player’s access to the game’s world amount to, within the fiction of the game? It seems to me that players have a fairly direct form of access to and influence on the game’s world. The game’s manifold endings exemplify this: the player’s choices are often able to determine not only the actions of their avatars, but also the desires and motivations of their avatars. For example, if the player directs 9S away from his initial mission helping 2B, the game ends with text saying: “9S was last heard to say: ‘I can’t control my curiosity about machines anymore. I’m leaving so I can study them as much as I want!’ He was never heard from again” (this is Ending G). Similarly, if the player has 2B kill the machines that putatively want to establish a peace treaty with Pascal’s village, the game ends with text saying: “In a sudden fit of temper, 2B wiped out the machine lifeforms, and no peace was born that day” (this is Ending J). This tight connection between the player’s choices and the mental states attributed by the game to the avatar androids suggests that the player does have some measure of influence over not just the androids’ actions, but over their psychology as well. Even more to the point, players can alter the very constitution of their avatar androids by changing the plug-in chips that determine their various characteristics, even going so far as to remove their OS chip if they wish. In other words, players seem to have deep and pervasive control over myriad constitutive features of their avatar androids’ identities.

Ending G, in which the player directs 9S away from his mission supporting 2B.

In various ways, the activities of the pods that accompany avatar androids 9S, 2B, and A2 also suggest that the player enjoys a direct presence within the fiction of the game. In particular, notice that when the player uses her controls to change the “camera’s” perspective on the game, she doesn’t actually move some disembodied, third-person viewpoint: instead, as she moves the camera, the pod following her avatar moves accordingly, in such a way that the pod is always facing straight ahead from the player’s perspective. This establishes a sense in which the pod is directly connecting the player to the world of the game, thereby allowing the player to really be present within the world of the fiction rather than merely viewing the fiction from an external position in the real world.

Notice that even as 2B faces orthogonally to the camera view, her pod matches the direction of the camera view.

Now we have on the table all the considerations needed to argue to the conclusion that the player is the metaphysical source of sentience in NieR: Automata. First, the consciousness manifested in YoRHa androids and machine lifeforms isn’t standard AI; given the context, which says that standard AI would have been inhumane for disposable androids, we can safely infer that the consciousness made possible by machine cores is somehow “less authentic,” “less genuine,” or less “sui generis” than “standard AI,” where “standard AI” probably means genuinely, intrinsically self-conscious AI of the sort that we still have yet to achieve in the real world. We also know that the player of NieR: Automata has apparently direct access to the game’s fictional world: the pods act as a direct means of access within the fiction by which the player can manipulate the world, and the player’s choices are reflected in the actions, psychology, and basic makeup of the androids; further, the iterative structure of androids’ existence—constantly dying, being re-instantiated, and recovering their old data—closely mirrors the player’s actions of guiding them through the game, failing, reconstituting the android, and recovering their data. Now, returning to my analysis of Nier and assuming that it’s correct, we also know that technology exists (namely, the maso technology of Project Gestalt) that allows humans from other dimensions to impart their sentience to otherwise non-sentient entities. Given that such technology already existed, I think we can infer that the best explanation of the sentience that emerges in androids and machine lifeforms is that, through the construction of black boxes from machine cores, androids were able to induce the same sort of relationship between android and player that previously existed between Nier and player. And, just as the player’s sentience proliferated throughout Replicants in Nier, so too was the player’s sentience diffused in NieR: Automata amongst beings with the relevant kind of technology—that is, beings with machine cores. This explains why both YoRHa androids and machine lifeforms are susceptible to becoming sentient.

What about the pods? It’s clear enough by the end of NieR: Automata’s Ending E that the pods are also at least on their way to sentience, if not fully sentient; yet there’s no evidence (so far as I know) that they’re also powered by machine cores. So how can my account explain their sentience, since they presumably wouldn’t be connected to the player’s sentience via machine cores? I think the answer here is straightforward. Recall that, on my account, pods act as conduits that directly connect the player to the world of the game. Given this direct relationship between the player and the pods, there isn’t any need to appeal to machine cores in explaining the pods’ emergent sentience: we can instead say that, since the pods already possess basic operational intelligence and they’re being used to directly transmit the player’s agency to the game’s world, it’s only natural that the pods could somehow “pick up on” or learn to emulate the consciousness of the player to whom they are intimately connected. This response is admittedly somewhat more vague than the analyses of sentience in androids and machine lifeforms, but this vagueness is a direct result of there being proportionately less information available about the structure and ontology of pods. Thus, I don’t think the additional vagueness in my account should speak against my analysis per se; we should instead just be disappointed that there isn’t more documentation about pods within the world of the game.

If my arguments in this section are right, then the instances of sentience that emerge in Replicants, YoRHa androids, machine lifeforms, and pods are all deeply related in a surprising and informative way: all of them are derived, directly or indirectly, from the metaphysically foundational sentience of the video games’ player. As I emphasized at the outset, this Narrative-Grounding Analysis needn’t settle the most pressing questions of how to interpret the actual events of the games: my analysis, for example, needn’t bear on the question of who the aliens are who brought the machine lifeforms to Earth, nor need it bear on the question of which of NieR: Automata’s endings is the “true” ending (if that’s even an intelligible question to begin with). What the analysis instead succeeds in doing is establishing a crucial link between the player of the Nier games and the content of those games: the player doesn’t just determine what the avatars do in those games—the player actually enables entities in those games to become sentient within the fiction, in a metaphysically robust sense.

3. Playing a Fictional Video Game

I think that the above analysis is the best account of the metaphysical foundation of sentience across both Nier and NieR: Automata; however, I think that NieR: Automata suggests a further, much more radical interpretation of the fiction’s metaphysics, one which invites us to reinterpret the precise significance of the player’s sentience and agency on the game’s world. I want to emphasize, however, that this further interpretation is both (1) much more speculative than the above analysis and (2) theoretically separable from the above analysis: that is to say, you can consistently endorse my above analysis while also rejecting the argument presented in this section. All the same, I would be remiss not to mention this more radical interpretation of the game’s world, because there is at least some evidence for it within the game and it allows us to conceptualize the game in an extremely unexpected, unorthodox, and challenging way.

The central thrust of this more controversial interpretation is that it is true within the fiction of Drakengard, Nier, and NieR: Automata that the world is nothing more than a data structure being manipulated by a human from the outside. In other words, put roughly, this interpretation claims that it’s true within the fiction of the video game that the world is nothing more than a video game. Just to be clear about how radical this thought is: our typical assumption with the fictional worlds of video games is that these worlds, within the context of the fiction, are real. For example, it doesn’t seem to be true within the fiction of The Legend of Zelda that the world is an interactive data structure; instead, it seems true within the fiction that there is a real world called Hyrule, in which Link really performs certain actions, quests, etc. The thesis I’m exploring in this section is claiming that it isn’t true within the fiction of Nier games that there is a “real world” in this sense: instead, within the context of the fiction, there is a computer-generated world with which a human player interacts.[4]

I see two central data in NieR: Automata that support the thesis that the fiction of the Nier games represents a pure data structure: the first datum is information about the overarching “network” that governs machine lifeforms in the game, and the second datum is the way in which the game’s content is generally represented to the player. I consider each datum in turn.

After the player completes Ending E of the game (assuming the player doesn’t delete her data—more on that in the next section), a “Machine Research Report” is added to her information archive. The report, written by Information Analysis Officer Jackass, details the network that governs the machine lifeforms, explaining how it was created and how it evolved into a “meta-network,” codenamed ‘N2’ (typically represented within the game as two Red Girls). It offers the following information about the machines, their network, and their meta-network.

A representation of N2 as one of the Red Girls.

“Machine lifeforms are weapons created by the aliens. The only command given for their behavior was to ‘defeat the enemy’. However, it appears that their capacity for growth and evolution went too far, and they eventually turned on and killed their creators.

“At this point, machine lifeforms recognized that the goal of ‘defeating the enemy’ actually REQUIRED an enemy. In order to maintain this singular objective, they reached the contradictory conclusion that their current enemies—the androids—could not be annihilated completely, lest they no longer have an enemy to defeat.

“In order to resolve this inherent contradiction, the machine lifeforms began to intentionally cause deficiencies in their network, diversifying the vectors of evolution for all machines. This is the cause behind some of the more ‘special’ machine lifeforms, such as Pascal and the Forest King.

“Meanwhile, the deficient network began repeating a process of self-repair while incorporating surrounding information, until it finally reached a fixed state as a new form of network. Traces of information regarding human memories from the quantum server of the old model were discovered, indicating that it had integrated them during the final stages of its growth process. Said server contained a record of the discarded ‘Project Gestalt’, as well as information on the human who was the first successful example of the Gestalt process.

“Having acquired information regarding humanity, the network’s structure changed once more, becoming what might better be called a meta network (or a ‘concept’, to borrow the words of the machines). This led directly to the formation of the ego we identify as N2.

“…So then! To sum up: For hundreds of years, we’ve been fighting a network of machines with the ghost of humanity at its core. We’ve been living in a stupid ****ing world where we fight an endless war that we COULDN’T POSSIBLY LOSE, all for the sake of some Council of Humanity on the moon that doesn’t even exist.”

The obvious way to read this is to take it at face value: aliens created machines that killed them; these machines fought the androids; the machines ultimately learned about the real events of Project Gestalt and evolved, etc. But there’s another potential interpretation of this information available to us: suppose that aliens created a vast data structure, with “machine lifeform” programs that were governed by an overarching network with some sort of artificial intelligence. The network was designed with the purpose of “defeating enemies”; after generating and killing virtual representations of their creators, the only network-independent entities it knew, the network had to find a further, more sustainable way to fulfill its purpose. To this end, the network generated a virtual history of humanity and Project Gestalt within the data structure, along with the subsequent androids whose express purpose—protecting humanity—would necessarily put them into conflict with the machine lifeforms, thereby ensuring that the network would always be able to strive towards its purpose of defeating enemies. On such an interpretation, the worlds of Drakengard, Nier, and NieR: Automata are just the data structure generated by the machine’s network and meta-network: the network is the cause of those worlds, rather than just another element contained within those worlds.

Of course, the network and its machines couldn’t fulfill their purpose of defeating enemies simply by programming other entities to attack them: this would effectively constitute a fight against oneself, which is no real fight as such. So, the natural solution was to enable some external agency to control the androids and direct them to fight against the machines—and this external agency is what the player provides. The interesting, unintended consequence of the player’s introduction to this data structure—returning to the themes of the last section—is that the player’s sentience ends up “infecting” otherwise non-sentient computer programs with genuine sentience, turning what was once a mere data structure with quasi-artificial intelligence into a virtual world that supports sentient virtual beings.

To reiterate, this interpretation is absolutely wild. Nevertheless, I think there’s enough evidence for it within the game to at least consider it as a seriously possible interpretation. As further evidence about the machine network, consider the monolithic Tower that emerges after the death of 2B—the Tower in which N2 resides, and in which A2 and 9S face each other. The purpose of this Tower is expressed by N2 directly to 9S, as he is losing consciousness during Ending D. It’s worth quoting what 9S learns from N2 about the Tower.

The Tower in NieR: Automata.

“This tower is a colossal cannon built to destroy the human server. Destroy the server… and rob the androids of their very foundation. That was the plan devised by [the Red Girls—i.e. N2].

“But they changed their mind. They saw us androids. They saw Adam. And Eve. They saw how we lived, considered the meaning of existence, and came to a different conclusion.

“This tower doesn’t fire artillery. It fires an ark. An ark containing memories of the foolish machine lifeforms. An ark that sends those memories to a new world.

“Perhaps they’ll never reach that world. Perhaps they’ll wander an empty sky for eternity. It’s all the same to the girls. For them, time is without end.”

In a similar way to the Machine Research Report above, we could interpret this in the obvious and literal way, but it seems like there’s another reading available that resonates with the radical interpretation we’re presently considering. On this alternative reading, the Tower is something like the central hub that generates the virtual world. Its libraries of “information” with various port numbers are actually libraries of functions to call to instantiate and run all the various virtual entities that constitute the network’s world; the network planned to fulfill its purpose (“defeat the enemy”) by annihilating its enemies (this is the discussion of the Tower as a “colossal cannon”), until the network realized it could better fulfill its purpose in perpetuity by using the input of a human to re-instantiate the network’s virtual world and enemies over and over again. Remember the fact that NieR: Automata has 26 endings? On the wild interpretation we’re currently considering, the multitude of endings is explained as the network’s way of prompting players to “send the memories” of the virtual world’s entities to “a new world”—that is, a new possible outcome of the game. By constantly replaying and exploring all of the possibilities of the game, the player allows the network and its machines to infinitely strive to fulfill their purpose of defeating the enemy. In this way, the very structure of the game reinforces the idea that the machine network generated a virtual world for the player to engage in order to fulfill the network’s purpose of “defeating the enemy.”

As I mentioned earlier, the way in which the game presents its fictional content to the player further reinforces this wild theory that its world is just a data structure. The loading screens in the game present what is presumably in-game data about the various vitals and systems pertaining to whichever android is serving as the player’s avatar; pods are able to use the loading screen as a communication interface, further implying its in-game status as some sort of abstract data structure; the entire world as presented to the player will sometimes appear to “glitch” when all is not right with their android’s sensory systems, even though the world is not presented to the player through the android’s visual field; and the omnipresence of the virtual “data space” in which 9S can hack—appearing everywhere from machine lifeforms, to the minds of androids, to locks, to seals on the Tower—further suggests that the world could foundationally be just a virtual data structure. Taken individually, each of these data could be furnished with an alternative explanation; yet taken holistically, together with the previous considerations about the origin of the network, it seems at least possible to seriously consider that the world of the game is itself nothing more than a video game generated by the machine network.

This analysis of the game’s metaphysics is of course controversial, and I’m not at all as confident in it as I am in the previous section’s conclusions about the player as the metaphysical basis for sentience in the fiction. Yet the analysis has distinctive merits. NieR: Automata is a game that is obsessed with the formal elements of video games: machine lifeforms are designed with the purpose of defeating enemies (in other words, they are meant to be enemies to the avatar), and avatars—the YoRHa androids—are designed with the purpose of defending humanity (in other words, they serve humanity while also being directed by the inputs of an actual human player). This metaphysical analysis explains these parallels between narrative form and content by saying that the game’s fictional world just is the virtual world of a video game, and its constituent characters are designed accordingly. It also captures the narrative significance of the wide array of endings that the game has: whereas we would otherwise presumably have to admit that there’s no intrinsic narrative reason why the game has so many possible endings (we might instead simply say something like “the developers thought it would be more entertaining,” which doesn’t seem as satisfying an explanation), we can instead say on this account that the machine network constructed the world in this way in order to keep the player coming back and thereby sustaining the network’s purpose. So, although this section is not intended as a staunch defense of this interpretation of the game’s world, it is an invitation to take seriously the idea that NieR: Automata’s universe might really be what it most immediately appears to be: a video game.

4. The Ethics of Being a Sentience-Source

Suppose you find my above arguments convincing. You might still feel the urge to ask: “So what?” After all, I was very clear at the outset of this paper that analyses of a narrative’s metaphysical foundation needn’t have any direct bearing on how we interpret the events of that narrative. If that’s true, then why should we even bother with NG Analysis?

Well, in the first place, I should hope it’s apparent by now that NG Analysis does have implications for how we understand a video game and its fiction, even if it doesn’t directly bear on events in the game. I imagine, for instance, that we might feel different playing through NieR: Automata and thinking that its fictional world is fictionally just a data structure, versus playing through the same game and thinking its fictional world should be understood as fictionally real in the same way that our actual world is understood as real. Or consider how different the series of games would be if sentience arose intrinsically from Replicants, androids, and machine lifeforms, rather than arising derivatively from the sentience of the player. On that alternative understanding of the games’ metaphysics, the games would be presenting a world in which sentience can naturally arise out of programmed machines. In contrast, that isn’t the case on my interpretation: because the sentience of all these entities is ultimately grounded in the player’s sentience, machines only end up being sentient because the sentience of a naturally sentient lifeform (the human player) is shared with the machine. I take it that a fictional world in which intrinsically sentient machines are possible is crucially different from a fictional world in which such machines are not possible.

But suppose now for the sake of argument that the above considerations don’t move you. I want to close by considering one other way in which the analysis of NieR: Automata’s metaphysics deeply matters: namely, it determines the ethical commitments that the player has within the game to androids, pods, machine lifeforms, and other players.

If a given entity is sentient, then we typically think that the entity has moral rights—that is, there are morally permissible and morally impermissible ways for a moral agent (like a human) to treat that entity. Because androids, machine lifeforms, and (eventually) pods are sentient within the fiction of NieR: Automata, that means that, fictionally, there are right and wrong ways to treat them. These entities of course don’t have real moral rights because it isn’t the case that the programs representing them in the video game are literally capable of robust artificial intelligence, but when we engage in the fiction, it stands to reason that we must treat them as fictional entities with moral rights because of their fictional sentience. But notice that, based on your preferred metaphysics of sentience in the game, the sense in which these entities have moral rights will differ accordingly. If you think that these entities naturally became sentient independently of the player’s sentience, then they will have fictional moral rights regardless of whether the player interacts with the fictional world or not. On the other hand, if you agree with me that the sentience of these entities fundamentally depends on the sentience of the player, then it follows that these entities only have moral rights so long as the player interacts with the game’s world and thereby renders them sentient.

Why should these ethical considerations be any more compelling a case for the value of NG Analysis than the earlier considerations were? Because, it turns out, these ethical considerations will determine what choice you should make at a crucial juncture in the game’s narrative.

In Ending E of NieR: Automata, Project YoRHa enters its final phase: destruction of all androids and deletion of all data. Pods 153 and 042, together with the player, decide to recover the data of 9S, 2B, and A2 (the avatar androids)—thereby preserving the player’s data and allowing the player to continue exploring the game’s world and possibilities. In order to recover the androids’ data, however, the player must complete an exceedingly challenging mini-game in which she pilots a digital ship that destroys all the names in the game’s credits, all while avoiding myriad projectiles that the names are firing at the ship.

The credits-based mini-game in Ending E. Getting hit by three projectiles total is fatal.

It’s very difficult to complete this mission alone; however, after failing several times (assuming the player is connected to the online network of other players), the player will receive a “rescue offer” from other players. If the player accepts, then the ships of other players will join the player’s ship, making the mission extremely easy; however, every time a projectile connects with the pack of ships, another player’s data (not the original player’s) is lost. Once the player completes the mission, the androids’ data is successfully restored, and the pods offer the player an option: if she so chooses, the player can sacrifice her own data in order to help another player reach this ending, just as she was (presumably) helped in reaching the ending. At the price of the save data and records you have accumulated in the game, you can help another player—a perfect stranger.[5] Here’s the crucial ethical choice: do you agree to help the other player or not?

If you have the view that androids, machine lifeforms, and pods are fictionally sentient independently of you, the player, then, within the context of the fiction, these entities have moral rights against you no matter what. Given that choosing to delete your save data plausibly entails more-or-less “erasing” these entities, it stands to reason that such a view would forbid you from deleting your save data: to do so would be to help a stranger at the cost of annihilating countless sentient beings. In contrast, if you have the view that the fictional sentience of these entities fundamentally depends on the sentience of the player, then it follows that, were you to withdraw yourself from that fiction—for instance, by deleting your save data—these entities wouldn’t be fictionally sentient anymore, and thus wouldn’t have fictional moral rights against you. On such a view, helping a stranger would not transgress against the moral rights of anyone, since, upon deleting your save data, the androids, machine lifeforms, and pods would lose their foundational connection to your sentience, from which their own sentience derived. And since, other things being equal, you probably ought to help the stranger (after all, you were likely helped by strangers yourself in reaching Ending E), it follows on this view that you ought to delete your save data. So your view of the metaphysics of sentience in NieR: Automata could end up determining what you morally ought to do within the fiction when presented with this choice at the end of Ending E. If you think that your choices as a player within a video game matter at all, then this means you can’t afford to overlook the metaphysical foundation of NieR: Automata.

Conclusion

NieR: Automata, as I said at the outset, is a philosophically rich game across a wide variety of dimensions. I’ve only aimed in this paper to analyze the most foundational of those dimensions: the metaphysics of the game’s fiction. But those metaphysics, we’ve seen, are quite illuminating with respect to the rest of the game: they afford the player a central role as the wellspring of sentience in the game’s world, and they suggest new ways of grounding the self-consciously “video-game” aspects of the game’s narrative. These metaphysics may well be part of why the game’s exploration of sentience and the meaning of being human is so compelling: even as machines and androids wrestle with these concepts, the sentience they are trying to understand is ultimately your very own sentience; the humanity they want to know is your humanity. The human in NieR: Automata, therefore, is the one behind the controller.

2B and 9S

[1] Obviously, these are both toy examples, and it isn’t obvious that either of them is the correct account of Link’s time-traveling abilities.

[2] I say my work “doesn’t necessarily” pose a threat to that theory because there surely may be specific cases and ways in which an account of a narrative’s fictional grounds might restrict the set of possible interpretations of that narrative’s events. My point is simply that there is no a priori, categorical entailment relation between theories of a narrative’s fictional grounds (NG Analyses) and theories of the meaning of that narrative’s events (NE Analyses).

[3] In precisely what sense does Nier “serve” the player? An easy response would be to say that the player “controls” Nier in just the way that the literal control mechanics of the game suggest. If you’ve read my recent work on the foundations of video game storytelling, then you know I don’t think it’s right to say that players control avatars in that way; however, on my view, the explanation would simply be that the player already occupies a fictional role in the grounding of the game’s narrative, and Nier simply embellishes that fictional role by identifying it as a human, extra-dimensional entity controlling Nier. All of which is to say: my preferred view of video game metaphysics supports the interpretation of Nier that I offer here, but one needn’t subscribe to my broader video-game metaphysics in order to endorse this interpretation of Nier.

[4] While the interpretation I’m considering here is radical, it’s not without precedent: my most recent work on Xenoblade Chronicles defends the view that its universe (or, at least, the main universe within the game) is best understood to fictionally be a computer-generated world with external input from a player.

[5] There are of course ways to avoid the hard choice here by, for example, backing up your save data on an external source that the game can’t delete. I’m ignoring such methods on the grounds that they are illegitimate responses to the choice within the context of the fiction.

From PAX Aus: The Psychology and Neuroscience of Jump Scares

-by Nathan Randall, Featured Author. The following article is based on Nathan’s portion of With a Terrible Fate’s horror panel at PAX Australia 2016.

Lately there has been a trend of games released that center on jump scares.[1] The moment-to-moment gameplay in these games is relatively minimal, and in some cases even rather dull. But then, apparently out of nowhere, the monster appears on screen, killing the protagonist and scaring the player in the process. Some of these games include Slenderman, the upcoming Resident Evil 7, and the Five Nights at Freddy’s series.

But what is it about these games that makes them so effective at scaring people? And why might it be that people actually enjoy the experience of being scared senseless? It turns out that the fields of behavioral psychology and neuroscience have some answers to these questions. In order to answer them I will discuss various types of learning and how they apply to jump scares, describe the effectiveness of jump scares when the player is trying to multitask, and wrap up with a discussion of how hormones create the positive feelings that lead players to keep playing.

Before diving into these academic fields, however, I’d like to summarize the game that I’ll be using as my paradigmatic example of a game that makes fantastic use of jump scares: Five Nights at Freddy’s. Feel free to skip the following two paragraphs if you’re already familiar with the game.

In Five Nights at Freddy’s the player plays as a nighttime security guard who’s been hired to run five night shifts at “Freddy’s,” a Chuck E. Cheese-style establishment. However, as quickly becomes clear to the player, the real security threat at Freddy’s is not a break-in, but rather the animatronics that come to life at night and try to eat the people in the building. So the goal of the game ends up being simply to keep the animatronics from killing you during your five-night employment.

The security guard from Five Nights, with two of the deadly animatronics standing next to him.

You have to do all of this from within the confines of the security room, but you do have a few tools at your disposal: you can check the security footage from any of the dozen or so cameras set up throughout the facility, and you can briefly lock the doors to the security room. If one of the animatronics successfully reaches the security room, a jump scare follows, and the player loses.

Five Nights makes use of two different types of jump scares, which I term player-dependent and player-independent jump scares. The difference between these two types of jump scares is fairly intuitive. Player-dependent scares are contingent on the actions of the player. If the player sits still and does absolutely nothing, then the jump scare will not happen. However, if the player does some particular action, the jump scare will happen. Player-independent scares are exactly the opposite: they are not contingent on the actions of the player. The jump scare will happen even when the player does absolutely nothing.

However, there is one important complexity in this model: some jump scares occur only if the player fails to do certain things. The absence of such a scare is contingent on the player’s actions, insofar as the player can prevent it by acting; its occurrence, however, is contingent on player inaction. So when one of these jump scares actually appears, it behaves like a player-independent scare rather than a player-dependent one. More importantly, such scares exploit the same underlying psychology as player-independent jump scares, and for that reason it is useful to classify jump scares that occur only through player failure as player-independent.
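To make the taxonomy concrete, here is a minimal sketch in Python of the three trigger styles just described: action-contingent, inaction-contingent, and fully independent. Every name and number below is invented for illustration; none of it comes from any actual game’s code.

```python
import random

class JumpScareDirector:
    """Toy sketch of the trigger taxonomy; all values are illustrative."""

    def __init__(self):
        # Fully player-independent: fires on a random timer, whatever the input.
        self.independent_timer = random.uniform(30.0, 120.0)  # seconds
        # Inaction-contingent: fires unless the player keeps intervening.
        self.time_door_open = 0.0

    def on_player_action(self, action):
        """Player-dependent scare: contingent on a specific input."""
        if action == "open_forbidden_door":
            return True  # the scare happens *because* the player acted
        if action == "lock_door":
            self.time_door_open = 0.0  # action prevents a future scare
        return False

    def on_tick(self, dt):
        """Scares whose occurrence does not depend on player action."""
        self.independent_timer -= dt
        self.time_door_open += dt
        # Fires if the player fails to act for long enough...
        if self.time_door_open > 20.0:
            return True
        # ...or simply when the random timer runs out.
        return self.independent_timer <= 0.0
```

Note that both branches of `on_tick` behave identically from the player’s point of view, which is exactly why inaction-contingent scares are best grouped with player-independent ones.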

Player-dependent and player-independent jump scares make use of different underlying psychology. Player-dependent scares are based on operant conditioning, whereas player-independent scares are based on classical conditioning. Operant conditioning occurs when an animal performs some behavior more frequently because it is rewarded (or performs it less if it’s punished). In contrast, classical conditioning is the process of associating certain stimuli with other stimuli. I’ll discuss each of these types of conditioning and the associated jump scare type in turn.

Operant conditioning was first described by B.F. Skinner (along with Edward Thorndike). Skinner was known for the “Skinner Box,” which was the primary experimental paradigm for operant conditioning studies for decades. The basic idea of the Skinner Box is to put an animal in a box rigged with various contraptions. These contraptions deliver a reward or punishment in a fixed way in response to specific actions performed by the animal in the box (some of the rewards were food, juice, sex, or simply freedom from the box; the usual punishment was an electric shock). Skinner and Thorndike’s crucial initial discovery was that the animals tended to perform the actions that gave them rewards more quickly and artfully as more trials were run. This idea that rewarded actions occur more frequently is the basis of operant conditioning.

thorndike-box

Thorndike’s original experiment, in which a cat is placed in a box with a mechanism that opens the door.

skinner-box

A Skinner Box. The mouse can press the lever to receive a food pellet.

Creating an effective player-dependent jump scare, then, is a matter of playing with this tendency that people have to form action-response associations. The two ways of playing with this tendency that I’ll discuss in this article are: giving the player a false sense of security, and constantly changing the rules.

Creating a false sense of security is a fairly straightforward process. For a while, the game is very predictable. The player performs some action A in a specific context X, and then receives some reward R. This process repeats several times. Now whenever the player is in context X, they perform A without giving it much thought, and receive the reward R. To create the jump scare, all that need be done is make it so that at some point when the player is in context X, they perform action A, and instead of receiving R they receive a jump scare. This formula is very simple to execute, and when done properly is very effective, because it disrupts the operant conditioning process.
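As a sketch, that schedule might be implemented like this. The names are hypothetical, and the choice of eight rewarded trials as “several times” is my own assumption, not a claim about any particular game.

```python
def resolve(context, action, history):
    """Toy 'false sense of security' schedule: reward the same
    (context, action) pair until the association has formed,
    then violate the expectation. Thresholds are illustrative."""
    key = (context, action)
    history[key] = history.get(key, 0) + 1
    if history[key] <= 8:        # let the operant association set in
        return "reward R"
    return "JUMP SCARE"          # same context X and action A, disrupted outcome

history = {}
for trial in range(1, 11):
    print(trial, resolve("context X", "action A", history))
```

The first eight trials pay out the expected reward; the ninth delivers the scare off the very same input, which is the disruption of conditioning described above.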

Another way that horror games play with operant conditioning is by never allowing associations to form in the first place. There are two ways in which this can happen:

  1. Nothing ever happens the same way given the same input.
  2. The player fails regardless of their input.

Both of these techniques have surprising consequences, however. Depending on how they’re used, games that incorporate these techniques can stray outside of the horror genre, or even create an emotional experience distressing enough that the player is more likely to stop playing than to see the game through.

The tricky aspect of (1) is that this conditioning paradigm can easily stray out of horror and into absurdist comedy. One of the defining aspects of absurdist comedy is the audience’s inability to predict how events in the artwork will unfold. The two examples I’ll give are Jazzpunk and a very strange game, Japanese World Cup 3.

Rather than attempt to explain either of these games, I recommend watching the videos. The key takeaway from these examples is this: if the rules of the game are constantly changing and weird stuff keeps happening, then the game will likely induce laughter, or at least an “I don’t understand” response from the player.

jazzpunk

A tourist in Jazzpunk talks to the player. The “incoherent nonsense” is the subtitle for what the tourist is saying.

The tricky aspect of (2) has to do with another idea within behavioral psychology called learned helplessness. To understand learned helplessness, I’m going to explain the experimental procedure that led to its discovery. The experimental setup is basically a specialized Skinner Box. There are two compartments in the box, each with a floor capable of delivering an electric shock to an animal. There is a hole in between the two sections through which the animal can pass.

learned-helplessness

A diagram of the experimental paradigm that was used to first discover learned helplessness.

The experiment was originally run with dogs. There were two different conditions for the dogs. In both conditions, a light would turn on preceding an electric shock from the floor. What differed between the conditions was how much of the floor was shocked. In one condition, only the compartment that the dog was in when the light turned on got shocked. In the other condition, both compartments delivered a shock.

The behavior of the dogs varied massively between the two conditions. In the condition where only one compartment was shocked at a time, the dogs learned to jump to the other compartment as soon as they saw the light. In the other condition, however, the dogs eventually stopped doing anything at all. They would just lie there and whimper as they were being shocked. As a matter of fact, the dogs behaved this way even after being switched to the other condition. These dogs were in a learned helpless state.

The conclusion of the experiment was that the dogs in the second condition had learned that there was nothing they could do to prevent the shock, and this state persisted even after options became available for the dogs to help themselves. Learned helplessness is this state: hopelessness and despair at their most vivid.

Learned helplessness is an incredibly powerful emotional tool, and not something that game designers should overlook if they seek to make emotionally powerful games. But there is a huge problem with a game intentionally putting its player in a learned helpless state: the player is not actually trapped inside of the game in the way that the dogs were trapped in the cage. An average player is likely to quit long before they reach a state of despair, just out of frustration.

rage-quit

So in general, if a goal of game design is designing a game that people want to play, it’s probably better to avoid mechanics that make the player feel helpless.

However, some games are able to masterfully deploy learned helplessness without compelling players to give up as a result. One of those games is Undertale (warning: the following section has spoilers for the ending of Undertale). One of the final bosses of the game is Photoshop Flowey, Flowey’s form after he ascends to Godhood by absorbing the souls of six humans. He’s determined not only to defeat the player, but also to show them their powerlessness. To do so, he repeatedly kills the player and crashes their game, all the while telling the player that they can’t win and that they’re doomed to failure. The player learns one thing from Flowey: they can’t win. Personally speaking, the boss fight put me in a state of hopelessness unlike anything I’d felt in a game before.

photoshop-flowey

Photoshop Flowey.

So why doesn’t the player just stop playing? Why aren’t there many rage quits during this boss fight? The answer has to do with a major tagline for the game: “You are filled with DETERMINATION.”

determination-in-undertale

The player sees this line appear every time they save the game, and they are also told not to give up every time that they are killed. The player has thus been given hints throughout the game regarding what to do during the Photoshop Flowey boss fight: do not give up. The learned helplessness induced by Photoshop Flowey is made palatable by giving the player an anchor, so that they do not quit along the way and eventually see the other side of the confrontation. The game does eventually allow the player to win, when the souls of the humans rebel against Flowey and help the player defeat him. The game takes the player through an experience of learned helplessness and then helps them come out of it into triumph.

Classical conditioning was discovered by Ivan Pavlov while working with dogs. The experimental paradigm worked as follows. Initially, when Pavlov rang a bell, his dogs would not salivate in response (there is nothing inherently salivation-inducing about the sound of a bell). But, after repeatedly pairing the sound of the bell with giving the dogs food, eventually simply ringing the bell would cause the dogs to salivate. The bell thus became predictive of food, and caused a response of food-expectation from the dogs.

classical-conditioning

A graphical description of classical conditioning.

Classical conditioning forms the basis of player-independent jump scares, especially by way of suspense: by classically conditioning the player, a movie can create powerful feelings of suspense. While suspense is a powerful horror technique, I will not focus on it in this article other than to say that an effective player-independent jump scare tends to be one with little suspense beforehand, and thus one that is difficult to predict. One method of making a jump scare work well is to remove any predictive hints that it is about to happen. Player-independent jump scares thus depend on unpredictability to be effective.

But removing the predictive hints is actually harder to do than one may think. In our lives as consumers of media, we have been classically conditioned to consider many different things to be “suspenseful,” and thus predictive of a future jump scare. That’s part of the reason why watching a lot of horror makes jump scares in general less effective: the well-trained eye can see the scares coming. Modern culture has made many player-independent jump scares predictable. Their effectiveness has thus been undermined, and we as viewers are often not scared, or even find them laughable.

But video games are able to avoid the problem of the predictability of player-independent jump scares because of the potential for the use of randomness in games. The potential for video games to randomly generate content makes player-independent jump scares fundamentally less predictable than those of movies. A player-independent scare can just be set up on a random timer, and thus be less predictable than a movie, even in a second or third watching or play-through. Whereas in a movie you could pause at exactly the moment the jump scare occurs, look at the progress bar, and record the time that the bar reads, there is no plausible way to do this in games. An example of one of these random player-independent jump scares in a game comes from Five Nights at Freddy’s. The animatronics will at some point end up at the door to the security room and jump out to kill the player, but this event occurs on a roughly random timer.

pausing-a-movie

A movie can be paused at a particular time. The same thing will be happening in a movie at that particular time every time it is watched. Games are not so consistent.

Thus it is easier in some sense to pinpoint exactly when a jump scare will happen in a movie than it is in a game.
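The difference can be sketched with invented numbers: a film’s scare sits at a fixed timestamp, while a game can re-sample the delay on every playthrough. (Using an exponential delay, as below, has the extra property of being “memoryless”: however long you have already waited, the scare is no more due than it was at the start. The specific numbers are illustrative, not taken from Five Nights.)

```python
import random

def movie_scare_time():
    # Fixed: the scare lands at the same timestamp on every viewing.
    return 83.0  # seconds; illustrative

def game_scare_time(mean_delay=90.0):
    # Re-sampled on every playthrough, so no timestamp can be memorized.
    return random.expovariate(1.0 / mean_delay)

print([round(game_scare_time()) for _ in range(5)])  # different every run
```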

At this point we have most of the tools we need to analyze why it is that Five Nights at Freddy’s will scare you. First, related to operant conditioning, the game-ending jump scares (which are the most potent ones) are player-independent. The player can take action to try to stop the scare from happening, but when the jump scare actually happens there are no player-dependent stimuli preceding it. So the main jump scares end up being player-independent. Second, since player-independent jump scares are more random in games than in movies, and since the game does a good job at hiding the cues for the jump scare, the jump scares are more likely to catch you off guard.

The third and final reason that this game is so effective at scaring its players is that it induces in the player a state of cognitive overload. I will unpack this term by diving into some neural circuitry so that we can better understand just how Five Nights overloads these circuits.

The model of neural circuitry that I will introduce makes use of an important hypothesis in neuroscience: the cellular connectionist hypothesis. The theory states that if we understand how a neuron (the primary communicative cell in the brain) functions, how it communicates to other neurons, and how systems of neurons are connected to each other, then we can understand the function of the brain, and how the brain creates human thought and behavior. One important corollary of this theory is that if a particular communication pathway in the brain is faster than another pathway, the cognitive or behavioral response associated with the former pathway will happen more quickly than the behavior associated with the latter pathway.

The following model will initially appear a bit confusing, but I will break it down piece by piece.

model-of-neural-pathways

A diagram showing the communication pathways between various brain areas.

The chart shows the communication pathways in the brain that progress from sense to cognitive and/or behavioral responses. The four items in the middle are different brain areas that communicate with each other to progress from sense to response. The arrows simply represent communication pathways.

There are four brain areas to consider in this model. The first is the thalamus. I will not be discussing the function of the thalamus in this article, as it is complicated and not inherently related to fear response in the way the other brain areas I’ve included are. The only role the thalamus plays in the model I’ve presented is as a time-waster: it takes longer to pass through the thalamus than it does to simply traverse an arrow in the model.

The amygdala (to make an admittedly gross oversimplification) is the fear-center of the brain. When activated, it arouses the body, in a way that can either be positive or negative depending on context. Activation in the amygdala tends to correlate with a feeling of fear.

The prefrontal cortex is an area largely responsible for complex cognition and self-control. Thus most of its function is to suppress action in other areas of the brain, including the amygdala.

In a further top-down process, the dorsolateral prefrontal cortex manages the function of the prefrontal cortex. This process often manifests as the management of multitasking.

There are three features of the diagram that I would like to emphasize in particular. The first is that the path from the senses, to the amygdala, to thoughts/feelings/responses is the shortest, and thus fastest, pathway. In contrast, the shortest pathway through the prefrontal cortex runs through the thalamus, and thus takes a little bit longer than the amygdalar pathway. Finally, the three brain areas that I’ve focused on can all communicate with each other.

shortest-neural-pathway

The shortest communication pathway in the model. This one runs through the amygdala.

longer-neural-pathway

A slightly longer communication pathway that runs through the prefrontal cortex.

neural-communication

The three main brain areas in question communicate through the prefrontal cortex.

If we combine these three features of the diagram together with the mechanics of Five Nights at Freddy’s, we can start to get a clearer idea of the cognitive overload the game has the potential to put the player in, and the multiple levels of fear that a player is likely to experience. The mechanics of Five Nights at Freddy’s focus on multitasking. The player needs to keep track of multiple screens, multiple monsters, the battery levels on various devices, and even multiple doors to their room. Thus the dorsolateral prefrontal cortex is likely very active while playing Five Nights, as it is working to make sure that the prefrontal cortex is multitasking effectively and efficiently. Normally people are fairly decent at these sorts of multitasking games, but Five Nights adds in the complication of an impending jump scare.

The amygdalar pathway is faster than the more rational prefrontal cortex pathway, meaning that no matter what the player does, it is difficult not to be scared for at least a fraction of a second in response to a good jump scare. But if a person is expecting a jump scare, the prefrontal cortex can work to suppress the amygdala and keep the response from being as strong as it might otherwise be. This takes work on the part of the prefrontal cortex, however, and prevents it from multitasking as effectively as it otherwise could. So, under the conditions of cognitive overload that Five Nights at Freddy’s imposes, the player is likely to get scared by the jump scare, likely worried that a jump scare may happen at any moment, and likely anxious that they are not doing their tasks well enough to prevent the jump scare. All in all, these are the proper conditions to leave a player a shivering mess (myself included).
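Here is a toy numerical sketch of that race between the two pathways. The latencies and coefficients are invented; the only claims carried over from the model above are that the amygdalar route is faster and that suppression competes with multitasking for prefrontal capacity.

```python
AMYGDALA_LATENCY_MS = 40      # illustrative: the short, fast route
PREFRONTAL_LATENCY_MS = 120   # longer: this route runs through the thalamus

def startle_response(attention_free):
    """attention_free: fraction (0..1) of prefrontal capacity NOT spent
    on multitasking (cameras, doors, battery levels in Five Nights)."""
    fear = 1.0  # the fast amygdalar signal always lands first
    suppression = 0.8 * attention_free  # arrives late, weaker under load
    return {
        "fear_felt_at_ms": AMYGDALA_LATENCY_MS,
        "suppression_at_ms": PREFRONTAL_LATENCY_MS,
        "residual_fear": max(0.0, fear - suppression),
    }

print(startle_response(attention_free=0.9))  # prepared, idle viewer
print(startle_response(attention_free=0.2))  # cognitively overloaded player
```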

So if Five Nights at Freddy’s is so effective at making people uncomfortable, why does anybody play it? One answer to this question relates to hormones in the body. After the jump scare occurs, there is a release of excitatory hormones throughout the body. These excitatory hormones are context-dependent: if you are in a safe place physically and/or mentally, you tend to feel good, and if you are in an unsafe place physically and/or mentally, you will be likely to feel terrible.

When the jump scare is over, the player hopefully detaches from the game a little bit and realizes that they are in a safe space. With the added hormones, they feel good, so they decide to play another round. Then the hormone rush happens again, so they play another round. This cycle could potentially repeat for a long time.[2]

But for two reasons the above cycle will not be infinite. First, players get better at games over time. As this happens, it does not take as much cognitive control to play the game, and the player can dedicate more cognitive effort toward suppressing the amygdala. Second, the player can also habituate to the jump scare, which means that there is less brain activation in response to the fear stimulus than there was when the player was first playing the game. These factors combine to cause less of a fear response upon seeing the jump scare.
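Those two factors can be folded into a toy continuation of the same sketch: one term for habituation, one for skill freeing up suppressive capacity. The decay rates below are assumptions, not measurements.

```python
def fear_on_nth_playthrough(n, skill):
    """Toy model: habituation shrinks the raw amygdalar response with
    each exposure, while skill (0..1) frees capacity for suppression."""
    habituation = 0.85 ** n              # less activation per exposure
    suppression = 1.0 - 0.5 * min(skill, 1.0)
    return habituation * suppression

for n in range(4):
    print(n, round(fear_on_nth_playthrough(n, skill=n * 0.3), 2))
```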

What keeps players engaged, from a neuroscientific and behavioral-psychological perspective, is a scarier, more challenging game. By frequently releasing sequels that feature roughly the same gameplay but with more difficult challenges and scarier monsters, the developer of Five Nights at Freddy’s has accomplished exactly that. He’s given the players exactly what they want out of a sequel: a game way harder and scarier than the last one. One can see this progression by looking at the difference in monster art between Five Nights at Freddy’s (original) and Five Nights at Freddy’s 3.

five-nights-at-freddys

An animatronic from the original Five Nights at Freddy’s.

five-nights-at-freddys-3

An animatronic from Five Nights at Freddy’s 3.

We can use behavioral psychology to think about two different kinds of jump scares: player-dependent and player-independent. Player-dependent jump scares make use of operant conditioning techniques to be effective, particularly by defying players’ expectations, or by never allowing those expectations to form. Player-independent jump scares make use of classical conditioning, and are most effective when the player feels clueless about potential future jump scares.

Neuroanatomical pathways allow us to understand more precisely the jump scares at work in Five Nights at Freddy’s. Since the amygdalar pathway is shorter than the prefrontal cortex pathways, the only way to avoid being scared is to suppress the amygdala ahead of time, which is difficult to do in the cognitive-overload situation that Five Nights puts the player in. So the player is highly likely to be scared. And even though the player will eventually habituate to these jump scares, or simply get good enough at the game never to encounter one, since there is a new entry in the series every few months, there is always a harder, scarier challenge to take up.

Jump scares are a chance to think systematically about how the mechanics of a game emotionally impact the player. Jump scares do not need to be designed by guess-and-check; they can be crafted to have precise emotional effects.

Nathan Randall is a featured author at With a Terrible Fate. Check out his bio to learn more.

[1] I’d like to thank my fellow With A Terrible Fate game analyst, Matt McGill, for sharing his thoughts about the place of classical and operant conditioning in the context of game design. In this article I both intend to advance my own ideas and to be a conduit for some of Matt’s.

[2] This cycle does not manifest for everyone. Personally, I get so shaken up after a good jump scare that I often end up never playing the game again.

Explore Horror with Us at PAX Aus

I’m thrilled to publicly announce on the site that With a Terrible Fate will be presenting a panel at PAX Australia this weekend. We’ll be talking about video game horror in the Dropbear Theatre from 7:30PM to 8:30PM, and we hope to see you there. Right now, without giving too much away, I want to give you a taste of what you can expect if and when you meet With a Terrible Fate this weekend.

I, With a Terrible Fate Founder Aaron Suduiko, will team up with Featured Authors Nathan Randall and Laila Carter to discuss what makes horror storytelling special in the medium of video games. We’re each going to take a distinct methodological approach to analyzing video game horror based on our academic backgrounds; my hope is that the combination of our very different analytical perspectives will demonstrate how much people can learn about games by considering them through a variety of theoretical lenses.

Nathan Randall

Nathan will be applying the studies and theories of neuroscience to explore what makes for a really effective jump scare in video games. He’ll discuss various learning and fear mechanisms in our brains, and how games are especially well-positioned as a medium to capitalize on these mechanisms. Along the way, he’ll analyze Five Nights at Freddy’s, Undertale, and even Jazzpunk. Ever thought about the science behind a really good game? There’s a lot to it, and Nate will show you just what makes it all so cool.

laila-carter

Laila will be exploring how horror storytelling in video games fits into broader, long-standing traditions of horror in folklore, mythology, and literature. What does BioShock have to do with the Odyssey? How does Lovecraftian horror come about in S.O.M.A.? What insight can a Minotaur give us into Amnesia? Laila has answers to all of these questions–oh, and she’ll be talking about “daemonic warped spaces” and P.T., too.

suduiko-video-game-art-presentation

Lastly, I’ll be applying the tools of analytic philosophy, together with my body of work on video game theory, to explore the ways in which games can use the metaphysics of their worlds to generate especially deep-seated and cerebral horror for the player. I’ll argue that the horror of Bloodborne is actually much more realistic than you thought (and you’ll wish I hadn’t shown you why that’s the case). I’ll argue that the metaphysics of Termina imply an interpretation of Majora’s Mask that strays outside the realm of Legend of Zelda canon and instead finds its home in nihilistic terror. I’ll argue that the horror of Silent Hill 2 isn’t fundamentally about James’ relationship with any of the other characters in the town–rather, it’s about his relationship with the player. If you want to get primed for this section (or spoil it for yourself), you can check out my earlier work on Bloodborne and my comprehensive analysis of Majora’s Mask.

We’ll all be hanging around after the panel to answer any questions you may have, and we’ll be around throughout the rest of PAX if you want to keep the conversation going. We’ll also hopefully be able to get the presentation documented in some capacity, so look for that online in the coming week if you can’t make it to PAX Aus.

To all you PAX-goers: see you Saturday.

Beyond the Moral Binary: Decision-Making in Video Games

-by Richard Nguyen, Featured Author.

Video game designers engineer worlds receptive to player input. Players are empowered with the agency to make decisions that can change the course of the game’s narrative and the characters within it. This decision-making is a core, interactive tenet of video games. In emulating the experience of choice and deliberation, there are various elements that designers must consider. Key among them is morality, or the principles humans hold to distinguish between “right” and “wrong” behavior, and how it influences player choice. The mechanics of moral decision-making across video games have been diverse, and only sometimes effective. In the time I have spent playing narrative games with morality as a central component and game mechanic, I have found that the games with the most minimal and least intrusive systems better emulate not only moral decision-making, but also the emotional consequences that follow. Presenting morality as its own discrete game mechanic is counter-intuitive, because it diminishes the emotional impact and self-evaluation of moral decision-making.

To begin, I will be applying a rudimentary framework of morality to fuel this discussion because the focus is not on morality proper, but on how it influences player choice. Video games that use the moral binary framework present to the player three possible moral courses of action: good, bad, and, sometimes, neutral. For our purposes, we will assume that the majority of players are good-natured, and believe in what society deems and teaches them to be “right” or “good.” At the very least, players understand what should be done. This includes, but is not limited to, altruism and cooperation. Good moral decisions often require self-sacrifice to achieve a greater good. Your avatar will sacrifice money for the emotional satisfaction of having donated to a virtual beggar. “Wrong” or “bad” behaviors, then, violate moral laws. Such behaviors include, but are not limited to, murder, lying, cheating, and stealing. Video games present morally “wrong” or “evil” choices as temptation, the desire to make the easier, selfish choice. Of course, life is not so simple as “right” and “wrong” or “good” and “bad.” To clarify, I will be using “good” and “right” to refer to the same concept, and will be using them interchangeably. The same applies to “bad” and “wrong”. The “neutral” alternative describes behaviors with no moral value, which is often presented as inaction in gaming scenarios. A flavored subset of the “neutral” choice is the “morally gray” choice, occupying a middle area between “good” and “bad” in which the moral value of an action is unclear. For instance, a typically “wrong” behavior, such as stealing, may be inflected with the “right” intention, such as stealing medicine in order to save your dying sister. In this situation, it is difficult to value the action as fully “good” or “bad”.

Screen Shot 2016-04-25 at 3.53.44 PM

The moral binary of Infamous (discussed below).

I outline this moral theory under the assumption that players’ moral beliefs will extend to the decisions they make as the avatar in the game world. Of course, players often experiment with moral decision-making in games by “role-playing” the good or bad person, but such experimentation already makes players acknowledge their pre-existing moral beliefs. At this point, players become detached enough from the avatar, knowing that the avatar’s actions do little to reflect their own moral selves, that they care drastically less about the consequences of those actions. I will instead be examining the cases in which players seek to make decisions in games as if their avatars were a full extension of their moral selves. In other words, players make decisions as if their own moral selves were truly operating in this world. Therefore, players care more about how accurately their decisions reflect their moral beliefs. Otherwise, there are little to no personal stakes involved in decisions when you know they say nothing about you.

Screen Shot 2016-04-25 at 3.55.41 PM

Fallout‘s Good and Evil (discussed below).

Designers often abide by the convention that morally right decisions are selfless and performed for the greater good, while morally wrong decisions are selfish and performed for personal gain. Players that make the morally right decision often engage in the more difficult and complicated narrative pathway. For instance, choosing to ignore a mission directive in order to save an endangered life may lead to punishment, and requires the player to work harder to make up for lost time or resources. In spite of the extra layer of difficulty, these morally right decisions are more emotionally rewarding because they preserve the player’s conscience. Again, we assume that the majority of players inherently abide by what society deems to be right and wrong. Players that make the morally wrong decisions engage in the more expedient pathway that facilitates direct personal gain. For instance, choosing to ignore endangered civilian lives in order to fulfill the mission directive leads to no direct punishments. Instead, the consequences of this morally “wrong” decision come through the emotions of guilt and disappointment due to its violation of the player’s conscience. This is not to say that players are discouraged from making morally “wrong” decisions in video games. Rather, having players choose either a “good” or “bad” decision places responsibility in their own hands, rather than the writer’s. Allowing players to explore the emotional consequences of both ends of the moral spectrum forces them to reevaluate their own beliefs. In the case of the moral binary in video games, such reevaluation turns into the reaffirmation of societal norms. Designers use this moral theory in decision-making to reinforce the conventional meaning of “right” and “wrong.”

The two primary elements of morality in a video game context are intention and behavior. The player’s intentions are enacted through the avatar’s in-game behavior. In other words, the decisions made in a video game are determined by player intention. The behavior can be objectively categorized into “right” and “wrong” according to the game’s narrative. However, the behavior carries with it the player’s intention, which cannot definitively be measured or categorized by the game itself. The player’s subjective experience is then the key factor in determining how well the video game emulates moral decision-making. What the avatar feels is independent of the player’s own feelings as a result of a moral decision. With the binary morality system, designers make a direct appeal to the player and his or her moral beliefs.

The psychological phenomenon of “cognitive dissonance,” where one’s conflicting and inconsistent behaviors and beliefs cause discomfort, drives the consequences of moral decision-making. This internal, emotional conflict compels a person to change one of those beliefs/behaviors in order to reduce such discomfort. When good-natured players make a morally “wrong” decision in a video game, their beliefs will be inconsistent with their behavior. Even if the player acted unwittingly, or does not believe that they made a morally “wrong” decision, the game’s systems will still punish and treat them as if they did. For example, a person playing Grand Theft Auto 5 may fire a gun in public and not believe that it is wrong or against the law. The game’s systems, in the form of police, will nevertheless respond negatively. The player is left to reconcile his moral beliefs with those of the video game. There are three likely responses when a good-natured person (as we assume the majority of us are) makes a morally wrong decision: (1) change your beliefs to be more consistent with your behavior, (2) live with and accept the discomfort and inconsistency, or (3) sublimate, and find a reason or rationale to justify your inconsistency. The idea is that cognitive dissonance creates the emotion of discomfort. The first two options are labeled as truer dissonance scenarios because they are done in response to such discomfort. Option (3), on the other hand, precludes discomfort because the sublimation will have already taken place due to a third-party influence. Thus, players are not made aware of the inconsistency and continue, unaffected by their moral decision. From my experience, the most effective moral systems have compelled me to respond with Options (1) and (2), which most align with realistic moral decision-making and the phenomenon of cognitive dissonance. By provoking the visceral discomfort of making a decision you realize was inconsistent with your beliefs, you will ostensibly be more compelled to respond. When video games inspire Option (3), sublimation, the player transfers responsibility to a third party and is therefore relieved of any personal, emotional consequence. Sublimation allows players to rationalize or provide an external explanation for their behavior. Therefore, responsibility for that moral decision is displaced, which mitigates any true feelings of cognitive dissonance. This is not to say that Option (3) never occurs in realistic moral decision-making. I am arguing that the modern video game most often counter-intuitively facilitates this transfer of responsibility, even when its goal is to appeal to or challenge a player’s moral beliefs through cognitive dissonance.

Screen Shot 2016-04-25 at 4.01.46 PM

Pictured: Leon Festinger’s Cognitive Dissonance Model, with three possible actions (in green) that a subject could implement to reduce cognitive dissonance.

Now that I have clarified both my moral framework and the role cognitive dissonance plays in moral decision-making, I will analyze how these work in popular video games that use the moral binary framework. I will examine its role and evolution in several narrative-driven open-world and role-playing games. We will start from the simplest, most direct binary systems and work our way toward games that eschew the binary for more minimalist approaches.

In the Infamous series, the player must decide whether his avatar (Cole), a super-being with electric-based powers, will be a “hero” (good) or a “villain” (bad). In order to secure the most successful playthrough, in which the player unlocks the strongest abilities and completes the narrative, players must commit to one moral path and constantly commit the deeds that earn them either good or bad karma points. Each path provides unique abilities inaccessible in the other, incentivizing commitment to one moral path rather than neutrality. As a result, players have access to only two viable playthroughs of the same story. The hero playthrough facilitates a precise and focused combat play style while keeping your electricity blue, and the villain playthrough facilitates a chaotic and destructive combat playstyle while turning your electricity red. In order to earn karma points, the player must constantly engage in activities consistent with the respective path, as demarcated by the video game itself. Good karma points are earned by helping citizens and choosing the good prompt instead of the bad during pivotal story events. Bad karma points are earned by destroying the city, murdering citizens, and choosing the bad prompt instead of the good during pivotal story events. There are no neutral or morally grey options. A player’s karma meter is plastered on the heads-up display (“HUD”) to remind the player that their actions are omnisciently tracked and scored, essentially turning morality into its own mini-game.

Screen Shot 2016-04-25 at 4.05.55 PM.png

Karmic decisions in Infamous 2.
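The structure just described can be sketched in a few lines: every deed is scored, the score is always visible, and ability unlocks hang off the two extremes. All names, deeds, and thresholds below are invented for illustration; this is not Infamous’s actual code.

```python
class KarmaMeter:
    """Sketch of an Infamous-style explicit morality system."""

    GOOD_UNLOCKS = {3: "precision bolt", 6: "restraint blast"}    # invented
    EVIL_UNLOCKS = {-3: "chain lightning", -6: "napalm grenade"}  # invented

    SCORES = {"heal_citizen": +1, "stop_mugging": +1,
              "bio_leech": -1, "destroy_block": -2}

    def __init__(self):
        self.karma = 0  # plastered on the HUD at all times

    def record(self, deed):
        self.karma += self.SCORES.get(deed, 0)
        # Only commitment to one pole keeps the unlocks coming:
        table = self.GOOD_UNLOCKS if self.karma > 0 else self.EVIL_UNLOCKS
        return table.get(self.karma)  # an ability, or None

meter = KarmaMeter()
for deed in ["heal_citizen"] * 3:
    unlocked = meter.record(deed)
print(meter.karma, unlocked)  # 3 precision bolt
```

The point of the sketch is how little room it leaves the player: once a pole is chosen, the optimal strategy is fixed.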

In spite of its blatant tracking and systematic reminders, Infamous’s binary morality system is comically shallow and ineffective in producing realistic emotional consequences. The game reduces moral decision-making to a binary, because it can only be completed upon fulfillment of either the hero or villain pathways. The narrative makes its morality clear in that heroes are “good” and villains are “bad.” For the ordinary player, the only choice then is to consider whether they want to be consistent with their own good-natured beliefs and choose the hero path, or to deviate from the norm and explore moral violations as a villain. Aside from the joys of blowing everything up, choosing the villain’s path should then inspire some amount of discomfort, which should consequently lead to either (1) a change in player attitude to coincide with the behavior, (2) an acceptance of the discomfort, or (3) sublimation. The game’s blatant morality system in all cases inspires sublimation, and therefore fails to provoke any genuine cognitive dissonance within the player for several reasons.

Screen Shot 2016-04-25 at 4.08.27 PM.png

Karmic choice in Infamous Second Son.

First of all, Infamous’s blatant tracking turns morality into a purposeful meta-game to be conquered. The goal of reaching the highest karma levels is therefore extrinsically motivated by in-game rewards such as unlockable abilities, rather than intrinsically motivated by the game’s narrative. The sheer volume of moral decisions the player makes as Cole is driven not by how the player would act, but by which moral pathway the player committed to at the very beginning. This allows for little moral experimentation on a case-by-case basis, as the player’s goal is to globally make either good or bad decisions.

Second, the game’s design ties skill progression to achieving full hero or villain status, which makes it difficult to completely finish the game if the player does not commit to a moral pathway. Thus, game designers are obligated to provide players with the opportunity to “farm” karma points, in case they have poorly leveraged the karma system, to advance in power. Scattering redundant and bountiful opportunities to advance in karma level throughout the city diminishes the emotional impact of each moral decision. For example, there will be countless civilians on the street whom you can either choose to revive (good) or bio-leech for energy (bad). This becomes mundane because (1) you have already made the same decision countless times before and (2) you do not have a choice, because your decision has already been made based on your playthrough. Infamous presents morality as a game mechanic with clear, delineated consequences. Both pathways end in earning more powerful abilities. By asking the player to virtually choose a side at the beginning of the playthrough, no further thought or questioning is required, because the player no longer feels any responsibility for their actions. Once players lose a sense of responsibility for their own and their avatar’s actions, it is easier for them to dissociate themselves from moral acts that the avatar has performed. The game itself tracks and quantifies the player’s moral choices and produces a predictable response every time. Any cognitive dissonance is displaced by how the game virtually forces the player to commit to a single moral pathway in order to succeed. In games like Infamous, we submit to the game’s predetermined, simplistic morality, and are given no chance to evaluate such decisions based on our own moral beliefs.

Screen Shot 2016-04-25 at 4.11.15 PM.png

Karma farming in Infamous 2.

Screen Shot 2016-04-25 at 4.12.39 PM.png

The Karma Meter in Infamous.

Granted, no one has ever expected Infamous’s binary morality system to be the paragon of moral decision-making in video games, or expected it to change anyone’s moral code. Yet it is clear that binary morality systems have become the rule, not the exception, in exploring morality in video games. For example, high-profile and critically acclaimed narrative games such as BioShock, the Mass Effect trilogy, and even the Fallout series all abide by similar moral mechanics.

In BioShock, the ending changes based on the player’s decisions about how to deal with its Little Sisters. The binary morality is as follows: save the sister (good) or harvest her (bad). Harvesting a sister will kill her in order to drain her life force and reap more economic benefits. The moral dimension of this decision lies in determining the fate of this narrative entity, in choosing whether or not to kill the sister. The good and right choice is to save the sister and restore her life, which provides less Adam (in-game currency) immediately but is rewarded with gifts of gratitude later on. One of the game’s central figures, Tenenbaum, explicitly denotes this to be the narratively good moral choice, especially since the most optimistic and humanist ending can only be achieved upon saving all of the sisters in Rapture. It is only in this ending that the Sisters help the avatar escape from Rapture. The cutscene, saccharine and hopeful, is accompanied by Tenenbaum’s affirmation of the player’s “good” morality. The morally bad and “wrong” choice is to harvest the sisters, essentially taking their lives to receive more Adam immediately but with no long-term reward. The bad ending (accompanied by Tenenbaum’s extremely bitter and dismissive monologue if the player harvests all the sisters) depicts the avatar’s brutal and power-hungry takeover of Rapture’s remains, and the splicers’ savage invasion of the world above the surface.

Screen Shot 2016-04-25 at 4.17.25 PM

BioShock‘s harvest-or-rescue binary.

The narrative makes evident, through Tenenbaum’s insistence upon humanity and these dichotomous endings, that there is a clear moral binary between good and bad. Yet, by tying the moral decisions concerning the fate of these sisters to directly economic, rather than purely emotional, consequences, the game pollutes any potential moments of cognitive dissonance as a result of the morally “wrong” decision. What is initially posited as a measure of the player’s moral values is transformed into an exercise in economic impulsivity: whether or not players can delay immediate gratification for longer-term rewards. This is not to say that moral decisions can never be tied to economic consequences. Choosing between stealing or donating money holds unpredictable consequences and punishments, and one can get away with morally bad economic decisions while feeling internal guilt. For BioShock, however, the endings clearly attempt to evoke emotional consequences, particularly through Tenenbaum’s shaming of the player in the bad endings with no further reference to economic rewards. The experience of cognitive dissonance would be where the morally “bad” player either (1) changes their beliefs to be more consistent with their actions (believing that they were inherently justified in or truly wanted to harvest the sisters) or (2) accepts their actions as bad and lives with the shame of having murdered little children.

Thus, it seems as though the added economic layer of Adam rewards in moral decision-making was included more out of convenience, as a way to give the player Adam, than to inspire a moral quandary. By the end of the game, players may place responsibility on economic motivations, rather than personal or internal motivations, as the driving force behind their decisions. Moral responsibility is displaced by the justifications of achieving a certain ending cutscene or maximizing economic gain. As a result, the player experiences no dissonance, because their “bad” actions are believed to be consistent not with their moral beliefs, but rather with this other economic motivation. While BioShock does a better job of posing a complicated moral situation than the simple choice of “being a hero” versus “being a dick,” it settles for the economic quandary of choosing between “being a rich hero” and “being an impoverished dick.”

While I adore the Mass Effect trilogy, I would be foolish to believe that people did not decide within the first ten minutes to pursue a full “paragon” (good) or “renegade” (bad) playthrough. Paragon choices most often involve dealing through compassion, non-violence, and patience, whereas renegade choices are aggressive, violent, and intimidating. Narratively, paragon decisions are framed as heroic, and are met with an NPC’s openness and friendliness. On the other hand, renegade decisions are framed as apathetic and ruthless, and are met with an NPC’s fear and disapproval. The game’s feedback loop thus reinforces the idea that paragon is conventionally good, and renegade is conventionally bad. The entire morality mechanic in this game revolves around the choices made in conversation. In fact, the game’s dynamic conversation wheel facilitates moral decision-making without the player even having to look at the dialogue options: the upper right and left segments of the wheel are paragon choices, and the lower right and left segments are renegade choices. The right middle section is reserved for neutral options, but is not a viable option for those looking to maximize their moral decision-making output. While being neutral is, in and of itself, a moral decision, the game grants little to no narrative benefit for doing so, and players are positioned to progress to either full paragon or full renegade status.

Screen Shot 2016-04-25 at 4.21.29 PM

A representation of the six choices on Mass Effect‘s conversation wheel.
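The wheel’s positional grammar can be captured in a short sketch: alignment is readable from geometry alone, so a committed player can choose without reading. The segment names are my own labels, and the assignment of the middle-left slot to investigate-style options is an assumption for illustration rather than something established above.

```python
# Sketch: alignment by wheel position, as described above.
WHEEL_ALIGNMENT = {
    "upper_left": "paragon",  "upper_right": "paragon",
    "middle_left": "investigate",   # assumption: non-alignment options
    "middle_right": "neutral",
    "lower_left": "renegade", "lower_right": "renegade",
}

def autopilot_pick(goal):
    """Pick a segment for a full-paragon or full-renegade run
    without ever reading the dialogue text."""
    return "upper_right" if goal == "paragon" else "lower_right"

print(WHEEL_ALIGNMENT[autopilot_pick("renegade")])  # -> renegade
```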

Players can practically achieve full paragon or renegade status without even reading or thinking about the dialogue options they choose. At this point, players have broken the moral binary system, because player action no longer directly reflects their beliefs, eliminating the possibility of cognitive dissonance and genuine moral quandaries. Mass Effect nearly transforms moral decision-making into an automatic, thoughtless process. Instead of making what you deem to be the appropriate moral choice in each context, your morality is globally predetermined by the type of playthrough you wish to achieve. There are incentives and narrative rewards for committing to either paragon or renegade, and nothing is gained by choosing neutral dialogue options. For instance, Commander Shepard begins as a neutral personality to fit the player, and is strongly characterized by the moral decisions the player makes at the dialogue wheel. There is even a meter that tracks how good or bad your Shepard is on a moral spectrum. You start in the neutral gray zone in the middle, and “progression” is achieved whenever your tracker moves toward paragon’s blue side or renegade’s red side. As a player, you can then justify morally wrong acts by playing by the game’s moral rules rather than your own. By turning morality into a game in and of itself, you undercut any emotional consequences these decisions may have on the player.

Screen Shot 2016-04-25 at 4.23.39 PM.png

Mass Effect‘s conversation wheel in-game. Note the color-coding for paragon and renegade choices.

Screen Shot 2016-04-25 at 4.25.46 PM

Another in-game conversation.

Screen Shot 2016-04-25 at 4.27.30 PM

Progress along Mass Effect’s moral tracker.

The Fallout series has done well in both perpetuating and addressing the problematic moral binary in video games. In Fallout 3, your behaviors are omnisciently tracked and marked under a karma score, distinguishing both the player’s and the avatar’s actions as good or evil. Good choices include granting charity to survivors in the wasteland, while evil choices include stealing, even when no one is looking and even if the object were but a mere paper clip. Again, this is another example of an unrealistic moral scenario, in which every time you steal a paper clip you receive a notification and an unpleasant screech denoting that you have lost karma. It is almost as though I avoid making evil choices not to avoid guilt or to save my karma score, but primarily to avoid that unpleasant screech. Here is yet another case in which the game’s progression system rewards committing to one moral side, and every decision you make is under scrutiny and met with predictable consequences. Upon learning that the only penalty for stealing is a bit of on-screen text and a screech, why not just steal everything when no one is looking? Any guilt you might feel regularly is diminished by the reminder that this morality system is but a meta-game that can be exploited to increase your karma level by repeatedly donating caps to any schmuck in the wasteland. Fallout: New Vegas takes measures to address this issue by incentivizing players to maintain a morally neutral playthrough via dedicated and rewarding perks for neutrality.

Screen Shot 2016-04-25 at 4.32.40 PM

Fallout‘s Karma Indicator.

Screen Shot 2016-04-25 at 4.34.14 PM.png

Fallout‘s Karma Indicator again.

Screen Shot 2016-04-25 at 4.29.14 PM

The good-and-evil point system of Fallout 3.

However, there still lies an issue in the blatant “gaminess” of its morality systems, where players feel as though their moral decisions are motivated extrinsically rather than intrinsically. In this case, players feel the need to satisfy the game’s expectation to commit to one of two (or, for New Vegas, three) moral pathways because of the various benefits and perks that come with such a playthrough. Not only that, but the Fallout games also fail to give a player’s morality narrative consequences. For the sake of preserving this open-world game’s consistency across playthroughs, the narrative is largely unaffected by the player’s moral decisions. NPCs respond equally to “bad” and “good” avatars. The game’s primary response to moral decisions is merely mechanical: the omniscient tracking meter and the consequent on-screen notification whenever a player has made a moral decision. The drastic disconnect between the player’s moral decisions and the game world’s frigid indifference to those actions inspires little questioning or thought. Players, knowing that their actions have minimal consequence, place moral responsibility upon the game’s system rather than themselves and their own moral beliefs. By the end, the experience has boiled down to accommodating the game’s own defined sense of morality instead of exploring your own beliefs.

Screen Shot 2016-04-25 at 4.36.35 PM.png

Fallout: New Vegas’s Karma Tracker.

However, not all hope is lost! Some games come closer to emulating the experience of moral decision-making. Telltale’s The Walking Dead series remarkably captures the insecurity, spontaneity, and unpredictability that often come with moral decision-making. Throughout the game’s interactive cutscenes, there are often timed decisions players must make between four options. The player never knows which decisions are tracked, nor what consequences they might have, whether short-term or long-term. The only indicator players receive is a line of text that denotes “[insert character name here] will remember that.” Even in that statement, the impact is ambiguous, and the player is left to discern whether they made a good or bad decision according to their own morality, rather than that of the game’s narrative. Mechanically, The Walking Dead presents no explicit menu or HUD tracker for the player’s morality level, provides little-to-no feedback on these decisions’ narrative and gameplay impacts, and inflicts unpredictable consequences.

Screen Shot 2016-04-25 at 4.38.29 PM

The Walking Dead’s ambiguous, unpredictable choices.

Screen Shot 2016-04-25 at 4.39.36 PM

Clementine’s ambiguous response to your choice.

Screen Shot 2016-04-25 at 4.39.47 PM

The consequences of your actions are ambiguous.

By contrast, the games mentioned above explicitly posited their own binary moral systems: firm rules that the player must play by. In addition, those games predictably provided information and definitive feedback on moral decisions, lessening their emotional impact in the long run. Players, once made cognizant of the extrinsic forces that may be guiding their decisions, feel relieved of any moral responsibility for choices made in these narratives, because player action is driven by, and can be explained by, a factor other than their internal beliefs. In The Walking Dead, a minimalist morality system with no clear categorization or consequence keeps responsibility in the player’s hands. To explain: systems may still track player choices and make them instrumental to the progression of the story. However, minimalist systems do little to display or indicate to the player the value of their decisions and how they will impact the narrative, which feels more realistic. Choices made are more satisfying when the player understands or feels that they have been intrinsically motivated, and are the result of their own agency unpolluted by other incentives.
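The contrast with the explicit meters above can be made concrete in a sketch: the bookkeeping still exists, but nothing is scored or surfaced to the player beyond an ambiguous acknowledgment. The names, choices, and reactions below are invented for illustration, not Telltale’s actual implementation.

```python
class MinimalistTracker:
    """Sketch of a Walking Dead-style system: choices are recorded and
    used later, but never labeled good/bad or shown on any meter."""

    def __init__(self):
        self._memory = {}  # hidden; no HUD element ever reads this

    def record(self, character, choice):
        self._memory.setdefault(character, []).append(choice)
        # The only feedback the player gets is deliberately ambiguous:
        return f"{character} will remember that."

    def later_reaction(self, character):
        # Consequences surface unpredictably, long after the choice.
        return "cold" if "lied" in self._memory.get(character, []) else "warm"

tracker = MinimalistTracker()
print(tracker.record("Clementine", "lied"))  # Clementine will remember that.
print(tracker.later_reaction("Clementine"))  # cold
```

The design difference is not whether the game tracks the player, but whether the tracking is legible enough to become a meta-game.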

The Witcher 3 also succeeds in unpredictably imbuing morality into the seemingly mundane scenarios that occur in its world. Aside from major quest lines that pose variable, complicated moral decisions, the decisions the player makes through Geralt’s ordinary day of work reach a sobering, disarming level of emotional realism. Geralt constantly runs across merchants, beggars, looters, and all sorts of unsavory characters throughout the game world. More often than not, the player must decide whether or not to intervene, and how to resolve conflicts upon entering them. Consider this example: a townsman asks me to find his missing child in the woods. Here, I have the opportunity to haggle for more pay beyond my standard fares, even though it is evident that he holds very little of value in his hut of sticks and mud. I eventually discover the son’s bones, left behind by wolves. Upon my return, I am presented with two more difficult decisions. I can choose to lie about his son’s fate or to tell him the truth, which is a subjective moral quandary I will not pursue here. Either way, he refuses to pay me because I have produced no evidence, while realistically he is likely disheartened by his loss and has no money anyway. At this point, I can choose to “Witcher” mind-trick him into paying me, take the money by force, or leave him in his grief.

Screen Shot 2016-04-25 at 4.39.58 PM

In The Witcher 3, dialogue options have no clear hierarchy or consequences.

Even if I choose to be “evil” and force him to pay me, I receive so little money that it is insignificant; my “evil” deed is not sufficiently justified by the economic gains. The difference between such economic decisions in The Witcher 3 and in BioShock is that, while both are tied to “bad” morality, BioShock’s immediate rewards and short-term gain rationalize the decision. Here, the economic rewards are so blatantly insignificant that the only rationale behind such a deed most likely stems from the player’s indifference to this NPC’s plight. Therefore, The Witcher 3 is more likely to provoke cognitive dissonance, because morally “bad” decisions cannot be rationalized or justified by any other incentives.

I will admit that I opted to mind-trick him for his money, as a spur-of-the-moment decision. I took his handful of coins and left him to grieve for his son. What is remarkable is that nothing guided me to make such a morally questionable decision. Money mattered little to me, so it must have been a matter of pride: desiring some acknowledgment for the completion of work. I would like to think of myself as a good person, and I always aspire to be one in video games. Yet no substantial financial, mechanical, or other extrinsic factor possessed me to exploit the man. The worst part is, I got away with it, and I have to live with this decision throughout the rest of my playthrough, not to mention the chance that I may see that man again. At this moment, I felt like a bad person, and chose to live with this discomfort.

This side quest alone presents at least three moral choices that work. They work because The Witcher 3 holds no formal morality system, which means none of your actions is omnisciently tracked or denoted on the HUD. More importantly, the consequences and punishments are unpredictable and change depending on context. My interactions with the desperate townsman above may be repeated in different scenarios and stories with different effects. I found these numerous little scenarios to be the most effective because the game appeared to be indifferent to my choices. The Witcher 3’s world of vice and monsters holds no definitive criteria for good and evil actions, and therefore does little to mechanically address them, such as through on-screen notifications. This places all responsibility upon the player to (1) determine what is right and wrong based on his own beliefs and (2) deal with the consequences (e.g., guilt) of his own accord. Beyond crimes committed in the city, the game realistically grants you the freedom to be both the hero and the dick, without formal judgment beyond your own self-evaluation and the unpredictable reactions of narrative agents. This is not to say that the game holds no morality at all, but that it does not commit to an objective, explicitly defined moral binary. The moral universe is then determined not by the game itself, but by the agents within it, such as NPCs, who interact with the player and present their own diverse moral beliefs.

Some moral choices in The Witcher 3 must be made within a time limit.

Self-contained moments in other video games also succeed in provoking realistic moral quandaries. For instance, Red Dead Redemption: Undead Nightmare has a side quest in which you hunt a monster that is terrorizing country folk. You find that it is a peaceful sasquatch, the last of its kind. You must choose between killing it to satisfy the bounty and, in a sense, end its loneliness, or leaving it to live and die in solitude. Here there is no clear good or bad, even if the choice is still binary, and the choice therefore also has no clear or predictable consequences. You will have to live with this permanent, immutable choice for the rest of the game, as the game itself will be indifferent to your decision.

The Sasquatch Encounter in Red Dead Redemption: Undead Nightmare.

Games like The Walking Dead and The Witcher 3 capture an essential component of moral decision-making: internal conflict. One’s cognitive dissonance is most active when these moral decisions have no extrinsic explanation or justification. Rather, the quandary is found within, an internal conflict propelled by self-evaluation. Discrete morality systems, such as the prominent binary system, may actually detract from the emotional impact of moral decision-making because they so readily provide players with an extrinsic justification for their behaviors. By turning morality into an explicit meta-game, designers may unintentionally displace the player’s responsibility for their own actions and hinder the effects of cognitive dissonance in moral decision-making. Minimalist game design for moral decision-making better matches the moral experiences of ordinary life. Should I steal a cookie from the cookie jar? No one will know. The lines between good and bad are realistically blurred, because there exists no omniscient authority (unless you count your conscience) to tally all the karmic decisions you have made in a day. At the end of the day, moral experiences in video games should not be determined by karma meters and reward systems.

Richard Nguyen is a featured author at With a Terrible Fate.  Check out his bio to learn more.

Why SOMA is More Fact Than Fiction

by Matt McGill, Featured Author.

Einz, a two-year-old girl living in Bangkok, Thailand, was diagnosed with a rare form of brain cancer. She did not have much time left to live, but Einz’s parents wanted to give her another chance at life by having her brain removed and cryogenically frozen shortly after her death. Einz’s parents, both medical engineers, have faith in the perseverance of the scientific community: one day, they believe, Einz will be able to live again in a new body.[1] To some people, this story may seem straight out of a science fiction novel; but how forward-thinking were Einz’s parents? The field of neurobiology is young but growing, and so it is filled with speculation and uncertainty; not only scientists but also filmmakers and game designers create their own storylines about what the future of neurobiology may hold. Einz’s parents have their theory: a future where the information within an intact brain can be used to re-create a human being. A similar, but more specific, theory is presented in the game SOMA: futuristic technologies such as brain scans allow humans to create digital copies of a person’s memory, personality, and consciousness that can be uploaded into new robot bodies or virtual worlds. One could imagine that the brain scan technology in SOMA could be used to retrieve the information stored in Einz’s frozen brain and then upload it into a new medium, allowing Einz to live once again. However, how practical are the technologies presented in SOMA? Could such technology ever become a reality and affect the lives of people such as Einz? The answer is yes.

In order to thoroughly address the practicality of SOMA, let’s first consider what new technologies are presented in the game. SOMA is a sci-fi horror game that follows the protagonist, Simon, who is a brain scan of his former self uploaded onto a robot body. Simon wakes up in an underwater facility called “Pathos-II” approximately one century after his death, only to find out that Earth has become uninhabitable due to an impact with a comet; however, a systems engineer at Pathos-II named Catherine created a project called the “ARK,” in which brain scans of people are uploaded onto a virtual world meant to simulate reality. Although Catherine is now deceased, Simon is able to communicate with a brain scan of her that he uploads onto various computers as the two try to find the ARK and launch it into space, where it will be powered by the sun for thousands of years. As is clear from the storyline, two important technologies utilized throughout the game are the brain scans and the ARK. The issue with the brain scans is two-fold: first, how to obtain and copy someone’s brain information, and second, the consequences of uploading a brain scan to a new body. As will be explained later, new research into memory shows that it may be possible to retrieve brain information (relating to memory, personality, and the like); so, as the field of neurobiology grows, the idea of a brain scan comes closer and closer to reality. Also, new computer chip technology shows that artificial intelligence can be advanced to the extent that, when combined with a brain scan containing a person’s brain information, it could allow the scan to learn and thus adapt to any new body. Finally, the practicality of the ARK is a question of the ability to read and render brain information, which even today is becoming a reality with new research into brain-machine interfaces that translate electrical signals from neurons into movement of prosthetic limbs. Thus, the technologies in SOMA may be less science fiction than they appear.

SOMA Landscape

SOMA features an expansive, but largely deserted, underwater research facility.

Let’s begin with the core idea in SOMA: the brain scan. The game explains that a brain scan is a procedure in which a person’s memories, personality, and consciousness are scanned and stored in digital form, and this scan can be transferred to a new body. It’s important to note that this scan is a copy of the patient’s brain: information is not transferred, as if the scan moved from medium A to medium B and left A empty; instead, information is copied, so that after the scan both A and B contain the same information. This point is necessary to highlight since the following discussions on the practicality of brain scans assume there is a copying of information, and do not delve into the means by which brain information could be transferred out of a perhaps-living medium into a computer or robot. As will be further explained below, a brain scan as a copy is a feasible leap from today’s knowledge, whereas a brain scan as a transfer is not obviously so. Let’s consider an example of a brain scan in the game: in order to retrieve the ARK, Simon needs a new body that is better equipped for greater depths, and so Catherine tries to convince Simon to undergo another brain scan. At first, the way Catherine explains the brain scans to Simon is a bit cryptic: she explains that, once she makes a copy of a patient’s brain and transfers it to the new body, there is some 50-50 “coin toss” that decides where the patient wakes up, referring to which copy contains the patient’s present awareness. Although it is not explicitly stated in the game, since Catherine is aware of how the technology works and Simon is being reluctant, we can infer that Catherine is tricking the current form of Simon (Simon A), whom she puts to sleep immediately after the copy (Simon B) is made. Simon B will then wake up only to find Simon A asleep. At the moment the copy is made, both Simon A and Simon B are the same insofar as they have the same memories and consciousness, meaning that when Simon B wakes up, he’ll emerge with the same experiences as Simon A (including being told about the 50-50 coin toss), and so he will believe he won this “coin toss” that put his awareness in the body with the new suit. What is really going on, however, is that there are two copies of Simon: two conscious awarenesses that begin the same but quickly diverge from each other as Simon B leaves to help Catherine while Simon A remains asleep, only to awaken distraught that Catherine has left and he is still in the old body; Simon A will believe he has lost the “coin toss.”
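
The copy/transfer distinction maps neatly onto copy semantics in programming. Here is a loose Python sketch, with invented objects standing in for the game’s fiction, of how duplication leaves the original intact while the two copies immediately begin to diverge.

```python
import copy

# Hypothetical illustration: a "scan" duplicates state rather than moving it.
class BrainScan:
    def __init__(self, label, memories):
        self.label = label
        self.memories = list(memories)

simon_a = BrainScan("Simon A", ["Toronto, 2015", "the coin-toss explanation"])
simon_b = copy.deepcopy(simon_a)   # copy, not transfer: A is left fully intact
simon_b.label = "Simon B"

simon_b.memories.append("waking in the new diving suit")  # B diverges...
print(simon_a.memories)  # ...while A is unchanged from the moment of copying
```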

Simon's Body

Simon in his first robot body, after the brain scan is complete. The copy of Simon in the new diving suit looks on just before he leaves with Catherine.

There are two important questions at hand in order to determine the practicality of such technology: first, the feasibility of a brain scan itself, and second, the implications of uploading a scan to a new body. In order to “copy” the information constituting a person’s memories and consciousness, science would need knowledge of the manner in which this information is encoded in the first place. The question of how the brain encodes information has plagued neurobiology ever since the field’s inception. There are many theories of how information is encoded, such as: a rate code theory, where information is simply encoded by the frequency of a neuron’s electrical spikes; a temporal code theory, where information is encoded by the timing of a neuron’s firing relative to other brain activity such as brain waves; and a synchrony code theory, where populations of neurons must fire together in order to successfully project information.[2] Other theories posit that information encoding is an amalgamation of several codes. Although the problem seems impossibly complex, consider the following: you are walking down the street and the smell from a restaurant reminds you of your grandmother’s cooking, and then you start telling your friends about all of the delicious foods you remember and how much you loved the tire swing in your grandparents’ backyard, and so on. The mere fact that we can conjure up these memories so effortlessly shows that there exists some way to access them. An earlier theory of memory analogized memory to an art gallery: in an art gallery, you can see paintings but never touch them; like the paintings, this theory held that memories are static and unchanged upon recall. However, recent research shows that memory acts more like a filing cabinet: there are defined processes of accessing memories (opening the file), using and adapting the information for future use (reading and writing on the file), and re-storing that memory (putting the file back in the cabinet); on this theory, memories can be altered once they are recalled, and can even be forgotten if they are not filed correctly, unlike the paintings in an art gallery. Research on these processes is just beginning, and new discoveries are constantly being made. For example, a 2006 study showed that when a rat is awake, it replays positional information in its brain in reverse.[3] Specifically, there are neurons in the brain called “place cells,” which fire in response to your standing in a particular spot in a given space. Let’s say the rat was trying to find food one day, and walked through areas 1, 2, and 3, each signaled by a different place cell, in order to find the food; then, while the rat is eating the food, these place cells fire in the sequence “3, 2, 1,” as if the rat is trying to remember how to return home. As shown through these place cells, memory information about the places we’ve been can become available to us if we know the right places to look. Other research has found that humans possess “familiarity” and “novelty” detectors that allow us to recognize whether or not we have previously seen a stimulus.[4] It’s clear that the knowledge base is growing, and the potential of being able to access someone’s memories and everything else that makes up who they are is becoming real. The futuristic brain scans in SOMA may be less fictional than they appear.

Testing Place Cells in Rats

Here, place cells in the rat’s hippocampus fire in response to it being in areas 1, 2, and 3 as the rat moves. While eating, the place cells signal in the reverse order to which the rat first traversed these areas. This phenomenon is known as ‘reverse replay’.
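
As a toy illustration (not a biophysical model), the reported sequence can be sketched in a few lines of Python: each hypothetical place cell fires as the rat crosses its area, and during awake rest the same sequence is re-expressed in reverse.

```python
# Toy sketch of 'reverse replay': invented cell names, no real neural dynamics.
place_fields = {1: "cell A", 2: "cell B", 3: "cell C"}  # area -> place cell

def outbound_run(path):
    """Record which place cell fires at each step of the run to the food."""
    return [place_fields[area] for area in path]

spikes = outbound_run([1, 2, 3])
print("run:   ", " -> ".join(spikes))            # cell A -> cell B -> cell C
print("replay:", " -> ".join(reversed(spikes)))  # cell C -> cell B -> cell A
```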

As mentioned previously, another issue determining whether the technology in SOMA could become reality is whether a brain scan could be uploaded to an entirely new body and still function properly; in SOMA, we see Simon running around in several different bodies almost as if he were still his human self. First, there is an inherent connection between the brain and the body, as a person’s brain is optimized for their body and the way this body interacts with the world (such as pain tolerance, limb length, hand size, physical fitness, etc.); uploading a brain onto a body that doesn’t match the original individual could be disastrous, but again, in SOMA Simon has little problem adjusting to the new bodies that his brain scan is uploaded onto.

Second, humans are learning and adapting beings, and so a simple brain scan may not be enough to simulate human functionalities; it may simply act as a personality stuck in time. The brain consists of billions of neurons with countless different connections, and the ability to learn and adapt arises from structural changes both in the neurons themselves and in the connections between them, such as the connection between two neurons, called a synapse, becoming larger and filled with more receptors on the receiving neuron’s side. This idea is known as ‘neural plasticity’: the capacity of the brain to adapt to new conditions by changing the connections, or the strength of connections, between neurons. If we were to copy the information in the brain, we would not be copying the ability or potential of these neurons to change, since this is a constantly shifting phenomenon resulting from chemicals and other signals changing within the brain itself. However, this inability to “copy plasticity” could seemingly be solved through complex artificial intelligence, which, when meshed with a chip containing a brain scan, could allow a brain copy to learn and adapt; this would both allow Simon to continue learning and forming memories, which is exactly what he does in SOMA as he helps Catherine, and enable him to adjust to his new body.

New technology today includes IBM’s TrueNorth cognitive computing chips, which are inspired by the workings of the brain. The chip is said to contain one million “neurons” with over 256 million “connections” that can dynamically integrate information spatially and temporally and generate an output.[5] The system is based on the “integrate and fire” idea of neurons, where a neuron with a set threshold integrates many signals and will fire if and only if the sum of those signals passes the threshold. Technology such as this makes the possibility of artificial intelligence with the efficiency and functionality of the brain much more tangible, thus rendering the reality of SOMA again strikingly reasonable. However, it’s important to note the limitations of this technology thus far. Although achieving a model that imitates the “integrate and fire” behavior of the brain is astounding, this model is primitive in its understanding of the brain. Neurons are connected to each other through axons, which project a signal, and dendrites, which accept a signal. These protrusions have their own electrical and molecular properties that can change how a neuron integrates information, and even the geometry of a neuron’s axons and dendrites can change the way the neuron functions and behaves. The good news is that more and more research is beginning to uncover exactly how this information processing works, and so one could imagine it being incorporated into technology such as new-age computer chips.
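
To make the “integrate and fire” idea concrete, here is a minimal Python sketch of the textbook abstraction: the membrane potential accumulates (and slowly leaks) incoming signal, and the neuron spikes only when the accumulated sum crosses its threshold. The constants are invented, and TrueNorth’s actual circuit model is considerably more elaborate.

```python
# Minimal leaky integrate-and-fire sketch; all constants are illustrative.
THRESHOLD = 1.0
LEAK = 0.9   # fraction of accumulated potential retained each timestep
RESET = 0.0

def lif_neuron(input_currents):
    """Yield True on each timestep where the neuron fires."""
    v = 0.0
    for current in input_currents:
        v = v * LEAK + current    # integrate the incoming signal, with leak
        if v >= THRESHOLD:        # fire iff the summed signal passes threshold
            yield True
            v = RESET             # reset the potential after a spike
        else:
            yield False

print(list(lif_neuron([0.3, 0.3, 0.5, 0.1, 0.9, 0.2])))
# -> [False, False, True, False, False, True]
```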

Suprathreshold Neural Signaling

Here is an example of the nuances of neural signaling. In the above image, surrounding neurons make connections called synapses onto a single neuron at locations A and B. At these synapses, electrical signals in the presynaptic neuron can result in electrical signals in the postsynaptic neuron (through chemical intermediates). The graph in the lower image shows the electrical signals measured in the neuron above when A fires alone, when B fires alone, and when A and B fire simultaneously. One might expect that when A and B fire together, the measured electrical signal would be the arithmetic sum of the individual signals. However, they instead produce what is called a ‘suprathreshold signal’ (labeled in the diagram as “A and B together”).

Finally, with this brain scan technology, Catherine aims to create the ARK: a virtual world onto which people’s brain scans can be uploaded so that these individuals can continue living in a virtual reality. I will not delve into the methodology of creating a virtual world, but I will specifically address what it means to read and render brain information. Consider the brain-machine interface, a direct communication pathway between a person’s neural signals and some device. This is a growing technology used for amputee patients. Specifically, one current methodology being tested places electrodes into the muscles responsible for moving the joints and body parts in the lost limb (such as placing electrodes in the pectoral muscle and in muscles on the shoulder for those who have lost an arm). These electrodes are able to pick up thought-generated nerve impulses that would normally go to the now-absent limb, and instead transmit this information to the prosthesis, thereby controlling the movements of the arm.[6] With simple knowledge of anatomy and neural signaling, we can create a bionic man. In a similar vein, with more knowledge of how information is stored and processed in the brain, the stage is set for technology that can read and even control someone’s mind. Some of this work has already been successful: brain implants exist that read electrical information and transfer it to a computer for decoding; the computer is then connected to a sleeve of electrodes around a patient’s arm, allowing the patient to control her limbs from her brain without passage through her spinal cord.[7] With more knowledge of how memories and other information are stored, it seems plausible that one could read and utilize someone’s memories and personality, perhaps by uploading that information onto a computer program in which avatars driven by the brain scan information go about and interact with a virtual world. The possibilities of such a future are not science fiction.
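
The decoding step at the heart of such interfaces can be sketched schematically: recorded firing rates from a few neurons are mapped to an intended movement. The linear decoder below is only an illustration; real systems, such as the one in the Hochberg et al. study cited here, fit their decoding weights to recorded data rather than writing them by hand.

```python
# Schematic sketch of linear decoding: invented weights, three fake neurons.
# Each row gives one neuron's contribution to (x, y) cursor velocity.
WEIGHTS = [
    ( 0.8, -0.1),
    (-0.2,  0.9),
    ( 0.4,  0.4),
]

def decode_velocity(firing_rates):
    """Map per-neuron firing rates (spikes/sec) to an (x, y) velocity."""
    vx = sum(rate * wx for rate, (wx, wy) in zip(firing_rates, WEIGHTS))
    vy = sum(rate * wy for rate, (wx, wy) in zip(firing_rates, WEIGHTS))
    return vx, vy

print(decode_velocity([10.0, 2.0, 5.0]))  # ≈ (9.6, 2.8)
```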

Bypassing the Spinal Cord in Limb Movement

On the left are the electrodes on the patient’s pectoral muscle. The right panels depict the pattern of neural activity (where red is high activity) corresponding to certain muscle movements.

With new technologies come ethical questions concerning how to conduct our behavior with such advanced capabilities, and such questions arise within SOMA. First, how should you behave if more than one copy of yourself exists? When Catherine asks Simon to copy himself so they can utilize the stronger diving suit, Simon is given the choice of whether or not to keep the other copy of himself alive. At one moment in time these two copies were exactly the same, but as time continues the copies experience the world in different ways and thus diverge, becoming different beings. Would you allow another version of yourself to exist in some capacity, or would you choose to deactivate this second version? This question is left for the player to decide, among other choices in the game. Another dilemma in SOMA concerns the end goal of the game itself: with technology such as the ARK, is uploading human beings a worthwhile cause in order to preserve what is left of humanity? The ARK is only able to run off of solar power for a finite period of time, and so at some point the ARK, too, will shut off, taking the last of humanity with it. Also, the virtual world of the ARK is a simulation of a deciduous forest with a park; the ARK is far less expansive than the world we now live in, with a far smaller and likely less diverse population. With all of this in mind, is the ARK a depressing solution that aims at delaying the inevitable end of humanity through the creation of an inferior world for ourselves? The game not only leaves several opportunities for the player to ponder the ARK as a goal, but also presents other perspectives on the ARK through the stories of past Pathos-II employees. As Simon explores Pathos-II, he hears news of people brain-scanned for the ARK who then killed themselves so that no copy of them had to remain on Earth; they perhaps saw the ARK as another opportunity to live, but this suicidal action is more nuanced. These brain scans are not transfers, and so it is not obvious what good suicide accomplishes here: one copy of these employees exists on the ARK while the other is still on Earth, and so killing their body on Earth is just like any other act of suicide. The version of these employees on Earth will never know what their corresponding version on the ARK is experiencing.

ARK Kiosk

In the epilogue, Simon finds himself in the ARK, and a kiosk asks him whether he wants to continue living in the ARK or would rather deactivate himself.

Overall, these issues can prompt different responses from different people, and such responses may affect how you play and experience SOMA. However, one thing is for sure: this science fiction game is not as fictional as it appears; Einz may be able to live again soon.

Matt McGill is a featured author at With a Terrible Fate. Check out his bio to learn more.

[1] “Frozen Child: The Youngest Person to Be Cryogenically Preserved – BBC News.” BBC News. N.p., 15 Oct. 2015. Web. 12 Dec. 2015.

[2] Stanley, Garrett B. “Reading and Writing the Neural Code.” Nature Neuroscience Nat Neurosci 16.3 (2013): 259-63. Web.

[3] Foster, David J., and Matthew A. Wilson. “Reverse Replay of Behavioural Sequences in Hippocampal Place Cells during the Awake State.” Nature 440.7084 (2006): 680-83. Web.

[4] Rutishauser, Ueli, Adam N. Mamelak, and Erin M. Schuman. “Single-Trial Learning of Novel Stimuli by Individual Neurons of the Human Hippocampus-Amygdala Complex.” Neuron 49.6 (2006): 805-13. Web.

[5] Modha, Dharmendra S. “Introducing a Brain-inspired Computer.” IBM Research: Brain-inspired Chip. N.p., n.d. Web. 15 Jan. 2016.

[6] “Introducing Jesse Sullivan, the World’s First ‘Bionic Man.’” Rehabilitation Institute of Chicago, n.d. Web. 15 Jan. 2016.

[7] Hochberg, Leigh R., et al. “Neuronal Ensemble Control of Prosthetic Devices by a Human with Tetraplegia.” Nature 442 (2006): 164-71. Web.

 

“What Is It Like to Be a Gamer?” A speech on the value of video games.

Regular visitors of With a Terrible Fate may recall that, last February, I delivered a speech as part of the Lowell House Speech Series at Harvard University. In it, I discussed my decision to pursue the study of video game philosophy instead of medicine. This year, on January 26, 2016, I delivered another speech as part of that same Speech Series; in it, I discuss why video games are worth playing and studying at any age.

I offer the transcript below, in full:

What are video games good for? I study the stories of video games, so I worry about that question a lot. I want to share one way I think the special stories of video games, and the way we engage them, teach gamers to learn from one another.

We often try to communicate with one another by referencing our experiences. We argue about aspects of society that offend us; we talk about aspects of our identities that other people lack direct knowledge of – if you are a woman and feel that your employer treats female employees worse than men, you might want me to understand what that is like, although I am a man. We want to convey to others what it is like to be us – but how can we, when others have no way of standing in our shoes?

Video games can help us share who we are. Let me explain why.

When I was in high school, I played a little-known video game called “Nier.” The game tells the story of a man, Nier, who will stop at nothing to save his daughter, Yonah, from a deadly plague. The emotional depth and complexity of this game were what first motivated me to study and analyze video games in school. If you haven’t yet, you owe it to yourself to play it some day.

At the same time that I dove into analyzing Nier, I also felt compelled to share the game with two of my closest high school friends, Dan and Nate. Just as I wanted everyone I knew to read The Catcher in the Rye after I first encountered the classic in middle school, I now wanted Dan and Nate to experience this game. One after the other, I passed my copy of the game along to Dan and then Nate in the hall between classes.

But when I spoke with them both after they handed the game back to me, I discovered something I hadn’t expected: although they had played the same game as I had, we each made different choices in the game—something that couldn’t have happened if we’d all seen the same movie or read the same book. I had focused on exploring the secrets of the game’s world, digging through virtual basements for classified government records to learn what had caused the apocalypse. Dan focused instead on exploring the relationship between Nier—the player’s character—and Yonah. He completed quests collecting food for Yonah, and she surprised him by making him a cake to thank him. Nate found the desolate wasteland of the game depressing, so he completed it once and moved on.

As I discussed Nier with my friends over lunches and in between classes, I learned about my friends and myself through the choices that we each made. We talked about how a single story prompted us to act differently from one another, and we accounted for our actions. Through these conversations, I learned how much Dan valued the intimacy fathers share with their daughters; I learned that Nate wanted to be excited about the environments of video game worlds, so that he could jump at the opportunity to explore them. Through these conversations, I began to articulate to my friends my desire to unravel mysteries.

Video games allow players to share their experiences with one another by grounding them in a common story. Games invite players to enter a single world, chart their own course through it, and compare their journeys with their friends. Because games give us a special position in an artistic world, our choices become part of the work of art—something we can discuss and make meaning out of with others.

When I think about video games’ practical value, I always reflect on the degree to which they can unify people and allow them to understand and share their way of being. This invitation to share ourselves with others is the hidden utility of the engaging, epic stories waiting in the many worlds of video games.

But be warned, it can take a lot of work to get to know someone else—so you may just need to spend many hours playing video games.

Who Is Cloud?  How a Player Can Construct An Avatar’s Identity

-by Nathan Randall, Featured Author.

I’m unbelievably excited about the HD remake of Final Fantasy VII.  What better way is there to honor one of the greatest games of all time than to give it the graphical content that it deserves?  In the wake of its announcement I felt I should write something, so I decided to follow With a Terrible Fate‘s example of analyzing one particular moment of the game, and why it worked so well.

In the first disc of Final Fantasy VII, after Cloud escapes Midgar with his life, he spends a night in a tavern with his team.  Cloud’s goal that night is simple: convince his team that a man named Sephiroth is a danger to the world and must be stopped.  In order to convince them, Cloud tells a story: the story of his first encounter with Sephiroth at Nibelheim.  What’s interesting about this story is that the player takes control of Cloud’s actions during it.

Cloud recalls being dispatched to Nibelheim with Sephiroth by SOLDIER, an elite branch of the Shinra army, to take care of some monster problems in the area.  In the playable sequence, he and Sephiroth go to the town; Sephiroth begins to turn bad, so Cloud confronts him.  What the player probably does not know at this point in the game, however, is that Cloud’s memories are misattributed: they describe events that happened to a man named Zack, not to Cloud.

Tifa correcting Cloud's memories

The introduction of Zack

Final Fantasy VII is largely a story about the identity of Cloud, and the second of the game’s three discs features several moments where Cloud’s memories are proven false, fragmenting his sense of self.  Cloud’s encounter with Sephiroth at Nibelheim is one of several memories proven to be flawed.  The process of Cloud’s identity falling to pieces is difficult for the player to watch for two reasons: one, because the player participates in the construction of Cloud’s identity; and two, because the player shares some of Cloud’s flawed memories.  Playing through Cloud’s memory sequence at Nibelheim leads both to the player helping to construct Cloud’s identity and to the player sharing the memories with Cloud.  So now we must answer two questions:  Why does playing through the memory sequence allow the player to form Cloud’s identity more than just watching it would?  And why does the player share these memories with Cloud?

Let’s start with the second of those two questions.  One of the essential parts of a memory is the memory of action.  Because Cloud is an avatar for the player, the player can sometimes determine Cloud’s actions.  Thus, in the memory segment in question, since the player can determine some of Cloud’s actions in the past, the player is actively taking a role of action within the memory.  The memories in the sequence also in part become the memories of the player, which she assumes to be true within the work of fiction, since she was the one who acted in the scenario.

To answer the first question — “Why does playing through the memory sequence allow the player to form Cloud’s identity more than just watching it would?” — let’s take a look at psychodynamics.  Psychodynamic theory postulates that a person’s identity is directly linked to their interpretation of their past.  In the process of remembering, a person reinterprets their memories.  Memory is dynamic in the psychodynamic view:  each time one remembers an event, one’s interpretation has the potential to change.  When Cloud tells the story of his memory of Nibelheim, he is creating an interpretation at the same time.  And since the player controls Cloud’s actions during the memory, she is part of the creation of his interpretation of events.  The player participates in creating Cloud’s current interpretation of his past, and thus participates, in part, in the creation of Cloud’s identity.

Sigmund Freud, father of psychoanalysis

Since the player both has a personal memory of certain events in Cloud’s past and has helped shape his interpretation of those events, the player has a difficult time accepting challenges to Cloud’s identity, or accepting these memories as false within the fiction of the story.  She helped create the memory and, by extension, Cloud’s identity.  It makes sense, then, that when the player finds out that the Nibelheim memory sequence was misattributed, she resists that conclusion.  Watching Cloud’s memories and identity fall to pieces ends up being an extremely difficult experience for the player, in part because she played through some of the experiences that turn out to be misattributed.  The connection between the player and Cloud’s identity becomes very strong by the end of disc one.

Square took advantage of a particular feature of video games to give their game more impact.  That feature is that playing a story, as opposed to just hearing a story, makes the player more naturally inclined to trust the sequence of events that unfolds before her, since she helped create it.  There are of course cases where this is not true, in which the player is given ample reason not to believe the reality being presented to her (for examples of this, I point the reader to the Scarecrow sections of the “Batman: Arkham” series and “The Stanley Parable”’s Insanity Ending).  But when it becomes clear that Cloud’s identity was manufactured from misplaced memories, the revelation takes on more impact because the player could control Cloud during his manufactured memories.

FFVII

Nathan Randall is a featured author at With a Terrible Fate.  Check out his bio to learn more.

Self-Guided Evolution: What “Deus Ex” and “Flowers for Algernon” teach about personal development.

Regulars of With a Terrible Fate know that, back in the spring of 2013, I undertook a project analyzing the various role-playing dynamics of well-known video games and theatrical pieces.  I have been publishing pieces of this study over the past few months on this site, which is the first time they have been published online — I began with an analysis of “Legend of Zelda:  Majora’s Mask” and “Six Characters in Search of an Author” (which you can read here) which I followed with an exploration of the similarities between “Dishonored” and “Macbeth” (which you can read in Parts 1, 2, and 3).  Now, I am releasing the third installment of this old study, in which I argue that “Deus Ex” and “Flowers for Algernon” both give us unexpected insights into how we can become better versions of ourselves.  My hope is this will be a timely moment for this work, in anticipation of the latest addition to the “Deus Ex” franchise:  “Deus Ex:  Mankind Divided.”

Note, again, that this older work is not altogether reflective of my current stances on video game theory.  It also focuses much more heavily on the phenomenology of games — that is, what it is like to experience playing a particular game — whereas my current work is more focused on the architecture of games as aesthetic objects.  Nevertheless, I hope readers will enjoy the piece.  Stay tuned, too, for updated work on “Deus Ex” in the weeks to come.  It is a series well-deserving of its many accolades.

Adam Jensen

Treatment IV

The Augmentative Role: Striving to be more than We Are

I had to know. The meaning of my total existence involves knowing the possibilities of my future as well as my past. Where I’m going as well as where I’ve been. Although we know the end of the maze holds death, I see now that the path I choose through the maze makes me what I am. – Charlie Gordon, “Flowers for Algernon”[1]

 

Synopses

  • “Deus Ex: Human Revolution,” Square Enix

“Deus Ex: Human Revolution” poses a question hauntingly relevant to our modern society: ought humans to implement biotechnology to take control of their own evolutionary development? The game is set in a near-future world where biomedical corporations have taken the lead in the new market of human augmentation: the enhancement of humans by implanted technology which allows them to “unlock the hidden capacity of our DNA,” ranging anywhere from increased physical strength to enhanced social skills. The central conflict asks how society will move forward with this newfound ability to “play God,” with “purist” organizations and extremist factions standing in opposition to biomedical companies and their patrons.

The player assumes the role of Adam Jensen, head of security for Sarif Industries, a leading Detroit-based biomedical corporation. On the eve of a planned meeting between the company and the U.N., at which the company planned to present its most recent findings and argue against the need for augmentation regulations, an unknown group attacked the company; its leading research team, headed by Dr. Megan Reed, was killed. Adam was brutalized in the attack, and was subsequently heavily augmented by David Sarif (the company CEO) so that he might survive. He then embarks on a mission across the city and globe to uncover the truth behind the attack, and finds a far greater conspiracy than he ever imagined. He learns that the attack was orchestrated by a number of high-powered cogs in a much larger machine: the Illuminati, seeking to exploit augmentation implants to exert control over all augmented people from the inside. They staged the scientists’ deaths and kidnapped them in order to develop a biochip that they could distribute under the pretense of a software update, thereby literally enabling mind control over vast populations. Hugh Darrow, the father of augmentation technology, having discovered this, broadcasts his own signal to the chips, inducing acute psychosis in all augs (i.e., ‘augmented people’) in an effort to “put the genie [of augmentation technology] back in the bottle” by making mankind privy to its dangers through example. Adam is left with the task of disarming the signal and deciding what message to broadcast around the world explaining what happened. He has the options of blaming the signal on the purist group The Humanity Front; blaming it on a biomedical error; telling the world the truth of what happened; or destroying the entire broadcasting facility, letting the truth die with it.[2] In so doing, Jensen is made to choose the future trajectory of humanity in regards to its perception of augmentation. After a journey in which Jensen has seen every way augmentation has affected the world, the player is given the onus of choosing how mankind might best proceed.

  • “Flowers for Algernon,” David Rogers, based on the novel of the same name by Daniel Keyes

“Flowers for Algernon” tells the story of Charlie Gordon, a mentally disabled thirty-two-year-old, whose teacher volunteers him for an experimental intelligence-amplifying operation, which enables him to learn and retain information at an exponentially higher rate. Charlie steps out of the shell of his disability and sees the world first as others see it, and eventually in a far more integrated, enlightened way than any of them can perceive it. Yet his emotional development is unable to keep pace with his intellectual development, as he is tormented by his past, the way he can now see how little respect he was given when he was mentally disabled, and his inability to relate to those of lesser intelligence than he.

Eventually, Charlie learns that there was a flaw in the original research (a flaw which, ironically, could not be perceived except by his enhanced intellect), and that he will eventually lose his intelligence until he is back where he began. In so learning, he is put in a position where he must live the entirety of his life as intelligently conceived in the space of a few short months of research – and learning to love. Charlie’s tragic story speaks to the question of who we truly are, who we might become, and how we change by virtue of the journey.

  • Role Playing in Psychotherapy, Raymond J. Corsini

In this slim volume, Corsini deftly outlines role playing’s place as a pragmatic, aggressively effective tool in the psychotherapist’s arsenal. He describes its threefold usefulness for purposes of diagnosis, teaching, and training. Of particular interest to us are his theories of role playing’s use in ‘training’: he argues that a patient, directed by the psychoanalyst, can effectively change his behavior through role-playing sessions – and, subsequently, can change his perception of himself (what Corsini refers to as the patient’s ‘self-concept’). This idea of self-transformation through an external enabler is similar to the dynamics of both Charlie’s and Adam’s growth; it is therefore a useful template through which we can understand role playing in both of these stories.

Introduction: “Without control, there’s no room for freedom”

A canonical philosophical question is whether free will exists. Do we, as humans, possess the capacity to genuinely choose? Or are our decisions, along with everything else ever to occur, predetermined by some metaphysical god? Or, is choice illusory by virtue of our behavior being conditioned by external stimuli and subsequent reward pathways? Complex, involved arguments exist for virtually every side of this debate, and we will not presume in this piece to offer any substantive framework for answering this question; rather, the question of free will serves as our jumping-off point for a central issue of this study: how do the measures one takes to define oneself relate to external influences?

B. F. Skinner believed that behavior is a function of external conditioning and reward, what he referred to as ‘operant conditioning’. In his world, everyone’s behavior (or ‘will’) is a product of the feedback their actions yield from the environment in which they are performed. Skinner’s world is not necessarily one of strict determinism – rather, environments establish behavioral patterns by greatly increasing the probability of individuals electing to act in certain ways that yield desirable results. Thus, the individual still acts “as he wishes” – the environment simply influences how he wishes to act.

A behaviorist paradigm such as Skinner’s is not far removed from questions of role assumption, particularly where theater is concerned. Consider the director-actor relationship: the actor is responsible for bringing the reality of the play to life, and it is his choices and actions onstage which effect this; yet these choices are heavily informed by the vision and directions of the director. So the world of a theatrical play might be roughly generalized as ‘a distinct but not discrete reality with an onstage locus of choice (i.e., the actors) and an offstage locus of control (i.e., the director)’. To continue the Skinnerian analogy, the director serves as the environment by which the actors onstage – and, consequently, the collective and personal realities of the play – are conditioned.[3]

By considering this dynamic, we are beginning to flesh out a fourth version of the meta-role: what we will call the augmentative role paradigm. It is this paradigm that provides a distinct mechanism for a change in self. When Adam reaches Hugh Darrow’s Arctic hideaway, ‘Panchaea’, he finds Humanity Front leader William Taggart holed up in a server room, hiding from the crazed augs. In trying to convince Adam to blame the catastrophe on biomedical corporations to compel strong industry oversight, Taggart warns Adam that, “without control, there’s no room for freedom – only anarchy.” The augmentative meta-role paradigm offers insight into people at their most dynamically transformative, but with a crucial caveat: this transformative capacity is externally potentiated.

 

“Hybrid life support”: a sketch of the evolutionary self

To better understand the augmentative paradigm at work, we will break it into a multi-component model, and assess the way in which the components work together to facilitate self-evolution. We can graphically represent the model as follows.

augmentative meta-role paradigm

We can formally define the paradigm in this way: the augmentative meta-role paradigm describes the transformation of a base role (‘A’, the triangle) into the base role’s choice of any number of variant roles (‘C’, the set of possible shapes) by virtue of an internal evolution of A made possible by an external evolving agent (‘B’, the arrows transforming A into a member of set C). In our standard meta-role terminology, the base role is the primary role, and the variant role is the secondary role.

There are two distinct-but-similar mechanisms by which the augmentative meta-role paradigm is actuated in “Deus Ex” and “Algernon”: in the former, the mechanism is human augmentation; in the latter, it is intelligence enhancement through neurosurgery. Both are effected by an external party: Adam is mortally wounded and in no position to choose whether or not he wants to be augmented, so Sarif makes the decision for him; Charlie is not intelligent enough to actually understand the implications of what is going to happen to him through the operation, merely saying that he wants to be smart like everyone else so he won’t be lonely, and so the choice is largely left to his teacher and the scientists.[4] In both cases, the base role is enabled through the external evolving agent to develop in a multifarious way: Adam is given license to activate any of the implanted augmentations he desires to evolve himself however he sees fit; Charlie, having been given the enhanced capacity to learn, may absorb whatever information he likes and administer his newfound knowledge in whatever ways he sees fit – ultimately doing so in the tragic irony of scientifically proving that he will lose his newfound intellect. Given the similarity of these two cases, we will examine the finer points of the augmentative meta-role paradigm’s dynamics in the case of each subject, and then synthesize a more general conception of the ways in which the paradigm functions.

Deus ex machina: stealing fire from the gods

How would the gears in the modern political machine respond to the potentials of human-controlled evolution? Square Enix explores that question by presenting a debate set in the home of democratic discourse (Detroit, in the United States of America), as well as on a global corporate scale. As one might expect, people’s opinions are immediately polarized between those who want to move forward in human development through augmentation – biomedical corporations and augs – and those who vehemently decry human augmentation as a crude distortion of the natural order of life – lobbying groups such as the Humanity Front, and extremist factions such as Purity First. Each side is represented by a leader who conveys the group’s opinion on humanity: David Sarif, Adam’s boss and head of Sarif Industries, is a progressive aug who seeks to drive humanity forward with unrestricted development of human augmentation; William Taggart, psychologist and de facto leader of the Humanity Front, seeks to effect rigid restrictions on human augmentation, through either U.N. oversight or under-the-table support of the Illuminati bid for more authoritarian control over augmentations; Zeke Sanders is an ex-marine who was augmented to replace an eye lost in war but who, after an incident of augmentation-induced psychosis, tore out his augmentation himself and founded the militant anti-augmentation group Purity First as a more direct way of opposing the advancing wave of augmentation. In the midst of all this stands Adam Jensen, the effective interloper between opposing factions.

Adam’s mobility between augmentation factions is stark, and it is the main vehicle that justifies the choice placed before him at the game’s end. Adam is the ultimate symbol of augmentation in several ways: he was born as part of a genetic engineering experiment, and it was his extremely resilient DNA which enabled the technology supporting human augmentation to be developed and brought into the mainstream; his own augmentation was chosen by Sarif as Adam lay dying. As Adam reiterates several times throughout the game, he “didn’t ask” to be augmented – in point of fact, one private investigator who spent time investigating Adam later tells him that he was effectively brought back to life by augmentation. He refers to Adam’s body as a “metal corpse,” calls him a “robot,” and says that Sarif “butchered [him]” by making him a “weapon.” Adam is an interloper by virtue of the fact that, on the one hand, he is in the employ of a biomedical corporation, yet on the other hand, as his pilot, Malik, says, he has “every reason to hate augmentations” because of the way in which they were forced upon him. Thus, he has reason to sit in either the pro-augmentation or the anti-augmentation camp. He may also move between moderate and extremist camps insofar as he is part of the “legitimate business” sector in serving as head of Sarif Industries security, but is also granted extreme leeway in how he goes about getting things done. He may just as easily kill a mob leader in cold blood and stage it to look like a suicide as plant drugs in the leader’s apartment and let the proper authorities neutralize him.

It is interesting to note the ways in which the game’s avatar mechanics mirror these thematic elements. Drawing from the terms explored at the start of our third treatment, we can describe “Deus Ex” as a first-person, active-avatar game. We noted in our initial definition of these terms that such a setup is an exception to the trend of games being designed either as silent-avatar first-person or active-avatar third-person, and are now in a position to explore how such an exception uniquely defines Adam as an avatar.

Adam is a character compelled to face certain situations based upon external factors – his forced augmentation, his job, the genetic experimentation upon him, and so forth. Adam earns his name by being analogous to the first man: as that Adam was God’s experiment and the root of humanity, so too is our Adam an experiment, the heart of man’s exploration into a new, biomechanical identity. This experimental origin presents Adam with a unique matrix of choices made available to him by external parties, but for which the locus of control is internal. The actual game dynamics present a strikingly similar situation: Adam is a character with a degree of independence from the player – there are scripted, movie-like cutscenes in the game wherein Adam is seen acting from a third-person perspective – yet his path may be directed by the player through choices of directions to take in conversations with NPCs, methods of completing assignments (e.g., the above-mentioned mob boss), and, of course, the ultimate decision as to how to end the game. In this way, the game recapitulates Adam’s own creation by providing players with certain dimensions of a character, which they may then direct along any path they choose. The methodology of the augmentative meta-role is thereby built into the very fabric of Adam’s reality.

But what choices in particular do Adam’s augmentations, for which he did not ask, allow him to make? The most direct answer is that they enable him to explore his development in far greater depth in whatever area of growth he wishes to pursue. As we have already noted, Sarif went above and beyond the call of duty in outfitting Adam with the latest augmentations after his attack: when he first visits a LIMB clinic, one of the sites set up to perform augmentation surgeries and service augs, he is informed that his implants were designed to activate naturally over time to avoid traumatic after-shock, but that Sarif also made arrangements for Adam to be able to “turn them on manually” over time as he sees fit via Praxis kits, tools available for purchase at clinics or discovery in various places around the world. This leaves it at the player’s discretion to activate and upgrade the augmentations most suitable to his own playing style. The choice of how to proceed is broad: there are cerebral augmentations, such as enhanced hacking capabilities; physical augmentations, which increase such things as stamina for running and armor resilience for greater health; aesthetic augmentations as wild as dermal armor which refracts light so as to make Adam invisible; and such miscellaneous augmentations as the “Icarus landing system,” which allows Adam to fall from any height and land safely, stunning adversaries on his way down. Of course, a player could hypothetically acquire enough Praxis kits to activate and fully upgrade all augmentations (a time-consuming and taxing undertaking), and they would in theory all activate naturally over time; nonetheless, the immediacy of Adam’s quest at hand necessitates a measure of personal choice, meaning that our theory of internally-localized developmental choice still holds. (We can easily see by virtue of a more general example how this logic holds: given enough time, a human could no doubt become a master in every field they could possibly pursue; however, within the confines of the human lifespan, this is infeasible, and so the human must make choices as to which developmental paths they wish to take.) Within this framework, the player is able to explore the game in unique ways based on his own inclinations: if he is aggressive, then he might augment Adam’s health and inventory for as much ammunition as possible and run headfirst into the fray; if he is predisposed towards stealth, he may make Adam invisible and render his footsteps silent even while sprinting, so as to easily slip past the most heavily-guarded areas.
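
The structure of this choice is easy to model in the abstract: a shared pool of Praxis points spent on whichever augmentations suit a chosen play style. The sketch below uses invented names and costs, not the game’s actual data or logic; its point is only that a finite pool forces the developmental choices described above.

```python
# Hypothetical model of play-style-driven upgrades; names and costs invented.
AUGMENTATIONS = {
    "hacking": 2,         # cerebral
    "dermal_cloak": 3,    # light-refracting invisibility (stealth)
    "icarus_landing": 1,  # fall safely from any height
}

class Build:
    def __init__(self, praxis):
        self.praxis = praxis
        self.active = set()

    def activate(self, aug):
        cost = AUGMENTATIONS[aug]
        if self.praxis < cost:
            return False  # a finite pool forces a real developmental choice
        self.praxis -= cost
        self.active.add(aug)
        return True

stealth = Build(praxis=4)
stealth.activate("dermal_cloak")
stealth.activate("icarus_landing")
print(sorted(stealth.active), "praxis left:", stealth.praxis)
```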

It is easy to see how such potential advancement gives Adam an enormous advantage over virtually everyone else (even those who are augmented, because they typically have only one or two augmentations, owing to how expensive they are, whereas Adam has the works – to the point, as previously mentioned, where the P.I. refers to him derisively as a “robot”); yet we can go further and see the sheer magnitude of this advantage by examining one particular augmentation, which we will see is somewhat analogous to Charlie’s situation in “Algernon.” The Computer Assisted Social Interaction Enhancer, or CASIE, is a social enhancement augmentation that allows Adam to chemically analyze people with whom he is conversing and, at the right moment, release appropriate pheromones (alpha, beta, or omega pheromones) to persuade them to do what he wants. Such an augmentation can be used to extract information from targets, talk someone out of suicide, and convince people to pay him for missions upfront, among other things. With it, Adam is able to easily convince adept psychologist Taggart to give up the location of his aide when it becomes clear that the aide is implicated in the Illuminati conspiracy; with it, Adam is able to convince Darrow to give up the code to shut down Panchaea’s security system by pointing out that the father of augmentation uses a cane – an indicator that he himself was incompatible with the technology, and is partly motivated by bitterness about that. In fact, the only people Adam cannot convince using this software are Malik, who may be similarly augmented, and a rogue private security operative named Zelazny, who was outfitted with similar high-functioning augmentations by the company for which he once worked, Belltower. If Adam tries to use this method to persuade him to turn himself in, Zelazny responds by telling him, “It’s a cute little toy you have there, Jensen. But don’t waste your time. Your CASIE won’t work on me.” Malik similarly calls Adam out if he tries to use the software on her. It seems, then, that Adam is only matched in his ability through augmentation by those who are similarly augmented – a fact strongly supportive of the notion that the external augmentative forces exerted upon Adam have literally enabled him to evolve to a point where the rest of humanity is, in a certain way, less able than he is.

This is certainly a significant degree of change, but the game goes further to justify the title’s allusion to the notion of deus ex machina, literally a “god from the machine.” When Adam finds Darrow at Panchaea in the midst of all the chaos he has wrought, Darrow decries humanity by saying that “People believed we should steal fire from the gods and redesign human nature.” These words at first glance seem a bit strong for what we have considered; but Darrow, as the father of this technology, knows the extent of how far it can go, whence he derives much of his Oppenheimer-esque regret for his own proverbial atom bomb. Darrow understands the depths of his technology’s implications because he has realized its potential in the depths of Panchaea: his fortress’s security system is the deus ex machina of human augmentation.

What is the ultimate formula for a system of control? Such a system must have the knowledge-set of a computer, handling copious amounts of data at once to the point of omnipresence, along with the versatility and creativity of response afforded by the human mind. This is what Darrow has achieved in his stronghold’s security system: when Adam reaches the broadcast center, he finds a room containing an enormous machine – the central hub of the security system. In the center of the room is a column connecting three stasis chrysalides to the machine. Within each pod is a heavily augmented girl bonded to the machine via a “hybrid life support” system. The battle begins when Zhao, a biomedical company’s CEO and the last known surviving Illuminati conspirator, desperately hooks herself up to the system in a bid for control over the broadcasting beacon and all the people receiving it. It is haunting to listen to what the girls say during this encounter: before Zhao connects herself, they make such exclamations as the following.

“Who am I?”

“I feel cold.”

“I don’t remember.”

“Oh God, please help me, I’m scared.”

Zhao connects herself, trying to assume the role of the ultimate god-from-the-machine, and the girls respond.

“So much pain.”

*Visceral scream*

“Shut this thing off!”

[Yet soon, as the battle proceeds, their responses change in kind.]

“No! Protect mother! Stay away!”

“Why does he want to kill us?”

“Kill him.”

“No more pain! Please. Keep him away from us!”

“SHOW US THE LIGHT, MOTHER. WHERE IS THE LIGHT?”

“Vital signs… normal? No, this is not normal.”

To defeat the system, Adam must first destroy the external, mechanical defense turrets, then open the pods, kill each of the girls in turn, and finally strike Zhao herself, elevated and bound by mechanical cords in a disturbingly Christ-like fashion to the system. But what is the nature of the system itself?

The girls provide the answer to this question. At no moment do they come off as malicious, vindictive, or sadistic – in fact, they radiate innocence, which is perhaps why Darrow comes off as guilty when warning Adam about the defense system that “thinks for itself,” and telling him how to shut it down. The girls seem afraid, having lost their identity in the overarching sentience of the machine. They directly express their desire for the machine to be shut off because of the pain it is causing them; yet a shift occurs when Zhao connects and they imprint upon her as their mother. This emotional dimension to the machine allows them to bond with Zhao in opposition to Adam, their aggressor. Yet Zhao proves inadequate as a mother figure, because she seeks to manipulate the machine for her own egoistic benefit and has no desire to protect or foster its human element; thus the girls are not relieved of their suffering in any way, and helplessly ask their “mother” “where [the light is].” The very humanity which renders the machine a god, then, also renders it imperfect: its mechanical aspect robs its human component of identity, and Zhao is unwilling to appease the human aspect because she is using the system as she would any other machine, seeing herself as its only human interface. When she is finally bested, she is literally incinerated by the energy of the system coursing through her body, proving her own being inadequate to the system. In this way, the methodology of augmentation taken to its extreme, wherein the difference between human and machine becomes indiscernible, is shown to be a truly terrifying force: the system is self-contained and terribly powerful, destroying Zhao outright, yet also deeply pained and scared in its human sense of lacking identity. It craves a fulfillment that cannot be humanly supplied: anyone who might serve a human role, as Zhao tried to, is intercepted by the system’s machine qualities before its human qualities can ever be reached.

We see, too, in the story’s periphery, the plights of those less fortunate than Adam: those whose bodies reject augmentation implants in a potentially lethal reaction are forced to rely for the rest of their lives upon a drug called ‘neuropozyne’ to stave off rejection symptoms. In much the same way that drug addicts turn to crime and underground operations to get a fix, these people often deal for neuropozyne on the black market because of its price tag. We also saw the way in which Zeke Sanders violently rejected augmentation after his own induced psychosis, at which point he attempted “suicide by cop” before William Taggart talked him down. Considering this in conjunction with Darrow’s security system, we see the augmentative methodology presented in “Deus Ex” as bookended by two horrifying extremes: on one side, visceral rejection of augmentation, threatening the subject’s life and sanity; on the other, perfect fusion with the machine, destroying the subject’s humanity by irreparably handicapping their capacity to relate to their own human qualities or to the humanity of others. We must also not forget that even those augs who sit happily in the middle of this spectrum were susceptible to the madness invoked by Darrow’s mind-controlling signal. Adam, then, appears to sit at the perfect balance point of an augmentative paradigm which, under certain circumstances, can be wildly evolutionary, but which, in many other circumstances, can destroy one’s very fabric of being from the inside out.

The Other Charlie: “I can’t help feeling that I’m not me”

If we could enhance the intellect of the mentally handicapped to genius levels, ought we to do so? This is the question at the heart of “Flowers for Algernon,” in which a man not intelligent enough to understand the world around him is thrust headfirst into it by an operation designed to revolutionize I.Q.

The crucial distinction between the evolutionary potentiation of Charlie and that of Adam is that whereas Adam, essentially dead, was not in a position to choose what became of him at all, Charlie was fully conscious, rendered naïve by virtue of his mental deficiency. As mentioned earlier, Charlie is aware of and eager for the opportunity to become smarter through the surgery, but in an innocent way that does not grasp the implications of what would actually happen to him. Charlie clarifies this after the operation, when he is confused by the lack of immediate change within him, and Dr. Strauss tries to explain to him how the operation worked.

Charlie: Am I smart?

Strauss: That’s not the way it works. It comes slowly and you have to work very hard to get smart.

Charlie: Then whut did I need the operation for?

Strauss: So that when you learn something, it sticks with you. Not the way it was before.

Charlie (disappointed): Oh. I thought I’d be smart right away so I could go back an’ show my frien’s at the bakery… an’ talk smart things with ‘em… like how the president makes dumb mistakes an’ all… If you’re smart, you have lotsa frien’s to talk to an’ you never get lonely by yourself all the time.[5]

Charlie, then, is in a position to “consent” to his evolutionary potentiation, but not from a competent mindset – such a mindset would only emerge after his intelligence was enhanced. The choice he made, therefore, while certainly a real one, could only be understood by him on an integrated level after he shifted (to use augmentative role terminology) from a state of base role to actuated role. After this brief state of enlightenment, he returns to his original state of intellect, and the integrated conception of his change again eludes him – Charlie’s final progress report reflects this lack of understanding.

I did a dumb thing today. I fergot I wasn’ in Miss Kinnian’s class any more. So I went and sat in my old seat… an’ she looked at me funny… an’ I said, “Hello, Miss Kinnian. I’m ready fer the lesson on’y I lost the book we was usin’”… an’ Miss Kinnian… she start in to cry – isn’ that funny? – an’ ran out. Then I remember I was operationed an’ I got smart… an’ I said, Holy smoke, I pulled a real Charlie Gordon.[6]

In Charlie’s ultimate regression to his base role state, he forgets about the relationship he established with his teacher, Alice Kinnian, at his intellectual peak, reverting to his original submissive-student relationship to her. He reverts, too, to his original simplified conception of how his intelligence-operation was meant to work, and to his original conception of himself as mentally challenged – the self his cruel coworkers invoked when they termed doing something stupid “pulling a Charlie Gordon.”[7] The case of Charlie Gordon illustrates an important point: one undergoing the evolutionary process of the augmentative role paradigm cannot comprehensively conceive of the evolutionary path connecting base role to variant role unless one is currently operating as a variant role. This point was not as apparent in “Deus Ex” because Adam never “regressed” from being augmented, but we can presume it to be equally true – after all, it would have been virtually impossible for Adam to have conceived of such abilities as the CASIE without having first experienced them, thereby being in a state of variant role.

Such an isolation of understanding between base role and variant role suggests a stark stratification of self – one at which we have already hinted by means of our evolution-based terminology, but which “Algernon” explicates in even greater detail. The further Charlie evolves, and the closer he draws to his inevitable return to his mentally impaired state, the more he is haunted by the image of himself as a teenager, whom he calls “the other Charlie.”[8] This ‘other Charlie’ appears before Charlie as the psychical representation of who he once was, brought into more dramatic relief by Charlie’s increasingly vivid memories of his traumatic childhood, growing up with a family who dealt terribly with his condition. In the throes of his increasing emotional instability, he explains this situation in vivid detail to Alice. The scene begins with his landlady coming to check in on him, mentioning how she saw him the previous night fumbling outside his apartment, behaving “like he was a little boy” – which we recognize as a reenactment of his childhood. Alice, upon hearing this, asks if this is why Charlie called her after ignoring her for a long time.

Charlie: I called because I wanted to see you. I didn’t remember… that. But I’m not surprised. He wants to get out. The other Charlie wants to get out.

Alice: Don’t talk like that.

Charlie: It’s true. He’s watching me. Ever since that night at the concert. That’s why I couldn’t see you. I was afraid of seeing him.

Alice: That isn’t real, Charlie. You’ve built it up in your mind.

Charlie: I can’t help feeling I’m not me. I’ve usurped his place and locked him out… the way they locked me out of the bakery. What I mean is, that Charlie exists in the past, but the past is real… so he exists… It’s Charlie, the little boy who’s afraid of women because of things his mother did to him, that comes between us.[9]

The separation of Charlie across two forms underscores the incompatibility of the base role (Charlie pre-operation and post-regression) with the variant role (Charlie post-operation, pre-regression). The difference between the two is not one of circumstance, but rather one of fundamental quality.

Beyond Charlie’s personal turmoil, he provides us with insight into the augmentative meta-role paradigm’s dynamics in his world: his “life’s work” is a scientific paper on exactly this subject, which he names the “Algernon-Gordon Effect” after himself and the mouse, Algernon, who was part of the same experiment and who serves as Charlie’s mirror image throughout his developmental journey. The hypothesis of the paper is as follows: “artificially induced intelligence deteriorates at a rate of time directly proportional to the quantity of the increase.”[10] This hypothesis explains why mice whose intelligence was simply made average through experimentation maintained that intellect throughout their lifespan, whereas Charlie and Algernon had no such hope.[11] The report itself is never explicated, but we may posit several ideas as to how this hypothesis comes to be. Perhaps the most likely explanation is that such artificially induced intelligence actually impairs the subject by virtue of its not being accompanied by comparable emotional growth. Strauss explains this situation to Charlie in therapy when Charlie intimates that he no longer finds any joy in working at the bakery, which was his job prior to the operation.

Charlie: …why don’t I enjoy working there anymore, Doctor?

Strauss: Why? You tell me.

Charlie: … They ignore me… No, it’s more. Joe, Frank, they’re… hostile to me. I thought they’d be happy for me [about my intelligence]. They’re supposed to be my friends. It takes the pleasure out of all of this. Why?

Strauss: The more intelligent you become, the more problems you’ll have.

Charlie: Why didn’t you tell me that before the operation?

Strauss: Would you have understood? (Charlie doesn’t answer.) Your intellectual growth is going to outstrip your emotional growth, so, there will be problems. That’s why I’m here.[12]

Such raw intellect, without the social skill to handle it among others or the emotional skill to handle it within himself, marks Charlie as a pariah in the bakery, renders any real human relationship viciously difficult, and leads him to be haunted by “the other Charlie” seeking his body’s return. In contrast to the image of Adam as one who can only be enhanced through augmentation, Charlie’s evolution also serves as his Achilles’ heel by rendering his unenhanced dimensions inadequate to his new life.

Another possible explanation for the Algernon-Gordon Effect is the sheer influx of information assimilated as a result of the operation. The operation only makes Charlie “smart” insofar as it allows him to retain all the information with which he is presented – exemplified by one instance in which he reads War and Peace in a single night.[13] Such an enhanced capacity suggests an almost inevitable overload: particularly given the imbalance of overall development described above, it seems highly unlikely that a partly-enhanced subject could sustain so dramatic a transformation permanently.
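
Whatever mechanism underlies it, the hypothesis as quoted admits at least one self-consistent formal reading. The gloss below is my own shorthand, not anything found in “Algernon”: suppose the operation raises the subject’s intelligence by an amount ΔI above its base level, and read “deteriorates at a rate of time directly proportional to the quantity of the increase” as saying that the induced gain G decays with a rate constant proportional to ΔI:

\[
\frac{dG}{dt} = -k\,\Delta I\,G, \qquad G(0) = \Delta I \quad\Longrightarrow\quad G(t) = \Delta I\,e^{-k\,\Delta I\,t},
\]

for some constant k > 0, giving a characteristic decay time τ = 1/(kΔI). On this reading, a mouse raised merely to average intelligence (a small ΔI) has a τ far longer than its lifespan, and so appears permanently stable, while Charlie’s enormous ΔI makes τ vanishingly short – exactly the asymmetry noted above.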

“Algernon,” we see, also defines an evolutionary capacity within certain bounds, though these bounds are somewhat different from those defined in “Deus Ex.” Whereas “Deus Ex” defined augmentative success along a spectrum of human-machine relationship between total rejection and total fusion, “Algernon” defines it as success within moderation: any true qualitative evolution, in the absence of holistic evolution of the subject, is fated to decay over time. Thus, while both suggest augmentative moderation as the key to success, the former suggests it within a framework of the base role’s level of association with the evolving agent (i.e., the human/machine relationship), whereas the latter suggests it within a framework of the level of difference between the base role and the resultant variant role. The overarching theme seems to be that the augmentative paradigm is most effective when moderate changes are made between the base role and variant role – so, to return to our graphical representation of the paradigm, a moderate change, such as changing a triangle into a quadrilateral, would be the most viable sort of augmentation.

Synthesis: Pragmatic Evolution

Considering the somewhat bleak admonitions of our subjects in this treatment, it might be refreshing to step back for a moment and immerse ourselves instead in a positive, realistic application of this very role-playing paradigm: the implementation of role-playing in psychotherapy.

In his book on the subject, Corsini advocates for the therapist’s use of role-playing in a myriad of circumstances: in individual or group therapy, for the purposes of diagnosing the patient, teaching the patient by allowing them to observe a role-played scenario, or training the patient to alter behavior and self-perception through role-playing exercises. Corsini has useful insights on all of these means of therapeutically employing role-playing, but we will examine only its use in individual therapy for the purpose of training; we limit ourselves to this lens because, as we shall see, this particular use of role-playing perfectly mirrors our established augmentative framework, while shedding new light on some of its finer points.

Corsini defines role-playing in a psychotherapeutic training context as “a process of making inner gains, in insight and empathy, generalizations and motivations, self-confidence and peace of soul, and all of the usual subjective states of ‘mind,’ through peripheral, i.e., actional processes.”[14] The understanding is that there is a two-way street between one’s behavior and one’s self-concept, Corsini’s term for the “kind of superordinate conception of self which enable[s] the individual to function harmoniously and predictably.”[15] He sees role-playing as an ideal psychotherapeutic tool because “the therapist and assistants can manipulate the situation to create a peak type of experience in which considerable emotionalism will be displayed. This ‘breaking of the log jam’ is invariably followed by insights and usually by feelings of comfort and behavioral changes.”[16] He supports his methodology with such examples as that of a small boy (‘George’) in a delinquent school, constantly beaten up, who was given a safe space in a psychotherapeutic role-playing group to act intimidating and have everyone else be terrified of him. He began raining (pretend) blows upon them, and afterwards – outside of therapy – was more self-assertive, to the point where he was no longer beaten up.[17] “[George’s] assumption in therapy of a role,” writes Corsini, “though it only lasted ten minutes, that was contradictory to his self-concept, must have so shaken his self-concept that it changed into the notion: I don’t have to be afraid of others. It didn’t matter that what actually occurred in the therapy room was only play-acting. It was a veridical experience for George who grasped a new concept of himself, and changed the structure of all his thinking and behaving as a result of this one concept.”[18]

We have here a situation wherein a person’s actual conception of self is changed through an artificial-yet-veridical role-playing environment, orchestrated by a therapist serving as a “director,”[19] with the intent of making inner gains through actions which can generalize to overall observations of self. If any doubt remains as to whether the augmentative paradigm is at work, we need only consider the therapist who directs the role-playing: as Corsini says, “on the one hand [the patient] generally admires and trusts the therapist, and tends to get in a dependent relationship to various degrees; but on the other hand he resents the manipulation that the therapist engages in.”[20] This is exactly the description we would expect of an external evolutionary agent, who must guide the developmental process of his subject in a way the subject cannot understand until he has evolved. The subject naturally trusts the agent’s judgment, because it is precisely the agent’s capacity to change the subject for the better that led the subject to seek out therapy in the first place; yet some resentment must also linger, because the methods the agent employs necessarily go “over the subject’s head” to some degree. We saw this resentment present in the cases of Adam and Charlie as well.

Corsini’s examples suggest that role-playing as training in psychotherapy can truly do patients good by shaking the foundation of their self-concept and liberating them from old patterns of behavior – changing their understanding of “self,” that is, in a shockingly abrupt manner. This does not mean, of course, that a single session of role-playing therapy is always sufficient. Corsini gives a fitting example in the patient who had difficulty communicating with friends, strangers, and authority figures in conversations involving trivia, conflict, or situations where he wants something; this problem was confronted by role-playing all nine combinations of interlocutor and conversation type.[21] Such a comprehensive ironing-out of every facet of behavior in order to evolve one’s self-concept seems to be a direct response to the issues of incomplete evolution by augmentation raised in “Algernon.” The issue of the base role’s relationship to the evolving agent is not directly addressed, but it is implicitly resolved by the immediacy of the change this role-playing device effects: it seems unlikely that the subject would develop either a particularly dependent or a particularly adversarial relationship to the tool itself, because the length of time needed for it to be effective can be as small as a handful of minutes. This, of course, does not negate the fact that role-playing will be more useful in some situations than in others; nonetheless, it provides a positive, pragmatic context for considering how the augmentative meta-role paradigm might be beneficially implemented.

It is clear in light of all this that the augmentative role paradigm can empower a subject by allowing him to determine his own self-concept. Though the paradigm is effected by an external body, that external body in this case serves only to empower the subject to change and evolve himself as he wishes. One might object that therapists are able to mold their patients as they see fit; but, ethically speaking, they may only work to change patients in the ways that the patients wish to see themselves change. So it was with Adam and Sarif, and so did it glaringly fail to be with Charlie. The augmentative meta-role paradigm, misused, can undoubtedly throw its subject into a state of internal disarray, which is why the onus on the evolving agent to enable the base role’s growth into variation in a balanced manner is so great; yet, properly implemented, this may be the most progressive meta-role paradigm yet conceived. As Sarif asks Adam pointedly before the final confrontation at Panchaea, “Would you have been able to do any of the things you did without augmentations?”

[1] “Flowers for Algernon,” Act II.

[2] The four endings vary slightly depending upon whether the player has played through making virtuous choices, malicious choices, or neutral choices; but this variance is much more subtle than that of “Dishonored,” and the primary distinction of endings is the breakdown of four choices enumerated here.

[3] We cannot ignore that the audience’s responses also serve to condition the actors – something considered in Appendix B. For now, as we are concerned with the initial formation of the reality, we are necessarily operating within a pre-audience production context, and therefore will not consider them.

[4] “Algernon,” Act I.

[5] Ibid., Act I.

[6] Ibid., Act II.

[7] Ibid., Act I.

[8] Ibid., Act II.

[9] Ibid.

[10] Ibid.

[11] Ibid., Act I.

[12] Ibid.

[13] Ibid.

[14] Role Playing in Psychotherapy, p. 91.

[15] Ibid., p. 21.

[16] Ibid., pp. 102-103.

[17] Ibid., pp. 22-24.

[18] Ibid., pp. 24-25.

[19] Ibid., p. 41. Corsini’s treatment of the psychotherapist as a director draws a direct, completely appropriate parallel to the stage.

[20] Ibid., pp. 92-93.

[21] Ibid., pp. 95-102.