Rather than chasing the impossible ideal of complete realism, it is better to aim at the centre of the spectrum between realism and gamism. Hitting that elusive centre will be easier with stylised mechanics, but that isn’t enough. It will also require process intensity.
Crawford (1984, 1987) advises against making games that are data intensive, and instead encourages making process intensive games (see also Juul, 2007b; Wardrip-Fruin, 2007). Data intensity and process intensity are related via the process-to-data ratio of the game. Some games have lots of data and perform few operations on that data, like a linear game where all the encounters and behaviours are pre-scripted. Other games have very little data but use it to great effect by clever algorithms, like a sandbox game where all of the encounters occur because of the rules of the world and the artificial intelligence of the creatures in the world. This is similar to Juul’s (2005) distinction between games of progression and games of emergence, and Dan Cook’s (in Up Up Down Down, 2014) distinction between content-focused and system-focused games.
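The contrast can be made concrete with a toy sketch. Everything below (the encounter tables, creature names, and rules) is invented purely for illustration: the first function answers from hand-authored data, while the second derives an encounter from a few general rules applied to the world state.

```python
import random

# Data intensive: every encounter is authored by hand and stored as data.
SCRIPTED_ENCOUNTERS = [
    {"location": "bridge", "creature": "troll", "action": "demands a toll"},
    {"location": "forest", "creature": "wolf", "action": "attacks on sight"},
]

def data_intensive_encounter(location):
    """Look up the pre-scripted encounter for this location."""
    for e in SCRIPTED_ENCOUNTERS:
        if e["location"] == location:
            return f"A {e['creature']} {e['action']}."
    return "Nothing happens."

# Process intensive: a handful of rules generate encounters from world state.
def process_intensive_encounter(location, hour, rng):
    """Derive an encounter from simple world rules rather than a script."""
    nocturnal = hour < 6 or hour > 20      # predators come out at night
    creature = "wolf" if nocturnal else "deer"
    hungry = rng.random() < 0.5            # creature state varies per session
    action = "attacks on sight" if (nocturnal and hungry) else "watches you warily"
    return f"A {creature} near the {location} {action}."
```

The data intensive version can only ever produce what was typed in; the process intensive version produces different, situation-dependent encounters from a few lines of rules, which is the high process-to-data ratio Crawford advocates.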
Crawford argues that process intensive games are more aesthetically elegant, but more important, I think, is that process intensity is necessary for immersion when making games of worlds. The very narrow focus on visual fidelity is particularly problematic here because it promotes data intensity instead of process intensity, piling gigabytes of information into a square centimetre of skin to make it more visually realistic. In fact, this often results in the need to duplicate a richly-detailed object multiple times, because there isn’t enough time to craft every single rock by hand to keep them unique. And these identical, cookie-cutter objects threaten immersion when players can easily see them as duplicates (Madigan, 2013). Data intensity is another myth of immersive games: immersive games don’t require data intensity. Process intensity is much more useful for making games of worlds.
Two examples of process intensity are emergent gameplay and procedural content generation (PCG). Emergent gameplay is the combinatorial explosion of possible situations and actions resulting from simulating elements of the game in a bottom-up rather than top-down fashion (Garneau, 2008; Pearce, 2002; Smith, 2001; Sweetser, 2008). PCG is the use of algorithms to generate parts of the game such as the environment, items, or creatures, rather than having a human design each of them by hand (Compton et al., 2013; Doull, 2008). Some have argued that PCG can benefit the process of game development (Cepero, 2011; Stuart, 2015; Up Up Down Down, 2014), or that emergent gameplay is simply more fun (Garneau, 2008; Pearce, 2002; Smith, 2001; Sweetser, 2008), but both of them can also improve immersion.
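A minimal sketch of the PCG idea follows; the attribute names and value ranges are invented for illustration. Each rock is generated deterministically from a seed, so a thousand unique rocks cost no more authoring effort than one, in contrast to the duplicated hand-made assets described above.

```python
import random

def generate_rock(seed):
    """Procedurally generate a rock's attributes from a seed,
    instead of hand-authoring (and then duplicating) rock assets."""
    rng = random.Random(seed)           # seeded: the same seed always yields the same rock
    size = round(rng.uniform(0.2, 3.0), 2)   # metres across
    vertices = rng.randint(6, 24)            # silhouette complexity
    colour = rng.choice(["grey", "brown", "reddish", "mossy"])
    return {"size_m": size, "vertices": vertices, "colour": colour}

# A field of a thousand rocks, almost all of them distinct.
rocks = [generate_rock(seed) for seed in range(1000)]
```

Seeding also gives reproducibility: the same world can be regenerated from the same seed, which is how many PCG games store vast worlds in a few bytes.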
You might expect emergent mechanics to make things more realistic and therefore more intuitive to understand (Sweetser & Wiles, 2005), and the visual representation of a mechanic can certainly give the player various impressions about how it might work (e.g. Bateman, 2014), but our simulations are always simplified and abstracted to some degree. For example, no game simulates fire in such totality that you can intuit how it works from real world experience alone. In one game, the map might be a tiled grid, and fire might spread across the cells of the grid. In another game, fire might be a property of an object that can be passed to other objects. And in another game, fire might be a static permanent feature of the environment that forms a barrier to the player’s movement. All of these fire mechanics break some of our intuitions about how real fire works. The metaphor of “fire” gives players many ideas about how the mechanic might work (it activates a schema, fostering a predictable set of hypotheses), and most of them are wrong, but one or two of them will be half-right. Players then have to learn through gameplay experience which ideas to trim away and alter until all that is left is how fire actually works in this particular game. Emergent mechanics will not save us from having to learn the mechanics; that is another myth of immersion. They provide a different benefit: emergent gameplay and PCG ensure the world is consistent by making the game rules, not the designer, the final adjudicator on what exists and what happens.
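The first of the three hypothetical fire mechanics, fire spreading across a tiled grid, can be sketched in a few lines (the cell symbols and spreading rule are invented for this example):

```python
def spread_fire(grid, steps=1):
    """One game's fire rule: each step, fire ('F') spreads to orthogonally
    adjacent flammable cells ('T' for tree); rock ('.') never burns.
    The grid is a list of lists of single-character strings, mutated in place."""
    rows, cols = len(grid), len(grid[0])
    for _ in range(steps):
        ignite = set()
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] == "F":
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == "T":
                            ignite.add((nr, nc))
        for r, c in ignite:
            grid[r][c] = "F"
    return grid
```

Even this tiny rule set illustrates the point made above: a player’s real-world intuitions (fire spreads, rock doesn’t burn) are half-right, but the details (orthogonal-only spread, one cell per step, fire that never goes out) can only be learned by playing.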
With process intensity, the game becomes a portal into an alternate world. Without it, the game would feel like a little fishbowl with painted scenery to distract from its constraints and linearity. When there is a focus on process intensity over data intensity, you are no longer just playing by the game designer’s rules; you are playing by the rules of the fictional world in which the game is set. If a game of progression includes a pre-scripted giant building collapsing around the player, that may be very entertaining, but not very immersive. When you see a procedurally-generated environment, or an emergent game event, the difference between that experience and the giant building collapse is similar to the difference between seeing a robbery occur and seeing a live performance of a robbery: it lies in the knowledge of where it came from. In the case of the data intensive game, it came from some designers. In the case of the process intensive game, you know that it came from the rules of the world, and that the event is dynamic, responsive, and unique to you, and that knowledge makes it more real.
With board games and pen-and-paper role-playing games the players have to implement the rules, and occasional human errors render the game session “not really what would have happened” (see e.g. Karlsen, 2007 for examples of players discarding rules and changing what would have happened). But of course when a game of chess is digitised, players can no longer (deliberately or accidentally) make illegal moves. Similar rule-breaking happens when decisions are left to designers, such as accidentally creating plot holes and clumsy deus ex machina, or inserting a pre-rendered cutscene where the player character is standing up, even though some players might have broken the character’s legs moments before (Pearce, 2002; Wolters, 2014). Every element of the game that we delegate to the computer to generate from a system of rules is another element that avoids this problem of human error and ensures a consistent game world (Sweetser & Wiles, 2005). The bottom-up approach of emergent gameplay, for example, provides general-case mechanics that always apply, in contrast to the special-case rules that Crawford (1984) calls dirt. Rather than designing data, we should focus on designing processes.
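The contrast between general-case mechanics and special-case dirt can be sketched as follows (object names and materials invented for illustration). The dirt version enumerates interactions one by one and silently fails on anything the designer forgot; the general-case version applies one rule uniformly through object properties:

```python
# Special-case "dirt": every interaction is its own hand-written rule.
def dirt_burn(obj_name):
    if obj_name == "wooden door":
        return True
    if obj_name == "curtain":
        return True
    if obj_name == "stone wall":
        return False
    return False  # anything the designer forgot simply cannot burn

# General-case mechanic: one rule, applied uniformly via object properties.
def general_burn(obj):
    return obj.get("material") in {"wood", "cloth", "paper"}
```

A wooden cart the designer never anticipated is inert under the dirt approach, a small inconsistency of exactly the kind that threatens the world’s coherence, but burns correctly under the general rule.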
Therefore a game of a world should have mechanics that are not just realistic or stylised, but that also favour process intensity over data intensity, for example by concentrating on emergent gameplay and PCG. Practical techniques for creating PCG and emergent gameplay are described by Compton et al. (2013), Smith (2001), Smith & Smith (2004), Sweetser (2008), and Tutenel et al. (2008). This would not only make the world more consistent in its appearance and behaviour, but also expand the player’s agency to do whatever is logically possible according to the rules of the world.

Agency and Affordances
Would the Holodeck be any fun?