How Do You Adapt: In Games and Life

No problem cannot be solved with enough duct tape.

One of the most exhilarating and interesting events in games is when you have to improvise. You aren’t executing a rote formula for success, and you aren’t bumbling around randomly; you are recombining knowledge to create a new strategy, tailor-made to the situation at hand. Games with procedural generation and complex systems can often create situations where necessity is the mother of invention, and players of roguelikes and emergent sandbox games will tell you how much fun that can be. But what is actually happening? And can you get better at adaptation through practice?

Here is a summary of how games can make people more adaptable to complex situations (a chapter I co-authored):

Tichon, J. G., & Tornqvist, D. (2016). Video Games: Developing Resilience, Competence, and Mastery. In D. Villani, P. Cipresso, A. Gaggioli, & G. Riva (Eds.) Integrating Technology in Positive Psychology Practice (pp. 247-265). Hershey, PA: Information Science Reference. doi:10.4018/978-1-4666-9986-1.ch011

http://www.igi-global.com/chapter/video-games/146910

 

It’s a good overview, but here are four interesting psychological phenomena that I couldn’t fit in. First, a couple of concepts to explain:

Creativity

Two very important things most people don’t know about creativity: it can be improved (and measured), and it isn’t one thing. It is several things (see the scoring sketch after this list):

  • Fluency: The number of unique, non-redundant ideas or problem solutions you can think of.
  • Flexibility: The breadth and number of distinct semantic categories accessed. It reflects the capacity to switch approaches, goals, perspectives, etc.
  • Originality: The rarity of the idea. It reflects the ability to approach the situation in a new way, without relying on routine or habitual thought.
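
To make the three measures concrete, here is a toy scoring sketch in Python. The brainstorm task (“uses for a brick”), the category tags, and the rarity figures are all my own made-up stand-ins, not a standardised creativity test:

```python
# Toy scoring of a brainstorm on "uses for a brick", with each idea
# hand-tagged with a semantic category. Tags and rarity figures are
# made up purely for illustration.
ideas = [
    ("build a wall", "construction"),
    ("build a path", "construction"),
    ("doorstop", "weight"),
    ("paperweight", "weight"),
    ("grind into pigment", "material"),
]

# How often each idea appears across respondents (lower = rarer = more original).
population_frequency = {
    "build a wall": 0.90,
    "build a path": 0.70,
    "doorstop": 0.40,
    "paperweight": 0.35,
    "grind into pigment": 0.02,
}

fluency = len({idea for idea, _ in ideas})               # unique, non-redundant ideas
flexibility = len({category for _, category in ideas})   # distinct semantic categories
originality = min(population_frequency[i] for i, _ in ideas)  # rarity of the rarest idea

print(fluency, flexibility, originality)  # 5 3 0.02
```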

Cognitive flexibility

  • Local switching: Inhibiting a habitual response to cues – having to override a learned method or break a habit, such as when the rules suddenly change in an object-sorting game and you must now sort by colour instead of shape.
  • Global switching: Using working memory to maintain and manipulate information for separate mental tasks.

 

How do people adapt? There are actually two conflicting effects. The first I’ll call the “roguelike” effect (players of Dark Souls will also understand). When you increase the stakes, people are more careful and thoughtful. Specifically, you can modulate just how far ahead people plan by changing the cost of choosing incorrectly – do you get pushed back a step, or pushed all the way back to square one?

Some problems we solve entirely in our heads before making a move, and others we make up as we go, without any actual planning ahead. But most problems are somewhere in between: you plan some steps ahead, then enact those steps, then re-evaluate and plan your next sequence after you’ve acted. Often this is due to the nature of the problem itself and its own costs for taking the wrong actions. But we can add an artificial cost on top of this to increase the average length of action plans, up to a maximum imposed by the nature of the problem and the limitations of human cognition.
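
Here is a minimal sketch of that plan-act-re-plan loop, just to pin the idea down. The toy number-line world, the lookahead horizon, and the scoring are all my own illustrative assumptions, not anything from the studies referenced below:

```python
import itertools

MOVES = (-1, +1)  # a toy world: walking along the number line

def plan_then_act(start, goal, horizon):
    """Plan `horizon` moves ahead, enact them, then re-evaluate and re-plan.

    A larger `horizon` stands in for the longer action plans people form
    when wrong moves are costly.
    """
    state, path = start, [start]
    while state != goal:
        def closest_approach(seq, pos=state):
            # "Mental simulation": how near the goal does this plan ever get?
            best = abs(goal - pos)
            for move in seq:
                pos += move
                best = min(best, abs(goal - pos))
            return best

        # Consider every plan up to `horizon` moves and pick the most promising...
        plan = min(itertools.product(MOVES, repeat=horizon), key=closest_approach)
        # ...then enact it step by step, stopping early if the goal is reached.
        for move in plan:
            state += move
            path.append(state)
            if state == goal:
                break
    return path

print(plan_then_act(0, 5, horizon=2))  # [0, 1, 2, 3, 4, 5]
```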

This effect is going to sound very familiar to players of roguelikes and other games where you play with the constant possibility of losing all your progress. For example, in Dark Souls every time you rest at a bonfire to refill your healing potions, all of the enemies respawn. This means that if you are fighting your way to a boss, and every time you finally reach the boss you are out of healing potions, then clearly you are employing a bad strategy. You can go back and refill, but that undoes your progress killing bad guys: you will have to defeat the same group of enemies before the boss all over again. If your old strategy isn’t leaving you with enough health, then clearly you need to adapt and try a new one. Dark Souls uses difficulty to force you to think and experiment.

The second effect is almost the complete opposite. I’ll call it the “sandbox” effect: the tendency to experiment more in the absence of coercion. (Possibly related is the finding that large incentives promote problem solving by rote procedure, while more relaxed environments promote creative problem solving.) When people are given a completely unknown black box and told to explore, they do just that. When told to get the black box to do a specific thing, they turn the dials and try to achieve their mission (they usually don’t do very well, but they do better than a robot acting randomly). Then you bring both groups (the explorers and the achievers) back a second time and give them both a new mission, different from what the achievers were trying to do last time. On this new mission, those who freely explored the system do better than those who had to achieve a different mission last time. It seems exploring gives people general knowledge of how the black box works, whereas achieving a mission gives people only specific knowledge of how to achieve that one mission.

It is important to note that this is research on novel, opaque systems (black boxes). And what researchers think is going on is that when told to just explore, we build up a mental model of the system based on rules. This is highly transferable knowledge because we can then mentally simulate what would happen in new scenarios we’ve never seen before, based on the rules we have learned. In contrast, when told to achieve an outcome, our focus narrows and we instead collect a library of situations that relate to that mission (Instance-Based Learning Theory). We store what the situation was, what action we took, and what the outcome was (if it was good or bad for our mission). Then when we next need to make a choice, we just look up our library of past situations to find a solution. This is non-transferable knowledge, because when you come across a new situation that doesn’t have a match in your library, you’re stuck.
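
A minimal sketch of the contrast, assuming a black box that simply doubles the input you dial in (the “library” and the “rule” below are my own stand-ins for the two kinds of knowledge, not the studies’ models):

```python
# Two ways of "knowing" the same black box (here: a box that doubles
# whatever you dial in).

def black_box(dial):
    return 2 * dial

# Instance-based knowledge: a library of (situation, action, outcome)
# memories, collected while pursuing one specific mission.
library = {}
for dial in range(5):                       # only dial settings 0..4 were ever tried
    library[("box idle", dial)] = black_box(dial)

def instance_based(situation, action):
    # Look up a matching past case; a truly novel case has no match.
    return library.get((situation, action), "no matching instance – stuck")

# Rule-based knowledge: a mental model built by free exploration, which can
# simulate scenarios never seen before.
def learned_rule(dial):
    return 2 * dial                          # "the output is double the dial"

print(instance_based("box idle", 3))    # 6 – seen before, the library works
print(instance_based("box idle", 100))  # stuck – novel case, no match
print(learned_rule(100))                # 200 – the rule transfers
```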

This exploration effect will be familiar to anyone who loves sandbox games, where half the fun is just poking at the systems to see how they react. For example, Diablo 3 opened up all the skills. In previous games like Diablo 2, whenever you spent a point on a skill it was permanent, and the only way to reverse it was to start over from scratch with a completely new character. Back to square one. This is not an environment friendly to experimenting with new ideas. But in Diablo 3 they made the skills hot-swappable, so that you could completely redesign your skill set at any time. This made it trivially easy to reverse your choices and try an idea as soon as you had it. Diablo 3 encouraged you to experiment by making it easy.

So which is it: To encourage adaptation to a new situation, should you make things easy like Diablo 3 or hard like Dark Souls?

Barriers

If you read the research on regret (e.g. Gilovich & Medvec, 1995), you get the impression that it is more important to remove barriers to experimenting than to increase incentives. And in general we expect more persistence from people who have an internal locus of control, in terms of how they explain failures:

  • Internal vs External: Is the cause about me, or about other people / the world? If the cause is that I’m stupid then there isn’t much I can do about that, but if the cause is that the necessary resources were absent then that is an obstacle I can overcome in future.
  • Unstable vs Stable: Is the cause going to disappear with time, or stay forever?
  • Specific vs Global: Is the cause specific to this one situation, or is it everywhere?

If you have an internal locus of control (an attitude of “I’m in control of my ability to succeed”), then you are less likely to give up in the face of failure, ultimately making more progress and solving more problems (Huber, 1995). However, I haven’t been able to find any studies on the capacity of game play to affect locus of control or explanatory style.

Perseverative Errors

Imagine you are given a deck of cards and told to sort them into categories (Hassin et al., 2009). You aren’t told what the categories are, only whether each choice is right or wrong. So you start placing the cards into their different boxes and hear either a “correct” or “incorrect” with each placement. You could probably figure out whether the categories are odds and evens, reds and blacks, picture cards and number cards, etc. But then imagine that, without a word, halfway through the experiment the scientists change the rules and don’t tell you. For example, if it was reds and blacks before, maybe halfway through it is suddenly odds and evens. Would you notice? Would you figure it out from the simple “correct” and “incorrect” feedback?
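
Here is a toy simulation of the trap. The specific rules and the halfway switch are my own illustrative choices; the actual procedure in the study differs in its details:

```python
import random

random.seed(1)

# A toy version of the task: sort each card into box "A" or box "B".
# Two candidate hidden rules:
by_colour = lambda card: "A" if card["colour"] == "red" else "B"
by_parity = lambda card: "A" if card["rank"] % 2 else "B"

deck = [{"rank": r, "colour": random.choice(["red", "black"])} for r in range(1, 41)]

def feedback(trial, card, box):
    # The experimenter silently switches rules halfway through.
    rule = by_colour if trial < 20 else by_parity
    return box == rule(card)

# A player who perseverates -- keeps sorting by colour after the switch --
# is right afterwards only when the two rules happen to agree by chance.
errors = sum(not feedback(t, card, by_colour(card))
             for t, card in enumerate(deck) if t >= 20)
print(f"errors after the silent rule change: {errors} of 20")
```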

Reusing old strategies (sticking to the old card-sorting rules) is called a “perseverative error” in this research. I was a little frustrated with this particular study because there was literally a change of rules. When you are dealing with a complex system (a black box) and get a surprising result, which is more likely: that the rules have changed, or that they were more complicated than you originally thought? There are several things you can do in the face of failure (or any surprising data), sketched in code after this list:

1A) Persist – Irrelevant: You ignore or discard this data. You persist in your usual strategy.

1B) Persist – Outlier: You think this data point is an outlier or an anomaly, and so your reliably-successful strategy should resume its effectiveness on the next trial. But you revise your conception of the probability of the outcome given your strategy in light of this new data – sometimes your strategy can fail. This treats the phenomenon as a stochastic black box, paying attention only to frequency of outcome and not cause of outcome.

2A) Adapt – New Rules: You think that a change in the rules has occurred. You discard your old strategies and / or model of the system (or just the parts of it that are contradicted by this data), and try to figure out new ones that match this data. This seems to focus on getting results as soon as possible, not on finding the truth, given that you don’t seem concerned with why the rules appear to have changed in the first place.

2B) Adapt – Incomplete Rules: You think that your current set of strategies / model of the system is incomplete. Instead of discarding the old model entirely, you try to expand and adjust it to be able to explain both the old and the new data.
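
As promised, here is a rough sketch of the four responses as operations on a single belief state. The belief structure, the numbers, and the rule-induction stub are all illustrative assumptions, not a model from the cited research:

```python
# The four responses to surprising data, written as belief updates.

def induce_rule(observations):
    # Stand-in for a real hypothesis search over the rule space.
    return "sort by parity" if observations else "sort by colour"

belief = {"rule": "sort by colour", "success_rate": 0.95, "history": []}

def persist_irrelevant(belief, surprise):      # 1A: discard the data outright
    return belief

def persist_outlier(belief, surprise):         # 1B: keep the rule, soften the odds
    belief["success_rate"] *= 0.9              # "sometimes my strategy fails"
    return belief

def adapt_new_rules(belief, surprise):         # 2A: the rules must have changed
    belief["history"] = [surprise]             # old data is discarded...
    belief["rule"] = induce_rule(belief["history"])  # ...rebuild from the new alone
    return belief

def adapt_incomplete_rules(belief, surprise):  # 2B: my model was incomplete
    belief["history"].append(surprise)         # keep old and new data...
    belief["rule"] = induce_rule(belief["history"])  # ...and explain all of it
    return belief
```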

These options progress on a scale of curiosity, starting with an interest in moving right along and getting the task over with (option 1A), and ending with an interest in getting to the hidden truth and attaining the fullest understanding possible (option 2B). This is analogous to the distinction between performance orientation (being concerned with getting a high score on the test) and learning orientation (concerned with improving your abilities) in education.

Another way of looking at it is through dual-space theory, where there is a solution space and a hypothesis space. When you solve a problem, you explore the solution space to try to reach the goal. You can think of it as the branching choices in a game like chess – each choice is a branching path, and by taking those actions you are traversing the solution space, trying to reach a branch where the enemy is in checkmate. So the solution space is the possibility space of the system. But maybe before you pay attention to the solution space, you explore the hypothesis space to determine what exactly the rules of the problem are (before starting your journey, you want to make a map). The hypothesis space is the possibility space of what the rules of the system might be. In triple-space theory there is a third space that acts like a space of “scientific paradigms”: where you sit in the paradigm space determines what kinds of hypotheses you think are plausible or possible. For example, when your car won’t start, you don’t immediately think, “maybe I need to kill all the flamingos in the world, and then it will start.” Your conception of what kinds of things could even cause this problem (what hypotheses are plausible) is constrained by your paradigm.
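
To pin down what traversing a solution space looks like, here is a toy search sketch. The number puzzle and its move set are my own choices; note that the search takes the rules as given, which is exactly the part the hypothesis space would cover:

```python
from collections import deque

# Traversing a toy solution space: states are numbers, and each move (+3 or
# *2) is a branching choice, like a move in chess.
def solve(start, goal):
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, moves = frontier.popleft()
        if state == goal:
            return moves                 # one path through the solution space
        for name, move in (("+3", lambda s: s + 3), ("*2", lambda s: s * 2)):
            nxt = move(state)
            if nxt <= goal * 2 and nxt not in seen:   # prune runaway branches
                seen.add(nxt)
                frontier.append((nxt, moves + [name]))
    return None

print(solve(1, 11))   # ['+3', '*2', '+3']
```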

Note that the final option 2B (Adapt – Incomplete Rules) is probably the subtlest and most sophisticated approach to take. It is also the most applicable to real world systems because in reality the laws of nature don’t change. It is only in artificial games that rules can change. By seeking to explain all the data, option 2B entails voluntarily taking on the greatest challenge of all the options. But it is also the wrong approach in the card sorting study. If you go with this option, you will fail because the rules have changed and the old data needs to be discarded. If anything, I’m more interested in discouraging people from simplifying or discarding data, and encouraging people to adopt the attitude of option 2B. I’m still trying to figure out how to do that.

Update: Another study got people to learn how to adjust the variables of a digital water-purification simulation to achieve a certain quality of output water. In this study the rules of the purifier didn’t change; instead, some people were told (regardless of how well they actually did) that they were doing great, while others were told they were doing really sub-par. When people were told they were doing well, they accepted the variability in their water quality as random noise that could be ignored (they persisted with their strategy). But when told they were doing badly, they experimented and tried different things, suggesting they no longer accepted the variation in their results as acceptable random noise and instead were determined to identify the cause of the variability in order to eliminate it. So it seems that people’s tendency to persist or adapt can be manipulated simply by telling them that their margin of error is too big or just fine.
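
A rough sketch of that mechanism, where the feedback only changes how much variability the person is willing to write off as acceptable noise (the simulation, the thresholds, and the one-dimensional strategy pool are all my own assumptions):

```python
import random

random.seed(0)

def run_trials(told_doing_well, trials=20):
    """Count strategy changes under each kind of feedback.

    The only difference between conditions is how much variability the
    person is willing to call acceptable random noise.
    """
    strategy, switches = 0, 0
    for _ in range(trials):
        result = random.gauss(strategy, 1.0)        # noisy outcome of the strategy
        acceptable_error = 3.0 if told_doing_well else 0.5
        if abs(result - strategy) > acceptable_error:
            strategy += 1                           # "experiment": try something new
            switches += 1
    return switches

print("told 'doing great':", run_trials(told_doing_well=True), "strategy changes")
print("told 'sub-par':", run_trials(told_doing_well=False), "strategy changes")
```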

 

References:

Improvising in Games

Ultima Ratio Regum. (23/05/2015). “The Problem with the Roguelike Metagame”. Retrieved 24/05/2015, from: http://www.ultimaratioregum.co.uk/game/2015/05/23/the-problem-with-the-roguelike-metagame/

Keogh, B. (2012). “You Know What I Love? Improvising.” Retrieved 16/01/2012, from http://games.on.net/article/14626/You_Know_What_I_Love_Improvising

Costikyan, G. (2009). “Randomness: Blight or Bane?”. Retrieved 15/10/2013, from http://gamasutra.com/blogs/GregCostikyan/20091113/85910/Randomness_Blight_or_Bane.php

Koster, R. (2009). “GDCA: Greg Costikyan’s talk on randomness”. Retrieved 15/10/2013, from http://www.raphkoster.com/2009/09/22/gdca-greg-costikyans-talk-on-randomness/

Burgun, Keith. (15/10/2014). “Randomness and Game Design”. Retrieved 13/09/2015, from http://www.gamasutra.com/blogs/KeithBurgun/20141015/227740/Randomness_and_Game_Design.php

Blumberg, Fran C., Sheryl F. Rosenthal, and John D. Randall. “Impasse-driven learning in the context of video games.” Computers in Human Behavior 24.4 (2008): 1530-1541.

 

Creativity

Baas, Matthijs, Carsten K. W. De Dreu, and Bernard A. Nijstad. “A meta-analysis of 25 years of mood-creativity research: Hedonic tone, activation, or regulatory focus?” Psychological Bulletin 134.6 (2008): 779.

Hélie, Sebastien, and Ron Sun. “Incubation, insight, and creative problem solving: A unified theory and a connectionist model.” Psychological Review 117.3 (2010): 994.

 

Cognitive Flexibility

Moradzadeh, Linda. Components of Cognitive Flexibility in Adults. Library and Archives Canada / Bibliothèque et Archives Canada, 2010.

 

“Roguelike” Effect (Planfulness and Implementation Cost)

de Jong, Ton, Robert de Hoog, and Frits de Vries. “Coping with complex environments: the effects of providing overviews and a transparent interface on learning with a computer simulation.” International Journal of Man-Machine Studies 39.4 (1993): 621-639.

Osman, Magda. “Controlling uncertainty: A review of human behavior in complex dynamic environments.” Psychological Bulletin 136.1 (2010): 65.

O’Hara, Kenton P., and Stephen J. Payne. “The effects of operator implementation cost on planfulness of problem solving and learning.” Cognitive psychology 35.1 (1998): 34-70.

O’Hara, K. P., & Payne, S. J. (1999). Planning and the user interface: The effects of lockout time and error recovery cost. International Journal of Human-Computer Studies, 50, 41-59.

Sedig, Kamran, and Robert Haworth. “Interaction design and cognitive gameplay: role of activation time.” Proceedings of the first ACM SIGCHI annual symposium on Computer-human interaction in play. ACM, 2014.

 

“Sandbox” Effect (Goal-Free Effect)

Sirlin. (2012). “Diablo 3’s Ability System”. Retrieved 07/05/2012, from http://www.sirlin.net/blog/2012/5/3/diablo-3s-ability-system.html

Burns, B. D., & Vollmeyer, R. (2002). Goal specificity effects on hypothesis testing in problem solving. Quarterly Journal of Experimental Psychology, 55A, 241-261.

Vollmeyer, R., Burns, B. D., & Holyoak, K. J. (1996). The impact of goal specificity on strategy use and the acquisition of problem structure. Cognitive Science, 20, 75-100.

Geddes, Bruce W., and Rosemary J. Stevenson. “Explicit learning of a dynamic system with a non-salient pattern.” The Quarterly Journal of Experimental Psychology: Section A 50.4 (1997): 742-765.

Paas, Fred, and Femke Kirschner. “Goal-Free Effect.” Encyclopedia of the Sciences of Learning. Springer US, 2012. 1375-1377.

Künsting, Josef, Joachim Wirth, and Fred Paas. “The goal specificity effect on strategy use and instructional efficiency during computer-based scientific discovery learning.” Computers & Education 56.3 (2011): 668-679.

Horner, Victoria, and Andrew Whiten. “Causal knowledge and imitation/emulation switching in chimpanzees (Pan troglodytes) and children (Homo sapiens).” Animal cognition 8.3 (2005): 164-181.

Bonawitz, E., Shafto, P., Gweon, H., Goodman, N., Spelke, E., & Schulz, L. (2011). The double-edged sword of pedagogy: Instruction limits spontaneous exploration and discovery. Cognition, 120, 422-430.

 

Regret

Gilovich, Thomas, and Victoria Husted Medvec. “The experience of regret: What, when, and why.” Psychological Review 102.2 (1995): 379.

 

Locus of Control

Huber, Oswald. “Complex problem solving as multistage decision making.” Complex problem solving: The European perspective (1995): 151.

 

Perseverative Errors

Hassin, R. R., Bargh, J. A., & Zimerman, S. (2009). Automatic and flexible: The case of nonconscious goal pursuit. Social Cognition, 27, 20–36.

Osman, Magda. “The effects of self set or externally set goals on learning in an uncertain environment.” Learning and Individual Differences 22.5 (2012): 575-584.

 

Performance Orientation vs Learning Orientation

Elliot, Andrew J., and Holly A. McGregor. “A 2×2 achievement goal framework.” Journal of Personality and Social Psychology 80.3 (2001): 501.

Litman, J. A. (2008). Interest and deprivation factors of epistemic curiosity. Personality and Individual Differences, 44(7), 1585–1595. doi:10.1016/j.paid.2008.01.014

 

Dual and Triple-Space Theory

Kistner, S., Burns, B. D., Vollmeyer, R., and Kortenkamp, U. (2015). The importance of understanding: Model space moderates goal specificity effects. The Quarterly Journal of Experimental Psychology, pp. 1-18.

Simon, H. A. 2001. “Problem Solving and Reasoning, Psychology of”. In International Encyclopaedia of the Social & Behavioural Sciences, edited by Neil J. Smelser and Paul B. Baltes, 12120-1212
