Data Driven Hero

Data Driven Design Process for Digital Games

By Asher Rubinstein | April 22, 2015

This post was written by Asher Rubinstein after completing an internship at Crackerjack. Ash’s contribution was invaluable: he built and executed a framework for extracting statistically significant insights from our data.
I was recruited by Crackerjack Games to design a robust strategy for the data-driven design (DDD) process. DDD is a design methodology that involves making design decisions based on reliable knowledge of the player’s experience of and relationship to the game.
DDD stands in contrast to traditional design methodologies, which rely less on knowledge of the player’s experience and behaviours and more on the designer’s intuitions, past experience, and theoretical frameworks. DDD enables the designer to make decisions based on knowledge rather than assumptions, thereby reducing the risk of biased and irrational design choices.  As such, DDD is an essential tool for the modern game design studio.
In this article, I’ll provide a high-level summary of the DDD process.
The first stage of the process is the planning stage, in which the developer identifies the objectives, timeframe, and limiting factors. The second stage in the process involves making incremental design changes, split (A/B) testing those changes, extracting statistically significant data, then using that data to make well informed design decisions.
This second stage is repeated many times to produce continual improvements in the game design over its lifetime. As such, the process is well-suited to games that have already been created, have an active player-base, and are ready for improvement. For instance, the process suits freemium games and MMOs, both of which receive incremental updates over a number of years. Having said that, the process can be easily altered to facilitate the creation of games from the ground up.
It should be noted that the DDD process is complex and requires quite a bit of learning to master. I’ve simplified it for the purposes of this article.
 Part 1: DDD Planning – “A stitch in time saves over 9,000!”
  1. Primary Goal Selection
    • What are you trying to achieve by modifying the game?
  2. Secondary Goal Selection
    • What are the individual components that collectively constitute the achievement of your primary goal?
  3. Identification of Ideal Player Behaviours
    • What are the individual player behaviours that constitute or indicate the achievement of each of your sub-goals?
    • Each of these sub-goals will be the focus of one stage in the data acquisition process.
  4. Overall Evaluation Criteria Selection
    • Assign a value weighting to each player behaviour
  5. Target Quantification
    • What are the minimum values for quantifiable player behaviour outcomes that count as having achieved each of your sub-goals?
  6. Threshold Quantification
    • What are the minimum values for important player behaviours that are not being focussed on that must still be preserved?
  7. Timeframe Selection
    • How long can you afford to test different design alterations in order to achieve your goal before you need to move on to something else?
    • How much time can you afford to spend on testing for each focus area?
  8. Player Acquisition Method Selection
    • How are you going to get players to play your game?
  9. Identification of Maximum Sample Size and Minimum Confidence level
    • How many players can you get to play the game over the specified time period?
    • What confidence level are you comfortable with settling for?
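Steps 7–9 above trade off against each other: the players you can acquire in your timeframe cap the effect sizes you can reliably detect. As a rough sketch of that relationship, here is the standard two-proportion normal-approximation sample-size formula. The function name, z-score tables, and retention figures below are illustrative, not from the post.

```python
import math

# Common z-scores for two-sided confidence levels and for statistical power.
Z_CONFIDENCE = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}
Z_POWER = {0.80: 0.842, 0.90: 1.282}

def sample_size_per_variant(baseline_rate, min_detectable_lift,
                            confidence=0.95, power=0.80):
    """Players needed in EACH build to detect an absolute lift in a
    conversion-style metric (e.g. day-7 retention), using the standard
    two-proportion normal approximation."""
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_lift
    p_bar = (p1 + p2) / 2
    z_a = Z_CONFIDENCE[confidence]
    z_b = Z_POWER[power]
    numerator = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / min_detectable_lift ** 2)

# Detecting a 2-point lift on a 10% baseline needs a few thousand
# players per build at 95% confidence and 80% power:
n = sample_size_per_variant(0.10, 0.02)
```

If the required sample size exceeds what your acquisition method can deliver in the timeframe, you either lower your confidence level (step 9) or test bigger design changes, which is exactly the rule of thumb given later.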
 Part 2: The Iterative DDD Process – “Reliable data always trumps assumptions.”
  1. Select a secondary goal
    • Which of the sub-goals selected in the planning phase is considered to be the most important aspect of the primary goal?
  2. Form a hypothesis relating a design change to a desired player behaviour change
    • What design change does the design team believe will achieve the desired sub-goal?
  3. Acquire player data
    • Release two builds to a fresh batch of players. One build will contain the original design. The other build will include the design variation.
    • Wait long enough for the player behaviour data to mature (e.g. two weeks)
  4. Convert the data into a meaningful format
    • Translate the player data into the key player behaviour metrics that were selected in the planning phase.
  5. Analyse the data
    • Seek out statistically significant differences in the focus metrics.
    • Understand whether or not they can be attributed to the design change that was tested.
  6. Make the Design Decision
    • If the design change is shown to improve the key metrics in a statistically significant manner, then adopt it.
    • Otherwise, reject it.
  7. Check whether the key metrics meet the minimum threshold requirements.
    • If so, move on to the next sub-goal, and repeat steps 2 – 7.
    • Otherwise, repeat steps 2 – 7 using the same sub-goal.
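The significance check in step 5 can be sketched with a two-proportion z-test comparing the control build against the variant. This is one common test for conversion-style metrics, not necessarily the one Crackerjack used; the function name and numbers are illustrative.

```python
import math

def ab_significant(control_conversions, control_n,
                   variant_conversions, variant_n, z_critical=1.96):
    """Two-proportion z-test. Returns (z, significant) for the
    difference between control and variant conversion rates;
    z_critical=1.96 corresponds to 95% confidence, two-sided."""
    p1 = control_conversions / control_n
    p2 = variant_conversions / variant_n
    # Pooled rate under the null hypothesis that both builds convert equally.
    p_pool = (control_conversions + variant_conversions) / (control_n + variant_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / variant_n))
    z = (p2 - p1) / se
    return z, abs(z) >= z_critical

# 4,000 players per build; the variant lifts a 10% metric to 12%:
z, significant = ab_significant(400, 4000, 480, 4000)
```

If `significant` comes back `False`, the honest move under step 6 is to reject the change, however promising the raw numbers look.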
Rules Of Thumb
The DDD process is a challenging one to adopt due to its inherent complexity and also due to the nature of human psychology. People tend to be impatient, jump onto good news, and avoid bad news like the plague. Here are a few important rules of thumb that reduce the risk of succumbing to human bias.
  • Plan your response to each data outcome before you read the data. Planning doesn’t mean that you are locked down; imperfect plans can be changed, but having a less-than-perfect plan is much better than having no plan at all.
  • The temptation to act on positive data that is not statistically significant or ignore statistically significant negative data can be very strong, so it is wise to decide on your response before the data is seen. Thoroughly construct your Overall Evaluation Criteria. This will force the design team to honestly prioritise each of their goals, and each of their sub-goals, in a more objective manner. The data will inevitably force the design team to make trade-offs between preferences. It is tempting to make quick emotional decisions about these trade-offs when the OEC isn’t properly defined.
  • Test big design changes rather than small ones. You are more likely to get statistically significant data that you can act on that way.
  • Test one design change at a time. If you must test multiple design changes at the same time in order to cut costs then be very mindful of the implications for the reliability of your data.
  • If possible, test on fresh audiences. Old audiences will react to the fact that the design has changed, rather than the change itself. This will obscure your data.
  • Ensure that your data is statistically significant. If you don’t, you have no way of knowing whether the changes in your data are due to random chance, as opposed to being due to the design change that you implemented.
  • Don’t let your gut feeling simply override your test results. Facts are often contrary to common sense and personal experience. Seek to resolve the tension between your results and your intuition by deepening your understanding rather than discounting results without a very good reason.
  • Seek to understand why your metrics are changing. Misinterpreting metric changes will lead to poor design decisions later down the track.
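The Overall Evaluation Criteria mentioned above can be sketched as a weighted sum over the behaviour metrics chosen in the planning phase. The behaviour names and weights below are hypothetical, and the sketch assumes each metric has already been normalised to a comparable 0–1 scale so that no single behaviour dominates by units alone.

```python
# Hypothetical behaviour weights agreed by the design team during planning.
OEC_WEIGHTS = {
    "day7_retention": 0.5,
    "sessions_per_week": 0.3,
    "referrals_sent": 0.2,
}

def oec_score(metrics, weights=OEC_WEIGHTS):
    """Collapse several normalised player-behaviour metrics into a
    single Overall Evaluation Criteria score via a weighted sum."""
    return sum(weights[name] * value for name, value in metrics.items())

# Two builds that trade off against each other: the variant improves
# retention and referrals but loses a little session frequency.
control = {"day7_retention": 0.10, "sessions_per_week": 0.60, "referrals_sent": 0.30}
variant = {"day7_retention": 0.12, "sessions_per_week": 0.55, "referrals_sent": 0.40}
variant_wins = oec_score(variant) > oec_score(control)
```

Because the weights were fixed before the data arrived, the comparison is mechanical: whichever build scores higher wins, which is precisely what removes the temptation to make quick emotional decisions about trade-offs.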
- Asher
Asher undertook an internship at Crackerjack Games. His work in establishing a unique framework and process for validating data to significance was phenomenal. If you would like further details on anything mentioned in this post, please get in touch. You can check out Ash’s website HERE.
