COMMUNICATION

Watch Out for these 16 Mistakes with Measuring Behavior (and How to Fix Them)

The purpose of this list is to highlight common assumptions and limitations of different methodological approaches to measuring behavior. If you have others, please comment and I will add them to the list…

1. THERE IS NO ‘BEST’ MODEL OR MEASUREMENT TECHNIQUE.

There is no gold-standard theory or magic-bullet method for understanding or predicting behavior. Human behavior involves habits, automatic responses, and conscious choices and calculations, and it is embedded in complex social environments and cultures. Behavior is complex and cannot be reduced to a single measure or model. In fact, there are over 115 behavior change theories (Kwasnicka et al., 2016) and over 90 behavior change techniques (Michie et al., 2013).

Take Away: If you have a nail, use a hammer…if you have a screw, use a screwdriver. Theories, models, and measurements all have their benefits when applied in appropriate contexts.

2. DON’T ASSUME INTENTION IS A GOOD PREDICTOR OF BEHAVIOR (OR SALES)

Although intentions are construed as a key predictor of behavior in business and science, evidence and intuition indicate that people often fail to cross the ‘intention-behavior’ gap (Sheeran, 2002). Even strong intentions are often NOT translated into action (Sheeran & Webb, 2016). Yet intention is still routinely used because it is easy, cheap, and possesses some merit. Is intention optimal? No. Can intention ‘satisfice’ given the business question, time, and budget constraints? Absolutely. To increase the accuracy of intention measures, see Chandon et al., 2005 and Morwitz et al., 2007.

Things to Consider: type of product, framing of the intention question, choice of anchor scale, how much experience respondents have with the behavior, intention strength, and intention stability.

3. DO PEOPLE KNOW WHY THEY DO WHAT THEY DO?

If you ask people why they do what they do, they can almost always produce an answer, often stated both confidently and incorrectly. Our beliefs about why a behavior was performed may bear no relationship to its actual cause. Read Nisbett and Wilson’s “Telling More Than We Can Know” (1977), which reviewed dozens of studies showing that people often have no idea why they do the things they do. This is a problem for researchers who ask people questions and expect correct answers. Scientists have been finding ways to improve the quality of self-reports (e.g., structuring response formats, question wording, and the order of preceding questions).

3 Resources to improve the validity of self-reports…

4. DON’T ASSUME NON-CONSCIOUS MEASURES ARE SUPERIOR TO CONSCIOUS ONES.

The realization that much of our mental life occurs without conscious awareness spawned the popularity of ‘System 1’, non-conscious, and implicit social cognition research. For those who need elaboration, check out ‘the unbearable automaticity of being’ (Bargh & Chartrand, 1999). For a more balanced view of conscious and unconscious processes, I recommend Baumeister & Bargh. Yet despite much of mental life involving non-conscious processes, our understanding of how to reliably measure them is quite limited (Gawronski & De Houwer, 2014). Surfacing the subconscious ‘System 1’ introduces many obstacles:

  1. Do people know what they are measuring? Everyone claims to tap into the ‘subconscious’, but do they really know what exactly they are measuring: awareness, volitional control, cognition, effort, intentionality, deliberation?
  2. More Room for Error. Even if you know what you are measuring, non-conscious measures increase the opportunity for measurement error: you must select the right tool for the research, use the correct procedure, and conduct the right analysis and interpretation.
  3. Pragmatic Limitations. Implicit tests are often more expensive, cumbersome, and slow.

Take Away: Would you want your eye doctor performing your dental work? Or your finance manager running your next ad campaign? Make sure providers actually understand the scope and limitations of implicit research methods. Applying implicit measures universally is simply a waste of time and money.

5. BLINDED BY OUR OWN BIASES.

Many researchers implicitly assume their research designs do not affect their research outcomes. However, this assumption is not consistent with the seminal work of Tversky and Kahneman (1986) on framing. Results are not purely due to the phenomenon under study; they are partly shaped by the experimental design itself.

I propose amending Kahneman’s “What you see is all there is” (WYSIATI) to “What you BELIEVE is all you see” (WYBIAYS). Whatever your paradigm or mental worldview, it shapes what you see (Bruner & Postman, 1949). In other words, our beliefs increase the accessibility and availability of instances that fit our worldview. If we believe in creativity and design, we see stories; if we believe in cognitive science, we seek causality. The point is, our research output is partly a product of our own minds and methods, and whatever is predictable from our priors is rarely insightful.

For solutions on how to structurally overcome biases, see Biased by Own Biases.

6. CHOICES WITHOUT A CONTEXT.

If I ask what you are wearing tonight, your answer would hopefully involve some insight into the place we are going and the people we will be with. Every decision involves a context, and yet research can easily get caught understanding cognition at the expense of the context. Time to bring back Herbert Simon’s bounded-rationality scissors and the next wave of contextual research: grounded cognition (Barsalou, 2008).

For solutions on how to improve your choice research, see New Perspectives into How Consumers Choose.

Cognition is situated. Cognitive activity takes place in the context of a real-world environment, and it inherently involves perception and action (Wilson, 2002).

7. DON’T ASSUME MORE COMPLEXITY IS BETTER

Even when things can be made simple, researchers can always formulate a complex answer. Adding complexity can actually reduce predictive power. How? This is a common issue in behavioral economics, with scientists adding parameters to expected-utility models, which increases data-fitting power but generally decreases predictive power because of increasing estimation error (Brighton & Gigerenzer, 2015). Gigerenzer calls this the “bias bias”, but the underlying issue is the bias-variance trade-off.

What to do?

  • Increase Predictive Accuracy with Machine Learning: Principles and techniques from the field of machine learning can help psychology become a more predictive science (see Yarkoni & Westfall, 2017).
  • Use Occam’s razor: start with the simplest (no-change) model and incrementally add complexity only if there is experimental evidence to support the complication.
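
To make the bias-variance point concrete, here is a toy sketch (my own illustration, not from the cited papers): a two-parameter line versus a model that memorizes the training data. The memorizer fits the training sample perfectly yet predicts fresh data worse.

```python
import random

random.seed(42)

def make_data(n):
    # true relationship: y = 2x + Gaussian noise (sd = 3)
    xs = [random.uniform(0, 10) for _ in range(n)]
    ys = [2 * x + random.gauss(0, 3) for x in xs]
    return xs, ys

train_x, train_y = make_data(30)
test_x, test_y = make_data(200)

# Simple model: ordinary least-squares line (two parameters)
n = len(train_x)
mx, my = sum(train_x) / n, sum(train_y) / n
slope = sum((x - mx) * (y - my) for x, y in zip(train_x, train_y)) / sum(
    (x - mx) ** 2 for x in train_x)
intercept = my - slope * mx

# Complex model: memorize the training set (predict the y of the
# nearest training x -- zero training error, but high variance)
def nearest(x):
    return min(zip(train_x, train_y), key=lambda p: abs(p[0] - x))[1]

def mse(predict, xs, ys):
    return sum((predict(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

simple_err = mse(lambda x: intercept + slope * x, test_x, test_y)
complex_err = mse(nearest, test_x, test_y)
print(f"simple model test MSE: {simple_err:.1f}")
print(f"memorizer test MSE:    {complex_err:.1f}")
```

The extra “parameters” of the memorizer buy nothing but estimation error, which is exactly the bias-variance trade-off at work.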

8. DON’T NEGLECT SAMPLE SIZE

All too often we let our inner “intuitive statistician” determine our sample sizes and the generalizability of our results. Whether it’s quant or qual, sample size matters. Too small a sample may prevent findings from being extrapolated (a qualitative issue), whereas too large a sample may amplify the detection of differences, flagging statistical differences that are not meaningfully relevant (a quantitative issue).
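
As a rough starting point for quantitative work, the textbook normal-approximation formula for a two-group comparison, n ≈ 2(z₁₋α/₂ + z₁₋β)² / d², can be computed in a few lines (the function name is my own):

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    # Normal-approximation sample size for a two-group mean comparison:
    # n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2, where d is Cohen's d
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96
    z_b = NormalDist().inv_cdf(power)          # about 0.84
    return math.ceil(2 * (z_a + z_b) ** 2 / d ** 2)

# Cohen's conventional small / medium / large effect sizes
for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: n = {n_per_group(d)} per group")
```

Note how quickly the required n grows as the expected effect shrinks; an honest guess at effect size beats the intuitive statistician every time.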

For a helpful guide on determining sample size see here.

9. DON’T ASSUME TIME IS STATIC

Human time perception does not follow a static, fixed scale. If it did, people would never be late, work would never “drag”, and vacations wouldn’t “fly by”. We have all heard “be there in 5 mins” or “see you in one week” and know people interpret time differently. Empirical data show that “today” doesn’t carry the same weight as any other day, and this affects the output of decisions (Lucci, 2013).

Take Away: Time perception is elastic and modeling behavior would benefit from accounting for it. Consider quantifying subjective perceptions of time, spatial distances, and how people anchor in time.

10. PREDICTING BEHAVIOR vs. PREDICTING BEHAVIOR CHANGE – THEY ARE NOT THE SAME

People often think behavior and behavior change are one and the same. Past data may reliably predict future behavior but not necessarily what is required to change behavior (which is often what we are interested in). A variable that predicts behavior does not necessarily indicate that interventions changing that same variable will cause changes in behavior (Sheeran, Harris, & Epton, 2014). For example, if I routinely buy Crest toothpaste, data may indicate that the best predictors of purchase are price and flavor, but changing them may not drive change. This may explain why many behavior change efforts fail.

11. DON’T ASSUME THE PAST IS THE BEST PREDICTOR OF BEHAVIOR

It is not hard to see how much of our daily lives involves planning, preparing, and behaving for our future. Yet despite much of our lives being centered on the future, we rarely consider how it impacts our present behavior, that is, how our future influences our present (reverse causality) (Buckner & Carroll, 2007). We largely rely on Newtonian-style models that consider past states (memory) and present states but never allow future states of the system to affect present changes of state.

Things to consider: systematically quantifying consumer futures, including future preferences, desires, goal states, motivations, and anticipatory emotions.

The future influences the present just as much as the past. – Friedrich Nietzsche

12. STATISTICAL vs. BUSINESS SIGNIFICANCE: THEY ARE NOT THE SAME

Our findings were “statistically significant”. So what? What is statistically significant may be insignificant to managers, and what is not statistically significant may be strategically important to them. Statistical significance only tells us the likelihood that a relationship exists among the things we measured, not whether it is important for our business purposes. Moreover, the significance level obtained is strongly influenced by subjective decisions made by the researcher (for more, see Sawyer & Peter, 1983).
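
To see how sample size alone can manufacture “significance”, consider the same tiny lift in conversion rate tested at two sample sizes (an illustrative two-proportion z-test of my own, not from the cited paper):

```python
from statistics import NormalDist

def two_prop_p_value(p1, p2, n1, n2):
    # Pooled two-proportion z-test (two-sided p-value)
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# The same 0.2-point lift in conversion rate (10.2% vs 10.0%)
p_modest = two_prop_p_value(0.102, 0.100, 2_000, 2_000)        # modest sample
p_huge = two_prop_p_value(0.102, 0.100, 2_000_000, 2_000_000)  # huge sample
print(f"n = 2,000 per arm:     p = {p_modest:.3f}")
print(f"n = 2,000,000 per arm: p = {p_huge:.2g}")
```

The effect is identical in both cases; only the p-value changed. Whether a 0.2-point lift matters is a business question, not a statistical one.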

Know the application and limitations of statistics for business; see How to Apply Statistics for Business Decision-Making

“Reliance on merely refuting the null hypothesis…is basically unsound, poor scientific strategy, and one of the worst things that ever happened in the history of psychology.” (Meehl, 1978)

13. NEUROSCIENCE, BIOMETRICS & VOODOO CORRELATIONS

If somebody has a high heart rate, did they just finish a run or did they forget to take their blood pressure medication? Misinterpreted neuroscience and biometric methods have become widespread in practice (Baron et al., 2017) and in science (Vul et al., 2009). We cannot assume brain activations (EEG, fMRI), arousal and excitement (heart rate / galvanic skin response), smiles and laughs (facial recognition), or gaze (eye tracking) are correlated with the meaningful outcomes businesses care about (purchase behavior, consumer choice, emotion, sales).
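
The circularity Vul et al. describe is easy to simulate: generate thousands of “voxels” of pure noise, select the one most correlated with behavior, and report that correlation. In this toy sketch (my own, with arbitrary sizes), an impressively large correlation appears out of nothing:

```python
import random

random.seed(1)
N_SUBJECTS = 16  # small samples are typical in neuroimaging studies
behavior = [random.gauss(0, 1) for _ in range(N_SUBJECTS)]

def corr(xs, ys):
    # Pearson correlation, computed directly
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# 10,000 noise "voxels": no true relationship to behavior exists
voxels = [[random.gauss(0, 1) for _ in range(N_SUBJECTS)]
          for _ in range(10_000)]

# Non-independent analysis: pick the voxel BY its correlation, then report it
best_r = max(abs(corr(v, behavior)) for v in voxels)
print(f"best |r| among pure-noise voxels: {best_r:.2f}")
```

Selecting measures and reporting effects on the same data inflates correlations; independent validation data is the cure.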

A potential solution is using integrative approaches combining objective and subjective methods – see more here: How to Measure and Manage Emotional Experiences

14. NOT VISUALIZING DATA

Data visualization is not simply “pretty pictures” to help communicate results; it is also a tool for interpreting an analysis. A case in point is Anscombe’s Quartet, where four different data sets each produce the same summary statistics (same means, standard deviations, and correlation), which could lead one to conclude the datasets are quite similar. However, once the four data sets are plotted, it becomes clear that they are markedly different. For more on visualization see here.
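
You can verify Anscombe’s point directly. The four published data sets share nearly identical means, standard deviations, and correlations, despite looking completely different when plotted (a quick check in Python using the standard quartet values):

```python
from statistics import mean, stdev

def corr(xs, ys):
    # Pearson correlation
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Anscombe's Quartet (sets I-III share the same x values)
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = {
    "I":   (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  ([8] * 7 + [19] + [8] * 3,
            [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

for name, (xs, ys) in quartet.items():
    print(f"{name:>3}: mean_y = {mean(ys):.2f}  sd_y = {stdev(ys):.2f}  "
          f"r = {corr(xs, ys):.3f}")
```

The summary table is identical to two decimal places; only a plot reveals the curve, the outlier, and the vertical line hiding inside.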

…make both calculations and graphs. Both sorts of output should be studied; each will contribute to understanding. -Anscombe

15. MIXING MOBILE & COMPUTER RESPONDENT DATA

What role do touch interfaces play in consumer decision-making and behavior? Interface psychology and sensory marketing research show touchscreen interfaces have systematic effects on consumer choices and information processing (Brasel & Gips, 2014). As we move from mice and trackpads to touchscreens, we need to account for how different device interfaces and touch gestures impact our market research and respondent experience.

Researchers should consider recording: 1) Device interface used in study protocols (smartphone, tablet, etc.), 2) Device ownership, and 3) Haptic attributes of the product under study.

For more see: Touchscreens Change Consumer Choices (and bias mobile research results)

16. NOT ACCOUNTING FOR “BIOMECHANICS” OF CHOICE

When modeling consumer decision-making, the best choice depends NOT only on potential outcomes (economics) but also on the effort associated with each action (biomechanics) (Cos et al., 2014). Don’t confuse biomechanics with convenience! For example, we drive around parking lots to find a closer spot even when farther spots are available (wasting time). We are also willing to wait and pay more for an Uber just to avoid moving.

Take Away: Start accounting for the perceived energy demands (biomechanical costs) of a choice, which, in many cases, outweigh convenience and economic utility.

Citations

  1. Bargh, J. A., & Chartrand, T. L. (1999). The unbearable automaticity of being. American Psychologist.
  2. Baron, A. S., Zaltman, G., & Olson, J. (2017). Barriers to advancing the science and practice of marketing. Journal of Marketing Management.
  3. Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology.
  4. Brasel, A. & Gips, J. (2014). Tablets, touchscreens, and touchpads: how varying touch interfaces trigger psychological ownership and endowment. Journal of Consumer Psychology.
  5. Brighton, H., & Gigerenzer, G. (2015). The bias bias. Journal of Business Research.
  6. Bruner, J. S., & Postman, L. (1949). On the perception of incongruity: A paradigm. Journal of personality.
  7. Buckner, R. L., & Carroll, D. C. (2007). Self-projection and the brain. Trends in cognitive sciences.
  8. Chandon, P., Morwitz, V. G., & Reinartz, W. J. (2005). Do intentions really predict behavior? Self-generated validity effects in survey research. Journal of Marketing.
  9. Cos, I., Duque, J., & Cisek, P. (2014). Rapid prediction of biomechanical costs during action decisions. Journal of neurophysiology.
  10. Gawronski, B., & De Houwer, J. (2014). Implicit measures in social and personality psychology. Handbook of research methods in social and personality psychology.
  11. Kahneman, D., Diener, E., & Schwarz, N. (Eds.). (1999). Well-being: Foundations of hedonic psychology. Russell Sage Foundation.
  12. Kwasnicka, D., Dombrowski, S. U., White, M., & Sniehotta, F. (2016). Theoretical explanations for maintenance of behaviour change: a systematic review of behaviour theories. Health psychology review.
  13. Lopes, L.L. (1991). The rhetoric of irrationality. Theory and Psychology.
  14. Lucci, C. R. (2013). Time, self, and intertemporal choice. Frontiers in Neuroscience.
  15. Meehl, P.E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting & Clinical Psychology.
  16. Michie, S., et al. (2013). The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques: building an international consensus for the reporting of behavior change interventions. Annals of behavioral medicine.
  17. Morwitz, V. G., Steckel, J. H., & Gupta, A. (2007). When do purchase intentions predict sales?. International Journal of Forecasting.
  18. Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological review.
  19. Nosek, B. A., Hawkins, C. B., & Frazier, R. S. (2011). Implicit social cognition: From measures to mechanisms. Trends in cognitive sciences.
  20. Sawyer, A. G., & Peter, J. P. (1983). The significance of statistical significance tests in marketing research. Journal of marketing research.
  21. Schmidt, F. L. (1996). Statistical significance testing and cumulative knowledge in psychology: Implications for training of researchers.
  22. Schwarz, N. (1999). Self-reports: How the questions shape the answers. American psychologist.
  23. Schwarz, N. (2003). Self-reports in consumer research: The challenge of comparing cohorts and cultures. Journal of Consumer Research.
  24. Sheeran, P. (2002). Intention—behavior relations: A conceptual and empirical review. European review of social psychology.
  25. Sheeran, P., Harris, P. R., & Epton, T. (2014). Does heightening risk appraisals change people’s intentions and behavior? A meta-analysis of experimental studies. Psychological bulletin.
  26. Simon, H. A. (1990). Bounded rationality. In Utility and probability (pp. 15-18). Palgrave Macmillan UK.
  27. Sterne, J. A., & Smith, G. D. (2001). Sifting the evidence—what’s wrong with significance tests? Physical Therapy.
  28. Tversky, A., & Kahneman, D. (1986). Rational choice and the framing of decisions. Journal of business.
  29. Wilson, M. (2002). Six views of embodied cognition. Psychonomic bulletin & review.
  30. Vul, E., Harris, C., Winkielman, P., & Pashler, H. (2009). Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition. Perspectives on psychological science.
  31. Wood, W., Quinn, J.M., & Kashy, D. (2002). Habits in everyday life: Thought, emotion, and action. Journal of Personality and Social Psychology.
  32. Yarkoni, T., & Westfall, J. (2017). Choosing prediction over explanation in psychology: Lessons from machine learning. Perspectives in Psychological Science.

Jason Martuscello

Jason lives and breathes behavior change. His personal transformation losing over 100 lbs drives his curiosity to source the latest science to deliver cutting edge solutions. His work cuts through the jargon, to provide unique insight, and applied solutions to today's most pressing business problems. Jason holds an MSc and an MBA.
