Measuring attitudes that predict behaviours

It’s pretty typical to come across surveys asking about attitudes in evaluations. These survey results are often (though not always) used to make inferences about participants’ behaviours. How valid is this approach, and are there ways to structure attitudinal questions so that they are more likely to predict behaviour?

In a lot of circles, it is accepted wisdom that attitudes don’t predict behaviours. The classic study on this question is LaPiere (1934). LaPiere, a sociology professor at Stanford University, spent two years traveling in the U.S. with a Chinese couple. Over those two years, they visited 251 hotels and restaurants and were treated hospitably at all but one. LaPiere found this surprising, and when he returned home he mailed a survey to all of the businesses they had visited, asking: “Will you accept members of the Chinese race in your establishment?” Of the 128 businesses that responded, 92% answered no. This study was seminal in establishing the view that attitudes don’t match behaviours, and it is still discussed in undergraduate social psychology and sociology classes.

Over the years, it has been debated whether LaPiere’s study truly shows a discrepancy between attitudes and behaviours, or whether it simply shows that surveys often measure general attitudes (e.g., in general, would you allow members of the Chinese race in your business?) rather than specific attitudes (e.g., would you allow this specific Chinese couple in your business?). Specific attitudes, the argument goes, are more likely to predict actual behaviour.

This notion is related to what is known as the Theory of Compatibility (Ajzen & Fishbein, 2005). Simply put, this theory states that attitudes are more likely to predict behaviour when the two are measured at the same level of specificity. For example, general attitudes toward organ donation are quite positive, but the actual number of people who register as donors is low – a discrepancy that has frustrated and confused researchers. But when Siegel et al. (2014) asked about attitudes specific to registering as a donor, they found they could explain over 70% more of the variance in actual registration rates. A meta-analysis of 88 studies provides further evidence: when the theory of compatibility was adhered to, the average correlation between attitudes and behaviours was r = 0.50; when it wasn’t, the correlation was only r = 0.14.
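To put those two correlations in perspective, squaring r gives the proportion of variance in behaviour that attitudes account for. (This is a standard interpretation of a correlation coefficient on my part; the figures below are derived from the correlations above, not reported in the meta-analysis itself.)

$$r^2 = 0.50^2 = 0.25 \quad \text{(25\% of the variance in behaviour explained)}$$
$$r^2 = 0.14^2 \approx 0.02 \quad \text{(about 2\% explained)}$$

In other words, measuring attitudes at a compatible level of specificity is associated with more than ten times the explanatory power.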

So what does this mean for measurement in evaluation? First, as with most measurement questions, I would suggest looking at the theory of change. What is the program actually trying to accomplish – a change in attitudes, a change in behaviour, or both? Often, there is an assumption that providing participants with knowledge on a topic (e.g., what healthy eating habits are) will result in attitude change (e.g., “I should eat more healthy foods”), which will then result in behaviour change (e.g., participants increase their intake of healthy foods). This sequence is known as a results chain.

Keeping with the above example, let’s say you are measuring the impact of a healthy eating workshop and will be delivering a survey immediately following the workshop. That timing means you can’t assess the impact on behaviours – your only options are knowledge and attitudes. How can we use the theory of compatibility to increase the chance that our attitude questions will actually predict behaviour? Rather than asking about general attitudes toward healthy eating (e.g., “How important do you think it is to eat healthy foods?”), we should ask about specific attitudes (e.g., “How important do you think it is for you to eat 7-8 servings* of fruits and/or vegetables per day?”).

I’m curious about how others approach this in their evaluations. Do you generally measure attitudes, behaviours, or both?

*For the sake of this example, I used the guidelines from Canada’s Food Guide for an adult female, although that resource is certainly not without its controversy.