Traditional Surveys

At Split Second Research, we also offer more traditional approaches to survey design. We can provide self-report questionnaires and time-limited forced-choice tests with a range of response options (multiple-choice, multiple-option, drag-and-drop, open text box, and so on), all very useful for market research. As well as using this approach for product evaluation, we use it to obtain respondent demographics (age, customer type, and so on) and psychographics (measures of personality or buying style, etc.).

Traditional surveys certainly have their value, especially when we have no reason to doubt that respondents are answering truthfully and with a good sense of self-insight. Traditional surveys can also be run alongside implicit surveys, giving us a broad understanding of how people feel. Are they overstating some feelings and understating others? Are they overly influenced by others? This combination can also answer questions about the level at which to appeal to existing or new customers, creating better marketing opportunities.

However, we do need to be sure that the survey does not lead to biased responses, and this is where Split Second has top-drawer expertise – we know what causes bias in surveys. Some of the most common sources of bias include:

  • Respondents who simply want to get through a test as quickly as possible to receive a reward may use any strategy available to them for doing so. One of these is ticking boxes more or less at random. This is quite easy to check for (see the sketch after this list), and when it occurs we ask respondents to follow the instructions carefully and start over, or to quit the survey.
  • Respondents may merely attempt to present themselves in favourable ways (claiming, for instance, that they always make logical choices and are never prone to impulse buying or emotional decisions). The result is that the research is not an accurate account of how people feel and behave when making purchasing decisions. To counter this we can introduce a ‘lie scale’ to see whether they are trying to present an ideal self (though we may still miss impulse or unplanned purchase behaviour and the emotions that lie behind it).
  • Respondents may try to appear consistent in their responses. Although this produces ‘neat’ data, it tends to produce ceiling effects – or simply data that is too good to be true. It usually happens when respondents are asked to identify their ‘favourite’ brands and simply select one of several plausible candidates. Once they have ‘committed’ themselves to a product, they try to be fastidiously consistent. However, psychologists are aware that people’s attitudes often contain inconsistencies and contradictions, and we have subtle checks to test for this.
  • Respondents may try to be ‘kind’ to the researcher. Most people who participate in market research receive a monetary incentive to do so, and may feel obliged to give brand-favourable responses. This can be counteracted by disguising which brand is the main focus of the study.
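To make the first of these checks concrete, the sketch below shows the kind of simple data-quality screening that can flag random clicking, over-consistent ‘straight-lining’, and high lie-scale scores. It is a minimal illustration only: the column names (item ratings, per-item response times, lie-scale items) and thresholds are hypothetical, not our production pipeline.

    import pandas as pd

    ITEMS = [f"q{i}" for i in range(1, 11)]    # hypothetical 1-5 Likert rating columns
    TIMES = [f"rt{i}" for i in range(1, 11)]   # hypothetical per-item response times (seconds)
    LIE_ITEMS = ["lie1", "lie2", "lie3"]       # hypothetical 'lie scale' items, rated 1-5

    def flag_speeding(df, min_seconds=1.0):
        # Respondents whose median per-item time is implausibly fast are
        # likely clicking through at random to reach the reward.
        return df[TIMES].median(axis=1) < min_seconds

    def flag_straightlining(df, min_sd=0.5):
        # Respondents who give (almost) the same rating to every item show
        # the over-consistent, 'too good to be true' pattern.
        return df[ITEMS].std(axis=1) < min_sd

    def flag_ideal_self(df, cutoff=4.5):
        # A high mean score on lie-scale items suggests the respondent is
        # presenting an idealised self rather than reporting honestly.
        return df[LIE_ITEMS].mean(axis=1) > cutoff

    def quality_report(df):
        # One row per respondent, one boolean column per check.
        report = pd.DataFrame({
            "speeding": flag_speeding(df),
            "straightlining": flag_straightlining(df),
            "ideal_self": flag_ideal_self(df),
        })
        report["any_flag"] = report.any(axis=1)
        return report

Respondents flagged by any check can then be asked to restart the survey, or be set aside before analysis.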

Of course, while such biases can be examined in the data, the truth is that in many cases respondents may only be consciously aware of global, generalised feelings and of specific memories (particular instances of their past behaviour). The result is that the feelings expressed may lack detail and be based on the first memory that comes to mind, rather than an accurate summary of their behaviour. When this happens in a survey, respondents may overgeneralise a positive (or negative) feeling, so that all questions about that product are rated or answered in the same way (e.g., if they rate one feature of a product with the top mark, they rate all features with the top mark). This may be why many self-report surveys yield data in which consumers have clearly not discriminated very well among brands or among the features of brands.
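One way to see this pattern in the data is to look at how much each respondent’s ratings vary across a product’s features: near-zero spread suggests a single global feeling is driving every answer. The sketch below is illustrative only; the feature names are hypothetical.

    import pandas as pd

    FEATURES = ["taste", "price", "packaging", "availability"]  # hypothetical 1-5 feature ratings

    def halo_candidates(df, max_spread=0):
        # A spread (max minus min) of zero means every feature received exactly
        # the same rating, e.g. straight top marks from one generalised feeling.
        spread = df[FEATURES].max(axis=1) - df[FEATURES].min(axis=1)
        return df[spread <= max_spread]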

Another problem for self-report surveys is that respondents may simply hide their true feelings. This can happen when the survey asks about behaviours that might be embarrassing, illegal, or socially less acceptable, but it can also happen if the respondent is concerned about the security of their data and how it might be used. Respondents may also want to hide their feelings for no reason other than that they simply do not want to divulge the information. If they take this stance, there is very little a self-report survey designer can do about it.

Finally, it is worth stating that many attitudes and feelings can be very difficult to verbalise. A respondent may be aware of their feelings towards a brand but find it difficult to put them into words. For example, some people prefer Heinz Baked Beans over other brands yet may be unable to say why – choosing Heinz has simply become a habit. Some people are extremely good at writing about their feelings and can seem unusually self-aware (e.g., authors, journalists, scriptwriters, and people high in verbal intelligence), but most are not.

If there are too many uncontrolled biases in the survey, each problem will contribute a significant amount of noise to the data, resulting in weak discrimination between the brands or design routes being tested. Drawing conclusions from this kind of data becomes hazardous.
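A toy simulation illustrates the point: as uncontrolled biases add noise to the ratings, the measurable gap between two brands shrinks relative to that noise, even though the true underlying preference never changes. All numbers below are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    true_gap = 0.5  # invented 'true' preference difference between brand A and brand B

    for noise_sd in (0.5, 1.0, 2.0):
        brand_a = 3.5 + true_gap + rng.normal(0, noise_sd, n)
        brand_b = 3.5 + rng.normal(0, noise_sd, n)
        # Cohen's d: the observed gap expressed in units of the response noise.
        pooled_sd = np.sqrt((brand_a.var(ddof=1) + brand_b.var(ddof=1)) / 2)
        d = (brand_a.mean() - brand_b.mean()) / pooled_sd
        print(f"noise sd {noise_sd}: effect size d = {d:.2f}")

As the noise grows, the same true gap becomes progressively harder to detect, which is exactly why uncontrolled bias makes conclusions hazardous.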

In conclusion, explicit self-report measures clearly have their place in market and consumer research, namely when the respondent is genuinely willing to answer the question and can access the relevant knowledge needed for an accurate answer.

Used alongside implicit association tests, this approach promises to offer a deeper understanding of attitudes and feelings towards brands, products, and concepts, and some of the problems mentioned above might be circumvented.
