One of the problems in survey research is bias in attitude measurement. In the past several years, a new approach has become available to food scientists that deals with survey topics more deeply and with less bias than focus groups on the one hand and direct question-and-answer surveys on the other.
Rather than directly asking subjects about certain attributes and their value, conjoint analysis follows a less direct, more natural approach, allowing the topic to be embedded within the context of an issue, i.e., a concept, rather than posed as a single question, such as “Are you concerned about fat in your diet?” The respondent might not know how to answer such a question and might give all sorts of answers, objections, and politically correct statements that befuddle the issue. Conjoint analysis avoids this problem.
Conjoint analysis was first documented by Luce and Tukey (1964). The field of mathematical psychology was exploring the rules of choice and tradeoffs made by individuals. The inquiry led to the development of approaches to set up a conjoint test, and then to strategies to collect and efficiently analyze the data.
Since the method required sophisticated statistical analysis that followed a fairly involved approach to conducting the research, it was initially reserved for high-budget issues, such as pricing research.
Paul Green and associates at the Wharton School of Business began to apply these methods to concepts in the 1970s (Green and Srinivasan, 1978; Rosenbaum, 1987). Advances in computer and software technology began to change things in the mid-1980s, when Sawtooth Software Inc., Ketchum, Idaho (www.sawtoothsoftware.com) began to popularize a software approach to conjoint analysis and a growing body of researchers began to perform more analyses. In the early 1990s, the IdeaMap® conjoint analysis software appeared and significantly expanded the range to hundreds of concept elements; Moskowitz Jacobs, Inc. (www.mji-designlab.com) subsequently offered on-line access to this software via its application service provider, www.ideamap.net.
Conjoint analysis has been used widely, “battle-tested” in numerous applications, and validated (Hunt et al., 1995; McLauchlan, 1992). It enjoys an enviable level of acceptance in marketing, marketing research, experimental psychology, and econometrics (Wittink and Cattin, 1989).
Migration into the R&D world for product development has occurred only recently. This has to do with cost issues mentioned above and with the more “hands-on” approach of food researchers. With a continuing need for the product development function of companies to drive innovative and creative product solutions, it is only appropriate that this “new” world of complex problem solving begin to attract the food product developer.
Conjoint analysis is an approach to finding out what components in a mixture do, through the measurement of the mixture. This approach enables a structured way to think about responses to stimuli and to look at the components systematically. The researcher systematically deconstructs any idea or message into its components and discovers the unique contributions of each component.
The traditional conjoint analysis practiced in the 1970s involved a fairly cumbersome approach, called “pair-wise tradeoff” analysis, in which the respondent evaluates pairs of items and for each pair chooses one of the two. For the analysis to work well, many pairs of items had to be evaluated, the evaluations tabulated, and extensive modeling conducted on the tabulations to generate a set of scale values representing the choices. The mathematical model would thus reveal the level of contribution (“utility value”) of each item in the study.
Today’s conjoint analysis is more rapid, more user-friendly, and more intuitive. A common approach today is a “full-profile” conjoint study, in which systematically varied combinations of features/benefits (“concepts”) are presented to respondents, who rate each concept. The term “full profile” refers to the fact that respondents evaluate a full set of elements together rather than different elements one at a time or as a paired comparison. After the evaluation, the data are analyzed by ordinary least-squares regression, a standard procedure available in most statistical packages.
The remainder of this article will illustrate the use of conjoint analysis by developing a study and analyzing it by the full-profile approach. There are other approaches such as hybrid conjoint models and self-explicated choice models, but they are beyond the scope of this article.
Application to Product Development
For this example, we will use data from a study conducted to help a food scientist to develop a new chocolate product that can meet certain marketing objectives (Moskowitz et al., 2002). In this early stage of product design, the food scientist wanted to understand what a really good chocolate product would be and what would make the product even more desirable (“craveable”).
There are five steps in creating the conjoint study:
1. Choose the Elements of the Study Design. The product developer must consider what questions need to be answered, to select the specific elements to include in the design. In this example, we considered four categories (collections of related elements) with nine different feature options in each, for a total of 36 elements (Table 1). This array of elements provides a rich matrix of features to explore. The elements can be such things as a name, photo, benefit, taste, package, emotional response, etc. The categories were key product descriptor, secondary descriptor, emotional descriptor, and brand/benefit descriptor.
This example assumes that the food scientist has gotten some marketing terms and brands to consider and is thinking about how the product will be created for the consumer. The terms used must be meaningful to the food scientist and interesting to the consumer (to maintain interest and get valid data).
Elements for categories 1 and 2 were provided by product developers and focus on elements of the product design that the product developers can control, whereas elements for categories 3 and 4 came from marketing and market research who need to understand how to manage the more emotional components of the product design and how brands or benefits might work in conjunction with the product.
2. Design the Experiment. The 36 elements are combined into 60 different concepts to allow statistical evaluation of the results. The concepts are prepared by combining two or more elements (one or none from each category) into a concept, e.g., E1 + E10 + E19, or E19 + E30, or E5 + E10 + E22 + E31.
The specific set of 60 combinations differs for each person, even though they are based on the same set of 36 elements. This approach reduces the bias that might occur if by coincidence one particular combination does really well or poorly because of the accidental combination of elements. The approach also allows the researcher to measure interactions among elements later.
The experimental design provides greater insight into the output results, since the performance of individual elements of the study can be revealed, similar to the principles used for product development projects. Whether the researcher deals with words, images, “sound bites,” or other stimuli, the researcher discovers the contribution of individual elements from the respondent’s perspective, even if the respondent cannot articulate the reasons for that performance.
The respondents do not know that the elements are being systematically varied. Rather, they see the overall product concept and react to it. This approach allows the respondents to react in a way that approximates a measure of the tradeoffs that respondents consider, and which elements they value the most, in a way that is natural. The design structure ensures that the elements are statistically independent of each other.
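The concept-construction rule described in step 2 (one element or none from each category, with at least two elements per concept, and a different randomized set for each respondent) can be sketched in a few lines of Python. This is an illustrative simplification with placeholder element labels E1–E36: a production study would use a balanced experimental design so that each element appears equally often and elements remain statistically independent, rather than the simple random draw shown here.

```python
import random

# Placeholder labels: four categories of nine elements each, numbered
# E1-E36 as in Table 1 (category 1 = E1-E9, category 2 = E10-E18, etc.)
categories = [[f"E{c * 9 + i + 1}" for i in range(9)] for c in range(4)]

def build_concept(rng):
    """Pick one element or none from each category; keep only concepts
    with at least two elements, per the study's construction rule."""
    while True:
        picks = [rng.choice(cat + [None]) for cat in categories]
        concept = [e for e in picks if e is not None]
        if len(concept) >= 2:
            return concept

def design_for_respondent(seed, n_concepts=60):
    """Each respondent sees a different randomized set of 60 concepts,
    all drawn from the same 36 elements."""
    rng = random.Random(seed)
    return [build_concept(rng) for _ in range(n_concepts)]

concepts = design_for_respondent(seed=1)
print(len(concepts), concepts[0])
```

Because every concept combines at most one element per category, elements within a category never co-occur, which is what lets the later regression attribute the rating to individual elements.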
3. Present the Stimuli and Acquire the Data. Today’s computer-aided tools and computerized screens have replaced the old “manual card approach,” in which the researcher created cards listing the concepts, one card per concept, and presented them to the subject. In this conjoint study, the subjects logged onto a Web site and accessed the survey from their home computer. The consumer sample was nationwide, and the study took 15–20 minutes for consumers with normal connect speeds and a current-model computer.
In this study, 329 consumer responses were collected across the entire United States. The subjects were presented with test concepts and asked, “How intense is your craving for this chocolate candy?” on a 9-point scale, where 1 = not at all and 9 = very intense.
For example, if a respondent is presented with a concept comprising elements E1 + E10 + E19, Classic taste . . . the way you remember it. Premium quality . . . the best chocolate candy in the whole world. So delicious, just thinking about it makes your mouth water, the respondent might give it a rating of 2 on the 9-point scale, indicating that it is not particularly compelling.
However, if the respondent is presented with the concept E1 + E10 + E20, Classic taste . . . the way you remember it. Premium quality . . . the best chocolate candy in the whole world. When you think about it, you have to have it . . . and once you have it, you can’t stop eating it, the respondent might give it a rating of 8, indicating that the concept presents a compelling idea about chocolate.
4. Create the Model. The model created for this example uses regression analysis. This analysis is possible because in step 2 we laid the study out using experimental design, just as a food scientist does for classic experimental design projects. Therefore, off-the-shelf statistical software can be used for the analysis.
The systematic variation of the test elements allows the researcher to create an equation relating the presence/absence of the elements (independent variables) to the ratings. The most popular estimation routine for calculating the element values (called “part-worth utilities”) is ordinary least-squares regression.
From the data, the researcher creates a simple equation of the form:
Interest = k0 + k1(E1) + k2(E2) + . . . + kn(En)
This equation is created for each respondent, and the parameters averaged across respondents.
The additive constant k0 estimates the percentage of respondents who would rate a concept between 7 and 9 even if it contained no elements at all. Such a concept is impossible in an actual study, since all test concepts comprised two or more elements, but k0 is nevertheless validly estimated and measures baseline interest in the overall idea. For chocolate, the additive constant is 46, meaning that even without elements, 46% of the respondents would rate a concept about chocolate between 7 and 9 on the 9-point craveability scale.
The utilities k1 . . . kn show the additive conditional probability (or proportion of times) that a respondent will be interested in a chocolate product if the element appears. By sorting the elements from high to low, the researcher rapidly identifies the most important (and the least important) elements. The former are the “drivers” of acceptance.
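The estimation described in step 4 can be sketched as follows. The data here are synthetic stand-ins for real respondent ratings, so the printed values are illustrative only; the mechanics are the point: a 0/1 presence/absence design matrix, a top-three-box transform of the 9-point ratings (7–9 becomes 100, else 0, so the constant and utilities read as percentages of respondents), and an ordinary least-squares fit whose sorted coefficients reveal the driver elements.

```python
import numpy as np

rng = np.random.default_rng(0)
n_concepts, n_elements = 60, 36

# 0/1 design matrix: element j present (1) or absent (0) in concept i.
# Random here for illustration; a real study uses the experimental design.
X = rng.integers(0, 2, size=(n_concepts, n_elements)).astype(float)

# Simulated 1-9 craveability ratings (placeholder for respondent data)
raw = np.clip(np.round(rng.normal(5.0, 2.0, n_concepts)), 1, 9)

# Top-three-box transform: 7-9 -> 100 ("interested"), else 0
interest = np.where(raw >= 7, 100.0, 0.0)

# OLS fit of  Interest = k0 + k1*E1 + ... + k36*E36
A = np.column_stack([np.ones(n_concepts), X])
coefs, *_ = np.linalg.lstsq(A, interest, rcond=None)
k0, utilities = coefs[0], coefs[1:]

# Sorting utilities high to low identifies the "drivers" of acceptance
order = np.argsort(utilities)[::-1]
print(f"additive constant k0 = {k0:.1f}")
print("top elements:", [f"E{i + 1}" for i in order[:3]])
```

In the full procedure this fit is run once per respondent (each respondent rated 60 concepts) and the coefficients are then averaged across respondents, as the article notes.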
5. Interpret the Results. Once the data are summarized, the results can be analyzed and incorporated into other aspects of the project.
Table 2 shows the results of the study. Note that some elements score very well. For example, the element Chocolate that melts slowly to release delicate, intense flavor and has a rich, silky texture that just melts in your mouth . . . so sinful! has a utility value of 17, meaning that an incremental 17% of the respondents would be interested (i.e., would rate the concept 7–9) if that attribute were included in the product.
Based on past experience, we can interpret utility values as follows: 15 or higher = excellent, include in product design; 9 to 14 = very good, include in product design; 5 to 8 = good, include in product design; 0 to 4 = fair, not particularly compelling for product development; and below 0 = avoid, may have a negative impact on product design.
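These rule-of-thumb bands translate directly into a small helper function, shown here as a hypothetical sketch (boundary values between bands are assigned to the higher band):

```python
def interpret_utility(u):
    """Map a part-worth utility to the article's rule-of-thumb bands."""
    if u >= 15:
        return "excellent -- include in product design"
    if u >= 9:
        return "very good -- include in product design"
    if u >= 5:
        return "good -- include in product design"
    if u >= 0:
        return "fair -- not particularly compelling"
    return "avoid -- may have negative impact on product design"

# Two utilities reported in the article's Table 2:
print(interpret_utility(17))   # the slow-melt, silky-texture element
print(interpret_utility(-9))   # "organic"
```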
Thus, our product developer learns what consumers want from their reactions to concepts, even if the consumers can’t articulate those wants directly. For example, if a velvety texture (E6) rates very high, the developer may not want to use crunchy cookie pieces (E13) for this candy. And if the product is not branded as Hershey (E29), the marketer will have to work harder to have the brand add value; the product may deliver the same product experience but not enjoy the same market success, because of factors beyond the product developers’ control (see Table 2, E2 and E3). All of this information is critical to more innovative and competitive product development.
Going Beyond Standard Survey Data
What makes conjoint analysis more interesting at times than regular surveys is that the results surprise, and occasionally go against what people believe to be the case. For example, in Table 2 “organic” is a big negative (–9), even though organic has been a popular theme in the media and a sign of wisdom for some respondent groups. If respondents were asked directly about organic, they might try to be politically correct and say that they are “all for” organic chocolate bars.
However, conjoint measurement does not allow the respondent to be politically correct. It juxtaposes different ideas, forcing them to compete with one another. Because several ideas appear together, the respondent cannot single one out for a socially acceptable answer; the respondent has to respond to the totality of the idea.
Furthermore, it is hard for the respondent to second-guess the approach, because the combinations keep changing. What emerges, what wins, what loses, must therefore be what the respondent really feels, rather than what the respondent thinks the researcher wishes to hear. Again, this type of information can be very helpful to product developers during various phases of the product development cycle. It differentiates this form of testing from most traditional forms.
Knowledge-Based Decision Making
Today’s product developers are fortunate to live in an era when fact-based decisions are sought after and readily accepted. Conjoint measurement fits right into this emerging trend. With the pressure on corporations to create newer and better products comes the pressure to identify product elements that consumers will like and that may differentiate one manufacturer and one offering from another.
Whereas in previous years the creation of compelling new product ideas was left to marketers, gurus, and serendipity, we hear today the need to create new concepts that are consumer accepted and have a chance in the marketplace. The conjoint measurement approach—disciplined, comprehensive, iterative, and simple to use—fits right into the call by management to increase business through knowledge-based decision making.
Authors Moskowitz and Katz are, respectively, President and Brand Manager, Moskowitz Jacobs Inc., 1025 Westchester Ave., White Plains, NY 10604. Authors Beckley and Ashman are, respectively, President and Chief Strategist, The Understanding & Insight Group, 3 Rosewood Ln., Denville, NJ 07834. Authors Moskowitz and Beckley are Professional Members of IFT. Send reprint requests to author Moskowitz.
Green, P. and Srinivasan, V. 1978. A general approach to product design optimization via conjoint analysis. J. Mktg. 45: 17-37.
Luce, R. and Tukey, J. 1964. Simultaneous conjoint measurement: A new type of fundamental measurement. J. Math. Psychol. 1: 1-27.
McLauchlan, W. 1992. The predictive validity of derived versus stated importance. In “1992 Sawtooth Software Conference Proceedings,” ed. M. Metegrano, pp. 285-311. Sawtooth Software, Ketchum, Idaho.
Moskowitz, H., Ashman, H., Gillette, M., and Adams, J. 2002. Moving closer to the customer through understanding the mind: The Crave It! Study and the specific case of chocolate. Ingrediente Alimentari 1(1): 13-21. In Italian, with summary in English.
Rosenbaum, H. 1987. Redesigning product lines with conjoint analysis: How Sunbeam does it. J. Prod. Innovation Mgmt. 4: 120-137.
Wittink, D. and Cattin, P. 1989. Commercial use of conjoint analysis: An update. J. Mktg. 53: 91-96.