The essence of futility is doing the same thing over and over and expecting a different result.
[ Attributed to Albert Einstein ]
The conjoint method is very efficient at obtaining information on the psychological trade-offs that consumers make when evaluating several attributes of a product together. Nevertheless, the results are sometimes difficult to use. A frequently heard complaint is that they give no indication of perceptual equivalence between attributes. A way to convert perceptions of actual features into perceptions of the reduced set of features needed for product positioning is also missing.
The problems mentioned above are due to the part-worth reference (zero) points set within attributes. A reference point is chosen either subjectively as one of the attribute levels (when dummy coding is used) or as a middle point set so that the average value of all tested levels of an attribute is zero (when effects coding is used). This makes the effects of aspects from different attributes incomparable. However, managerial understanding of an aspect's effect requires relating it to the effect of any other aspect.
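The attribute-specific zero points can be illustrated with a minimal sketch. The part-worth values below are hypothetical, invented only for illustration; the two helper functions show how dummy coding and effects coding each anchor an attribute to its own reference point, so that a value from one attribute cannot be compared directly with a value from another.

```python
def dummy_code(raw):
    """Dummy coding: the first level of the attribute becomes the zero reference."""
    return [v - raw[0] for v in raw]

def effects_code(raw):
    """Effects coding: center so the attribute's levels average to zero."""
    mean = sum(raw) / len(raw)
    return [v - mean for v in raw]

# Hypothetical raw utilities for two attributes of a car
engine = [1.0, 2.5, 4.0]   # 100 Hp, 200 Hp, 300 Hp
color = [0.2, 0.6]         # white, red

engine_eff = effects_code(engine)  # zero point = mean of engine levels
color_eff = effects_code(color)    # zero point = mean of color levels
# Comparing engine_eff[0] with color_eff[0] is meaningless: each value is
# measured from its own attribute-specific zero point.
```

The sketch makes the complaint concrete: under either coding, "zero" means something different in each attribute, which is exactly what a common-scale method must remove.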
Difference scaling methods, on the other hand, use the same scale for all tested aspects. It is well known that batteries of evaluation questions have little discriminating power, so too many features come out as important. The MaxDiff (MXD - Maximum Difference Scaling) method does not suffer from this drawback. However, it is not almighty. First, in many practical cases where aspects are organized in attributes with mutually exclusive levels, it is nearly impossible to generate a balanced design with satisfactory efficiency. Second, even if that were achieved, the within-attribute precision of the estimated part-worths would be heavily impaired. Third, very many tasks would have to be answered to obtain sufficient accuracy.
CSDCA - Common Scale Discrete Choice Analysis has evolved as an amalgamation of the work of many authors over the last 20 years of DCM application in practice. It is based on the ability of a DCM - Discrete Choice Model to fuse discrete choice data from different sources. The method is fully described in the technical paper Common Scale Hybrid Discrete Choice Analysis [Bryan Orme, Sawtooth Software, Inc., 2013]. It relies on fusing data from two major interview sections, PRIORS and MOTIVATORS, and, optionally, CHOICES:
The preference order of aspects within each attribute is used as soft constraints in the estimation. For a nominal attribute, the order is the result of sorting. For a quantitative attribute, the order is often natural, but not always: not everybody prefers a 500 Hp car engine to a 200 Hp one.
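One simple way to respect a stated preference order is to tie any estimated part-worths that violate it, averaging the offending neighbors until the sequence is monotone. This is only an illustrative sketch of such a tying step (the actual constraint handling inside the estimation may differ); the input values are hypothetical.

```python
def enforce_order(partworths):
    """Tie adjacent part-worths that violate a stated ascending preference
    order by averaging them, repeating until the sequence is monotone.
    A simplified tying procedure, not the full estimation-time constraint."""
    pw = list(partworths)
    changed = True
    while changed:
        changed = False
        for i in range(len(pw) - 1):
            if pw[i] > pw[i + 1]:
                avg = (pw[i] + pw[i + 1]) / 2.0
                pw[i] = pw[i + 1] = avg
                changed = True
    return pw

# Hypothetical estimates for levels the respondent ranked ascending:
constrained = enforce_order([1.0, 3.0, 2.0, 4.0])  # the 3.0/2.0 pair gets tied
```

Averaging rather than clamping keeps the mean utility of the tied levels unchanged, which is why tying is the usual soft way to impose order.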
When the aspects of an attribute have a natural preference order, typically for a quantitative benefit or detriment such as price, the constraints can be implemented directly in the estimation procedure. If a threshold acceptance level is to be obtained, an interviewing procedure similar to the Gabor-Granger pricing test can be used.
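The threshold logic of such a Gabor-Granger-style procedure can be sketched as follows. The function and the price points are hypothetical; it simply places the acceptance threshold midway between the highest accepted and the lowest rejected price.

```python
def acceptance_threshold(prices, answers):
    """Given ascending test prices and yes/no purchase answers (True = would
    buy), estimate the acceptance threshold as the midpoint between the
    highest accepted and the lowest rejected price. Illustrative sketch only."""
    accepted = [p for p, a in zip(prices, answers) if a]
    rejected = [p for p, a in zip(prices, answers) if not a]
    if not accepted:
        return prices[0]   # everything rejected: threshold below tested range
    if not rejected:
        return prices[-1]  # everything accepted: threshold above tested range
    return (max(accepted) + min(rejected)) / 2.0

# Hypothetical respondent: buys at 10 and 20, refuses at 30 and 40
threshold = acceptance_threshold([10, 20, 30, 40], [True, True, False, False])
```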
This section, known as Best-Worst Case 2 (Louviere et al., 1995), is the heart of the method. It makes it possible to place the part-worths of aspects of all considered attributes on the same scale. Respondents state which aspects of a randomized product profile are the most and least preferred or motivating. The exercise may be viewed as a "self-explicated conjoint" transformed from evaluations into a choice format.
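Under a multinomial logit view, a best-worst task over one profile can be treated as two choices: the "best" pick is a choice among the shown aspects, and the "worst" pick a choice among the same aspects with negated utilities. The utilities below are hypothetical common-scale values, used only to show the coding idea.

```python
import math

def mnl_probs(utils):
    """Multinomial logit choice probabilities over a set of utilities."""
    ex = [math.exp(u) for u in utils]
    s = sum(ex)
    return [e / s for e in ex]

# One randomized profile shows one aspect per attribute;
# hypothetical common-scale utilities of the shown aspects:
profile_utils = [0.8, -0.3, 1.2, 0.1]

best_probs = mnl_probs(profile_utils)                 # P(picked as best)
worst_probs = mnl_probs([-u for u in profile_utils])  # P(picked as worst)
```

Because every aspect, whatever its attribute, competes in the same choice set, the estimated utilities end up on a single scale, which is precisely what this section contributes.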
The best-worst approach has a theoretical rationale, but it may also have an additional advantage. For example, healthcare researchers have commented that learning which conditions or outcomes patients want to avoid is perhaps even more important than learning what they want.
The method remained out of the scope of practicing researchers for some time, as it did not provide satisfactory results when used as a standalone tool. However, when complemented with the soft constraints from the previous section, it has been shown to give results very similar to those from the "gold standard" CBC - Choice Based Conjoint.
Depending on the number of attributes, the choice tasks can be designed with aspects of all attributes or of a reduced number of them. In contrast to CBC, it seems reasonable to rely on a (reduced) set of no more than 6 attributes. As there is no "profiles considered jointly" effect, a lower number of aspects can make the task easier for respondents. Conversely, a higher number might require additional second-best/worst choices, with an unpredictable influence on the overall performance of the method.
If the purpose of the survey is to identify the main motivators (and/or deterrents) and their relative influences on customer decisions, as in a positioning study, this section is not required. However, if the goal is to estimate the market performance of not-yet-existing or modified products, their competitive potential, and other characteristics available only from a market simulation, choice preferences among fully specified products should be determined.
When a short conjoint exercise is added, the part-worths are refined by fusing in the information obtained from a statistical analysis of whole-product preferences. A CBC exercise can be useful if the performance of products over full attribute ranges is to be simulated and inspected. If the new product alternatives are given and immutable, an SCE exercise with a set of current and new market products is sufficient to estimate their market performance.
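A market simulation on top of the estimated part-worths typically computes logit shares of preference for fully specified products. The sketch below uses hypothetical products and utilities (each product's utility being the sum of its common-scale part-worths); it is meant only to show the share computation, not any particular simulator.

```python
import math

def share_of_preference(product_utils):
    """Logit share of preference: probability that each fully specified
    product is chosen from the simulated market scenario."""
    ex = [math.exp(u) for u in product_utils]
    s = sum(ex)
    return [e / s for e in ex]

# Hypothetical scenario: total utility = sum of common-scale part-worths
products = {
    "current A": 0.9 + 0.4,
    "current B": 0.5 + 0.7,
    "new product": 1.1 + 0.6,
}
shares = share_of_preference(list(products.values()))
```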
When an estimation of stated acceptances and/or the competitive potential of products is desired, the choice exercise may be complemented with a calibration section.
The main advantage of the method is a common scale for all attributes. The simplicity and ease of answering the survey may also be important for deployment on the web. It is well known that web respondents are prone to simplifying their decisions in a conjoint exercise if it is long or its profiles have many attributes. The tasks in the two main sections of CSDCA do not tempt respondents into shortcut or evasive answers.
To our knowledge, standard CBC works well for up to about 6 attributes. With a higher number, the compensatory effects of attribute levels are very hard to estimate, and respondents tend to resort to simplified decisions, typically choosing the lowest price. The use of incomplete profiles in CBC cannot be recommended, as it leads to an excessive equalization of attribute importances (as is common in ACA - Adaptive Conjoint Analysis). In contrast, a higher number of attributes is possible in CSDCA because only the best and worst aspects in a single profile are selected.
The combined PRIORS and MOTIVATORS sections are not intended as a fully equivalent substitute for CBC. While all attributes work together (jointly) in conjoint, aspects in common scale analysis are predominantly referred to and chosen separately. An added CHOICES section can have substantially fewer tasks than when interviewed alone.
A short presentation in Czech can be downloaded at the link OBIMA_Object-Image-Analysis.pptx
CSDCA part-worths represent additive increments to the product utility but do not reflect perceptual properties of the levels within an attribute. Similarly to OBIMA - Object Image Analysis, the level "None of the [shown] levels [left after the previous choices, if any] is acceptable" can be added to the levels of each nominal attribute tested in the PRIORS section of CSDCA. The part-worth obtained for this level lies between those of the just acceptable and just unacceptable levels and can be interpreted as a stated perceptual threshold. Positive and negative "stated perceptions", computed as acceptances relative to the threshold, can be useful for understanding the customers' views. The thresholds can serve as the basis for non-compensatory modeling and simulation.
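The computation of stated perceptions from the "None" threshold is a simple shift. The part-worths below are hypothetical; the sketch shows that levels above the threshold come out as positive (acceptable) perceptions and levels below it as negative (unacceptable) ones.

```python
def stated_perceptions(partworths, none_pw):
    """Stated perceptions of attribute levels as signed distances from the
    part-worth of the added 'None is acceptable' level: positive means
    acceptable, negative means unacceptable. Illustrative sketch."""
    return {level: pw - none_pw for level, pw in partworths.items()}

# Hypothetical part-worths for one nominal attribute; 0.1 is the
# hypothetical part-worth estimated for the added 'None' level
pw = {"level A": 1.2, "level B": 0.4, "level C": -0.5}
perceptions = stated_perceptions(pw, none_pw=0.1)
```

The same shift applied to utilities calibrated on complete products yields the evoked perceptions discussed below.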
The levels can be sorted by their degree of acceptance, attractiveness, or suitability (desirability) using an SCE - Sequential Choice Exercise. For quantitative attributes, the Gabor-Granger method, originally developed as a price test, is an option. Data from these procedures are equivalent to those from an SCE and can be converted into a format suitable for the analysis of discrete choices.
Different threshold values can be obtained from CSDCA-based product utilities calibrated by purchase-intention statements for complete products. The statements reflect the joint influence of all product attributes. Such thresholds can be called evoked, and the perceptions computed from them evoked perceptions. Comparing stated and evoked perceptions can be useful for understanding the differences between statements about isolated features and their projection onto complete products.
A generalized model of bundling or packaging options into a single product is still an open problem. However, whatever modeling approach is adopted, a set of reliable part-worths of isolated options is essential. A full-profile CBC with up to about 5 medium or up to 7 easily understood and evaluated attributes can provide reliable estimates of level part-worths. A reduced set of the base attributes, the ones most important for decision makers and therefore forming the core of the product, can be tested in the CHOICES section, typically a CBC exercise. Many more attributes can be tested in the PRIORS and MOTIVATORS sections. The part-worths of the additional, less important attributes will be dispersed among, and correctly related to, the part-worths of the base attributes. It is anticipated that at least some problems with a large number of attributes can be handled this way.
It is believed that the hybrid CSDCA method, which places attributes on a common scale and thus allows direct comparisons of part-worths across attributes, might make conjoint-like results accessible and appealing to more users. The relative influences of attribute levels on the willingness to purchase a product can be presented and compared. The ability to estimate stated and evoked perceptions of attribute levels makes it possible to reveal what respondents think of the isolated or jointly considered attribute levels.
CSDCA has proven to be very effective. From the researcher's point of view, more attributes can be tested without putting an increased load on respondents compared to standard CBC. Complaints about annoyingly repetitive tasks have not been encountered. In our practice, CSDCA is already used more often than the standalone CBC method.
A few examples of results are on the page CSDCA Examples.
You may experience the look and feel of a (radically shortened) CSDCA live questionnaire for a simple hypothetical product.