Analytics should never be replaced with gut instincts or vice versa.
www.marketingpower.com
What others think ...
Under very controlled conditions such as markets with equal information and distribution, market simulators often report results that closely match long-range equilibrium market shares. However, conjoint part worth utilities cannot account for many real-world factors that shape market shares. Therefore, the share of preference predictions usually should not be interpreted as market shares, but as relative indications of preference. Divorcing oneself from the idea that conjoint simulations predict market shares is one of the most important steps to getting value from a conjoint analysis study and the resulting simulator. While "external effect" factors can be built into the simulation model to tune conjoint shares of preference to match market shares, we suggest avoiding this temptation if at all possible. No matter how carefully conjoint predictions are calibrated to the market, the researcher may one day be embarrassed by differences that remain. (Freely quoted from Sawtooth Software, Inc.)
A conjoint exercise is a probe into respondents' minds rather than into the current market. It is aimed at possible future decisions based on past experience and future expectations rather than being a retrospective view. The data reflect mere statements unaccompanied by any actions. A piece of software presenting a what-if simulation based on a conjoint study is therefore often, but mistakenly, called a "market simulator". Despite all these deficiencies, a simulator is a useful tool.
An analyst or manager can investigate strategic issues such as new product design, product positioning, and pricing strategy. By specifying each level on each attribute of each real or hypothetical product, a user-defined scenario is created. The computed product utilities are used to estimate strengths of preference in terms of acceptance (sometimes called purchase likelihood) or perceptance, share of choices, sensitivity measures, strategy profiles, etc., for each product. This is done by accumulating the individual estimates over respondents to predict aggregated interest in different product concepts. Several types of simulation and segmentation by (predefined) customer groups are usually available. Technical details can be found on the help pages of Sawtooth Software, Inc.
There are some major types of what-if simulations.
Any simulation based on statements made in a research study should be taken as a hypothetical idea rather than an image of the current or a future reality.
In the early applications of DCM (Discrete Choice Modeling), only aggregated utilities were available. It was soon recognized that this approach could be used only for a sample sufficiently homogeneous in preferences. However, it is difficult to find effective criteria that would ensure such homogeneity; in most cases the heterogeneity remains unrecognized until the study is carried out. Since efficient methods for obtaining individual-based utilities became available, the aggregated approach has been discouraged.
From time to time there are attempts to interpret conjoint data by averaging individual utilities. We can demonstrate the detrimental effect of such a procedure with a trivial example.
Imagine a "complete" segment of two car-market customers, A and B. Person A loves French cars, hates German ones, and is willing to accept a Japanese one; utility values of 4, -4 and 2 logit units can be assigned to those preferences, respectively. The other member of the segment, person B, hates French cars, loves German ones, and is likewise willing to accept a Japanese one; utility values of -4, 4 and 2 logit units apply. As shown in the table below, averaging the individually estimated shares gives a result quite close to the intuitive one. On the other hand, the prediction based on averaged utilities is completely misleading.
| Person | Utility Fra [logit] | Utility Ger [logit] | Utility Jap [logit] | Share Fra | Share Ger | Share Jap |
|---|---|---|---|---|---|---|
| A | 4 | -4 | 2 | 88.1 % | 0.0 % | 11.9 % |
| B | -4 | 4 | 2 | 0.0 % | 88.1 % | 11.9 % |
| A + B (averaged individual shares) | | | | 44.0 % | 44.0 % | 11.9 % |
| A + B (simulation from averaged utilities) | 0 | 0 | 2 | 10.7 % | 10.7 % | 78.7 % |
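The table values can be reproduced with a few lines of code. The sketch below uses only multinomial logit shares computed from the utilities quoted above; the helper names are ours.

```python
import math

def logit_shares(utilities):
    """Multinomial logit: share of each alternative from its utility."""
    expu = [math.exp(u) for u in utilities]
    total = sum(expu)
    return [e / total for e in expu]

# Individual utilities (logit units) for French, German, Japanese cars.
person_a = [4, -4, 2]
person_b = [-4, 4, 2]

# Correct approach: simulate each individual, then average the shares.
shares_a = logit_shares(person_a)
shares_b = logit_shares(person_b)
avg_shares = [(a + b) / 2 for a, b in zip(shares_a, shares_b)]
# roughly [0.440, 0.440, 0.119], matching the table

# Misleading approach: average the utilities first, then simulate.
avg_utils = [(a + b) / 2 for a, b in zip(person_a, person_b)]  # [0, 0, 2]
agg_shares = logit_shares(avg_utils)
# roughly [0.107, 0.107, 0.787]: the Japanese car wrongly dominates
```

The aggregated computation makes the compromise alternative appear dominant, although neither respondent prefers it.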
Interpretation of aggregated utilities (and of measures derived from them) depends on how the aggregation is done and requires considerable caution and foresight. Measures based on aggregating additive individual characteristics (such as acceptance) computed from individual utilities are more reliable, especially when the acceptability thresholds of aspects are respected.
There are several methods of (total) share simulation that may be used in either a clean (axiomatic) or a modified form. A theoretically substantiated method should always be preferred. Any "improvements" and "corrections" that are poorly justified, not well thought through, or misunderstood may cause more harm than benefit.
First choice share

Preference share

Derived methods
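The first two methods can be sketched as follows, assuming their standard definitions (each respondent's single highest-utility product for first choice; logit shares averaged over respondents for share of preference). All utilities below are made up for illustration.

```python
import math

def first_choice_shares(utility_matrix):
    """Each respondent 'votes' only for their highest-utility product."""
    n_products = len(utility_matrix[0])
    counts = [0] * n_products
    for utils in utility_matrix:
        counts[utils.index(max(utils))] += 1
    return [c / len(utility_matrix) for c in counts]

def preference_shares(utility_matrix):
    """Share of preference: each respondent's probability mass is spread
    over all products by logit odds, then averaged over respondents."""
    n = len(utility_matrix)
    totals = [0.0] * len(utility_matrix[0])
    for utils in utility_matrix:
        expu = [math.exp(u) for u in utils]
        s = sum(expu)
        for i, e in enumerate(expu):
            totals[i] += e / s / n
    return totals

# Hypothetical utilities of three products for four respondents.
utils = [[2.0, 1.5, 0.0],
         [0.5, 2.5, 1.0],
         [1.0, 1.2, 2.0],
         [2.2, 0.3, 0.8]]
fc = first_choice_shares(utils)   # [0.5, 0.25, 0.25]
ps = preference_shares(utils)
```

First choice is more discriminating but statistically wasteful; preference share is smoother but spreads probability onto products an individual would never buy.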
When used judiciously, preference share simulations are robust and statistically reliable for many practical problems. Although they hardly ever replicate market data, they are among the most useful outputs of market research. Threshold-based simulation models are sensitive to how the thresholds are estimated; however, they can often reveal context not seen in a standard compensatory preference share simulation. Stage-wise or nested models are useful for simulations involving no more than two or three attributes (besides the class attributes), e.g. product lines.
It is generally accepted that both stated and revealed preferences obey the axioms of preference (as formulated by Paul Samuelson in 1947). However, the preference orders of the two kinds may differ, since survey conditions differ from those on the market. A simulation based on survey preference data would correspond to the market data only if at least the following conditions were met on the market:
Results from a conjoint survey are forward-looking and imply an equilibrium (a dynamic steady state) estimated from the currently assumed needs and expectations of respondents. The obtained information is static: there is no information about the speed of change along the path to that equilibrium. The equilibrium may be reached only after a sufficiently long time, and the conditions may (and probably will) change in the meantime. Nevertheless, forward-looking estimates are believed to be more credible than predictions based on historical data.
End users of conjoint-based research often ask for a simulator that reflects the current state of the market and shows the changes in demand induced by changes in product properties. A researcher has little choice but to comply with the request. External information must be introduced into the simulation so that it is tuned the desired way. At the same time, the tuning should distort the relationships obtained from the research as little as possible, which is a contradictory requirement. Each type of simulation allows slightly different methods of tuning.
Preference share simulators usually allow modification of the underlying utilities. Modifications are justified only if their possible consequences are understood and the detrimental ones can be avoided. Close cooperation between the simulator user and the research analyst is strongly recommended. The following modifications are the most common.
The traditional choice alternative "None", often used in CBC (Choice-Based Conjoint) studies, cannot be identified with or transformed into a reference or an outer product. This is one of the reasons why Sawtooth recommends not including the "None" alternative in scenarios for simulations and comparisons of products in their choice simulator. Similarly, using an additional item with a fixed utility as an outer product in a preference share simulation cannot be recommended. The outer product is, in fact, a complex entity dependent on the products in the simulation, outer market conditions and other factors. The better the product category is covered, the lower the outer product utility, provided the sample is composed of category users.
When the products in a simulation are obvious substitutes, the traditional "None" constant alternative can be considered a virtual standalone product with an acceptance of 50 %. The choice probability of a product from the binary set composed of the highest-utility product and the "None" alternative is partitioned by the odds of all products in the simulated choice set. The method is based on the two-level nested logit model as described on the page Constant Alternative in Simulation. To provide reasonable results, the utility of the "None" alternative should reflect the real behavior of individuals on the market. The method is easy to implement in spreadsheet software.
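A minimal sketch of this two-level treatment, with hypothetical utilities; in practice the "None" utility would be calibrated to observed market behavior.

```python
import math

def shares_with_none(product_utils, none_util):
    """Two-level nested logit with a constant 'None' alternative.
    Level 1: probability of buying at all, from the binary set of the
    best product vs. 'None'.  Level 2: that probability is partitioned
    among all products by their logit odds."""
    best = max(product_utils)
    p_buy = math.exp(best) / (math.exp(best) + math.exp(none_util))
    odds = [math.exp(u) for u in product_utils]
    total = sum(odds)
    return [p_buy * o / total for o in odds]

# Hypothetical utilities; the shares sum to p_buy (< 100 %),
# the remainder being the 'no purchase' share.
shares = shares_with_none([1.0, 0.5, 0.2], none_util=0.8)
```

The same two formulas translate directly into two spreadsheet columns, which is why the method is easy to implement outside specialized software.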
In some situations it is useful to modify the results of a conjoint study with results from an independently conducted test, e.g. SCE (Sequential Choice Exercise) or MBC (Menu-Based Choice), designed as a shopping event with profiles that represent both currently available and new products at realistic prices. Another way is to ask a question on the probability of purchase of the choices in a CBC instead of showing the traditional "None" alternative. These are examples of calibration aimed at obtaining acceptances of current and new products in a more market-like manner.
When the assortment of products, or alternatively a unique subset of it, is satisfactorily covered, calibration can be replaced by weighting individuals by their consumption. This approach is mostly suitable for the CPG/FMCG market category. One should verify that the neglected "outer product" represents only a negligible part of the market share. In other words, the scenario should comprise items that guarantee that each individual included in the simulation will find a fully acceptable product. At the same time, it must be remembered that a product utility value is a single number that cannot fully reflect both choice and repeat-purchase probabilities.
The standard acceptance is not a measure of a product's potential. If a threshold (cutoff) of product acceptability is known, product acceptance can be rescaled so that it is zero at or below the threshold. The value obtained this way has been named perceptance, and can be considered a non-competitive potential of the product. For most practical uses, however, consideration of competition is required.
The competitive potential of a product can be obtained as its choice probability from the binary set containing the product and a properly selected reference product. The reference product is a hypothetical product with utility equal to the expected utility of the portfolio (a choice set) complementary to the product in question. In practice, a well-established product with the highest utility can be selected. As in the previous case, the potential obtained this way does not reflect possible changes in the competitive environment.
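Both measures can be sketched as follows. The linear rescaling used for perceptance is an assumption of ours, since the text does not fix the exact formula; the binary-set probability follows the description above.

```python
import math

def perceptance(acceptance, threshold):
    """Acceptance rescaled to zero at or below the acceptability
    threshold (one plausible linear rescaling; the exact formula
    used in a given study is the analyst's choice)."""
    if acceptance <= threshold:
        return 0.0
    return (acceptance - threshold) / (1.0 - threshold)

def competitive_potential(product_util, reference_util):
    """Choice probability from the binary set {product, reference},
    the reference standing in for the complementary portfolio."""
    return math.exp(product_util) / (math.exp(product_util)
                                     + math.exp(reference_util))

p = perceptance(0.75, threshold=0.5)          # 0.5
c = competitive_potential(1.0, 2.0)           # about 0.27
```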
The probability of selling a product is influenced by its acceptability and by the acceptability of competing products in the choice set at the time the decision is made. This process can be modeled as a two-stage chain of events leading to a two-level model. In the first stage, the overall purchase probability is set proportional to the potential (from the effective utility) of the whole choice set (see the page on portfolio optimization). In the second stage, the choice probability conditional on the set is estimated (e.g. using the standard preference share model as the simplest option). The resulting probability of the product's choice is the product of the two probabilities.
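A sketch of the two-stage computation. The choice of the set's effective utility (the logit "logsum") and of a logistic acceptance against a threshold are our assumptions for illustration; the text itself only requires that the stage-one probability reflect the potential of the whole set.

```python
import math

def two_stage_shares(product_utils, threshold=0.0):
    """Stage-wise simulation sketch.
    Stage 1: purchase probability of the set as a whole, modeled here
    as a logistic acceptance of the set's logsum utility against a
    hypothetical threshold.
    Stage 2: conditional preference shares within the set.
    The result is the product of the two probabilities."""
    logsum = math.log(sum(math.exp(u) for u in product_utils))
    p_set = 1.0 / (1.0 + math.exp(-(logsum - threshold)))
    expu = [math.exp(u) for u in product_utils]
    total = sum(expu)
    return [p_set * e / total for e in expu]

shares = two_stage_shares([1.0, 0.5], threshold=1.5)
# Shares sum to p_set rather than to 100 %.
```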
The stage-wise approach relaxes the assumption that the choice set is exhaustive. The need to include an outer product is also avoided. The computed choice probabilities do not sum to 100 % but to a lower value reflecting the potential of the considered set of products. Compared to the standard preference share model, the simulation is less sensitive to the set-up of the choice set, because products unacceptable to an individual add only little to the share. This is the most distinct difference from the standard preference share model, where even a set composed entirely of items unacceptable to an individual contributes 100 % to the total share, which is evidently a false assumption.
The potential-based simulation differs from the usual preference share simulation mainly in the prediction of price influences. For example, if all items in the set had the same price elasticities, the same relative change in all prices would not lead to any change in preference shares. The potential-based choice probabilities estimated in the two-stage procedure, however, change in the expected direction: when all prices are increased, the total demand potential decreases, and the potential of cheaper products is usually found to increase at the expense of more expensive ones.
This approach has been developed especially for studies where the assortment of tested products is limited and, therefore, the assumption that the simulated products cover close to 100 % of the market is not realistic.
The model can be extended to a nested model with each nest assigned to a class of products, e.g. a product line of a brand, a class of cars, a range of services related to a product, etc. In a simulation, the choice set is partitioned into subsets according to the classes present. The potential computed for each subset independently is partitioned among the members of the subset. This approach attacks the IIA problem between classes, but not inside them. Since managers can easily avoid similarities inside a class, e.g. a brand, or the similarities can be excluded using the portfolio optimization procedure, this should seldom be a problem.
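The nesting by class can be sketched as follows. The per-nest potential is modeled here as a logistic acceptance of the nest's logsum utility, which is an assumed (not prescribed) choice; the partitioning within each nest follows the description above.

```python
import math

def nest_potential(utils, threshold=0.0):
    """Potential of one nest: logistic acceptance of the nest's
    logsum utility (a hypothetical effective-utility measure)."""
    logsum = math.log(sum(math.exp(u) for u in utils))
    return 1.0 / (1.0 + math.exp(-(logsum - threshold)))

def nested_class_shares(nests, threshold=0.0):
    """nests: mapping of class name -> member utilities.  Each nest's
    potential is computed independently, then partitioned among the
    nest's members by their logit odds."""
    shares = {}
    for cls, utils in nests.items():
        p = nest_potential(utils, threshold)
        expu = [math.exp(u) for u in utils]
        total = sum(expu)
        shares[cls] = [p * e / total for e in expu]
    return shares

# Hypothetical two-brand example; names are illustrative only.
result = nested_class_shares({"brand_x": [1.0, 0.8], "brand_y": [0.4]})
```

Adding a near-duplicate item inside "brand_x" redistributes share mostly within that nest, which is the intended IIA mitigation between classes.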
The share-based and potential-based simulations mentioned above are best suited for products chosen independently and bought as separate items. A volumetric simulation is better, at least in theory, for products purchased concurrently, typically from several FMCG/CPG categories (e.g. meals and drinks, breads and spreads, hammers and nails, etc.). While the model of volumetric CBC is the principle of MBC (Menu-Based Choice) conjoint, a truly volumetric interviewing method is used only rarely: it is very demanding for respondents, who have to state what their consumption would be in a given time interval under the given conditions.
The main advantage of the volumetric approach is the possibility of incorporating saturation effects in the consumption of goods, and thus eliminating the exaggerated share predictions for low-priced products often found in a standard preference share simulation.
In some justifiable cases, volumetric interviewing can be replaced with the standard CBC, provided the questionnaire is complemented with a set of questions concerning the scope of purchased products and the related consumption saturation values. A volumetric simulation based on such data usually requires additional tuning.
The most important thing to remember is that any conjoint exercise is in principle a relative method. Only differences between simulated responses to different settings of the simulated products have informative value. Absolute values from a simulation should always be taken with caution. A simulation providing more realistic values can be obtained only by introducing external pieces of information such as stated and/or revealed preferences, perceptual thresholds, etc. The reliability of a simulation depends on so many factors that it is impossible to name them all; some are mentioned on the page Conjoint Method Overview and in the external sources referenced from there.
Nevertheless, some interpretation aspects deserve mentioning.
The only proven method for reliable estimation of cannibalization within a range of products sought and bought as single items is the first choice method. Since only one product per individual is counted in the results, this method requires a generous sample size. When data from only several hundred individuals are available, a compromise is to use the RFC (Randomized First Choice) simulation available in the choice simulator, or the older CCS (Client Conjoint Simulator) from Sawtooth Software, Inc. In our experience, results from an RFC simulation do not differ significantly from a preference share simulation under usual circumstances. A two-stage simulation based on product potential values seems more promising. Another method of simulation might be based on cross-effects between the products, but there is little experience with it.
In the FMCG/CPG category, concurrent purchases are common. A volumetric approach is desirable for estimating cannibalization effects, but the exercise is quite demanding for respondents. A promising perspective is a hierarchical (mother logit) simulation method in which the products are grouped into nests by their mutual substitutability. The method partly resembles the old "correction by similarity", but with an important difference: the similarity is derived independently (1) from the inherent properties of the products and (2) from the observed behavior of an individual.
In practice, an estimate of the maximal possible cannibalization between a pair of products may be sufficient. In such a case, estimation based on CBC data is practicable. It rests on the assumption that the two products are perfectly correlated, with utilities obtained from the standard multinomial model.
The simplest way to locate an optimal price for a product is to multiply the simulated product shares or volumes by the corresponding unit prices (giving a relative revenue prediction) or margins (giving a gross profit prediction) for a given market scenario, and plot the results against the price. Unfortunately, such a curve based on a preference share simulation reaches its maximum at a price that is often too low. There are several reasons for this. Responsiveness to an artificial price setup in a conjoint study is usually much higher than on the real market; a repeat-purchase component is missing from the model; and a scenario hardly ever covers the full scope of products on the market, nor does it reflect the weights of distribution channels, etc.
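The revenue-curve approach can be sketched as follows, assuming a simple linear price utility; all parameter values are hypothetical.

```python
import math

def logit_shares(utils):
    """Multinomial logit shares from utilities."""
    expu = [math.exp(u) for u in utils]
    total = sum(expu)
    return [e / total for e in expu]

def revenue_curve(base_util, price_slope, prices, competitor_utils):
    """Relative revenue (share x price) of the test product over a
    price grid, assuming a linear price utility
    util = base_util - price_slope * price (hypothetical form)."""
    curve = []
    for p in prices:
        utils = [base_util - price_slope * p] + competitor_utils
        share = logit_shares(utils)[0]
        curve.append((p, share * p))
    return curve

# Hypothetical product against two fixed competitors.
prices = [round(1.0 + 0.25 * i, 2) for i in range(13)]   # 1.00 .. 4.00
curve = revenue_curve(base_util=2.0, price_slope=0.8,
                      prices=prices, competitor_utils=[0.5, 0.3])
best_price, best_revenue = max(curve, key=lambda t: t[1])
```

As the text warns, the maximum of such a curve from a preference share simulation tends to fall below the true market optimum, so it should be read as a relative indication only.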
A more usable approach is based on comparing the product's indirect utility value with the values of the most closely competing products at their current market prices. The method is described on the page Optimal Competitive Price.
Part-worths from virtually any type of conjoint study could be converted for use in the CCS (Client Conjoint Simulator) from Sawtooth Software, Inc. (unavailable since 2016; replaced with an integrated simulator version). The 2011 license price was US$300 per end user. The license was permanent and could be used for any number of studies without additional fees for the client.
The Sawtooth CCS simulator was suitable for most common uses where no prohibitions between attribute levels were imposed. It produced structured text output that could be copied into any software accepting tab-separated data, e.g. MS Excel. The need to repeatedly open and close several modal windows for defining the scenario, setting the simulation type and its options, and running the simulation made the simulator a bit less friendly, but tolerable.
Examples of on-line simulators are highly ephemeral, e.g. on sawtoothprogrammer.com or sawtoothsoftware.com (the links were active in September 2017).
If a customized simulation is required, e.g. when there are prohibitions between attributes, acceptability thresholds to be respected, etc., an Excel-based simulator can be developed. A screenshot of a scenario and results from several types of non-compensatory simulation are shown on the page Non-compensatory Modeling and Simulation. Another example is on the page Relaxed Non-compensatory Simulation, from which the simulator can be downloaded. A general description of the simulator concept and properties is on the page DCM Excel Choice Simulator.
Despite caveats of many kinds, simulation is one of the most powerful prediction tools in MR. It can reveal possible market effects given proper promotion, distribution and time. Especially valuable is the what-if property, which can often alert to, and thus preclude, inefficient decisions. At the same time, one should be aware that no single simulation is the only possible one, nor is it 100 % safe.
A user of a basic what-if simulator should not expect a reproduction of the current market shares. The market has developed to its current state through the involvement of millions of customers paying real money for thousands of products over the course of several years, and it is still changing. One can hardly expect it to be reproduced from data obtained from about 12 to 20 simulated choices among a limited range of products, often rather dissimilar to the existing ones, all done in 5 to 15 minutes by a few hundred individuals who agreed to participate in the study (and possibly be rewarded for it). An unwisely forcible tuning of a simulator to reproduce the current market can (and often does) lead to a substantial distortion of the preferences.

Any known characteristic depicting the real market is backward-looking. In contrast, the information provided by a conjoint experiment is forward-looking. Respondents react to the offered stimuli in a way conditioned by their previous experience, current knowledge and needs, and future expectations. Since these reactions are aimed at the future, the obtained data can hardly be suitable for reproducing the past.
In some very special cases with many favorable conditions met (namely, no new product included in the test, and the design and execution of the study oriented to the current market), it is possible to obtain results that reflect the current market. Below is an example from a pricing study of bottled beers on the very conservative Czech market.
Czech bottled beer market (2009)
This simulation uses only the data collected in the interviews, with no additional corrections. Prices were set to the values provided by the client. The absolute error of the estimated shares for most products did not exceed 3 %, delimited by the red dashed lines. The red point stands for a recently introduced variety of a strong brand that seems to have high potential. The dark blue point stands for a low-priced, nationally distributed variety of another strong brand, often sold below the simulated price in promotion campaigns of a retail chain in order to attract customers. The reason for the low estimate for the beer represented by the green point has not been explained; had this result been confirmed independently, we would predict that the green-point product might face problems in the future.
The accuracy of the raw estimates in this example is an exception rather than the rule. The strength of a conjoint study lies in the prospect of evaluating the influence of possible changes in product characteristics, rather than in asserting the known past or current state.