Dear Professor Bliemer and Ngene users,

I am dealing with a labelled design in which there are two agri-food products and an opt-out alternative. I have no priors from previous research, but I do have a fairly deep knowledge of this agri-food market. Nonetheless, I have some concerns regarding the best strategy to follow.

1) What is the best approach: using priors from my own knowledge, or employing priors quite close to zero for the pre-test?

2) From the code below, using my own priors, I am getting a relatively high D-error (0.30). I am aware that increasing the number of rows could improve the efficiency of the design. However, as I am going to use real products (one-litre bottles of olive oil) to be relabelled, I cannot increase the number of rows for logistical reasons. Thus, I am keeping the rows to a limit of 36 (48 at most). Is there anything I can do to improve the D-error?

3) On the other hand, I am including the constants in the design, using the current market shares of both categories of olive oil (0.6 and 0.4) as priors. Do you find this a suitable approach?

4) Likewise, to set the priors of both utility functions (two categories of olive oil), I have distributed around 100 units of utility among the attributes of each alternative (category of olive oil). Would this be a suitable approach? Or would it be better to distribute the 100 units of utility among the whole set of attributes belonging to both alternatives?

5) Looking at the choice probabilities of the alternatives, I have noticed that the differences between them are small, which is consistent with the relatively high D-error. Any suggestion to improve the choice probabilities?

6) On the other hand, as I am going to modify the information displayed on the labels, it is likely that the choices will be polarised towards one alternative (the best-quality one, once the information is more accessible and understandable to consumers), so the trade-off between alternatives might not be very informative. Is there any solution to this phenomenon when it happens?

7) At the beginning I thought of using a Bayesian design with uniform distributions around the priors, but I think the results would be the same as with a non-Bayesian design... It would be different if I were able to set, for example, normal distributions, but as I said, I have no idea about the priors beyond my own knowledge.

8) Lastly, something that I understand is the same, but I would like to confirm... I have used these two syntaxes interchangeably; is there any difference?

+b4.dummy[0.15]*X4[0,1]
+b4[0.15]*X4[1,0]

Thank you so much in advance to the Ngene community. Happy summer time!

MAC

1) You could adopt an expert judgement strategy as outlined in Bliemer and Collins (2016), but always try to be conservative in setting prior values, i.e. they should not be too large.

Bliemer, M.C.J., and A.T. Collins (2016) On determining priors for the generation of efficient stated choice experimental designs. Journal of Choice Modelling, Vol. 21, pp. 10-14.

2) A D-error of 0.30 is not necessarily high. D-errors are case specific and there is no particular value that is considered high or low: sometimes 0.3 is high, sometimes it is low. Increasing the number of rows will not really help you, since it also means that you need to give more choice tasks to a single respondent, or create more blocks such that your sample size needs to be larger. I think that 36 rows is more than enough. Often you can increase the efficiency of a design by making the attribute levels wider, but this is not possible for dummy coded variables, so there is not much else you can do to further improve the efficiency. Your design will likely be fine as it is.
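In case it helps to see what the number represents: below is a minimal sketch of how an MNL D-error can be computed, assuming the common det(AVC)^(1/K) definition for a single respondent. The two-task, two-alternative design and the priors are invented purely for illustration (this is not your olive-oil design), but the comparison illustrates the "wider attribute levels" effect mentioned above.

```python
import numpy as np

def mnl_d_error(X, beta):
    """D-error det(AVC)^(1/K) of a design for an MNL model.

    X: (S, J, K) array -- attribute levels of J alternatives in each of
       S choice tasks, coded as they enter the utility functions.
    beta: length-K vector of prior parameter values.
    """
    S, J, K = X.shape
    info = np.zeros((K, K))
    for s in range(S):
        p = np.exp(X[s] @ beta)
        p /= p.sum()                  # MNL choice probabilities of task s
        xbar = p @ X[s]               # probability-weighted mean attributes
        # Fisher information contribution of this choice task
        info += (X[s] * p[:, None]).T @ X[s] - np.outer(xbar, xbar)
    return np.linalg.det(np.linalg.inv(info)) ** (1.0 / K)

priors = np.array([-0.2, 0.4])
# Two toy 2-task, 2-alternative designs; the second uses wider levels.
narrow = np.array([[[1.0, 0.0], [0.0, 1.0]],
                   [[1.0, 1.0], [0.0, 0.0]]])
wide = 2.0 * narrow

d_narrow, d_wide = mnl_d_error(narrow, priors), mnl_d_error(wide, priors)
# With these priors, the wider levels yield a lower (better) D-error.
```

The absolute values depend entirely on the design and priors chosen, which is exactly why a D-error of 0.30 cannot be called high or low in general.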

3) Yes, constants can be set according to observed market shares; that would be OK.

4) I am not sure what you mean by 100 units of utility. I refer to Bliemer and Collins (2016), which will hopefully give you some ideas about setting priors.

5) Your choice probabilities look fine to me.

6) Alternative dominance typically occurs in unlabelled choice experiments, but can also occur in labelled choice experiments if one of the attributes becomes dominant. This can be captured by setting an appropriate prior value that reflects the strength of this attribute.

7) If you could do a pilot study you would automatically obtain normally distributed Bayesian priors. Bayesian priors are also discussed in Bliemer and Collins (2016). I would prefer Bayesian priors over fixed priors in order to make the design more robust against prior misspecification.

8) In case there are only two attribute levels, the following three are the same:

+b4.dummy[0.15]*X4[0,1]
+b4.dummy[-0.15]*X4[1,0]
+b4[-0.15]*X4[1,0]

Note that the last level in Ngene corresponds to the reference level when dummy coding is applied.
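A quick way to convince oneself of this equivalence is to write out the utility each specification assigns to the two levels, using Ngene's rule that the last listed level is the dummy reference. A plain-Python sketch with the 0.15 prior from the example:

```python
# Utility assigned to attribute level x (0 or 1) under each specification,
# using Ngene's rule that the LAST listed level is the dummy reference.
def u_spec_a(x):            # +b4.dummy[0.15]*X4[0,1] -> prior on level 0
    return 0.15 if x == 0 else 0.0

def u_spec_b(x):            # +b4.dummy[-0.15]*X4[1,0] -> prior on level 1
    return -0.15 if x == 1 else 0.0

def u_spec_c(x):            # +b4[-0.15]*X4[1,0] -> linear in the level value
    return -0.15 * x

# The utility DIFFERENCE between the two levels is identical in all three,
# so choice probabilities are identical once a constant absorbs the shift.
diffs = [u(0) - u(1) for u in (u_spec_a, u_spec_b, u_spec_c)]
```

All three utility differences come out to 0.15, which is why the specifications are interchangeable (up to the constant).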

Greetings from Sydney where it is currently winter (although with 22 degrees today not a bad winter day)

Dear Professor Bliemer,

Thank you so much for your willingness to help and to let me learn from you. Please find below my thoughts and doubts about your answers. At the bottom of the email, four new inquiries are formulated and labelled with upper-case letters.

1) Thank you so much for both papers. I will have a look at them.

2) If the D-error is always case-specific and context-dependent, which seems logical, how can I assess the goodness of the efficiency?

3) I guess the market shares will be different once consumers have available and understandable information (through the choice experiment) about, for example, the production process of both categories of olive oil. Nonetheless, for the pre-test I think there is no better alternative.

4) I meant that, to fix the priors, I used 100 units of utility as a reference and divided them among the attributes depending on their importance in the utility function, under my own judgement. For example, for alternative A:

+b2(0.35) + b3(0.15) + b4(0.15) + b5(0.15*2 = 0.3) = 0.95 (more or less 1). The same procedure was applied for alternative B.

5) Good to hear that from you...

6) I meant: what can be done in a hypothetical situation where consumers choose massively, for example, alternative B regardless of its attributes? That is, a situation where the alternative-specific constant of that alternative would be so large that the trade-offs would become uninformative and irrelevant.

7) Again, professor, I did not explain myself adequately. I completely agree with you, but I meant that at the pilot stage I do not see the improvement of uniform distributions around the priors over fixed priors. In this regard, if in my code I had used uniform distributions in the following way:

Bayesian:
U(B) = b6[(u,0.4,0.8)] + b7[(u,0.1,0.3)]*X7[1,0] + b8[(u,0.2,0.4)]*X8[1,0] + b9[(u,0.1,0.2)]*X9[1,0] + b10[(u,-0.4,-0.2)]*X10[0,1,2,3]$

Fixed:
U(B) = b6[0.60] + b7.dummy[0.2]*X7[1,0] + b8.dummy[0.3]*X8[1,0] + b9.dummy[0.15]*X9[1,0] + b10[-0.3]*X10[0,1,2,3]$

I think the results would have been (more or less) the same as using fixed priors, since the mid-point of each uniform distribution is just the fixed prior in the second syntax.

I hope I am not abusing your trust and your willingness to help researchers. In this regard, I would like to ask four questions:

A) Once the priors have been obtained from the pilot study, how do you proceed to define the normally distributed Bayesian priors? I mean, what is your strategy for defining the uncertainty around them? Could you show me a simple example and the reasoning behind it?

B) Does it make any sense to use the balance property (the asterisk) in a labelled design? I personally do not see any advantage, but as I am used to employing the * (;alts = A*, B*, C) for unlabelled designs... I would like to know your opinion.

C) In the case of dummy coding with only two attribute levels, you only need to be aware that the last level is the reference one (so you can either invert the sign of the priors or reassign the levels 0 and 1). Nonetheless, in the case of more than two levels I see the reasoning in the same way, so one could use the following interchangeably:

+b3.dummy[0.15|0.10]*X3[0,1,2]
+b3.dummy[0.15|0.10]*X3[2,1,0]

as long as the levels are inverted when coding, such that 0 --> 2; 1 --> 1; 2 --> 0.

D) I would like to explore some interaction effects, but I do not want to complicate the design. Do you think it is too risky to rely on the correlations between parameters not being high when estimating those interaction effects?

Greetings from Córdoba (43 degrees today, so I would go for your winter...)

MAC

2) It is always hard to judge relative efficiency; the most informative measures are usually the S-estimates. However, the S-estimates only make sense if your priors are sufficiently close to the true values, so typically, if you use priors coming from a pilot study, the S-estimates are good to look at. In your case I am not sure how much value the S-estimates would have, and there is not really a good measure for judging relative efficiency. Ngene is usually very good at optimising your design, so I suppose you just have to trust it: based on the priors you have set, much more efficient designs likely do not exist, so this is really the best you can do.

4) Yes, such a procedure is fine; the total utility is not more than 1, so that is OK. I was afraid you were trying to sum the utilities to 100 "utils", which would be far too large.

6) If one of the labels is dominant, then there is not much you can do, unless you are able to reformulate the model as an unlabelled experiment in which you put the labels into the utility function as a dummy coded attribute, replacing the constants. In a labelled experiment you will always see choice tasks with exactly one option with label A and one with label B, but when it is converted into an unlabelled experiment and one of the labels is dominant, then choice tasks with label A versus label A, and label B versus label B, will also appear in the design, such that the respondent makes trade-offs on the other attributes.

7) They may be similar, but they may also be different. You can test the robustness of your design by saving your design with fixed priors, then evaluating it using ;alg = eval (i.e. letting Ngene calculate the D-error) under a different set of priors, and doing the same with the Bayesian efficient design. If both designs yield a similar D-error under prior misspecification, then the locally optimised design is sufficiently robust. However, in many cases you may find that the Bayesian efficient design exhibits lower D-errors under different priors, since it was optimised over a wider range. Again, this is case specific.

A) From estimating your model on pilot data you obtain parameter estimates (betas) and standard errors (se). Your se's indicate the unreliability around your parameter estimates, and are usually pretty large in case of a pilot study. Your Bayesian priors can then be defined as b1[(n,beta,se)].
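For what it's worth, this step can be sketched in a few lines: take each pilot estimate and its standard error, write the corresponding b[(n,beta,se)] specification, and draw from it to see the spread that a Bayesian design evaluation would average over. The parameter names and numbers below are hypothetical, purely for illustration:

```python
import random

random.seed(0)  # reproducible draws

# Hypothetical pilot estimates: parameter name -> (estimate, standard error).
pilot = {"b2": (-0.35, 0.20), "b3": (0.20, 0.12), "b4": (0.15, 0.10)}

# Ngene-style normal Bayesian prior specifications: bk[(n, beta, se)]
specs = [f"{name}[(n,{est},{se})]" for name, (est, se) in pilot.items()]

# Draws from these priors, as a Bayesian design evaluation averages over:
draws = {name: [random.gauss(est, se) for _ in range(1000)]
         for name, (est, se) in pilot.items()}
```

The larger the pilot standard errors, the wider the normal priors, and hence the more the Bayesian design hedges against misspecification.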

B) The asterisk checks for dominant alternatives and repeated choice alternatives, which is only necessary in unlabelled experiments. So there is no need to do this in your situation.

C) I believe this is correct.

D) Your design is still relatively simple, so I think you can include interactions. But if you leave them out, the risk of any two attributes being perfectly correlated, especially with 36 choice tasks, is almost zero. You can easily inspect the correlations in your design by clicking on Correlations when you inspect your design. As long as your attributes are not more than 90% correlated I would not worry. Note that correlations by themselves are not a problem; it is only almost perfect correlations that are problematic, as you will end up with multicollinearity.

As you can see, for the labelled alternative B I was able to specify three interaction effects, since all the individual parameters have a positive sign. For the labelled alternative A, I would like to specify another interaction effect, between X2 and X4. Nonetheless, this time the sign of the interaction effect is not that intuitive, as X2 has a negative parameter and X4 a positive one. I was therefore wondering whether it would be possible to reverse the sign, and accordingly the coding, to make the interaction effect more intuitive and easier to specify.

All the dummy variables were specified in such a way that level 0 reflects the current labelling information for that attribute and level 1 reflects an alternative labelling. For example, in the attribute +b2.dummy[-0.35]*X2[1,0], code 1 means an alternative labelling which will make it easier for the consumer to identify the oil as non-extra-virgin, and is therefore expected to help the consumer choose alternative B; that is why the sign is negative. So my doubt, in order to make the interaction effect easier to formulate, is whether I could do the following:

+b2.dummy[-0.35]*X2[1,0] = +b2.dummy[+0.35]*X2[0,1]

Regarding your suggestion, in the above response, about converting the design into an unlabelled one: it would be impossible due to legal constraints on the labelling. So I guess I will have to cope, if it happens, with the issue of respondents selecting the B-labelled alternative intensively regardless of its attributes.

1) It holds that b1[1] + b2.dummy[2]*x1[1,0] = b1[3] + b2.dummy[-2]*x1[0,1]. So instead of merely changing the sign of the dummy coded coefficient, the swap also influences the constant, since the constant is confounded with the reference level.
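Writing out the utilities of both specifications at the two levels confirms this numerically; a sketch using the example values above:

```python
# Ngene dummy rule: the LAST listed level is the reference level.
#   b1[1] + b2.dummy[2]*x1[1,0]   -> constant 1, prior 2 on level 1 (ref = 0)
#   b1[3] + b2.dummy[-2]*x1[0,1]  -> constant 3, prior -2 on level 0 (ref = 1)
def u_original(x1):
    return 1 + (2 if x1 == 1 else 0)

def u_flipped(x1):
    return 3 + (-2 if x1 == 0 else 0)

# Both specifications assign identical utilities at both levels,
# but only because the constant shifted from 1 to 3.
utilities = [(u_original(x), u_flipped(x)) for x in (0, 1)]
```

Flipping only the sign of the dummy prior, without adjusting the constant, would describe a different model.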

2) I do not understand the issue of needing positive parameters to create interaction effects. An interaction effect can have a negative or positive coefficient, independent of the coefficients of the main-effect attributes. For example, consider:

b1[-0.2]*price[5,10] + b2[0.8]*quality[1,2] + b3[0.1]*price*quality

In order to understand what impact the interaction price*quality has on behaviour, this equation can be rewritten as:

{ b1[-0.2] + b3[0.1]*quality } * price[5,10] + b2[0.8]*quality[1,2]

In other words, b3 is assumed to be positive because a higher quality can make people less price sensitive. You should interpret the interaction effects in this light, and not look at the signs of the coefficients of price and quality.
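The rewritten equation can be checked numerically; a sketch with the example priors above:

```python
# Marginal price sensitivity implied by the interaction:
#   U = b1*price + b2*quality + b3*price*quality
#     = (b1 + b3*quality)*price + b2*quality
b1, b2, b3 = -0.2, 0.8, 0.1   # priors from the example above

def price_sensitivity(quality):
    return b1 + b3 * quality

low = price_sensitivity(1)    # quality-1 buyers: more price sensitive
high = price_sensitivity(2)   # quality-2 buyers: less price sensitive
```

Here a positive b3 makes price sensitivity weaker (closer to zero) at the higher quality level, regardless of the individual signs of b1 and b2.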

Dear Professor Bliemer,

Thank you so much again for your help and your stimulating comments, which make me want to learn more about experimental design. After reading your paper (On determining priors for the generation of efficient stated choice experimental designs) about how to set priors for the pilot stage, I was not able to transfer your procedure for an unlabelled design to my labelled one. In fact, I am a little concerned about my previous reasoning for setting priors on my own. The way I set the priors was by dividing one unit of utility among the attributes of each alternative (A, B) according to my own criterion about the relative importance of the attributes in the consumer utility function (the priors for the ASCs represent the actual market shares). Nonetheless, my concern relates to the fact that I assigned these priors independently for each alternative A and B, in this way:

The idea is that the sum of the utilities in absolute value is around 1, according to the expected relative importance of each attribute in the utility function of alternative A. Thus:

+b2(|-0.35|) + b3(0.20) + b4(0.15) + b5(|-0.15*2| = 0.3) = 1

The same reasoning for alternative B:

+b7(0.15) + b8(0.35) + b9(0.10) + b10(|-0.2*2| = 0.4) = 1

But now, after reading your paper, I am not sure I proceeded correctly, since maybe it would be better to assign the priors by comparing the relative importance of each attribute of alternative A against the attributes of alternative B, even considering the ASCs simultaneously, and then taking some average. Or, for example, by dividing the unit of utility among the whole set of attributes and the ASCs of both alternatives. I would like to know your opinion, if possible.

On the other hand, regarding the above-mentioned paper, which I find very stimulating, I have a doubt. In Table 3 (page 4) the researcher has to guess the expected probabilities (fsj) for the selected choice sets, but I would like to know how you estimated the adjacent column Pjs. For example, in choice set 4 the fsj for alternative 1 is 0.6 while Pjs is 0.296, and the fsj for alternative 2 is 0.4 while Pjs is 0.704.

Your idea of looking at the relative importance of attributes is a good way of thinking about priors, so setting your priors in the way you describe could be appropriate. The main thing you may want to check is the scale, i.e. scaling all priors up or down by a certain value. Given that you have normalised the total utility to one, the scale in your case should be OK, I think.

I believe the 0.6 and 0.4 in Table 3 should be swapped around, I believe this is a mistake in the table.
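For completeness, the Pjs column in such a table is just the logit probability implied by the alternatives' utilities under the priors. A sketch of that calculation (the utilities below are illustrative, not those behind Table 3):

```python
import math

def mnl_probs(utilities):
    """Logit (MNL) choice probabilities implied by a set of utilities."""
    ev = [math.exp(v) for v in utilities]
    total = sum(ev)
    return [e / total for e in ev]

# Illustrative utilities for two products and an opt-out alternative:
probs = mnl_probs([0.3, -0.2, 0.0])
```

Comparing these model-implied probabilities with the analyst's guessed probabilities is exactly the kind of check the paper's procedure relies on.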

Dear Professor Bliemer and NGENE users,

I am almost about to launch the research to collect the pilot sample and estimate the priors with a better degree of reliability. Nonetheless, I would like to know whether you identify anything erroneous in the syntax. On the other hand, as I will probably use a random parameter logit model and may have to estimate around 14 mean parameters and maybe another 14 standard deviations, I have decided to run 36 rows. Nonetheless, due to logistical issues it would be preferable to reduce this number, as long as the reliability of the estimations is not at risk (I do not see a way to do this while keeping enough degrees of freedom).

Finally, and most importantly, I would like to introduce two treatments in the choice experiment (and maybe another two):

1. Choice experiment, non-incentive compatible
2. Choice experiment, incentive compatible
3. …
4. …

So, let's consider that we have two or four treatments. How would you operate if you were not able to get treatment-specific priors due to logistical issues (it being very complicated to manage real products with different experimental designs)? Would it be suitable (or too risky) to implement all the treatments and pool all the choices to obtain averaged priors in some way?

Thanks in advance. Happy new year to everyone!

MAC

Note that you have 3 alternatives (with C the no choice alternative), which means that with 36 rows you can estimate 36*(3-1) = 72 parameters. Therefore, you could also use 18 rows. While this is enough to satisfy the degrees of freedom, you will likely want a bit more variation in your data and therefore perhaps stick with 36 rows.
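The degrees-of-freedom rule used above, S*(J-1) >= number of parameters, can be sketched as:

```python
import math

def min_rows(num_params, num_alternatives):
    """Smallest number of choice tasks S with S*(J-1) >= num_params."""
    return math.ceil(num_params / (num_alternatives - 1))

# 14 means plus 14 standard deviations, 3 alternatives (incl. the no-choice):
rows_needed = min_rows(14 + 14, 3)
# 36 rows give 36*(3-1) = 72 parameters' worth of degrees of freedom.
```

As noted above, this is only a lower bound; more rows give more variation in the data and more reliable estimates.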

I assume that with treatments you are referring to different scenarios in which you change the context of the choice experiment. You can directly include scenario variables into the utility functions and estimate a single joint model. For example, if one of the scenarios is that you show people an actual product, then you can include variable "showproduct" as a dummy variable and create interaction effects with it in the utility functions. Ngene will then also optimise on this variable and create choice tasks in which the product is shown and choice tasks where it is not shown.

Setting priors to average values in case priors for some scenarios are not available sounds fine to me. If you are unsure, then it is best to use conservative values, i.e. closer to zero, by for example dividing all priors by two.