Among the topics, there are some I like and some I don't. The presentation slides will eventually be posted on the conference's website; in the meantime, I would like to offer a sentence or two on each topic.

“Sample size estimation incorporating disease progression” – the key issue is the adequacy of the study endpoint. A good endpoint will incorporate the impact of disease progression.

“Predicting accrual in ongoing trials” – utilizing a complicated statistical model to predict accrual is a waste of time. Accrual in ongoing clinical trials is 95% a clinical operations issue and only 5% a statistical one. Is it worth modeling the accrual at all?
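For context, much of what passes for accrual "modeling" amounts to little more than extrapolating the observed enrollment rate as a homogeneous Poisson process. A minimal sketch, with entirely hypothetical enrollment numbers and an assumed constant rate:

```python
import random

# Hypothetical figures, for illustration only: 120 subjects enrolled in the
# first 200 days, with a target of 300. Treat enrollment as a homogeneous
# Poisson process with rate estimated from accrual to date.
random.seed(7)
enrolled_so_far = 120
days_elapsed = 200
target = 300
rate = enrolled_so_far / days_elapsed  # subjects per day, assumed constant

# Simulate the remaining accrual to get a distribution of completion times.
extra_days = []
for _ in range(5000):
    days, n = 0.0, enrolled_so_far
    while n < target:
        # exponential inter-arrival times under the Poisson assumption
        days += random.expovariate(rate)
        n += 1
    extra_days.append(days)

extra_days.sort()
median_days = extra_days[len(extra_days) // 2]
print(round(median_days))  # close to (target - enrolled_so_far) / rate
```

The median simulated completion time lands near the naive back-of-the-envelope answer, (300 − 120) / 0.6 = 300 additional days, which rather supports the point: the statistics here is simple, and the real uncertainty lies in site activation, screening, and other operational factors the model does not see.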

“New incentive approaches for adherence” – monetary incentives, including lotteries, are a sensitive topic, and ethical issues can follow whether the incentive is for adherence or for study visit compliance. The effect of a monetary incentive also differs by participants' socioeconomic status (family income): a $100 lottery may be a strong incentive to some, but not to others.

“Efficient source data verification in cancer trials” – I had always thought that all data fields had to be 100% source-data verified. That is not entirely true in large-scale oncology trials or in studies with cardiovascular endpoints. In industry, we are rather conservative.

“Estimation of effect size in trials stopped early” – stopping a trial early for efficacy is not very common and should not be encouraged. When a trial is stopped early, estimating the effect size remains difficult: the naive estimate tends to overstate the true effect, because the trial only stops when the interim data happen to look good.
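The selection bias is easy to demonstrate by simulation. A minimal sketch (the design parameters are made up for illustration): a two-arm trial with true effect 0.3, one interim look at 100 subjects per arm, stopping for efficacy when the interim z-statistic crosses a conservative boundary of 2.8. Among the trials that stop early, the naive effect estimate averages well above the true 0.3.

```python
import random
from statistics import mean

# Illustrative parameters, not from any actual trial.
random.seed(1)
delta, sigma = 0.3, 1.0   # true treatment effect and (known) SD
n_half = 100              # per-arm sample size at the interim look
boundary = 2.8            # conservative early-stopping z boundary

early_estimates = []
for _ in range(10000):
    trt = [random.gauss(delta, sigma) for _ in range(n_half)]
    ctl = [random.gauss(0.0, sigma) for _ in range(n_half)]
    diff = mean(trt) - mean(ctl)
    se = sigma * (2 / n_half) ** 0.5
    if diff / se > boundary:          # trial stops early "for efficacy"
        early_estimates.append(diff)  # record the naive estimate

# The conditional mean, given early stopping, exceeds the true effect of 0.3.
print(round(mean(early_estimates), 2))
```

Only trials whose interim difference exceeds boundary × SE ≈ 0.40 stop, so the recorded estimates are drawn from the upper tail of the sampling distribution; this is exactly the difficulty the talk refers to, and it is why bias-adjusted estimators are proposed for group-sequential designs.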

“Accounting in analysis for data errors discovered through sampling” – unreliable data or a large percentage of missing data is always a concern, even for observational studies. A statistical approach may not be a good option: when the data are garbage, the results we draw from them will also be garbage – the so-called 'garbage in, garbage out' – no matter which statistical model is used to address the data issues.

“Some practical issues in the evaluation of multiple endpoints” – it is quite right that we should play down the importance of differentiating 'primary endpoint', 'secondary endpoint', 'tertiary endpoint', and so on. Multiplicity adjustment has expanded so much and is everywhere now (co-primary endpoints, primary and secondary endpoints, co-secondary endpoints, a secondary superiority test after a non-inferiority test, interim analyses, meta-analyses, ISE,…). Are we overdoing this?

1 comment:

Why do people talk about estimating sample size? The sample size is not an unknown quantity, like the mean treatment effect, but instead a DECISION that one has to make. The word "estimate" is completely out of place.
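The commenter's point can be made concrete: the sample size is computed deterministically from design choices, not estimated from data. A minimal sketch using the standard two-sample z-test formula, where every input (significance level, power, assumed effect, assumed SD) is something the trialist decides or assumes:

```python
import math
from statistics import NormalDist

def two_sample_n(delta, sigma, alpha=0.05, power=0.9):
    """Per-arm n for a two-sided two-sample z-test.

    Every argument is a design choice or an assumption, not data:
    delta is the effect deemed worth detecting, sigma the assumed SD,
    alpha the chosen significance level, power the chosen power.
    """
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) * sigma / delta) ** 2
    return math.ceil(n)

# Same inputs always give the same answer: nothing is being "estimated".
print(two_sample_n(delta=0.5, sigma=1.0))  # 85 per arm
```

Change any input and the required n changes accordingly; the calculation is a decision rule applied to assumptions, which is why "determining" or "choosing" the sample size is the better phrase.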