Thank you very much for organizing CASP8; you are doing a great service to the community! I think CASP has an important role to play in fostering research in structure prediction. This is why I would like to throw a few suggestions into the discussion that might help make the conference even more effective at promoting progress in our field.

* Poster talks: The posters and the discussions around them are, in my view, by far the most valuable part of CASP. In contrast, the presentations and round tables added comparatively little. There are many more promising and inspiring ideas around than the talks and round-table discussions would lead one to believe. My suggestion: by the end of the second day, every attendee could vote for up to five posters to be presented. The ten posters with the most votes would then be presented on the fourth day of the conference in 20+5-minute talks. The time could be gained by shortening the round-table discussions and assessor talks. I am fairly confident this session would be the most valuable of the conference: two hundred attendees are better at spotting the most interesting or original work, on the basis of their poster discussions, than the organizers or assessors can be by reading half-page abstracts under the usual time pressure. This would have the additional advantage of encouraging even more high-quality posters.

* Poster discussion appointments: A suggestion to make poster-session discussions more effective: you could put a form on the web, a timetable divided into 15-minute slots covering all poster-session times (e.g. daily 17:00-18:30 and 21:30-23:00). Poster presenters would mark the times at which they will, in principle, be available for discussion each day, and pin this timetable next to their poster. Anyone interested in discussing a poster could then put down their name next to the time at which they would prefer to meet the presenter.

* Assessment criteria: I agree with Michael Sternberg that, in order to draw more inspiration from the CASP conference and to acknowledge contributions fairly, assessors and organizers should adopt different criteria for selecting speakers. Although it is generally accepted that automatic methods, rather than human input, have become THE driving force for progress, CASP organizers and assessors still use the same criterion as in the first CASPs, namely performance in *anonymous* evaluations. This puts automatic methods at a strong disadvantage: humans have plenty of time and access to all server models, a situation that can hardly be called realistic and that requires little or no human expertise to beat the servers (e.g. by consensus and MQA methods, plus perhaps some token expert fiddling). Regarding the assessment itself, I was very happy with the straightforward, simple Livebench-style assessment used in CAFASP6, where five or so alternative quality measures could be selected on the web page to rank the servers.

* ROC analysis: What became of the CASP6 discussions about evaluating servers' selectivity, or precision, rather than just their sensitivity? In other words, how well do a server's quality scores tell the user how reliable the model is? I liked the ROC analyses in Livebench and CAFASP a lot, and I think they are sorely missing from the CASP evaluations. I would also very much welcome a serious assessment of the models' B-values next time.
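To make the precision-versus-sensitivity point concrete, here is a minimal sketch (with invented toy data and an arbitrary confidence threshold, not an actual CASP evaluation protocol) of how one could check whether a server's self-reported confidence scores actually predict model quality:

```python
# Toy illustration: does a server's confidence score predict model quality?
# Precision tells the user how trustworthy a high-confidence model is;
# recall (sensitivity) alone does not.

def precision_recall(scores, is_good, threshold):
    """Treat models with confidence >= threshold as 'predicted good'."""
    tp = sum(1 for s, g in zip(scores, is_good) if s >= threshold and g)
    fp = sum(1 for s, g in zip(scores, is_good) if s >= threshold and not g)
    fn = sum(1 for s, g in zip(scores, is_good) if s < threshold and g)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical server output: confidence scores and whether each model
# actually turned out to be good (e.g. by some GDT cutoff).
scores  = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3]
is_good = [True, True, False, True, False, False]

p, r = precision_recall(scores, is_good, threshold=0.65)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.67, recall=0.67
```

Sweeping the threshold over all score values yields the full ROC or precision-recall curve, which is exactly the kind of evaluation Livebench and CAFASP provided.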

* Ranking: The availability of official server rankings from CAFASP and Livebench was very valuable to users and server developers. Please also make the rankings distributed in hard copy at the meeting available on the web.

* Engineering versus grand ideas: I do not agree with John Moult that we need grand ideas to advance. Lesser ideas are sometimes necessary to prepare the ground for grander ones. The important point is: we need new ideas, full stop. For instance, if someone made Rosetta-type sampling ten times faster, the lucky inventor might still not be able to sample more on his few machines than David Baker can on Folding@home, and consequently might not generate better models. But he would enable dozens of other groups to work on similar approaches, significantly increasing the chances that a grand (enough) idea is born.