
ASC 40: Reflections

Well, I have blogged about the results of the American Solar Challenge, and produced this summary chart (click to zoom):

I would like to supplement that with some general reflections (as I did in 2016). First, let me compliment the ASC organisers on the choice of route. It was beautiful, sunny, and challenging (but not too challenging). Brilliant planning!

Second, the FSGP/ASC combination worked well, as it always does. Teams inevitably arrive at the track with unfinished and untested cars (App State had never even turned their car on, I am told). The FSGP allows for testing of cars in a controlled environment, and provides some driver training before teams actually hit the road. The “supplemental solar collectors” worked well too, I thought. I was also pleased at the way that teams (especially the three Canadian teams) had improved since 2016.

If one looks at my race chart at the top of this post, one can see that the Challenger class race was essentially decided on penalties. This has become true for the WSC as well. It seems that inherent limits are being approached. If experienced world-class teams each race a world-class car, and have no serious bad luck, then they will be very close in timing, and penalties will tip the balance. For that reason, I would like to see more transparency on penalties in all solar racing events.

I was a little disappointed by the GPS tracker for ASC this year. It was apparently known not to work (it was the same system that had failed in Nebraska in 2016), but people were constantly encouraged to follow teams with it anyway. It would almost have been better to have had no tracker at all, instead just encouraging teams to tweet their location regularly.

Cruiser Scoring

I thought Cruiser scoring for ASC 2018 was less than ideal. A great strength of the ASC Challenger class is that even weak teams are sensibly ranked. This was not entirely true for the Cruisers. I would suggest the following Cruiser scoring process:

Divide person-miles (there’s no point using person-kilometres if everything else is in miles) by external energy input, as in existing scoring

Multiply by practicality, as in WSC 2019 scoring (for this purpose, it is a good thing that practicality scores are similar to each other)

Have a target time for Cruiser arrival (53 hours was good) but no low-speed time limit – instead, calculate a lateness ΔH (in hours) compared to the target

Convert missing distance to additional lateness as if it had been driven at a specified penalty speed, but with no person-mile credit (the ASC seems actually to have done something like this, with a penalty speed around 55 km/h)

Multiply the score by the exponential-decay term e^(−ΔH/F), where F is a time factor, measured in hours (thus giving a derivative at the target time of −1/F)

Scale all scores to a maximum of 1
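The six steps above can be sketched in a few lines of Python. This is only an illustration of the suggested process: the function names, default parameter values, and the team figures in the example are my own assumptions, not official ASC data (the default penalty speed of 34 mph approximates the 55 km/h mentioned below).

```python
import math

def cruiser_score(person_miles, external_energy_kwh, practicality,
                  finish_hours, missed_miles=0.0, target_hours=53.0,
                  penalty_speed_mph=34.0, time_factor=6.0):
    """Raw score for one Cruiser under the suggested process (before scaling)."""
    # Steps 1-2: person-miles per unit of external energy, times practicality.
    score = (person_miles / external_energy_kwh) * practicality
    # Step 3: lateness (in hours) relative to the target time; no low-speed limit.
    lateness = max(0.0, finish_hours - target_hours)
    # Step 4: missed distance becomes extra lateness at the penalty speed,
    # with no person-mile credit for that distance.
    lateness += missed_miles / penalty_speed_mph
    # Step 5: exponential-decay lateness term e^(-dH / F).
    return score * math.exp(-lateness / time_factor)

def scale_scores(raw):
    """Step 6: scale all scores so the best car gets exactly 1."""
    top = max(raw.values())
    return {team: s / top for team, s in raw.items()}

# Illustrative example with made-up figures for two hypothetical teams.
raw = {"Team A": cruiser_score(1000, 50, 0.9, 53.0),
       "Team B": cruiser_score(800, 50, 0.9, 60.0)}
print(scale_scores(raw))
```

Note that the decay term gives lateness a smooth, tunable cost: a car that finishes one full time factor F behind the target loses a factor of e ≈ 2.72, rather than being cut off abruptly by a time limit.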

The chart below applies this suggested process to the ASC 2018 Cruisers, for various choices of penalty speed and time factor F, drawing a small bar chart for each choice. Sensible choices (with a grey background) give each car a score of at least 0.001. It is interesting that all sensible choices rank the cars in the sequence Onda Solare, Minnesota, App State, and Waterloo.

Applied to the WSC 2015 finishers (with a target of 35 hours), penalty speed is obviously irrelevant. A time factor of F = 10 preserves the rankings awarded in that event, while higher time factors would have put Bochum in second place. In that regard, note that regulation 4.4.7 for WSC 2019 is equivalent to a very tough time factor of around 1.66 hours.

Of course, another option would be to return to the additive scoring systems of WSC 2013 and WSC 2015, and this has been suggested.

Strategy

I have posted about basic Challenger strategy. This race illustrated the fact that Cruiser strategy can be more complex. First, it is inherently multi-objective. Teams must carry passengers, drive fast, and conserve energy. Those three things are not entirely compatible.

Second, even more than in the Challenger class, the Cruiser class involves decision-making under uncertainty. In this event, teams could build up a points buffer early on (by running fully loaded without recharging, planning on speeding up later if needed). Alternatively, and more conservatively, teams could build up a time buffer early on (by running fast and recharging, in case something should go wrong down the track). Both Minnesota and Onda chose to do the former (and, as it happened, something did go wrong for Minnesota). In the Challenger class it is primarily weather uncertainty that requires similar choices (that was not a factor in this wonderfully sunny event).

Third, even more than in the Challenger class, psychological elements come into play. Onda were, I think, under some pressure not to recharge as a result of Minnesota not recharging. In hindsight, under the scoring system used, Onda could have increased their efficiency score by recharging once, as long as that recharge made them faster by at least 3 hours and 36 minutes (not that it mattered in the end, since all teams but Onda were given a zero efficiency score).

Together, factors such as these underscore the need to have a good operations analyst on the team, especially in the Cruiser class.

Thanks Tony once again for your excellent coverage of this event.
The participation of more overseas teams made it a race to remember, and I hope that other teams from outside the USA will take notice and attend in 2020. I also hope that this will encourage the non-WSC teams to apply themselves to improving their cars; the gulf between WSC and non-WSC teams is all too apparent.
Regarding Cruiser Class scoring – this is the fourth major event to date and, as yet, a satisfactory scoring system has not been found. My discontent stems from the fact that the results do not give a true reflection of the events or the achievements of the teams. In each case it is probably true that the correct winner has been called, certainly here at ASC, but beyond that, how could an onlooker understand the achievements of the remaining teams? This scoring formula has given the impression that the Italian car is five times as good as the car from Minnesota. It also suggests that there is nothing to choose between UMN and the other two teams. In each case nothing could be further from the truth. The superb achievement of ALL of the teams cannot be appreciated without looking in detail at how the race unfolded over the week.
My opinion is that the use of multiplication rather than addition is the factor that distorts results so much. When combined with severe timing penalties, and the rather crude measurement of person-kilometres, the differentials in the scores becomes frankly ludicrous. I foresee no improvement at WSC next year with the formula they are using; in fact it would not surprise me at all if every team scored badly there, i.e. under 20%.
Rant over, it is most important to thank the organizers for a great event – two great events in fact – and congratulate the teams on their efforts, not just this week but over the past months and years. Well done all.

Thanks for those kind words, Nigel. You make a valid point when you say “the differentials in the scores becomes frankly ludicrous.” For purely multiplicative scoring systems, it’s really the logarithms of the final scores that are the important thing (because the final scores are effectively the result of taking the exponential of a sum). So for one of my suggested ASC scoring options, which yields the scores [1, 0.1108, 0.0267, 0.0047], taking logarithms and adding 7 gives [7, 4.80, 3.38, 1.64], which may seem more appropriately balanced. I have already decided that any attempt to visualise WSC 2019 results will need to be logarithmic in nature, precisely so as to give a more balanced view of performance.
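The logarithmic rescaling described above is a one-liner; here is a minimal sketch, using natural logarithms and the four scores quoted in the reply:

```python
import math

# The four scores from one suggested ASC scoring option (quoted above).
scores = [1, 0.1108, 0.0267, 0.0047]

# Natural log, shifted by +7 so the smallest score maps to a positive value.
log_scores = [round(math.log(s) + 7, 2) for s in scores]
print(log_scores)  # → [7.0, 4.8, 3.38, 1.64]
```

Any monotonic transform like this preserves the ranking; it only compresses the "frankly ludicrous" differentials into a more readable range.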

Thanks Tony for your complete and exciting coverage of the ASC. You bring the people together! Solar racing through remote areas needs your detailed view.
I agree, the race has been well organized. The two-day stages are a very good idea to keep the crowd together. Solar racing has a message to tell. It is “Solar Impulse” on the ground. Therefore, I hope for better media coverage. The teams invest millions of dollars in high-tech solar cars, yet it is not possible to install a working GPS system?
Also, thank you very much for starting a discussion on the rules. In my opinion, the purpose of scoring should be to bring information (beyond the cars' geographic positions) to the rest of the world, in a way that is easy to understand and timely. Imagine a soccer game where the commentator loses track of the game and does not know the actual score, due to an opaque rule book. Penalties for unknown reasons, which might only emerge after the finish of the game/race, are a complete annoyance. Since I like the idea of an efficiency race in the Cruiser class, logarithms seem indispensable to me for scoring. Best would be to keep them as comprehensible as possible.
Keep it up!
Dietrich

My goal is to communicate, using pictures and diagrams, the flow of events in races like ASC and WSC. For complex scoring this is difficult, which is why for major yacht races the bulk of media coverage focuses on the first across the line, not the actual winner.

And, yes, the staging is essential at ASC, where CalSol, for example, was 16 hours behind the leader. In remote Australia you can have cars scattered across half a continent. This is not such a good idea in the USA.