I've tried several runs on this question. However, I still get the lowest cross-validation error for C = 0.1 rather than C = 0.001, although the two errors are quite close. I did 100 runs for each of the C's and then averaged the errors.
I'd like to know what everyone else has done on this question and whether they got results similar to mine. Would it make a difference if I compared the errors after one run with each of the C's and then selected the C that wins most often?
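To make the two selection strategies concrete, here is a minimal sketch comparing them on synthetic data. The per-C error values and the noise level are made up for illustration (they are not the actual homework numbers); the point is only the mechanics of "average the errors, then pick the minimum" versus "pick a winner per run, then take the most frequent winner":

```python
import numpy as np

rng = np.random.default_rng(0)
Cs = [0.001, 0.01, 0.1, 1.0]
# Hypothetical true CV errors; 0.001 and 0.1 are deliberately close,
# mimicking the situation described above.
true_err = {0.001: 0.300, 0.01: 0.310, 0.1: 0.302, 1.0: 0.330}

n_runs = 100
# Simulated noisy CV error for each run and each C, shape (n_runs, len(Cs)).
errors = np.array([[true_err[C] + rng.normal(0, 0.02) for C in Cs]
                   for _ in range(n_runs)])

# Strategy 1: average the CV error over all runs, pick the C with lowest mean.
mean_err = errors.mean(axis=0)
best_by_mean = Cs[int(np.argmin(mean_err))]

# Strategy 2: per run, record the winning C; pick the most frequent winner.
winners = np.argmin(errors, axis=1)
counts = np.bincount(winners, minlength=len(Cs))
best_by_vote = Cs[int(np.argmax(counts))]

print("best by mean error:", best_by_mean)
print("best by vote count:", best_by_vote)
```

When two C's have nearly identical true errors, both strategies can flip between them from seed to seed, which is consistent with different people reporting different winners.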

The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.