Of these, 3 and 16 are the same problem, and 15 is close enough; so are 11 and 14. 9, 10, 12, and 13 are not real problems. 2 is not an existential risk. 11, 14, and 17 are not existential problems in themselves, although they could limit our ability to deal with a real existential problem if one arose.

6 is not likely, and the only way to prevent it is to deliberately impose 11/14, which, while not an existential risk itself, would increase the difficulty of handling an existential (or other) danger that may eventually occur.

7 and 8 are so unlikely within any given time span that they are not worth worrying about until the other dangers can be handled.

I used to think 1 was the most likely and 5 next, but Eliezer Yudkowsky's writings have convinced me that unfriendly AI (3/15/16) is a nearer-term risk, even if not necessarily a worse one.

Libertarianism is the best available self-preservation mechanism; I am using the term in the general sense of freedom from government interference. It is the social and memetic equivalent of genetic behavioral dispersion: members of many species behave slightly differently, which reduces the likelihood of a large percentage falling to the same cause. The only possible defense against the real risks is to have many people researching them from many different directions - the biggest danger from any of these arises only if someone gains a substantial lead in the development or implementation of the technologies involved.