As I understand it, the sensor was designed so it ‘couldn’t’ be installed upside-down. But there are few things that can’t be installed upside-down with a large enough hammer and an even larger dose of stupidity.

Any company operating under ISO-9000 would have a verification process that would have caught the upside-down sensor error. Triply-redundant sensors would eliminate single-sensor points of failure (the shuttle had quintuply-redundant computers). Roscosmos isn’t operating even at an ISO-9000 level, much less the milspec-or-better regime that American space companies have to use to enter the market.

Having single points of failure like that would preclude man-rating any commercial spacecraft or LV. NASA also requires extensive testing in space before a vehicle can carry crew, including an unmanned full-up mission and an in-flight abort test. Therefore, in the interests of safety, NASA has no choice but to continue using Soyuz while all the red tape for commercial crew is dealt with. The fact that using Dragon 2 now would be safer than Soyuz (have the Russians figured out how a hole got drilled in the Soyuz hull a few months back?) is of course irrelevant to NASA.

One part of the article bothers me immensely:
“It is inconceivable that the more cautious US space agency would suffer a total rocket failure in October and fly humans on that same rocket less than two months later.”

That, to me, is horrendous as well as true. And it illustrates much that is wrong with NASA.

In a case where a US launch suffered even a fatal failure, if the cause is clear and the fix straightforward and fast, there is no rational reason not to fly again so soon. Indeed, needless delay and dithering probably increases risk.

It’s a matter of bureaucratic mindset. Back in 1985–86, the US had a string of launch failures, including a Titan 34D, Shuttle Challenger, a Delta, and then another Titan 34D. The Shuttle program was grounded for over two years; the other rockets were grounded for several months.

When SpaceX was trying to achieve orbit with their Falcon 1, they suffered three failures in a row, each with a different cause. Because the problem on the third launch had a very simple fix, they quickly flew their fourth attempt and succeeded. They didn’t need months of bureaucratic review to find and fix the problem.

Back around 1986–87, I read a classified report on Soviet space activity. This was back when they were launching about 60 flights per year, with about half of them on Soyuz/Molniya boosters. The report mentioned a rare Soyuz launch failure. They took the next Soyuz and the next payload of that type and launched the replacement within two weeks of the failure. That’s a robust space program.

The really ironic part is that NASA knows more about Dragon/Falcon 9 and whatever the Boeing thing is called/Atlas V than they do about Soyuz – both the spacecraft and the launch vehicle. A few years ago, I attended a panel in the Senate Conference Center chaired by Frank Culbertson, and attended by a number of NASA astronauts and safety officials, to discuss Commercial Crew. They admitted that the Russians would not allow NASA to examine any design information, build or QC documents, test data, or pretty much anything else on Soyuz and its launcher. But they “became comfortable” with it, somehow. Jim Oberg echoed that to me once, noting that the Russians were so startled at what we had deduced about Soyuz that they thought we had spies within Roscosmos. And maybe we do. Maybe that’s the only way NASA was comfortable sending astronauts up on Soyuz.

But they have even more intrusive spies in SpaceX and Boeing. NASA has to know far more about those two craft than they can ever know about Soyuz. I think their reticence about flying is less about what they know than their lack of appreciation of the limits of reason. Adam Smith warned about the “man of system” who thinks he knows enough about an economy to direct it, when in fact no one can possibly know enough about an economy to even forecast it day to day. Every organization that sports “system safety” is deluding itself mightily. They will expend vast resources, slow progress to a glacial pace, and in the end have no positive effect on safety whatsoever.

ISO 900… would have absolutely nothing to say about a single-sensor point of failure, or even one installed upside down, as long as all was properly documented in a report with all the required headings and sub-headings. As long as a corrective action for the upside-down sensor was recorded, it wouldn’t matter if it was scheduled for three days after launch; it would only become a problem if the next audit discovered that it hadn’t been properly closed out after the crash.