
There is a population of retrocomputing enthusiasts out there, whose basements, garages, and attics have been taken over by machines of years past. Most of the time, these people concentrate on one make; you’re an Apple guy, or you’re a Commodore guy, or you’re a Ford guy, or you’re a Chevy guy. The weirdos drive around with an MSX in the trunk of an RX7. This is the auction for nobody. NASA’s Jet Propulsion Lab is getting rid of several tons of computer equipment, all from various manufacturers, and not very ‘vintage’ at all. Check out the list. There are CRT monitors from 2003, which means they’re great monitors that weigh as much as a person. There’s a lot of Sun equipment. If you’ve ever felt like cleaning up a whole bunch of trash for JPL, this is your chance. Grab me one of those sweet CRTs, though.

Last week, we published something on the ‘impossible’ tech behind SpaceX’s new engine. It was reasonably popular — actually significantly popular — and got picked up on Hacker News and one of the Elon-worshiping subreddits. Open that link in one tab. Now, open this link in another. Read along as a computer voice reads Hackaday words, all while soaking up YouTube ad revenue. What is our recourse? Does this constitute copyright infringement? I dunno; we don’t monetize videos on YouTube. Thanks to [MSeifert] for finding this.

Wanna see something funny? Check out the people in the comments below who are angry at a random YouTuber stealing Hackaday content, while they have an ad blocker on.

Teenage Engineering’s OP-1 is back in production. What is it and why does it matter? The OP-1 is a new class of synthesizer and sampler that kinda, sorta looks like an 80s Casio keyboard, but packed to the gills with audio capability. At one point, you could pick one of these up for $800. Now, prices are at about $1300, simply because production stopped for a while (for retooling, we’re guessing) and the rumor mill started spinning. The OP-1 is now back in production with a price tag of $1300. Wait. What? Yes, it’s another case study in marketing: the best way to find where the supply and demand curves cross is to stop production for a while, wait for the used resellers to do their thing, and then start production again with a new price tag that people are willing to pay. This is Galaxy Brain-level business management, people.

What made nerds angry this week? Before we get to that, we’re gonna have to backtrack a bit. In 2016, Motherboard published a piece that said PC Gaming Is Still Way Too Hard, because you have to build a PC. Those of us in the know realize that building a PC is as simple as buying parts and snapping them together like an expensive Lego set. It’s no big deal. A tech blog, named Motherboard, said building a PC was too hard. It isn’t even a crack at the author of the piece at this point: this is editorial decay.

And here we are today. This week, the Internet reacted to a video from The Verge on how to build a PC. The original video has been taken down, but the reaction videos are still up: here’s a good one, and here’s another. Now, there’s a lot wrong with the Verge video. They suggest using a Swiss Army knife for the assembly, hopefully one with a Phillips-head screwdriver. Phillips-head screwdrivers still exist, by the way. Dual-channel RAM was completely ignored, and way too much thermal compound was applied to the CPU. The cable management was a complete joke. Basically, a dozen people at The Verge don’t know how to build a PC. Are the criticisms of incompetence fair? Is this like saying [Doug DeMuro]’s car reviews are invalid because he can’t build a transmission or engine, from scratch, starting from a block of steel? Ehhh… we’re pretty sure [Doug] can change his own oil, at least. And he knows to use a screwdriver instead of a Swiss Army knife with a Phillips head. In any event, here’s how you build a PC.

Hackaday writers to be replaced with AI. Thank you [Tegwyn] for the headline. OpenAI, a Musk- and Thiel-backed startup, is pitching a machine learning application aimed at replacing journalists. There’s a lot to unpack here, but first off: this already exists. There are companies that sell articles to outlets, and these articles are produced by ‘AI’. These articles are mostly in the sports pages. Sports recaps are a great application for ML and natural language processing; the raw data (the sports scores) is already classified, and you’re not looking for Pulitzer material in the sports pages anyway. China has AI news anchors, but Japan has Miku and artificial pop stars. Is this the beginning of the end of journalism as a profession, with all the work being taken over by machine learning algorithms? By vocation, I’m obligated to say no, but I have a different take on it. Humans can write better than AI, and the good ones are nearly as fast. Whether or not readers care if a story is accurate or well-written is another matter entirely. Market forces will determine if AI journalists take over, and if you haven’t been paying attention, no one cares if a news story is accurate or well written, only if it caters to their preexisting biases and tickles their confirmation bias.
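To see why classified score data makes sports recaps such low-hanging fruit, here’s a toy sketch of template-based recap generation; the team names and the field layout of the box score are our own invention, not any vendor’s actual format:

```python
# A minimal data-to-text generator: structured scores in, prose out.
# This is a sketch of the general technique, not any company's product.
def recap(game):
    """Turn a structured box score into a one-sentence recap."""
    home, away = game["home"], game["away"]
    hs, aws = game["home_score"], game["away_score"]
    if hs == aws:
        return f"{home} and {away} played to a {hs}-{aws} draw."
    winner, loser = (home, away) if hs > aws else (away, home)
    hi, lo = max(hs, aws), min(hs, aws)
    # Vary the verb a little based on the margin of victory
    verb = "edged" if hi - lo == 1 else "beat"
    return f"{winner} {verb} {loser} {hi}-{lo}."

print(recap({"home": "Rockets", "away": "Comets",
             "home_score": 3, "away_score": 2}))
# → Rockets edged Comets 3-2.
```

Real systems layer statistical models on top of templates like this, but the point stands: when the input is already structured, the “writing” is mostly fill-in-the-blanks.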

Of course, you, dear reader, are too smart to be duped by such a simplistic view of media engagement. You’re better than that. You’re better than most people, in fact. You’re smart enough to see that most media is just placating your own ego and capitalizing on confirmation bias. That’s why you, dear reader, are the best audience. Please like, share, and subscribe for more of the best journalism on the planet.

Few things build excitement like going to space. It captures the imagination of young and old alike. Teachers love to leverage the latest space news to raise interest in their students, and space agencies are happy to provide resources to help. The latest in a long line of educator resources released by NASA is an Open Source Rover designed at Jet Propulsion Laboratory.

JPL is the birthplace of Mars rovers Sojourner, Spirit, Opportunity, and Curiosity. They’ve been researching robotic explorers for decades, so it’s no surprise they have many rovers running around. The open source rover’s direct predecessor is ROV-E, whose construction process closely followed procedures for engineering space flight hardware. This gave a team of early career engineers experience in the process before they built equipment destined for space. In addition to learning various roles within a team, they also learned to work with JPL resources like submitting orders to the machine shop to make ROV-E parts.

ROV-E

ROV-E Team

Once completed, ROV-E became a fixture at JPL public events and occasionally visits nearby schools as part of educational outreach programs. And inevitably a teacher at the school would ask “The kids love ROV-E! Can we make our own rover?” Since most schools don’t have 5-axis CNC machines or autoclaves to cure carbon fiber composites, the answer used to be “No.”

When planning a trip by car these days, it’s pretty much standard practice to spin up an image of your destination in Google Maps and get an idea of what you’re in for when you get there. What kind of parking do they have? Are the streets narrow or twisty? Will I be able to drive right up, or will I be walking a bit when I get there? It’s good to know what’s waiting for you, especially if you’re headed someplace you’ve never been before.

NASA was very much of this mind in the 1960s, except the trip they were planning for was 238,000 miles each way and would involve parking two humans on the surface of another world that we had only seen through telescopes. As good as Earth-based astronomy may be, nothing beats an up close and personal look, and so NASA decided to send a series of satellites to our nearest neighbor to look for the best places to land the Apollo missions. And while most of the feats NASA pulled off in the heyday of the Space Race were surprising, the Lunar Orbiter missions were especially so because of how they chose to acquire the images: using a film camera and a flying photo lab.

In the early 1970s, the American space program was at a high point, having placed astronauts upon the surface of the Moon while their Soviet competitors had not taken them beyond Earth orbit. It is however a simplistic view to take this as meaning that NASA had the lead in all aspects of space exploration, because while the Russians had not walked the surface of our satellite, they had achieved a less glamorous feat of lunar exploration that the Americans had not. The first Lunokhod wheeled rover reached the lunar surface and explored it under the control of Earth-bound engineers in the closing months of 1970, and while the rovers driven by Apollo astronauts had placed American tread marks in the lunar soil, reproduced on newspaper front pages and television screens worldwide, they had yet to match the Soviet achievements in autonomy and remote control.

At NASA’s Jet Propulsion Laboratory there was a project to develop technology for future American rovers under the leadership of [Dr. Ewald Heer], and we have a fascinating insight into it thanks to the reminiscences of [Mike Blackstone], then a junior engineer.

The aim of the project was to demonstrate the feasibility of a rover exploring a planetary surface, picking up and examining rocks. Lest you imagine a billion dollar budget for gleaming rover prototypes, it’s fair to say that this was to be achieved with considerably more modest means. The rover was a repurposed unit that had previously been used for remote handling of hazardous chemicals, and the project’s computer was an extremely obsolete DEC PDP-1.

We are treated to an in-depth description of the rover and its somewhat arcane control system. Sadly we have no pictures save for his sketches, as the whole piece rests upon his recollections, but it sounds an interesting machine in its own right. Heavily armoured against chemical explosions, its two roughly-humanoid arms were operated entirely by chains similar to bicycle chains, with all motors resting in its shoulders. A vision system was added in the form of a pair of video cameras on motorised mounts; these could be aimed at an object using a set of crosshairs on each of their monitors, and their angles read off manually by the operator from the controls. These readings could then be entered into the PDP-1, upon which the software written by [Mike] could calculate the position of the object, work out the required arm positions to retrieve it, and command the rover to perform the required actions.
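The procedure of sighting the object with two cameras and solving for its position is classic triangulation. As a rough 2D sketch of the math involved (planar case only, ignoring the tilt axis; the function and frame conventions here are our assumptions, not [Mike]’s actual PDP-1 code):

```python
import math

def triangulate(baseline, az_left, az_right):
    """Locate a point from two camera azimuths (radians, each measured
    from the baseline toward the target). The left camera sits at
    (0, 0), the right camera at (baseline, 0); returns (x, y) in the
    same units as the baseline."""
    tl, tr = math.tan(az_left), math.tan(az_right)
    # Ray from left camera:  y = x * tl
    # Ray from right camera: y = (baseline - x) * tr
    # Setting them equal and solving for x:
    x = baseline * tr / (tl + tr)
    return x, x * tl

# Two cameras 1 m apart, both sighting the target at 45 degrees:
x, y = triangulate(1.0, math.radians(45), math.radians(45))
# the target sits halfway between the cameras, 0.5 m out
```

From the recovered (x, y) the software could then solve the arm’s inverse kinematics, which on a chain-driven, shoulder-motored arm like this one is its own adventure.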

The program was a success, producing a film for evaluation by the NASA bigwigs. If it still exists it would be fascinating to see; perhaps our commenters may know where it might be found. Meanwhile, if the current JPL research on rovers interests you, you might find this 2017 Hackaday Superconference talk to be of interest.

We have a lot of respect for the hackers at NASA’s Jet Propulsion Laboratory (JPL). When their stuff has a problem, it is often millions of miles away, yet they usually find a way to fix it anyway. Case in point: the Curiosity Mars rover. Back in 2016, the rover’s rock drill broke. This is critical because one of the main things the rover does is drill into rock samples, collect the powder, and subject it to analysis. JPL announced they had devised a way to successfully drill again.

The drill failed after fifteen uses. It uses two stabilizers to steady itself against the target rock, and a failed motor prevents the drill bit from retracting and extending between those stabilizers. Of course, sending a repair tech 60 million miles is not in the budget, so they had to find another way. You can see a video about their workaround below.

Unless you’ve got your ear on the launch pad, so to speak, you might not be aware that humanity just launched a new envoy towards the Red Planet. Estimated to touch down in Elysium Planitia on November 26th, the InSight lander is relatively low-key as far as interplanetary missions go. Part of NASA’s “Discovery Program”, it operates on a considerably lower budget than Flagship missions such as the Curiosity rover, meaning niceties like a big advertising and social media campaign to get the public excited don’t get a line item.

Which is a shame, not only because there are much worse things to do with tax money than increasing public awareness of scientific endeavours, but because InSight frankly deserves a bit more respect than that. The mission features a number of firsts; the engineers and scientists behind InSight might have been short on dollars, but ambition was in ample supply.

So in honor of the successful launch, let’s take a look at the InSight mission, the unique technology onboard, and the answers scientists hope it will be able to find out there in the black.

The future of humans is on Mars. Between SpaceX, Boeing, NASA, and every other national space program, we’re going to Mars. With this comes a problem: flying to Mars is relatively easy, but landing a large payload on the surface of another planet is orders of magnitude more difficult. Mars, in particular, is tricky: it has just enough atmosphere that you need to design around it, but not enough that parachutes alone can bring several tons down to the surface. On top of this, we’ll need to land our habitats and Tesla Roadsters inside a very small landing ellipse. Landing on Mars is hard, and the brightest minds are working on it.

At this year’s Hackaday Superconference, we learned how hard landing on Mars is from Ara Kourchians (you may know him as [Arko]) and Steve Collins, engineers at the Jet Propulsion Laboratory in beautiful Pasadena. For the last few years, they’ve been working on COBALT, a technology demonstrator on how to use machine vision, fancy IMUs, and a host of sensors to land autonomously on alien worlds. You can check out the video of their Supercon talk below.