While working at MAKE and before joining Raspberry Pi, Matt attended the NYU ITP program.

It was at ITP that he developed the Descriptive Camera, which uses a BeagleBone, a webcam, and an interface to Mechanical Turk. During high-pressure, long-duration showcases of the project, Matt redirected the images to his partner, who supplied funnier-than-normal descriptions of whatever was being captured.
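The episode doesn't show the project's actual code, but the idea of handing a captured image to Mechanical Turk for a human-written description can be sketched roughly like this. The helper name `build_question`, the prompt text, and the commented-out HIT parameters are all illustrative assumptions; the modern boto3 client also postdates the original Descriptive Camera.

```python
# Sketch of posting an image-description task to Mechanical Turk.
# build_question and all HIT parameters below are illustrative, not
# taken from the original Descriptive Camera project.

def build_question(image_url: str) -> str:
    """Wrap a captured image in an MTurk HTMLQuestion payload."""
    form = (
        '<p>Describe what you see in this photo in one or two sentences.</p>'
        f'<img src="{image_url}" alt="captured frame"/>'
        '<textarea name="description" rows="4" cols="60"></textarea>'
    )
    return (
        '<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/'
        'AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">'
        f'<HTMLContent><![CDATA[<html><body><form>{form}</form>'
        '</body></html>]]></HTMLContent>'
        '<FrameHeight>450</FrameHeight></HTMLQuestion>'
    )

# With AWS credentials configured, the HIT could then be created with:
# import boto3
# mturk = boto3.client("mturk", region_name="us-east-1")
# mturk.create_hit(Title="Describe this photo", Reward="0.05",
#                  AssignmentDurationInSeconds=300, LifetimeInSeconds=3600,
#                  MaxAssignments=1, Question=build_question(url))
```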

Matt’s focus has been on implementing technology building blocks, which gave him a good view of how people interact with technology, specifically through the UI.

Dave mentioned that engineers used to pay a fortune ($10K+) for a GUI library for something like an 8086 PC/104 board, because it was worth it to have something that just works.

Matt did a bike lamp project that tells you how fast you’re going using a Raspberry Pi and a picoprojector.
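The episode doesn't detail how the bike lamp measures speed, but the core arithmetic is simple enough to sketch. This assumes one sensor pulse per wheel revolution (e.g. a Hall-effect sensor and a spoke magnet), which is an assumption on my part, not a detail from the show.

```python
import math

def speed_kmh(wheel_diameter_m: float, pulses: int, interval_s: float) -> float:
    """Estimate speed from wheel-revolution pulses over a sampling interval.

    Assumes one pulse per full wheel revolution (e.g. a Hall-effect
    sensor and spoke magnet) -- an illustrative assumption, not the
    project's documented setup.
    """
    circumference_m = math.pi * wheel_diameter_m
    metres_per_second = circumference_m * pulses / interval_s
    return metres_per_second * 3.6  # m/s -> km/h
```

For example, a roughly 0.7 m road-bike wheel turning 4 times in 2 seconds works out to about 15.8 km/h, which the pico projector could then render onto the road.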

It also enables interesting new applications like the Robot Operating System (ROS), which can only run on ARMv7 or later.
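Since the ARMv6 boards (the original Pi and the B+) won't run ROS while the newer ARMv7 board will, a quick architecture check before installing can save some pain. A minimal sketch, assuming the usual `platform.machine()` strings ('armv6l' on the older Pis, 'armv7l' on the Pi 2):

```python
import platform

def supports_armv7(machine: str) -> bool:
    """Return True if a machine string indicates ARMv7 or later.

    platform.machine() reports e.g. 'armv6l' on the original Pi/B+
    and 'armv7l' on the Pi 2.
    """
    if machine.startswith("armv"):
        digits = "".join(ch for ch in machine[4:] if ch.isdigit())
        return bool(digits) and int(digits) >= 7
    # 64-bit ARM also qualifies
    return machine in ("aarch64", "arm64")

# On the board itself: supports_armv7(platform.machine())
```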

The new board uses the same B+ header but has some small physical differences.

The hardware is not open source, but the foundation is working on opening up the software used in the graphical side of things.

The main focus was on low cost (hitting a $35 price point) and the tradeoff is using high volume chips that are not open to the public. This is similar to Bunnie’s take on the MediaTek chipsets (and how he got around it).

Matt will be working on getting the RPi into the US education system, which is very fragmented. There is already a good footprint in the UK, where they run workshops for teachers called Picademy.

There is also a more commercially viable piece called the Compute Module. It is on an SO-DIMM-format card and is meant for low-volume applications, as engineers transition from the main board to more custom hardware. There was a Kickstarter called Otto that did just that.

Chris talked about the viability of using a common computing platform for something like Contextual Electronics, where all of the software could be preloaded onto a Raspberry Pi, removing variation.

The RPi can run KiCad smoothly. Matt talked about wanting to design a HAT on a Raspberry Pi, have it cut out by an Othermill or similar (controlled by an RPi) and then have the parts placed by a device that uses the RPi.
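Any HAT designed that way would also need the ID EEPROM the HAT spec requires, so the board can identify itself to the Pi. As a minimal sketch, here is a parser for the 12-byte EEPROM header, assuming the layout from the published Raspberry Pi HAT specification (signature "R-Pi", version, reserved byte, atom count, total length, all little-endian):

```python
import struct

HAT_SIGNATURE = b"R-Pi"  # magic bytes per the Raspberry Pi HAT EEPROM spec

def parse_hat_header(blob: bytes) -> dict:
    """Parse the 12-byte header of a HAT ID EEPROM image.

    Little-endian layout: 4-byte signature "R-Pi", 1-byte version,
    1 reserved byte, 2-byte atom count, 4-byte total length.
    """
    sig, version, _reserved, numatoms, eeplen = struct.unpack_from(
        "<4sBBHL", blob, 0
    )
    if sig != HAT_SIGNATURE:
        raise ValueError("not a HAT EEPROM image")
    return {"version": version, "numatoms": numatoms, "eeplen": eeplen}
```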

BTW, preventing access to the datasheet will not reduce the price of the device. At the end of the day, a datasheet is created irrespective of the sensitivity of its specifications, and the decision to provide mass-market support is not dependent on the availability of documentation. Companies do this all the time – the familiar sentence “this product is provided as-is without support” is true for several devices across manufacturers.

The RPi Foundation made a conscious decision to go with a manufacturer that traditionally closes everything, and used a core that was probably already on its way out of the market (the ARM11, back in 2012, was already being severely offset by Cortex-A). This was a sweet deal for both parties: Broadcom had a low-risk gamble, and RPi could get their parts at a price point unreachable for the common man. Two million parts sold only helps erode the price even more.

There is nothing wrong with that at all, especially given the enormous success of RPi, but in my opinion there is no way to spin this in a different way.

Regarding the bare-metal programming: this is perfectly possible on a Cortex-A8 core, as there is shared work for other devices out there (something very hard to do on an ARM11).

Similar to how the photography project outsourced its artificial intelligence to Amazon’s Mechanical Turk, Be My Eyes’ app looks to connect the visually impaired with volunteers who can help them read labels, etc.!

For example, you can’t pull raw images over the Pixy’s SPI/USB interface. Instead, it transmits object-tracking data. Maybe OpenMV can do both?!
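That object stream is easy to decode host-side. A minimal sketch, assuming the Pixy's documented normal-block format (a 0xAA55 sync word followed by checksum, signature, x, y, width, and height as 16-bit words, where the checksum is the sum of the last five words):

```python
PIXY_SYNC = 0xAA55  # start-of-block word in Pixy's normal object stream

def parse_block(words):
    """Decode one Pixy object block from seven 16-bit words.

    Word order, assuming the documented Pixy serial protocol:
    sync, checksum, signature, x, y, width, height. The checksum
    is the 16-bit sum of the last five words.
    """
    sync, checksum, sig, x, y, w, h = words
    if sync != PIXY_SYNC:
        raise ValueError("lost sync")
    if (sig + x + y + w + h) & 0xFFFF != checksum:
        raise ValueError("checksum mismatch")
    return {"signature": sig, "x": x, "y": y, "width": w, "height": h}
```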

It’d be nice if it worked with OpenCV or pixython. I always thought OpenCV could only run on a desktop (Intel, full Linux), but I think it’s recently been made to work on embedded ARM platforms (e.g. the BeagleBone).