Stephen Evanczuk

Editor

Stephen Evanczuk has more than 20 years of experience writing for and about the electronics industry on a wide range of topics including hardware, software, systems, and applications, including the IoT. He received his Ph.D. in neuroscience for work on neuronal networks and worked in the aerospace industry on massively distributed secure systems and algorithm acceleration methods. Currently, when he's not writing articles on technology and engineering, he's working on applications of deep learning to recognition and recommendation systems.


Robot vision applications can bring a complex set of requirements, but open-source libraries are ready to provide solutions for nearly every need. Here are some of the many open-source packages that can help developers implement image processing capabilities for robotic systems.

Machine learning has evolved to become a practical engineering method if approached with a suitable appreciation of its associated requirements and current limitations -- and it's more accessible than you might think.

You'd think that the dissonance between excitement over IoT opportunities on one hand and concern about IoT security on the other would yield a rich breeding ground for companies targeting IoT security.

With the industry's laser focus on the IoT, wearables, or anything expected to run on rechargeable batteries, we're going to hear a lot more about MCU power consumption as a differentiator as vendors race each other to the bottom.

It sure seems that way, and a plethora of programming languages can seem like overkill, but we're not likely to see fewer anytime soon. It's like screwdriver bit sets: the different bits might seem like overkill, but each has its intended purpose. Yes, you can sometimes jam a Phillips head into a Torx socket, but... :-) So languages like Rust emerge to make up for others' shortcomings in memory safety, Erlang or Go for concurrency, and so on. Cobol is still in use... as is Prolog, but you'd probably not want to try to swap their uses... my point being there's a right tool for every job.

The dev environment is the Xilinx SDK and it looks like you'll be able to pull support software and reference designs from the krtkl and snickerdoodle sites. I'd agree that getting into development with a multicore ARM/FPGA device can be challenging but this looks like a (relatively) low-risk way to gain experience with devices of that class and level of functionality!

Oh, as much as the spirit of what you say reflects a basic truth, my mental state way predates my working with Max... but it's nice to work with someone else equally grounded - even if it's a floating ground. LOL

Sounds like it will be even easier soon. Now Amazon says it will be offering Alexa Voice Service specifically designed to add a voice interface to connected hardware devices. [https://developer.amazon.com/public/solutions/alexa/alexa-voice-service]

Computing lore has nevertheless assigned a distinction (particularly in reference to memory) between kB and KB as 1000 bytes and 1024 bytes, respectively - the latter having "... originated as compromise jargon...." according to the relevant Wikipedia entry [https://en.wikipedia.org/wiki/Kilobyte].

Maybe somebody more familiar can answer, but a quick look at the dev page suggests it could be done through a custom Android app or as HTML5. Basically, if a thing has a web-facing API supporting the device controls you want, you can use the Alexa AppKit to work those controls using a web service or an AWS Lambda function.

On echo.amazon.com, go to settings / connected home. But that activates a discovery mode so if you don't have any devices, Alexa lets you know. You can also find info on supported devices going to "things to try" / "connected home device commands". That page has a link to supported connected devices.

One of the quirky things about the Echo is that when it's in listen mode and the top ring goes blue, a cyan segment appears on the section of the ring closest to you. It's kind of like in the old Battlestar Galactica, where the Cylon's beady red eye would stop cycling back and forth and settle toward something. This, however, is a nice friendly blue, so I think we're ok. You don't really notice it in the video unless you look closely, and as a user, you're not really standing there looking at it, but when I first noticed it going from rotating to still, all I could think was "Cylon!"

When TI briefed me on the MSP432 earlier, I asked about this from the point of view that developers are a suspicious lot who see an industry rife with examples of coding errors, even from large organizations using formal methods: if a bug is found in the ROMed DriverLib, how will it be handled? TI later responded:
"TI has developed its DriverLib in parallel with RTL design, thorough verification and validation at simulation, FPGA, and on-silicon to ensure everything is at maximum robustness. In other words, TI has such high confidence in its DriverLib that it goes in ROM. That being said, in the rare event a bug affects one of the ROM functions, TI would release a DriverLib update that patches only that affected function with a new version available in source code format for users to load into Flash."