@Rich, so far the v6 stacks are still in their infancy. They'll get smaller as soon as we figure out the bare minimum for comms and offload most of the bookkeeping to a gateway. That's what 6LoWPAN does.

@Rick, I agree. The term "embedded" has lost its meaning and charm. For example, it's not even the Embedded Systems Conference any longer, is it? So, the universities have got to feel that there is a reason to teach small-RAM-footprint development again. If we call it the industrial internet, maybe they'll respond.

@caleb, true that connectivity will become commonplace, but the IoT helps separate the things that are human-controlled from those that are gathering and sending data autonomously. They're different software architectures, or should be.

@Rich, the numbers I've heard so far are that you can't even consider v6 in less than 64KB of RAM. The SoCs with M3/M4 cores are coming with 256KB, so it's not much of an issue as long as the application is frugal.

@Taichi: For sure IoT and IoE are marketing terms. As is, to some extent, Industrial Internet. ARM's CTO said this week that it's just a sexy term for embedded, so someone will get excited and write about embedded again.

Yes, there are several projects underway for the "Industrial Internet" where v6 and small-footprint devices intersect. The term "IoT" is a marketing thing; in fact, Cisco now calls it the "Internet of Everything." So, as M2M applications become more prevalent, the term will be replaced by something else.

@Max, I disagree with that too. We create "memes" every day and they go away all the time, especially in tech. Every new feature gets its spotlight for a little bit, but I think in the near future we just won't even bother with "IoT." Everything will just have communication.

@Max, the issue of IPv6 address size is actually not one to be glossed over lightly. DNS lookup calls like getaddrinfo() return IPv6 addresses *before* IPv4 addresses according to the RFC, and the returned address records are dynamically allocated per the spec. If I simply send a few thousand DNS responses to the device following a request, I can exhaust device memory (a DoS attack). Then the user is "supposed" to remember to free the allocated memory using another call. One missed deallocation and you've got a substantial memory leak.
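The failure mode is easy to sketch (a minimal example built on the POSIX getaddrinfo()/freeaddrinfo() pair; the function name and port number here are just illustrative):

```c
#include <netdb.h>          /* getaddrinfo(), freeaddrinfo() */
#include <sys/socket.h>     /* AF_UNSPEC, SOCK_STREAM */

/* Resolve a host name. Per the address-selection RFC, IPv6 results are
 * ordered ahead of IPv4, and the whole result list is heap-allocated
 * by the library. Every successful call MUST be paired with
 * freeaddrinfo(); on a device with tens of KB of RAM, lookups that
 * skip the free (or a flood of oversized DNS responses) can exhaust
 * memory in short order. */
int lookup(const char *host)
{
    struct addrinfo hints = {0}, *res = NULL;
    hints.ai_family   = AF_UNSPEC;      /* accept both v6 and v4 */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, "80", &hints, &res) != 0)
        return -1;

    /* ... use the result list ... */

    freeaddrinfo(res);   /* forget this one line and every lookup leaks */
    return 0;
}
```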

Of course, the really interesting thing about Kickstarter is that all they have to do is provide the platform ... and take a cut of the funds that are raised ... basically, Kickstarter is a "license to print money" for the folks who own the Kickstarter website (I wish I'd thought of it :-)

In the old days, we could see a 4K IPv4 stack. But v6 simplifies the router code by pushing protocol work into the endpoint: router discovery, neighbor discovery, and multicast (v6 doesn't do broadcast) have all been pushed to the end device. That takes a lot of RAM, so we'll have to go to larger processors to make it function.
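To see where the RAM goes, here's an illustrative sketch (not taken from any particular stack, and the table sizes are assumptions) of the state that RFC 4861 Neighbor Discovery obliges an IPv6 end node to maintain -- state an IPv4 endpoint largely avoided by leaning on ARP and broadcast:

```c
#include <stdint.h>

typedef uint8_t ip6_addr_t[16];

struct neighbor_entry {            /* Neighbor Cache */
    ip6_addr_t addr;
    uint8_t    lladdr[8];          /* link-layer address */
    uint8_t    state;              /* INCOMPLETE, REACHABLE, STALE, ... */
    uint32_t   reachable_until;
};

struct prefix_entry {              /* Prefix List, from Router Advertisements */
    ip6_addr_t prefix;
    uint8_t    prefix_len;
    uint32_t   valid_lifetime;
};

struct router_entry {              /* Default Router List */
    ip6_addr_t addr;
    uint32_t   lifetime;
};

struct ip6_host_state {            /* all of this lives in endpoint RAM */
    struct neighbor_entry neighbors[8];
    struct prefix_entry   prefixes[4];
    struct router_entry   routers[2];
    /* plus a destination cache, multicast group memberships, ... */
};
```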

As for the Hackenstein period, it will likely start to taper off as legacy units get replaced through attrition. The silicon manufacturers can't make much money on 8051s, so they're driving customers up the processor scale in order to turn a profit.

@caleb, I would think that the big wave of IoT would first come from small startups, which are the greater risk-takers. Big companies will follow with polished versions, perhaps, but mostly derivatives.

@Mike: surely the issue of IPv6 (16-byte) vs. IPv4 (4-byte) addresses is a non-issue in terms of memory size -- even in a memory-constrained system.
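To put rough numbers on it (a trivial check; the 1,000-peer table is just an illustration):

```c
#include <stdio.h>
#include <netinet/in.h>   /* struct in_addr, struct in6_addr */

int main(void)
{
    printf("IPv4 address: %zu bytes\n", sizeof(struct in_addr));   /* 4  */
    printf("IPv6 address: %zu bytes\n", sizeof(struct in6_addr));  /* 16 */
    /* So a table of 1,000 peer addresses grows from ~4 KB to ~16 KB --
     * noticeable, but hardly fatal, even on a small device. */
    return 0;
}
```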

My understanding is that a bunch of end devices will typically communicate with each other using a 6LoWPAN (passing data packets that are only 127 bytes in size) -- the trick comes when the 6LoWPAN interfaces with the gateway, which connects to the Internet ... now we have to take big Internet packets and split them up into lots of smaller 6LoWPAN packets, or take lots of small 6LoWPAN packets and bundle them up into a big Internet packet ... but all of this is done in the gateway ... so there's no memory-footprint impact on the end devices...
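Conceptually, the gateway-side splitting looks something like this (deliberately simplified: a real implementation populates the RFC 4944 fragment header with the datagram size, tag, and offset, and radio_send() here is a hypothetical driver hook):

```c
#include <stdint.h>
#include <string.h>

#define FRAME_MAX     127                     /* IEEE 802.15.4 frame limit */
#define FRAG_HDR      5                       /* RFC 4944 FRAGN header size */
#define FRAG_PAYLOAD  (FRAME_MAX - FRAG_HDR)

extern void radio_send(const uint8_t *frame, size_t len);  /* driver hook */

/* Split one big Internet-side packet into radio-sized fragments. */
void gateway_fragment(const uint8_t *pkt, size_t len)
{
    uint8_t frame[FRAME_MAX];
    size_t  off = 0;

    while (off < len) {
        size_t chunk = len - off;
        if (chunk > FRAG_PAYLOAD)
            chunk = FRAG_PAYLOAD;
        /* frame[0..FRAG_HDR-1] would carry the fragment header here */
        memcpy(frame + FRAG_HDR, pkt + off, chunk);
        radio_send(frame, chunk + FRAG_HDR);
        off += chunk;
    }
}
```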

In fact, the US is dead last in IPv6 adoption. That's because we owned the majority of the address space. But IANA ran out of v4 addresses in 2011, so new users get v6. China currently has 7 major backbones that are v6-only. Even the 2008 Olympics were broadcast over v6. Netflix, Google, Amazon, and Akamai all support v6 now.

As to IPv4 vs IPv6, even the Zigbee folks are bowing to the inevitable: Zigbee IP uses 6LoWPAN, which is a header-compressed form of IPv6. The issue is not that IPv4 will go away any time soon; the issue is that v6 will happen regardless of our ability to ignore it. We surpassed 4 billion devices on the Internet in 2009, and there are expected to be 15 billion by 2015 and 50 billion by 2020. NAT and CIDR only go so far. Eventually, we'll all have to adapt.

According to calculations and estimations performed by the folks at the University of Hawaii (who obviously have far too much time on their hands), if we account for all of the beaches around the world, together they contain around 7.5 x 10^18 grains of sand. Thus, the addressing space of IPv6 is sufficient to give each grain of sand its own unique IP address – and to do this for approximately 5 x 10^19 Earthlike worlds – so I don't think we're going to run out of IPv6 addresses in the foreseeable future.
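For anyone who wants to check the arithmetic (using the full 2^128 address space and ignoring reserved ranges):

```latex
2^{128} \approx 3.4 \times 10^{38} \ \text{addresses}, \qquad
\frac{3.4 \times 10^{38}}{7.5 \times 10^{18}\ \text{grains per world}}
\approx 4.5 \times 10^{19}\ \text{worlds}
```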

@Duane: Yes, there will be places where cost sensitivities are such that languages like Java, Python, Lua, etc. will be fine. I'm more concerned for the billions of devices that will not have that luxury. Look at the cost differential between an ARM Cortex A8 and a Cortex M0: the M0 is less than a nickel and the A8 is several dollars. Multiply that by 40,000 sensor points and you've got a sizable chunk of change.

@Max: The folks at Synapse Wireless (www.synapse-wireless.com) use Python in their wireless modules -- my understanding is that it's very memory-efficient because of the use of byte code. True, for what it is, Python is very efficient. However, look at the cost of their module compared to the cost of a SCADA sensor: the wireless module is 2-3 orders of magnitude more expensive, and it can support the RAM and MMU needed for full Linux.

Ah, yes. Cost cutting. In the mid-80s I had to design the control system of a surgical laser around the 6502, because an Apple II+ was the only thing available to use as a development tool -- no budget for new tools. Yes, I see your point.

@Max. Well, they should write new code. But management is always looking to cut costs. If I'm upgrading the critical infrastructure in a power plant because my Windows 95 box (no kidding, they actually use these) can't run any longer, then management wants to port the legacy code because it's "proven." Of course, porting to a new environment will introduce new bugs. But that's not taken into account in the calculus.

The approach taken by Android for VMs is significantly different from that found in the typical J2EE app on the desktop/server. In the Dalvik case, each app gets its own VM. This differs from J2EE, where one VM runs everything.

Mike: if VMs encourage the dynamic creation and deletion of objects that ultimately leads to excessive memory use and potential errors, will that problem be compounded as we add more and more VMs to the network? In other words, would an error in one VM tend to compound an error in another, perhaps leading to a third? Or do all VMs function independently in that respect? (In which case, I suppose we could still see a collective weakness if several generated errors at the same time.)

If obtaining an IPv6 address involves a procedure call -- why not just get someone to create an open-source procedure that handles the memory allocation and deallocation, that everyone can look at and say "this is good," and that everyone can subsequently use in their applications...
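Such a routine might look something like this (just a sketch of the idea; resolve_each() and its callback are made-up names wrapped around the standard getaddrinfo()/freeaddrinfo() pair):

```c
#include <netdb.h>
#include <sys/socket.h>

/* The caller never touches the allocated list, so there is nothing to
 * forget to free: resolution, iteration, and cleanup all live in one
 * place that everyone can audit. */
typedef void (*addr_cb)(const struct addrinfo *ai, void *ctx);

int resolve_each(const char *host, const char *service,
                 addr_cb cb, void *ctx)
{
    struct addrinfo hints = {0}, *res = NULL, *ai;
    hints.ai_family   = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;

    int rc = getaddrinfo(host, service, &hints, &res);
    if (rc != 0)
        return rc;

    for (ai = res; ai != NULL; ai = ai->ai_next)
        cb(ai, ctx);        /* caller sees each address but owns none */

    freeaddrinfo(res);      /* deallocation guaranteed, in one place */
    return 0;
}
```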

Which code are we talking about here? There is the embedded code in the endpoint device, the code running on gateways and servers, the apps on the mobile devices (where needed), and the code on the data crunchers handling all the M2M data being streamed to them.

Small-memory-footprint devices (sensors) will be the lion's share of what gets placed on the IoT, and they require a significantly different development skill set than can be found in Java EE and Python apps. For example, a typical SCADA implementation may have 40,000 sensor points; sheer cost will preclude the use of devices capable of running a full OS like Linux or Android. The ARM Cortex M0/3/4 devices proliferating as replacements for the 8051/Z80/68HC11 also have limited memory footprints, with no MMU for protection. Anyone remember pointers?
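On those MMU-less parts, the usual discipline is to banish malloc() entirely and carve everything out of fixed pools at build time, so worst-case RAM use is known before the device ships. A sketch (the pool size and record layout are illustrative):

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_SAMPLES 32

struct sample {
    uint32_t timestamp;
    int16_t  value;
};

static struct sample sample_pool[MAX_SAMPLES];  /* all RAM accounted for */
static size_t        sample_count;

/* Hand out the next free slot, or NULL when the pool is exhausted --
 * fail loudly rather than scribble over memory no MMU will protect. */
struct sample *sample_alloc(void)
{
    if (sample_count >= MAX_SAMPLES)
        return NULL;
    return &sample_pool[sample_count++];
}
```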

Max, Mike - I would think that a substantial portion of code has to be re-written for the purposes of security. Applications going from big external systems to embedded will also need pretty substantial re-writes to accommodate the embedded architectures.

Well, the memory-skills issue is certainly a debatable subject. The issue is that the universities are not generating folks with skills like memory conservation. The use of VMs encourages the dynamic creation and deletion of objects, which ultimately leads to excessive memory use and potential errors. The dependency on the garbage-collection cycle is also fraught with peril for safety-critical applications.

I think the term "semantic web" has also been coined to address the IoT. Essentially, devices being able to derive information based on what they find lying around in search engines and your browser cookies...

The way I think of the IoT is that we have the Internet as we know and love it -- and then we have a bunch of "things" plugged into it ... and a lot of these "things" are going to be really small and memory-constrained and bandwidth-constrained ...

Hmm... Good question. Ostensibly, the IoT is the collection of devices that are on the Internet. These may or may not have humans associated with them. Hence the IoT has a lot of Machine-to-machine characteristics that make it unlike the Web 2.0 world we've been living in up to this point.