Managing a large-scale VoIP platform. Developing security measures that identify and squash fraud. Creating useful user interfaces that give less technical people the information they want without needing help.

Nice example. The user has some control - by specifying http or https - plus the many embedded standards/protocols. The user has to be spared from having to understand the underlying technology, and when that happens more "users" will be able to create apps.

I just thought of another example: HTML. I create a web page with HTML, it goes through hundreds of different protocols, but in the end it gets to my browser and renders correctly (usually). This is only one direction, but maybe it helps explain what I mean.

OK Jeff - I agree that a reliable system should probably never depend on a single standard. That is what I mean by needing higher-level protocols that are transport-independent. You should be able to state at the API what level of security is required, what level of importance, etc., and then have the software decide the best comms to use.
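A rough sketch of that kind of API, where the caller states requirements and the library selects a transport. The transport table, field names, and selection rule are all assumptions for illustration, not any real library:

```python
from dataclasses import dataclass

@dataclass
class Requirements:
    encrypted: bool   # must the link be encrypted?
    importance: int   # 0 = best effort, higher = must arrive

# Hypothetical transport catalog the software chooses from.
TRANSPORTS = [
    {"name": "udp", "encrypted": False, "reliable": False},
    {"name": "tls", "encrypted": True,  "reliable": True},
    {"name": "tcp", "encrypted": False, "reliable": True},
]

def pick_transport(req: Requirements) -> str:
    """Return the first transport that satisfies the stated requirements."""
    for t in TRANSPORTS:
        if req.encrypted and not t["encrypted"]:
            continue
        if req.importance > 0 and not t["reliable"]:
            continue
        return t["name"]
    raise ValueError("no transport satisfies the requirements")

print(pick_transport(Requirements(encrypted=True, importance=2)))   # tls
print(pick_transport(Requirements(encrypted=False, importance=0)))  # udp
```

The point is that the application only ever talks in terms of requirements; swapping in a new transport means adding a catalog entry, not touching callers.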

As networks get to be higher bandwidth, they also tend to be longer latency. Part of this is because of data aggregation: pack more data together so that every packet takes longer to send, and now it takes longer to get to the next packet. Success depends on more reliable transport than for smaller packets.
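A back-of-envelope way to see the aggregation trade-off: serialization delay is packet size over link rate, so a big aggregated packet can take longer on the wire than a small one even on a much faster link. The numbers below are illustrative, not from any particular network:

```python
def serialization_delay_ms(packet_bytes: int, link_bps: float) -> float:
    """Time to clock one packet onto the link, in milliseconds."""
    return packet_bytes * 8 / link_bps * 1000

# 64-byte packet on a 1 Mbit/s link vs. a 9000-byte aggregated
# (jumbo) packet on a 100 Mbit/s link.
print(serialization_delay_ms(64, 1e6))    # 0.512 ms
print(serialization_delay_ms(9000, 1e8))  # 0.72 ms - longer, despite 100x the bandwidth
```

And because each big packet carries more data, losing one costs more, which is why the reliability requirement goes up with aggregation.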

The ability to take data from, say, a CAN bus and Ethernet and make it available to the system. I may have two systems that use different transports, but I should not require any particular transport to connect the data from them. The idea is to separate the data from the transport type.
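One way to sketch that separation: each transport gets a small adapter that normalizes its frames into one transport-neutral record, and the rest of the system only ever sees the record. The adapter classes and field names here are assumptions for illustration:

```python
from abc import ABC, abstractmethod

class TransportAdapter(ABC):
    @abstractmethod
    def read(self) -> dict:
        """Return the next reading as a transport-neutral record."""

class CanAdapter(TransportAdapter):
    def __init__(self, frames):
        self.frames = iter(frames)  # (can_id, payload) pairs
    def read(self):
        can_id, payload = next(self.frames)
        return {"source": f"can:{can_id:#x}", "data": payload}

class EthernetAdapter(TransportAdapter):
    def __init__(self, packets):
        self.packets = iter(packets)  # (address, payload) pairs
    def read(self):
        addr, payload = next(self.packets)
        return {"source": f"eth:{addr}", "data": payload}

# Downstream code is transport-agnostic: it iterates over adapters.
can = CanAdapter([(0x101, b"\x01\x02")])
eth = EthernetAdapter([("10.0.0.5", b"\x03\x04")])
for adapter in (can, eth):
    print(adapter.read())
```

Adding a third transport later means writing one more adapter, with no change to the consumers of the data.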

Jeffery: As I consolidate "upwards" I consolidate in custom Ethernet packets and then send them on to an SQL Server/Database (Usually Interbase -- because of small memory/disk footprint) -- usually to a Linux server. Is that what you are thinking?

@link2sriram - go back and take a look at the slide on DASH7 from yesterday. While I don't agree with some of their determinations, it laid out many of the things that can be considered in terms of wireless connection. But those requirements may not be suitable at all for use in a car.

Yes - I think we will have standardization in vertical markets - such as automotive - that develops standards tailored for the noise they typically see and for the data requirements they have. Some say that CAN is reaching the end of its life because the needs have changed since its definition.

I think we need additional standards that are a level up from the transport protocols. We need things that allow content to be more standardized as well. Just think of the number of audio formats. It is a waste of effort to have everyone code/decode all of the formats.

Different protocols implement different requirements - determinism vs. very high bandwidth, for example. Everything is a trade-off depending on what your requirements are. So I don't think there can ever be just one - maybe a handful will eventually emerge.

Instead of the many protocols in use (LIN, CAN, I2C), why can't there be a single universal protocol that can adapt to almost any application's requirements? That would let developers migrate from one company to another without too many protocol changes, right?

Cell phones are authenticated - yes. To ensure they belong on that network. But how do YOU ensure that this is a person you want extracting data from your phone? With a call, you filter based on caller ID, which can be manipulated.
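One common alternative to trusting caller ID is a shared-key challenge-response: the device issues a random challenge and only accepts requesters who can compute the right MAC over it. This is a minimal sketch of the idea; key provisioning and the transport carrying the messages are out of scope:

```python
import hashlib
import hmac
import os

# Assumed to be provisioned out-of-band to both parties.
SHARED_KEY = b"provisioned-out-of-band"

def make_challenge() -> bytes:
    """Fresh random challenge, so responses can't be replayed."""
    return os.urandom(16)

def respond(challenge: bytes, key: bytes) -> bytes:
    """Requester proves key possession by MACing the challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    """Constant-time comparison against the expected MAC."""
    return hmac.compare_digest(respond(challenge, key), response)

ch = make_challenge()
print(verify(ch, respond(ch, SHARED_KEY), SHARED_KEY))         # True
print(verify(ch, respond(ch, b"attacker-guess"), SHARED_KEY))  # False
```

Unlike caller ID, the attacker can't spoof this without the key, and the random challenge blocks replaying an old valid response.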

For people needing wireless - if the info quantity is small and reliability is important, just use the 433 MHz and 868-915 MHz packages - like TI sells - see the CC110L in the TI store, for example. It's worth evaluating, and relatively cheap to try.

Before recommending a network for a noisy environment, I would suggest finding out where the noise is in the spectrum. Analyze first; then it may just be a matter of going to a frequency band that is less noisy - a switch away from 802.11n, maybe.

I have seen examples where a single sensor has been replaced by several much lower resolution ones, where collectively they can provide more accurate data and a level of reliability. If a sensor goes down, the others reconfigure.
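The redundancy idea can be sketched as fusing whatever sensors are still reporting - here with a simple median, which also resists a single outlier. The readings and the `None`-means-down convention are made up for illustration:

```python
from statistics import median

def fuse(readings):
    """Fuse the sensors that are still reporting (None = sensor down)."""
    live = [r for r in readings if r is not None]
    if not live:
        raise RuntimeError("all sensors down")
    return median(live)

print(fuse([20.1, 19.9, 20.4]))   # all three up -> 20.1
print(fuse([20.1, None, 20.4]))   # one sensor down -> fuse the survivors
```

The system degrades gracefully: accuracy drops a little as sensors fail, but the reading stays available until the last one dies.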

You asked earlier which protocols we are using, and many answered with physical transports such as I2C, Profinet, etc., but shouldn't we have extensible protocols that allow for connections that may not have been foreseen initially, as you talked about yesterday?

Need support across the ecosystem. But, one-word bullet items are dangerous. Your "Manageability" may not be what I think "Manageability" is. Need common understanding. If I offer a "Managed Runtime", who would automatically think that I was talking about Java?
