Operating since 1992, the Global Forum/Shaping the Future is an independent, high-profile, international, not-for-profit think tank dedicated to business and policy issues affecting the successful evolution of the digital society.

Among this year’s topics are incentives for investment, cross-boundary service challenges, broadband/4G infrastructure, and evolving mobile technologies. These are relevant in the context of the Digital Single Market. Here are some thoughts on balancing open systems, infrastructure investment, innovation and growth.

The goals of the single market are stated as: “In the face of the deep crisis affecting its economy and society, Europe needs to tap into new sources of growth in areas that will reinforce its competitiveness, drive innovation and create new job opportunities.”

So, it’s a question of balancing public and private investment to create a viable ecosystem for growth.

Consider, by analogy, a widely used definition of sustainable forest management: “The stewardship and use of forests and forest lands in a way, and at a rate, that maintains their biodiversity, productivity, regeneration capacity, vitality and their potential to fulfill, now and in the future, relevant ecological, economic and social functions, at local, national, and global levels, and that does not cause damage to other ecosystems.”

In simpler terms, the concept can be described as the attainment of balance – balance between society’s increasing demands for forest products and benefits, and the preservation of forest health and diversity. This balance is critical to the survival of forests, and to the prosperity of forest-dependent communities.

So, if we take an ecosystem perspective to achieve a balance between infrastructure, investment, innovation and growth, we have to consider that any finite resource – whether forestry, spectrum or capital investment – would behave in the same way.

So, taking a pan-European perspective on creating investment, growth and jobs for the telecoms sector, we need to compare markets where investment has worked.

According to the CTIA, since 2000 wireless providers have invested more than $296 billion, not including the more than $35 billion in spectrum auction fees paid to the U.S. government.

So, if spectrum is considered the limited resource driving investment, growth and jobs, the question is: how do we encourage additional investment (beyond the cost of the spectrum) to create more growth and jobs?

Comparing with the American market, we need:

a) More flexibility to reduce market fragmentation. This means fluidity in the secondary markets (allowing trading, aggregating etc.)

b) Harmonised spectrum, creating a more homogeneous footprint across European markets

c) Within guidelines, encouragement of long-term ownership for companies that have shown they see the buying of spectrum only as a first step. Comparing with the CTIA statistics, the follow-on investment of $296 billion is more than eight times the licence cost ($35 billion). This shows commitment beyond the fee (and discourages short-term speculators).

In other words, long-term investments share some fundamental truths – and they apply to forestry and spectrum in the same way.

I will probably blog about the event’s findings after I am back from the USA.

At my edtech start-up Feynlabs, we take a Computer Science approach, which naturally leads to STEM education because Computer Science involves applying computing to other scientific and technical domains.

1) The difference between science and engineering. At a high level, it is useful to think of science as the study of the “found” and engineering as the study of the “made.” Scientists concern themselves with the advancement of knowledge in the realm of natural phenomena. Even the most abstract theoretical scientists are concerned, at their core, with the explanation of natural phenomena that might be observed under the proper conditions. Engineers, on the other hand, use scientific knowledge for another purpose: the design and fabrication of objects for the advancement of mankind. Whether it is the design of a new telescope or the crafting of a more flexible space suit, engineers generally have a specific goal in mind when they start their projects: a goal that relates to having something fabricated (rather than discovered as naturally occurring).

2) At the core, science involves the “scientific method,” a process of hypothesis formulation and verification that is taught to students at multiple grade levels. Engineering, on the other hand, has at its core the more flexible notions of creativity and innovation – attributes that are harder to quantify and teach, but that are essential in the engineering domain nonetheless. The creative process can be nurtured, but it takes a special effort and classroom climate to stimulate creativity.

3) Computers are technology, but technology is more than computers. In the K-12 world, our tendency is to think of “technology” and “computers” as synonymous. While it is true that personal networked computers are powerful technologies, there are myriad other technologies of benefit to education. Some of these (e.g., telescopes) are high-tech marvels, and others (e.g., duct tape) are not. The point is that they are all technologies. It is essential, when thinking about the development of STEM skills, to be sure that “technology” is not restricted to computers but expanded to include all kinds of devices, instruments, and tools that can be applied in both science and engineering.

4) And most importantly: this brief look at the interrelationships among the four STEM topics reveals something of great power – they all reinforce each other in support of the overall growth of each topic.

So, when it comes to the Pi – how does it play out?

Firstly, because the Pi gives us so much freedom to explore computing, it gives us the freedom to apply computing capabilities to different domains.

However, the most significant area where the Pi can be applied to STEM is the possibility of creating interconnections between the disciplines and exploring across the stack, which highlights the interplay between the STEM domains (science, technology, engineering and mathematics).

Essentially, two components enable the Raspberry Pi and the Arduino to work together, and it is important to understand why this is significant and how the two boards compare.

The Raspberry Pi and the Arduino

The Arduino and Raspberry Pi are both inexpensive, small electronics boards, but that is where the similarity ends. The major difference is the technology: the Pi is a computer and the Arduino is a microcontroller. A microcontroller is a much lower-powered and simpler device than a computer. You find them all around us: in your microwave, washing machine, car ABS system, TV or DVD player.

The Pi is powered by a 700 MHz 32-bit processor similar to those driving most smartphones; the Arduino, with a 16 MHz 8-bit processor, has roughly the processing power of an 80s Sinclair Spectrum. The Pi has an operating system; the Arduino does not.

Why choose one over the other?

A microcontroller is the perfect tool for doing a single task very well, with utmost reliability for the entire life of the product. The Pi has a whole operating system to run, so it is impossible to pare down to just a single process (there are tens to hundreds of processes running even when idle). The Pi would not, for example, be the best choice for a calculator (single task, low power), but a microcontroller is perfect. The Pi, by having a full operating system, has support for sound, video, a keyboard, mouse and networking. It makes the perfect decision engine and user interface, and the Arduino makes the perfect end node.

Having now established that the Pi and the Arduino are beneficial working together, there are several ways in which we could connect them – which brings us to the Raswik approach.

Using radio to communicate between the Pi and Arduino

Raswik uses a radio approach to enable the Pi and the Arduino to speak to each other.

Raswik has two components which make this radio communication possible

The Xinorf 100, an Arduino Uno R3-based dev board with a radio transceiver – at the Arduino end, the Xinorf 100 is a digital electronics development board composed of a hybrid of the Arduino Uno R3 and a wireless module called SRF-U. The combination provides built-in wireless (which means you don’t need an XBee shield plus radio module or similar).

Analysis

This approach of using radio to communicate between the Pi and the Arduino is interesting. It reduces complexity (no need to install drivers), and it provides accessories, sensors and actuators in a box – which means you can quickly start taking real physical measurements, such as temperature sensing, and build a series of Raspberry Pi experiments based on sensors and actuators.

Using Raswik, we (my son Aditya and I) interfaced the Pi to the Arduino over a one-metre radio link and then used a temperature sensor to create a temperature graph. This is all fascinating stuff. It is amazing how much you can learn and teach – and how much I have learnt myself through experimentation. We will be demoing this at the Feynlabs launch in Miami.
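To give a flavour of the Pi-side code in an experiment like this, here is a minimal sketch that parses temperature readings arriving as text lines over the radio link and collects them for graphing. The “TMPA” message format below is an assumption for illustration only, not the actual Raswik wire format.

```javascript
// Hypothetical parser for temperature messages received over the radio link.
// Assumed format: a 12-ish character line containing "TMPA" followed by a number.
function parseTemperature(line) {
  const match = /TMPA(-?\d+(?:\.\d+)?)/.exec(line);
  return match ? Number(match[1]) : null; // null for lines we can't parse
}

// Collect valid readings from a stream of incoming lines (stubbed here).
const readings = ['aABTMPA23.5', 'noise', 'aABTMPA24.1']
  .map(parseTemperature)
  .filter((t) => t !== null);

console.log(readings); // [ 23.5, 24.1 ]
```

From here, plotting the `readings` array over time gives the temperature graph.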

Accelerating the Open IOT ecosystem

Here, we are primarily speaking of royalty-free, non-proprietary, open-source software for the Internet of Things. Of course, that does not exclude other software paradigms, which are also part of the ecosystem.

In this longish blog post, I will discuss how the webinos project fits in with open-source IOT, especially in the context of its role for node.js.

I have been leading the webinos IOT hub efforts – the blog comprises insights and contributions from others at webinos, especially Dr Paddy Byers, Dr Nick Allott, Dave Raggett (W3C) and Giuseppe la Torre.

It’s hard to describe webinos, and I once jokingly applied a Star Trek analogy: ‘webinos boldly takes node.js where no node.js has gone before!’

So, I will use the paradigm of node.js to explain these ideas

Node.js and webinos

Essentially, in webinos we embed an agent into a device that allows it to be part of the Personal Zone of devices managed by a person. The agent is implemented with node.js, and it enables secure mutual authentication of devices in the Zone. Thus, webinos extends the traditional web runtime with a suite of APIs for discovery, messaging etc.

The analogy of an email server is applicable here. Like an email server, messages are stored in the ‘cloud’ but can be accessed by local devices. But webinos also adds distributed functionality, i.e. services owned by one person can be shared with others (under policy limitations). In an IOT sense, that means a sensor owned by one user can be discovered and shared with another user.

Webinos has the following characteristics:

Non-proprietary

Cross-device

Secure

Distributed

Privacy-enabling, i.e. helping users re-establish control over their devices and personal data.

webinos can be applied to many industries and applications and is initially focussed on four specific areas or gateways: TV, automotive, health and home automation. Note that this blog and discussion relate only to the home automation/IOT areas of webinos.

The description of webinos (a non-proprietary, cross-device, secure, distributed platform which helps in re-establishing control over your devices and personal data) sounds daunting, but in practice it means:

a) Devices you own can be translated into a service that can be discovered and shared with others (based on policy settings) and

b) Similarly, devices owned by others can be discovered by you as a service and can be accessed (again subject to policy approval).
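To make the policy-mediated sharing in (a) and (b) concrete, here is a small sketch of an access check. The policy shape, names and helper function are illustrative assumptions, not the actual webinos policy format.

```javascript
// Hypothetical policy check: may `requesterId` access `serviceName`?
// The policy object below is an illustration, not the webinos XACML-style policy.
function mayAccess(policy, requesterId, serviceName) {
  const entry = policy[serviceName];
  if (!entry) return false;                      // undeclared services stay private
  return entry.allow.includes(requesterId) || entry.allow.includes('*');
}

const policy = {
  'temperature-sensor': { allow: ['alice', 'bob'] }, // shared with two friends
  'camera':             { allow: [] },               // never shared
};

console.log(mayAccess(policy, 'bob', 'temperature-sensor')); // true
console.log(mayAccess(policy, 'mallory', 'camera'));         // false
```

The point of the sketch: discovery and access are always filtered through a policy the device owner controls.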

This has implications for IOT/Home automation/Smart cities

Consider a Smart city scenario:

One department of the city has deployed pollution sensors and temperature sensors. Another department of the city wants access to the same real time information. Indeed, considering Open Data principles, it could be any person – for example developers running a hackathon. In this scenario, the department which owns the sensors can grant access to the sensors to third parties based on Policy scenarios. Indeed, these sensors could become ‘discoverable’ and could be accessed by any third party as needed.

This is achieved through three ways:

Open technologies(specifically node.js)

Implementation of Personal zones and

The webinos Dashboard

Significance of Node.js

(this section – acknowledgements to Dr Paddy Byers)

node.js, or just node, is a runtime environment based on Javascript (JS). It uses the V8 JS engine from Google – the same one as in Chrome – and exposes a series of APIs needed to build networking applications. Libraries include basic things like filesystem and network access, but also HTTP, crypto, SSL, streams – all of the building blocks for apps that either serve or consume network services.

Most important of all, though, is the ability to build apps using external modules – not built in to the core – provided by third parties. There is a very active ecosystem of developers of these node “modules” which gives you access to a massive catalogue of libraries and frameworks. By having this structure, node can concentrate on maintaining a focussed, high performance, stable and common core, and the module ecosystem can provide huge diversity in libraries and frameworks. Unlike other environments – say Ruby with Rails – there isn’t a single framework architecture that becomes an encumbrance or constrains how things are built. There is diversity in the ecosystem and it isn’t held back by centralised coordination or the need for a single view on how things are done.

Node was one of the first projects whose community engagement was fuelled by Github and that mindset – free, decentralised, and open – has been the core ethos of the developer community for node’s core, the module ecosystem and end-user developers. Although node is now owned by Joyent which has its own commercial mission, node remains open and sees contributions from many individuals and organisations.

node is primarily used for building the “front end” for web sites (i.e. the part that directly handles incoming requests and sends responses). Some organisations use it just for the front end but many sites are built top-to-bottom with node.

Node has a number of key advantages.

1) The principal advantage is scalability. node is based on JS; it is event-driven and single-threaded. While this might at first sight seem to be a disadvantage – running an inherently parallel service on a single-threaded runtime – it turns out to be its key advantage. The reason is that the cost of handling each new request, and in particular the cost of each outstanding request, is very small compared with systems that spawn threads or processes to handle each request. Each request is handled – processing the request, resolving the request path and parameters, triggering database or other IO – but then, instead of waiting, the system returns to the idle state, ready to handle a new request. The resources occupied by the pending request are simply a few objects and buffers, so many thousands of requests can be pending on a single server. Secondly, state is easier to share between requests, which minimises the state that needs to be persisted somewhere. A single server can therefore handle tens of thousands of connections and concurrent requests.
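The effect is easy to demonstrate: a thousand pending “requests” on one thread are just timers and closures, and they all complete in roughly the time of one. The request handler below is a stand-in (a timer in place of real database or network IO).

```javascript
// Simulate a request whose IO takes `delayMs` – a timer stands in for real IO.
function handleRequest(id, delayMs) {
  return new Promise((resolve) => setTimeout(() => resolve(id), delayMs));
}

// Serve n concurrent "requests" on the single event-loop thread.
async function serveMany(n) {
  const start = Date.now();
  const results = await Promise.all(
    Array.from({ length: n }, (_, i) => handleRequest(i, 50))
  );
  return { count: results.length, elapsed: Date.now() - start };
}

serveMany(1000).then(({ count, elapsed }) =>
  // Total time is close to a single request's 50 ms, not 1000 × 50 ms.
  console.log(`${count} requests in ~${elapsed} ms`)
);
```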

2) The next key feature is its accessibility. node is small at its core – which means a small learning curve to get started – but has a rich ecosystem of modules that enable you to add functionality quickly. The openness of the platform and modules, the support available from the community, and the sheer diversity of things being created, mean that you’re rarely on your own when trying something new. If you look through the various testimonials on the nodejs.org site you see multiple organisations using node to power their mobile apps or mobile sites. There are several reasons why it is well-suited to this.

3) Suitability for mobile apps – First, these mobile backends – whether serving html or APIs – require huge scale. Any mobile app with even modest adoption can generate hundreds or thousands of requests a second. node allows these services to scale to this level much more readily and cheaply than with competing platforms. Many organisations, even though they have an existing backend for their mainstream website, will take a “clean sheet” approach to building their mobile platform and node is then a natural choice.

4) Further, mobile apps are increasingly dependent on realtime connections where data can be pushed from the server to the device (e.g. with long polling or websocket connections), rather than being solely conventional sites or http APIs. node provides ready support for realtime connections (either directly or with helpers such as socket.io), and realtime push-dependent systems can be built far more easily than would be possible with Rails or PHP, say. LinkedIn, for example, has built its entire mobile backend in node, and you can see other examples on the node.js site.

5) You can also run node on the mobile itself. node is inherently portable – V8 supports multiple CPU architectures and Chrome itself obviously runs on ARM and MIPS and other CPUs as well as x86. node’s footprint on the OS API surface is small – networking, filesystem and events essentially – so it makes it readily portable to multiple environments. There is a port of node to Android and a framework that allows you to build Android apps with node, and there is also an experimental port to iOS.

As devices grow ever more connected, they will increasingly be simultaneously both clients and servers for network services. That doesn’t necessarily mean they will be serving web apps, but your phone has a wide range of data sources that are interesting to exploit – location, camera, proximity via Bluetooth, say, as well as the personal information in contacts, etc. node is a framework that allows you to create servers very quickly for all sorts of functionality.

You would think performance is an issue, but it’s not; modern devices are so powerful that they have plenty of processing power for the kinds of services you would think of. Anywhere you can run a browser you can also run node.

Having services that are always on is an issue for battery life. There needs to be a way of ensuring that an idle service is really idle and doesn’t drain the battery.

Webinos and node.js

Webinos has gone further than most other projects in exploring node.js on different platforms. Specifically, it addresses two separate issues:

a) How to expose device functionality as network-accessible services, and

b) How to create a portable application environment based on JS.

These have implications for IOT.

The main technical contribution of webinos has been privacy and access control for services exposed by a device such as a phone, car or TV. Webinos has the idea that a “personal cloud” can be augmented by devices and the services they each expose, and has created a framework for access to those services, both peer-to-peer and via the cloud.

This is similar to a distributed “plug and play” for personal services; it’s not just about enabling discovery and access, but enabling the owner of the device to give access selectively and to set policies for access. Webinos addresses the range of trust scenarios on which that access might be based – social network relationships, physical proximity, etc.
Webinos is itself built with node, and you can download the specifications from the webinos site and from the webinos GitHub repositories.

Webinos technology

An overview of webinos technology

Within this context now, it’s easier to understand the significance of webinos for IOT

Today, companies provide services but require centralisation of personal data over which you have little control, making it hard to switch companies.

Personal Zones provide an architecture for reclaiming control

You decide what/when to share with 3rd parties

This facilitates intent-based smart search

Your data is managed within your zone, by the services you install

This works well for IoT devices

webinos Personal Zone Hub (PZH)

The Personal Zone is a conceptual construct that is implemented on a distributed basis: a single Personal Zone Hub (PZH) and multiple Personal Zone Proxies (PZPs).

The critical functions that a Personal Zone hub provides are:

A fixed entity to which all requests and messages can be sent and routed on – a personal postbox, as it were.

A fixed entity on the web through which requests and messages can be issued, for security and optimisation reasons.

An authoritative master copy of a number of critical data elements that are to be synced between the Personal Zone Proxies (PZPs) and the Personal Zone Hub (PZH), specifically:

A webinos service host: a Personal Zone Hub (PZH) can host directly Services/APIs that other applications can make use of.

Context sync: the Personal Zone Hub (PZH) should act as the master repository for all context data

A webinos executable host: a Personal Zone Hub (PZH) will be able to run server-resident webinos applications (JavaScript program files wrapped in a webinos application package)

webinos Personal Zone Proxy (PZP)

The webinos Personal Zone Proxy acts in place of the Personal Zone Hub when there is no internet access to the central server.

In order to act in its place, certain information needs to be synchronised between the satellites and the central hub.

This information has already been listed above.

The Personal Zone Proxy (PZP) fulfils most, if not all, of the functions described above when there is no Personal Zone Hub (PZH) access.

In addition to its proxy function for the Personal Zone Hub (PZH), the Personal Zone Proxy (PZP) is responsible for all discovery using local hardware-based bearers (Bluetooth, ZigBee, NFC etc.)

Unlike the PZH, the PZP does not issue certificates and identities.

For optimisation reasons, PZPs are capable of talking directly PZP-to-PZP, without routing messages through the PZH.

webinos Application

A webinos application runs “on device” (where that device could also be internet addressable i.e. a server).

A webinos application is packaged as per the packaging specifications and executes within the WRT (web runtime).

A webinos application has its access to security-sensitive capabilities mediated by the active policy.

A webinos application can expose some or all of its capability as a webinos service.

webinos Service

A webinos service is a collection of functions and events that are accessible by a webinos application.

These functions and events are always presented to the application developer as a set of JavaScript functions, no matter where the implementation resides.

A webinos service must take note of the following parts of the webinos specifications:

Discovery: a service must be discoverable and be able to describe itself to the application in accordance with the discovery specification.

Messaging: a service must be able to receive and respond to incoming RPC messages.
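These two obligations – describing itself for discovery and dispatching incoming RPC messages – can be sketched in a few lines. The message shape and method names below are illustrative assumptions, not the actual webinos RPC format.

```javascript
// Sketch of a service that is discoverable and answers RPC messages.
const service = {
  // Discovery: describe the service to interested applications.
  describe: () => ({ name: 'temperature-sensor', api: ['read'] }),

  // The functions this service exposes.
  methods: { read: () => 21.5 },

  // Messaging: dispatch an incoming RPC message to the right function.
  handleRpc(msg) {
    const fn = this.methods[msg.method];
    if (!fn) return { error: 'unknown method' };
    return { result: fn(...(msg.params || [])) };
  },
};

console.log(service.handleRpc({ method: 'read' })); // { result: 21.5 }
console.log(service.handleRpc({ method: 'nope' })); // { error: 'unknown method' }
```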

Local Connections

One of the critical innovations of webinos is the virtual overlay network that allows different applications and services to talk to each other over many different interconnect technologies. Beyond the variety of interconnect technologies for local messaging, there are three different scenarios in which this communication can take place. These are highlighted below.

Connecting to a full smart device, which hosts both a PZP (and can therefore host native APIs presented as services) and a WRT (and so can host webinos applications exposing webinos services)

Connecting to a dumb device, which hosts a PZP but not a WRT. This means that it can expose only native APIs, not webinos applications

Connecting to a super-dumb device, which hosts neither a PZP nor a WRT but can still expose webinos services – if the client PZP hosts a customised driver

Two other aspects complete the webinos vision – the microPZP and the Dashboard

The microPZP is an implementation of the PZP for devices too low-spec to deploy a full PZP; it is targeted at devices in the 2 MB range.

Dashboard

(acknowledgements to Giuseppe la Torre for this section)

The dashboard brings it all together for the user. In the near future, our houses will be “populated” with several “smart objects” that can be remotely controlled by users. Some effort will be necessary to create a common platform for “physical object virtualisation”. Webinos provides support for the IoT domain, defining and implementing APIs for generic sensors and actuators.

One of the most important features we expect from the IoT ecosystem is the physical mashup.

The webinos home controller is a web application which, relying on the webinos platform, allows users to:

i) Create a customisable UI to display information from the user’s sensors. Using the drag-and-drop paradigm, the user can create their own interface with charts, gauges, text labels and so on, and display information about all the sensors belonging to their personal zone. This UI can be saved and then displayed on any kind of user device (TV, in-car, tablet). This part of the app can be easily extended; it was recently improved with the possibility of displaying the user’s position (a webinos service) on a Google map.

ii) Add “logic” among the smart objects by means of rule definitions. This is a good example of physical mashup: sensors and actuators (and theoretically any type of webinos service) can be used together to create logic rules of the form: if CONDITION then TRIGGER.

Using the drag-and-drop paradigm, the user can place on the UI:

-) Input elements (sensors, user input textfields)

-) Condition elements (<,>,AND,OR)

-) Output elements (actuators)
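A rule of this “if CONDITION then TRIGGER” form is easy to sketch in JavaScript: wire an input element, a condition element and an output element into one callable rule. The sensor, condition and actuator below are stubs for illustration, not webinos APIs.

```javascript
// Build a rule: read the input, test the condition, fire the output.
function makeRule(sensorRead, predicate, actuate) {
  return () => {
    const value = sensorRead();
    if (predicate(value)) actuate(value);
  };
}

// Example: if temperature > 25, switch the fan actuator on.
let fanOn = false;
const rule = makeRule(
  () => 27.3,              // input element: stubbed sensor reading
  (t) => t > 25,           // condition element
  () => { fanOn = true; }  // output element: stubbed actuator
);

rule();
console.log(fanOn); // true
```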

An important webinos feature which has been integrated into the home controller application is the Explorer.

The Explorer is a common interface for webinos applications which allows them to access the services exposed by the user’s devices inside the personal zone.

In the case of the home controller app, the Explorer allows users to pick services (sensors or actuators) from those inside their personal zone or those belonging to a friend’s personal zone.

Considering my work contributing to the EIF Digital World in 2030 report and the impact of Big Data insights, we focus these newsletters on data with a policy slant.

Data affects us all, and it will continue to shape many policy matters in future. I have been tracking Big Data trends on social media – especially Twitter – and here I provide a perspective/edited view for policy matters.

Let’s start with Smart cities. A trend which brings many other trends together.

Should happiness become a general measurement of city life? The Hedonometer project sets out to map happiness levels in cities across the US using data from Twitter.

Using 37 million geolocated tweets from more than 180,000 people in the US, the team from the Advanced Computing Centre at the University of Vermont rated words as either happy or sad.

“Cities looking to understand changes in the behaviour of their citizens, for example to locate ads for public health programmes, can look to social media for real-time information,” said Chris Danforth, one of the project leaders.

The article also provides some interesting data points for policy makers:

In 2013, internet data, mostly user-contributed, will account for 1,000 exabytes. An exabyte is a unit of information equal to one quintillion bytes.

Open weather data collected by the National Oceanic and Atmospheric Administration has an estimated annual value of $10bn

Every day we create 2.5 quintillion bytes of data

90% of the data in the world today has been created in the past two years

Every minute 100,000 tweets are sent globally

Back in 2010 Google chief executive Eric Schmidt noted that the amount of data collected since the dawn of humanity until 2003 was the equivalent to the volume we now produce every two days.

In Norway, more than 40,000 bus stops are tweeting, allowing passengers to leave messages about their experiences, and in London the mayor’s office has just begun a project to tag trees so that people can learn about their history.

Supermarket chain Tesco is installing sensors across its stores to reduce heating and lighting costs. The records of the fridge systems in one store alone produce 70 million data points a year.

Vancouver is making sense of data using a 3D visualisation of the city

Computer-aided design company Autodesk has been working with San Francisco, Vancouver and Bamberg, in southern Germany, to build 3D visualisations over which government can overlay data sets to see how a city is performing at any time.

Presenting data in new ways has had surprising consequences; for example, in Germany the model was used to show people what the impact of a new railway line would be.

And finally the quote: “We are basically building a digital copy of our physical world and that is having profound consequences.”

The Harvard Business Review asks whether data visualisation is actionable by looking at a large data set.

How big? Massive: We are documenting every tweet, retweet, and click on every shortened URL from Twitter and Facebook that points back to New York Times content, and then combining that with the browsing logs of what those users do when they land at the Times.

Link all these data sources together and what do you get? Timely, if not crucial, contextual information about markets, trends, competitors, products and consumer opinions.

This is the promise of DOPA, a project funded under the umbrella of the European Union’s Seventh Framework (a made-for-HBO series title if I’ve ever heard one) implemented to further European research and economic development.

DOPA’s goal is to semantically link massive amounts of open economic and financial data — quantitative, qualitative, structured, unstructured and polystructured (as in audio, video, images, free-form text, tables and XML files) — and make it available through a framework that standardizes data sets. Its hoped-for outcomes include a bevy of innovations based on new ways of looking at publicly available data.

The clinical data analytics market is about to get red hot. With the shift toward new payment models and the sheer amount of clinical data contained in electronic health records, more and more healthcare groups are looking to analytics solutions for population health management, according to a new report released Tuesday.

That is also something for governments to support and finance – business models or research, for instance, to improve the tools for self-protection for the internet user, and possibly to develop a kind of European cloud model which is less [vulnerable] to detection by the intelligence services. There could also be a competitive advantage for European businesses.

After a year or so, I have made some progress on the idea of Big Data algorithms for Smart Cities, and I will try to elaborate in this longish blog post, which you can also download as a PDF. In addition to my Oxford University course on Big Data for Telecoms, from January 2014 onwards I am pleased to be also teaching a course about Big Data Algorithms for Smart Cities. This also includes IOT, mobile and M2M data.

One of the reasons for this blog post is to reach out to companies and other researchers working in this space (for example, IBM (Smarter Planet), SAP and GE (Industrial Internet) are all doing some interesting work here, as are research institutes like Fraunhofer FOKUS). I am already doing some interesting work in this space, especially with the Liverpool Smart Cities project – Connected Liverpool – so we are already looking at real-world applications.

We then apply these to optimisation problems based on data streams from Smart City verticals (like transportation), IOT, mobile data and Open Data streams, all within the context of the R programming language – albeit there is some great work in Python as well, e.g. scikit-learn.

And of course, wearable mobile data technology could create its own data streams.

What makes a city Smart?

How do we bring this all together?

The former Chinese Premier Wen Jiabao once said: “Internet + Internet of Things = Wisdom of the Earth”.

Indeed the Internet of Things revolution promises to transform many domains ..

As the term Internet of Things (IOT) implies, IOT is about smart objects.

For an object (say a chair) to be ‘smart’, it must have three things:

- An identity (to be uniquely identifiable, e.g. via IPv6)

- A communication mechanism (i.e. a radio), and

- A set of sensors / actuators

For example – the chair may have a pressure sensor indicating that it is occupied
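The three ingredients can be sketched as a plain data structure. The field names and occupancy threshold below are illustrative assumptions, not any standard schema.

```javascript
// A "smart" chair modelled as data: identity, radio, sensors.
const smartChair = {
  id: '2001:db8::c4a1',             // unique identity (an IPv6 address)
  radio: 'zigbee',                  // communication mechanism
  sensors: { pressure: () => 412 }, // stubbed pressure sensor reading
};

// Occupied if the pressure reading exceeds a (hypothetical) threshold.
function isOccupied(chair, threshold = 300) {
  return chair.sensors.pressure() > threshold;
}

console.log(isOccupied(smartChair)); // true
```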

Now, if it is able to know who is sitting on it, it could correlate more data by connecting to the person’s profile.

If it is in a cafe, whole new data sets can be correlated (about the venue, about who else is there, etc.)

Thus, IOT is all about Data ..

By 2020, we are expected to have 50 billion connected devices

To put in context:

The first commercial citywide cellular network was launched in Japan by NTT in 1979

The milestone of 1 billion mobile phone connections was reached in 2002

The 2 billion mobile phone connections milestone was reached in 2005

The 3 billion mobile phone connections milestone was reached in 2007

The 4 billion mobile phone connections milestone was reached in February 2009.

So, 50 billion by 2020 is a large number

Smart cities can be seen as an application domain of IOT

In 2008, for the first time in history, more than half of the world’s population was living in towns and cities. By 2030 this number will swell to almost 5 billion, with urban growth concentrated in Africa and Asia and many mega-cities (10 million+ inhabitants). By 2050, 70% of humanity will live in cities.

That is a profound change and will require a different management approach from what is possible today.

Hence, if IOT is seen as part of a network, then it is a core component of GDP.

So, what makes a city ‘smart’?

Building upon the previous discussion, my view is that a Smart City is a city that behaves like the Internet, i.e. it is a platform/enabler for its citizens. Thus, the citizens make the city ‘smart’ by adding knowledge, value, data etc. This is part of a wider socio-economic trend from ‘mass production’ to ‘smaller individualised services’ – for example in music, urban farming, the Bristol Pound, local sourcing of food etc.

Laura is part of a group of dedicated teachers with the objective of creating an inclusive opportunity for learning computer science regardless of gender, race, socio-economic status, SEN or disabilities.

I will be attending this event as a developer, given my interest through feynlabs.

More details below

hosted by #include

in partnership with the University of Warwick

9 November 2013 10.00 – 17.00

This is no ordinary hack – instead of creating a piece of software,
the aim is to create resources for use in the teaching of Computer
Science in the classroom.

Teachers, developers and academics will team up to tackle the new
curriculum, sharing their expertise to produce interesting learning
opportunities. We want to support diversity so the resources should
aim to be inclusive for as many students as possible.