~ 5G, IoT, NFV, Cloud & Identity tech blog

I try to fit components together logically so that they make the most of what the technology offers. I work predominantly in the OSS world on new access technologies like 5G and implementations like the Internet of Things. I want not just to deploy these capabilities but also to let them operate seamlessly. The following is my view of the opportunity of closed-loop remediation.

Closed-loop remediation rests on two main tenets: 1. you can stream all network event data into a machine learning engine and apply an algorithm like K-Nearest Neighbour; 2. you can expose remediation APIs on your programmable network.

All of this requires a lot of technology convergence, so what is actually needed to make it converge?

Let’s start with streaming. Traditionally we used SNMP for event data, traps & alarms, and when that didn’t work we deployed physical network probes. Now the pattern is “Kafka, stream once”: streams of logs from the virtualised infrastructure and the virtualised functions are parsed in a data streaming architecture into different big data persistence stores.
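As a sketch of the parsing step, here is what turning one raw VNF log line into a structured event might look like. The field names and schema below are purely illustrative, not any standard:

```python
import json
from datetime import datetime, timezone

def parse_event(raw_line: str) -> dict:
    """Parse one JSON log line from a virtualised function into a flat event record.
    Field names ("ts", "vnf", etc.) are illustrative, not a standard schema."""
    record = json.loads(raw_line)
    return {
        "timestamp": datetime.fromtimestamp(record["ts"], tz=timezone.utc).isoformat(),
        "source": record.get("vnf", "unknown"),
        "severity": record.get("severity", "info"),
        "metric": record.get("metric"),
        "value": record.get("value"),
    }

# In a real pipeline this would run inside a Kafka consumer loop, writing each
# parsed event onward into the big data persistence layer.
line = '{"ts": 1500000000, "vnf": "vFirewall-1", "severity": "warn", "metric": "cpu", "value": 0.93}'
event = parse_event(line)
print(event["source"], event["metric"], event["value"])
```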

The machine learning engine (I’m keenest on FlinkML at the moment) works on the big data persistence, giving it the largest possible corpus of event data. K-NN can analyse network behaviour and spot patterns that are harder for human operations teams to see. It can also predict timed usage behaviours and scale the network accordingly.
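K-NN itself is simple enough to sketch in a few lines: label a new event vector by majority vote among its k nearest labelled historical events. The toy corpus, feature names and labels below are illustrative only:

```python
import math
from collections import Counter

def knn_classify(history, query, k=3):
    """Label a new event vector by majority vote among its k nearest
    historical events (Euclidean distance). history: list of (vector, label)."""
    by_distance = sorted(history, key=lambda item: math.dist(item[0], query))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Toy corpus: (cpu_load, packet_loss) vectors labelled by a human operations team.
history = [
    ((0.20, 0.00), "normal"),
    ((0.30, 0.01), "normal"),
    ((0.90, 0.20), "congested"),
    ((0.95, 0.30), "congested"),
    ((0.25, 0.02), "normal"),
]
print(knn_classify(history, (0.88, 0.25)))  # → congested
```

A production engine (FlinkML or otherwise) would do the same thing over millions of feature vectors, with the "congested" verdict feeding the remediation API rather than a print statement.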

I am increasingly looking at Openstack and Open Source Mano as an NFVO platform orchestrating the available virtualised network functions. The NFVO can expose a customer-facing service or the underlying RFSs. But to truly operate, the ML should have access to the RFS layer. This is the hardest part and depends on the underlying design pattern of the Virtual Network Functions. That, though, is a topic for another blog post.

Mobile Edge Computing (MEC) is a key piece of the 5G architecture (or of “5G-type” claims on a 4G RAN). MEC can already make a huge difference to latency and quality when streaming multiple video feeds within a sporting venue; for example, Intel, Nokia and China Mobile streamed video of the Grand Prix at Shanghai International Circuit.

A 5G mobile operator will be introducing virtualised network functions as well as mobile edge computing infrastructure. This creates both opportunities and challenges. The opportunities are the major MEC use cases, which include context-aware services, localised content and computation, low-latency services, in-building use cases and venue revenue uplift.

The challenges include providing the Mobile Edge Compute Platform in a virtualised 5G world. Mobile operators are not normally IaaS / PaaS providers, so this may prove difficult.

The ETSI 2018 group report Deployment of Mobile Edge Computing in an NFV environment describes an architecture based on a virtualised Mobile Edge Platform and a Mobile Edge Platform Manager (MEPM-V). The Mobile Edge Platform runs on NFVI managed by a VIM, and that NFVI in turn hosts the MEC applications.

The ETSI architecture seems perfectly logical and reuses the NFVO and NFVI components familiar from all virtualisation work. In this architecture the NFVO and MEPM-V act as what ETSI calls the “Mobile Edge Application Orchestrator” (MEAO) for managing MEC applications. The MEAO uses the NFVO for resource orchestration and for orchestration of the element managers.

The difficulty still lies in implementing the appropriate technologies to suit the MEC use cases. Openstack (or others) may provide the NFVI and Open Source Mano (or others) may provide the NFVO; however, what doesn’t exist is the service exposure, image management and software promotion capability necessary for a company to on-board MEC applications.

If MEC does take off, what is the likelihood that AWS, GCP and Azure will extend their footprint into the telecom operators’ edge?

The European Payment Services Directive (PSD2) will be transposed into member state law by 2018 and will have a transformative effect on national and cross-border electronic payments. The Directive aims to increase the convenience and security of electronic payments. It does this by promoting payment innovation, for example through Open APIs, and by deregulating financial service roles. PSD2 will allow new payment service providers to enter the market; technology firms and mobile operators may be the greatest beneficiaries.

The Directive will transform the way users access their bank accounts during digital commerce. For example, the user may choose a mobile network operator’s payment mechanism as part of a contactless payment.

Opportunity for Mobile Operators

PSD2 mandates the use of robust authentication standards. Any technology provider with authentication and authorisation capabilities can take advantage of PSD2.

The advantage for Mobile Operators is their ability to support network authentication and service location functions. These functions are all particular to mobile networks, making operators a valuable partner in the development of new identity and authentication solutions.

£249.9 million – payment card gross fraud in the six months to June 2016

12 million Apple Pay monthly users globally (Q1 2016)

71% – proportion of UK adults with a smartphone (Q1 2016)

Electronic Identification and Trust Services

Electronic Identification and Trust Services (eIDAS) regulation is a tenet of the EU’s Digital Single Market. Mobile Operators have already launched pilots for eIDAS compliant cross-border authentication solutions for the use of public sector services.

PSD2 and eIDAS together give operators a unique opportunity to support identity for both the private and public sectors. This identity management capability will be critical to all Open APIs in any new PSD2 mobile banking platform.

Some likely use cases

The freedom to “delegate” bank account access is the first major shift that users will see. Under PSD2 an account holder will be able to allow a licensed Payment Initiation Service Provider (PISP) or Account Information Service Provider (AISP) access to their bank account for the purposes of initiating a payment or evaluating the user’s ability to pay.

Online commerce is likely to become simpler through such rules as it will allow all banked consumers to buy online using just their bank account, removing the reliance on debit or credit card ownership. This represents a leap forward for consumer and merchant alike, since direct bank transfers can typically clear in two hours or less with some services offering instant settlement.

Discounts for mobile cash?

For merchants wanting to ease cash flow this is a benefit and a service for which they may be willing to offer incentives. Direct bank transfers and instant settlement give the user simplicity and the merchant immediacy, much as when merchants offered “discounts for cash”.

The power to delegate bank account access is set to trigger major changes in the way digital commerce is conducted. The appearance of new innovative payment services that rely on the powers conveyed by PSD2 is highly likely; as is the anticipated reaction from traditional card schemes whose profitability may well be curtailed by PSD2’s cap on interchange fees and merchant surcharging. Either way the consumer will benefit.

Payment security

With increased openness come issues of security. To address these, PSD2 demands the use of strong authentication. The European Banking Authority (EBA) has been tasked with defining a standard that achieves this, and the first drafts are out for review now. From the application designer’s perspective, traditional authentication systems that employ one-time passwords (OTPs) or static personal identification numbers (PINs) may be deemed unfit for future digital commerce applications as banks and other service providers latch on to the EBA’s regulatory technical standard.

The EBA is asking for two-factor authentication, where the user has to present two things, for instance a password and an access token, to prove their identity. Mobile-phone-based services like GSMA Mobile Connect will become more prevalent in the future digital market. Advances in smartphones will also increase the use of biometrics as an authentication factor.
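One widely deployed possession factor is the time-based one-time password (TOTP, RFC 6238), the algorithm behind most authenticator apps and hardware tokens. A minimal sketch using only the Python standard library:

```python
import hmac
import hashlib
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a big-endian counter, dynamically truncated."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP keyed to the current 30-second time window."""
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t // step), digits)

# RFC 6238 Appendix B test vector: at t=59s the 8-digit SHA-1 TOTP is 94287082.
print(totp(b"12345678901234567890", for_time=59, digits=8))  # → 94287082
```

Because the code is derived from a shared secret held by the token and a time window, it proves possession without transmitting the secret itself — which is why it pairs naturally with a password as the second factor.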

Direct Carrier Billing

Not all the impacts of PSD2 will come from easier access to bank accounts or added security. PSD2 has tightened the rules on Direct Carrier Billing (DCB). Consumers accustomed to buying digital content via their mobile phone and charging it to their phone bill will see their options curtailed.

Under PSD2, single DCB transactions will be capped at a maximum of €50 per transaction with a maximum monthly limit of €300. PSD2 continues to allow Electronic Money Institutions (EMIs) to extend the reach of DCB from digital content to the purchase of physical goods.
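The cap logic a carrier’s billing platform would need is simple enough to sketch. Amounts are in euros and the function name is illustrative:

```python
DCB_TX_CAP = 50.00        # € cap per single DCB transaction under PSD2
DCB_MONTHLY_CAP = 300.00  # € cumulative cap per billing month

def can_charge(amount: float, month_total: float) -> bool:
    """Return True only if a new DCB charge fits within both PSD2 limits,
    given the total already billed this month."""
    return amount <= DCB_TX_CAP and month_total + amount <= DCB_MONTHLY_CAP

print(can_charge(9.99, 120.00))   # → True
print(can_charge(60.00, 0.00))    # → False (breaks the per-transaction cap)
print(can_charge(25.00, 290.00))  # → False (breaks the monthly cap)
```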

Mobile Operator Opportunities and Partnerships

Mobile Operators have a PSD2 advantage through service location functions and authentication. SIM & eSIM based authentication can be extended to provide security for customers and merchants by implementing Electronic Identification and Trust Services. With 5G, new network slices may be able to provide a Quantum Encryption Network Slice that would guarantee merchant to bank transactional security.

The greatest opportunity may be through partnerships. The GSMA Mobile Connect and mobile payments projects are likely avenues for greater partnerships. The advent of contactless payment cards in the late 2000s saw early attempts by UK mobile operators to act in partnership as a bank. The advantage of PSD2 is that it removes the requirement for mobile operators to become banks as they can instead focus on interactions with payment processing companies.

Finally, any potential European Commission anti-trust regulation on mobile device payment solutions could further open the market for mobile operators (or mobile industry bodies) to provide payment solutions. Such a change in regulation may allow the handset vendor to offer their services as part of the initial contract sale.

The Internet of Things, as distinct from the internet of people, requires communication between devices to enable tracking, monitoring, metering and so on. This intercommunication depends on semantically structured and shared data to enable functions such as identification, authentication, authorisation, bootstrapping and provisioning. Standardising both the semantically structured data and the enabling functions across M2M applications and devices would reduce the cost and extend the life of M2M devices. Standardisation for the Internet of Things is the aim of a common service layer for M2M.

The oneM2M group aims to develop technical specifications that address the need for a common M2M Service Layer that can be readily embedded within various hardware and software, and relied upon to connect the myriad of devices in the field with M2M application servers worldwide. The common M2M Service Layer should be agnostic to underlying network technology (yet leveraging the unique features of these underlying networks), and it will use network layer services (such as security (encryption and authentication), QoS, policy, provisioning, etc.) through an adaptation layer/APIs.
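The “agnostic to underlying network technology, yet leveraging its unique features, through an adaptation layer/APIs” idea can be sketched as a plain adapter interface. Everything here — class names, methods, the QoS profiles — is illustrative, not anything oneM2M defines:

```python
from abc import ABC, abstractmethod

class UnderlyingNetwork(ABC):
    """Adaptation-layer interface: the common service layer codes against this,
    never against a specific access network. Method names are illustrative."""

    @abstractmethod
    def send(self, device_id: str, payload: bytes) -> None: ...

    @abstractmethod
    def request_qos(self, device_id: str, profile: str) -> bool: ...

class CellularAdapter(UnderlyingNetwork):
    """One concrete adapter, mapping the interface onto cellular network services."""

    def send(self, device_id, payload):
        print(f"3GPP bearer -> {device_id}: {len(payload)} bytes")

    def request_qos(self, device_id, profile):
        # Leverages the underlying network's QoS machinery where it exists.
        return profile in ("low-latency", "best-effort")

def service_layer_send(network: UnderlyingNetwork, device_id: str, payload: bytes):
    # The service layer stays network-agnostic; the adapter does the mapping.
    if network.request_qos(device_id, "best-effort"):
        network.send(device_id, payload)

service_layer_send(CellularAdapter(), "dev-001", b"\x01\x02")
```

A WiFi or LPWA adapter would implement the same interface, which is exactly what lets the service layer be embedded unchanged across different hardware.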

For an embedded common M2M service layer to operate, it must support AAA (authentication, authorisation & accounting) for smart devices in a way that is agreeable to multiple device manufacturers and network operators. The Telecommunications Industry Association (http://www.tiaonline.org) is defining a functional standard for Authentication, Authorization and Accounting for Smart Devices (AAA-SD). The functions proposed for the common M2M service layer include Policy & Resource Management.

TIA TR-50, Functional architecture for M2M Smart Device Communication System Architecture, describes AAA-SD as able to “provide authentication, authorization and accounting services to other entities in the network to establish and enforce security policies. The services may include generation of keys, generation and validation of certificates, validation of signatures, etc.”

JSON Web Token (JWT)

The OpenID Connect protocol [OpenID.Core] is a simple, REST/JSON-based identity federation protocol layered on OAuth 2.0. It uses the JWT and JOSE formats both to represent security tokens and to provide security for other protocol messages (signing and, optionally, encryption). OpenID Connect negotiates the algorithms to be used and distributes information about the keys via protocol elements that are not part of the JWT and JOSE header parameters.

iss REQUIRED. Issuer Identifier for the Issuer of the response

sub REQUIRED. Subject Identifier

aud REQUIRED. Audience(s) that this ID Token is intended for. It MUST contain the OAuth 2.0 client_id of the Relying Party as an audience value

exp REQUIRED. Expiration time on or after which the ID Token MUST NOT be accepted for processing

iat REQUIRED. Time at which the JWT was issued

auth_time Time when the End-User authentication occurred

nonce String value used to associate a Client session with an ID Token, and to mitigate replay attacks

amr OPTIONAL. Authentication Methods References. JSON array of strings that are identifiers for authentication methods used in the authentication. For instance, values might indicate that both password and OTP authentication methods were used

azp OPTIONAL. Authorized party – the party to which the ID Token was issued. If present, it MUST contain the OAuth 2.0 Client ID of this party
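A minimal sketch of how a Relying Party might check the REQUIRED claims above on an already-decoded ID Token payload. Signature verification, which a real RP must also perform, is out of scope here, and the function and variable names are illustrative:

```python
import time

REQUIRED = ("iss", "sub", "aud", "exp", "iat")

def validate_id_token(claims: dict, client_id: str, issuer: str, now=None) -> bool:
    """Check the mandatory ID Token claims on a decoded JWT payload."""
    now = time.time() if now is None else now
    if any(name not in claims for name in REQUIRED):
        return False
    if claims["iss"] != issuer:
        return False
    aud = claims["aud"]
    audiences = aud if isinstance(aud, list) else [aud]
    if client_id not in audiences:   # aud MUST contain the RP's client_id
        return False
    return now < claims["exp"]       # token MUST NOT be accepted at/after exp

claims = {"iss": "https://op.example.com", "sub": "user-42",
          "aud": "my-client", "exp": 2000000000, "iat": 1999999000}
print(validate_id_token(claims, "my-client", "https://op.example.com", now=1999999500))
```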

JSON Object Signing and Encryption (JOSE)

In the OpenID Connect context, it is possible for the recipient of a JWT to accept it without integrity protection in the JWT itself. In such cases, the recipient chooses to rely on transport security rather than object security. For example, if the payload is delivered over a TLS-protected channel, the recipient may regard the protections provided by TLS as sufficient, so JOSE protection would not be required.

However, even in this case, it is desirable to associate some metadata with the JWT payload (claim set), such as the content type, or other application-specific metadata. In a signed or encrypted object, these metadata values could be carried in a header with other metadata required for signing or encryption. It would thus simplify the design of OpenID Connect if there could be a JOSE object format that does not apply cryptographic protections to its payload, but allows a header to be attached to the payload in the same way as a signed or encrypted object.
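RFC 7519 ended up defining exactly such an object: the Unsecured JWT, which carries the JOSE header {"alg": "none"} and an empty signature segment. A sketch of building one (claim values are illustrative):

```python
import base64
import json

def b64url(data: bytes) -> str:
    """Base64url without padding, as used in JWT segments."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def unsecured_jwt(payload: dict) -> str:
    """Build an Unsecured JWT (RFC 7519, alg "none"): a JOSE header is attached
    to the claim set but no cryptographic protection is applied, so the third
    (signature) segment is empty."""
    header = {"alg": "none"}
    return ".".join([
        b64url(json.dumps(header, separators=(",", ":")).encode()),
        b64url(json.dumps(payload, separators=(",", ":")).encode()),
        "",  # empty signature segment
    ])

token = unsecured_jwt({"iss": "https://op.example.com", "sub": "user-42"})
print(token)
```

Note the recipient must only ever accept such a token when the transport (e.g. TLS) supplies the missing integrity protection.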

I work as an architect at a big telco that has recently become a quad-player. Part of my job is to think about what services come next. My main interest has always been distributed computing, whether networking or large data-sets. As part of my job I also attend IT conferences on the internet of distributed devices.

My key questions & my current thoughts are:

What will become the distributed identity standard for device authentication?

OpenID Connect (OIDC), like SAML, is not itself an AuthN mechanism but extends the OAuth 2.0 model. The identity attribute API can be used for profile loading, to define a user’s identity on the device. This can be a lightweight equivalent of a SIM profile and can also support the eUICC flows for ownership switch (similar to a Profile Content Update Function).

Any AuthN & identity solution must work within the limitations of loading profiles onto smaller-memory devices and of running an AuthN flow over HTTP.

What will be the numbering & addressing standard for massively distributed devices?

This is more of an open question, rooted in the history of the service: eUICC-enabled devices will require an international mobile subscriber identity, while LPWA & WiFi enabled devices will require a MAC addressing / IPv6 registry with the service provider.

The support for these addressing mechanisms and for near field communication devices will have an impact on the network operator’s OSS IT architecture.

The GSMA proposal for eUICC uses the START-IMSI required for profile loading, which supports roaming and allows for profile swap on change of ownership.

IPv6 offers a highly scalable address scheme: 2^128 unique addresses, which is about 3.4 × 10^38. That works out to roughly 6.7 × 10^17 addresses per square millimetre of the Earth’s surface — quite sufficient to address the needs of any present or future communicating device.
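The back-of-envelope arithmetic behind that per-square-millimetre claim, as a quick sketch (the Earth’s surface area figure is approximate):

```python
# Rough arithmetic behind the "addresses per square millimetre" claim.
EARTH_SURFACE_KM2 = 510.1e6      # approximate surface area of the Earth, km²
MM2_PER_KM2 = (10 ** 6) ** 2     # 1 km = 10**6 mm, so 1 km² = 10**12 mm²

total_addresses = 2 ** 128       # ≈ 3.4 × 10**38
per_mm2 = total_addresses / (EARTH_SURFACE_KM2 * MM2_PER_KM2)
print(f"{per_mm2:.2e}")  # ≈ 6.67e+17 addresses per square millimetre
```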

6LoWPAN provides a simple and efficient mechanism to shorten the IPv6 address size for constrained devices

Will the smart device co-ordination be through an embedded chip-set in the main home internet router?

Probably not, but I would have said the same five years ago, and I have still not seen Zigbee co-ordinators or Thread border routers catch on as stand-alone devices.

I’ve not been blogging for a while (too much work is no excuse), but I will be updating more on these topics soon.

The Internet of Things requires multiple Reference Architectures which can map capabilities to specific technology domains. This is a challenge because there is no single unifying industry definition for the Internet of Things. For the purpose of this presentation it is assumed that:

“Things” have semantic representation in the Internet

“Things” can be acted upon in a structured manner (e.g., status, capabilities, location, measurements) or can report in structured data or can communicate directly with other “Things”

There are many different usable protocols for communication with M2M devices for the Internet of Things. Specific protocols are more appropriate for different devices (e.g. memory & power profiles) and specific protocols are more appropriate for different communication needs (e.g. State Transfer Model & Event Based Model)

The GSM Association (GSMA) puts the M2M market size at $1.2tn of revenue, with 12 billion connected mobile devices by 2020. These numbers alone are enough to excite the most conservative of operators and to wobble the subscriber-centric business models that currently prevail. The existing MNO model, in which the more subscribers an operator has the more successful and profitable it is considered to be, is about to be tested by this massive new market. This is mainly because Average Revenue Per User (ARPU) in the M2M business averages below ten cents per device, while on the other hand connection density can be virtually endless. Success will therefore depend on how dynamically the CSP reacts to provide new and flexible platforms for the new devices, applications and verticals that M2M will address every day.

Because of the low ARPU and massive market multiplier many MNOs should be prepared for a shake-up of their OSS which will have to fulfil and provision at bulk and at low cost.

IPv6 addressing will also make M2M services not just a mobile proposition, but applications that can work seamlessly across both mobile and wired broadband connections. eUICCs and wifi hand-off will have to be included in the new OSS. Furthermore Near Field Communication will require its own billing model.

Credit and debit cards stolen from bricks-and-mortar stores sell on the black market for at least ten times the price of cards stolen from online merchants. There are plenty of Tor-accessible card shops that will happily buy cards from hackers and resell them on the open market. A card stolen from a bricks-and-mortar store can be reused in a real store to buy high-value electronic goods or gift cards that are easily converted into cash. Cards stolen online require a card verification value (CVV) and can only be used at online stores willing to ship high-value goods to an address other than the billing address.

The two most recent major bricks-and-mortar card breaches in the news are Home Depot, where 56m customers’ card details were stolen, and Target, where 70m customers’ card details were stolen. In both cases the point-of-sale (POS) systems were breached by variants of the same memory-scraping malware: BlackPOS / Kaptoxa in the case of Target and FrameworkPOS in the case of Home Depot. Both run inside the POS system (on a Windows OS) and are registered as a service. Both read card data before it is encrypted, then collate and later exfiltrate it; the card data is then made available as ‘dumps’ on black-market stores.
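Memory scrapers typically identify candidate card numbers by pattern-matching process memory and then confirming hits with the Luhn checksum — the same check that data-loss-prevention tools use defensively to flag card data. A sketch of that check:

```python
def luhn_valid(candidate: str) -> bool:
    """Luhn checksum (ISO/IEC 7812): doubles every second digit from the right
    and checks the total modulo 10. Used both by scrapers to confirm a byte run
    is a real PAN and by DLP tools to flag one."""
    digits = [int(ch) for ch in candidate if ch.isdigit()]
    if len(digits) < 12:             # too short to be a card number
        return False
    total = 0
    for position, digit in enumerate(reversed(digits)):
        if position % 2 == 1:        # double every second digit from the right
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

print(luhn_valid("4111 1111 1111 1111"))  # → True (a well-known test PAN)
print(luhn_valid("4111 1111 1111 1112"))  # → False
```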