“Banking is necessary. Banks are not.” Yep. Bill Gates said it, back in 1994. And 24 years later, it’s set to become reality. From 13 January 2018, banking will no longer be the exclusive domain of banking institutions, because PSD2 is going to drastically alter the way in which we bank.

The biggest consequence is that more than 4,000 European banks will need to open their legacy (mainframe) data stores to Third Party Providers (‘TPPs’) and allow them to retrieve account information (‘AIS’) or initiate payments (‘PIS’). Both capabilities will be facilitated through APIs. I wrote about the scope and ramifications of PSD2 a few months ago, and I’ve been thinking ever since about the implications for existing banks and whether they’ve got reason to be scared.

It would be surprising if some of the traditional banks weren’t nervous about the extent to which they’ll have to open their kimonos under PSD2. And even if the Facebooks, Googles or Amazons of this world don’t become banks overnight, I expect the traditional, lifelong bank-customer relationship to slowly evaporate as a result of PSD2 (and subsequent versions of PSD).

Facebook could easily decide to become an AISP (Account Information Service Provider – see Fig. 2 above), which would enable them to offer an aggregated view of a user’s bank accounts. As a result, they would be able to analyse spending behaviour, understand their users’ financial profiles and personalise a user’s banking experience. This isn’t that revolutionary, as virtual assistants like Cleo and Treefin have already started offering this functionality, and I believe it’s highly likely that we’ll see it roll out across Facebook Messenger or WeChat in the near future. If you need more convincing, Facebook made their first move two years ago by appointing David Marcus, former CEO of PayPal, to head up Facebook Messenger, so watch this space. Similarly, US bank Capital One integrated with Amazon’s virtual assistant Alexa last year. This integration enables Capital One customers to pay their credit card bills and check their balances by talking to their Alexa devices.
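To make the AISP idea concrete, here’s a minimal sketch of the kind of aggregated view an AISP could build once it can pull account information from several banks’ PSD2 APIs. The account data and field names below are made up for illustration and don’t follow any real bank’s API schema:

```python
from collections import defaultdict

# Hypothetical AIS responses from two banks' PSD2 APIs (mock data,
# not a real bank API schema).
accounts = [
    {"bank": "Bank A", "iban": "GB00AAAA00000000000001", "balance": 1250.40,
     "transactions": [{"category": "groceries", "amount": -54.20},
                      {"category": "salary", "amount": 2100.00}]},
    {"bank": "Bank B", "iban": "GB00BBBB00000000000002", "balance": 310.75,
     "transactions": [{"category": "groceries", "amount": -32.10},
                      {"category": "travel", "amount": -120.00}]},
]

def aggregate(accounts):
    """Build the kind of cross-bank view an AISP could offer."""
    total = sum(a["balance"] for a in accounts)
    spend = defaultdict(float)
    for a in accounts:
        for t in a["transactions"]:
            if t["amount"] < 0:  # only count outgoing payments as spend
                spend[t["category"]] += -t["amount"]
    return {"total_balance": round(total, 2), "spend_by_category": dict(spend)}

view = aggregate(accounts)
print(view["total_balance"])   # 1561.15
print(view["spend_by_category"])
```

This is exactly the cross-bank spending analysis that a Cleo- or Messenger-style assistant could surface conversationally.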

In addition, any remaining doubters about the power of APIs are likely to be converted as a result of PSD2. In the current Fintech landscape, there is already a large number of banks that either use APIs to hook into existing banking infrastructures (e.g. Varo Money) or to offer additional services (e.g. N26). PwC recently conducted a study into the strategic implications of PSD2 for European banks and listed no fewer than six API-powered banking business models (see Fig. 3 above).

Main learning point: It will be interesting to see what the actual impact of PSD2 will be, but if I were a traditional European bank, I’d be working as hard as I could to open up my APIs from today and start building strong alliances with 3rd parties and their developers. As Nas once rapped on “N.Y. State of Mind”, “I never sleep, ’cause sleep is the cousin of death.” If I were a traditional bank I’d follow Nas’ advice and give up on sleep completely …

I recently heard Shamir Karkal, Head of Open APIs at BBVA, talking about open platforms and I was intrigued. In the podcast episode Shamir talked about the power of APIs, but at the same time stressed the importance of having a strong platform that these API endpoints can hook into.

Shamir talked about building a product with a platform attached. Instead of just building a set of APIs, we should treat APIs as a way in for customers, developers and third parties to hook into the capabilities of our business. For example, hooking into all the things that banks typically tend to do well: compliance, risk management and customer support.
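Shamir’s “product with a platform attached” point can be sketched in code: the API layer itself is thin, and merely routes callers through to capabilities the bank already does well. Everything below (endpoint names, the blocklist, the risk rule) is an illustrative assumption, not any real bank’s spec:

```python
# A toy "product with a platform attached": the API surface is thin and
# simply dispatches to underlying platform capabilities. All names and
# rules here are illustrative assumptions, not a real banking API.

def compliance_check(payload):
    # e.g. a naive sanctions screen against a hypothetical blocklist
    blocked = {"ACME SHELL CORP"}
    return {"passed": payload["counterparty"].upper() not in blocked}

def risk_score(payload):
    # toy rule: larger payments score as riskier, capped at 100
    return {"score": min(100, int(payload["amount"] / 100))}

CAPABILITIES = {
    "compliance/check": compliance_check,
    "risk/score": risk_score,
}

def handle(endpoint, payload):
    """Dispatch an API call to the underlying platform capability."""
    if endpoint not in CAPABILITIES:
        return {"error": "unknown endpoint"}
    return CAPABILITIES[endpoint](payload)

print(handle("risk/score", {"amount": 2500}))   # {'score': 25}
```

The value to a third-party developer isn’t the routing, it’s the compliance and risk machinery sitting behind it, which is precisely Shamir’s point.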

My ears really perked up as soon as Shamir started talking about Dwolla. Dwolla is a US-based peer-to-peer payments company whose mission is to facilitate “Simple payments. No transaction fees.” Dwolla is powered by APIs, making it easy for US users to link their Dwolla account to a US bank account or credit union account to move money. Setting up a Dwolla account is free, and there’s no per-transaction fee. Users can collect payment on an invoice, send a one-time or recurring payment, or pay out a large number of people at once. Dwolla also offers this as a white label solution (see Fig. 1 below).

In essence, what Dwolla does is enable real-time payments between Dwolla accounts and another bank account that users want to send money to. Dwolla is integrated with banks such as BBVA, having Dwolla APIs ‘talk’ to the bank’s APIs. Dwolla has created a protocol, FiSync, which aims to make it more secure for users to transmit information between accounts. FiSync enables the use of secure authentication and tokenisation in the comms between Dwolla and accounts like those of BBVA Compass. This way, BBVA Compass account holders don’t have to share their account info with Dwolla (see Fig. 2 below).
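The tokenisation idea is worth a quick sketch. The bank issues an opaque token for an account and keeps the token-to-account mapping on its own side, so the payments network never handles raw account details. This is a hedged illustration of the concept only, not the actual FiSync protocol:

```python
import hashlib
import hmac
import secrets

# Illustration of tokenisation as used between a payments network and a
# bank: the bank mints an opaque token per account and is the only party
# that can map it back. Not the real FiSync protocol.

BANK_SECRET = secrets.token_bytes(32)   # held by the bank only
VAULT = {}                              # bank-side token -> account mapping

def issue_token(account_number: str) -> str:
    """Bank mints an opaque, non-reversible token for an account."""
    token = hmac.new(BANK_SECRET, account_number.encode(),
                     hashlib.sha256).hexdigest()
    VAULT[token] = account_number
    return token

def settle(token: str, amount: float) -> bool:
    """Bank-side settlement: only the bank can resolve the token."""
    return token in VAULT

tok = issue_token("12345678")   # the payments network only ever sees `tok`
print(settle(tok, 25.0))        # True
```

Because the HMAC is keyed with a secret only the bank holds, an intercepted token reveals nothing about the underlying account number, which is the whole point of the scheme.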

Main learning point: I love how Dwolla’s proposition is almost entirely API based, making it easy for its users to transfer money to bank accounts and credit union accounts. Dwolla definitely feels more seamless, secure and cost-efficient compared to the way in which users traditionally transfer money from one account to another.

The other day I wrote about blockchains, looking into this new technology. I then came across a company called Elliptic that specialises in “identifying illicit activity on the Bitcoin blockchain.” It made me realise how blockchains can be used for all kinds of illegal activity. Also, I can now see a clear link between digital identity management and blockchains.

Transparency is a key aspect of blockchains and, going back to the original purpose of blockchains, it is what allows Bitcoin transactions to be verified through the chain. Naturally, there are lots of users who don’t like this transparency and use anonymising services to cover their tracks when transacting through the blockchain.

I read an interesting article about how anonymous users and their transactions can still be identified, tracking users’ activity both in real-time and historically. There are a number of centralised services within the blockchain e.g. wallets and exchanges which have access to user and transaction info. Also, by doing an activity or user network analysis, one can find out more about the type of transaction and the identity of the users involved (see example in Fig. 1 below).
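One well-known network-analysis heuristic of this kind is common-input ownership: all input addresses of a single transaction are assumed to belong to the same wallet owner, so repeated co-spending links addresses into clusters. Here’s a minimal union-find sketch over made-up transactions (the heuristic is real, the data and any real firm’s exact methods are not shown here):

```python
# Common-input-ownership heuristic: addresses that appear as inputs to
# the same transaction are assumed to share an owner. A union-find
# structure merges them into clusters. Transaction data is made up.

parent = {}

def find(a):
    """Find the cluster representative for address `a`."""
    parent.setdefault(a, a)
    while parent[a] != a:
        parent[a] = parent[parent[a]]   # path halving
        a = parent[a]
    return a

def union(a, b):
    parent[find(a)] = find(b)

# each transaction lists its input addresses (toy data)
transactions = [
    ["addr1", "addr2"],   # addr1 and addr2 likely share an owner
    ["addr2", "addr3"],   # addr2 reappears, pulling addr3 into the cluster
    ["addr4"],            # no co-spend: stays in its own cluster
]

for inputs in transactions:
    for a in inputs[1:]:
        union(inputs[0], a)

print(find("addr1") == find("addr3"))   # True: one inferred owner
print(find("addr1") == find("addr4"))   # False
```

Once clusters exist, a single identified address (say, one registered at an exchange with KYC checks) de-anonymises the whole cluster, which is what makes this analysis so powerful.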

The majority of Elliptic’s clients seem to be either law enforcement agencies or financial institutions. For example, one of the use cases that Elliptic caters for is making sure that the bitcoins a client acquires aren’t derived from the proceeds of criminal activity. Elliptic says that in the past year it has been able to map the entire 35 GB transaction history of the bitcoin blockchain.

Interestingly, Elliptic has created a visualisation technology to provide a number of anti-money laundering (‘AML’) services. If you look at the sample visualisation below (see Fig. 2), you can see how Elliptic can visualise ‘known’ entities e.g. exchanges whilst naming illicit marketplaces and money laundering services.

Through an API, Elliptic’s clients will thus get real-time alerts about any bitcoin payments linked to known thefts, illicit marketplaces and other criminal activity, which are all identified by name. As a result, financial institutions can effectively do real time compliance, adhering to compliance regulation as transactions take place through the blockchain.
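The screening idea behind such alerts can be sketched as a walk back through the payment graph: flag a payment if its funds trace back to a known illicit address within a few hops. The addresses, edges and labels below are invented, and Elliptic’s real methods are proprietary; this only illustrates the shape of the check:

```python
from collections import deque

# Illustrative AML screening on a toy transaction graph: flag an address
# if its funds trace back to a known illicit address within `max_hops`.
# All addresses, edges and labels here are made up.

ILLICIT = {"1darkmarketA": "illicit marketplace", "1theftXYZ": "known theft"}

# received_from[addr] = addresses that previously paid `addr` (toy graph)
received_from = {
    "1alice": ["1bob"],
    "1bob": ["1darkmarketA"],
    "1carol": ["1dave"],
}

def screen(address, max_hops=3):
    """Breadth-first walk back through the payment graph."""
    queue, seen = deque([(address, 0)]), {address}
    while queue:
        addr, hops = queue.popleft()
        if addr in ILLICIT:
            return {"alert": True, "reason": ILLICIT[addr], "hops": hops}
        if hops < max_hops:
            for src in received_from.get(addr, []):
                if src not in seen:
                    seen.add(src)
                    queue.append((src, hops + 1))
    return {"alert": False}

print(screen("1alice"))   # alert: funds sit two hops from a marketplace
print(screen("1carol"))   # no alert
```

A compliance team consuming such alerts in real time can then hold or reject a deposit before it settles, which is what “real time compliance” amounts to in practice.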

Main learning point: As I mentioned in my previous blog post, the world of blockchains is a new one to me, so learning about how people can abuse this new technology is just as new. Seeing how Elliptic helps financial institutions and law enforcement agencies to identify illicit blockchain activity has given me a first understanding of how one can work through blockchain networks to figure out their users and transactions.

The other day I heard a few people use the term “quantified self”. Through Wikipedia I learned that the quantified self stands for “a movement to incorporate technology into data acquisition on aspects of a person’s daily life in terms of inputs, states, and performance.” In other words, this is all about quantifying people’s lives and behaviours, thus being able to learn more about people and their different activities and needs.

Ben Essen, a London-based Creative Strategy Director, recently talked about the quantified self at this year’s SXSW in Austin. His talk was titled “Know Thyself. Self Actualization By Numbers” and these are the main things that I learnt from Ben’s presentation:

Essen’s Hierarchy of Quantified Self – Similar to Maslow’s Hierarchy of Needs, Ben Essen has come up with his own “Essen’s Hierarchy of Quantified Self” (see Fig. 1 below). Ben’s hierarchy starts with “goal-progress” (e.g. setting daily goals with apps such as Fitbit and Wello) and ends with the “Quantified Society” where everything is informed by our personal data. I like how Ben’s ‘pyramid’ moves from “insight” to “enhancement”, thus highlighting the changing role of personal data as one moves up along the hierarchy.

Context-driven measurement – Melon is a great example with regard to quantifying personal data within a specific context. For instance, with Melon you can track how focused you are when you’re working on your laptop compared to when you’re meditating (see Fig. 2 below). Ben refers to this as “lifestyle context”, which implies that your personal data are likely to vary depending on your mood or the activity that you are doing. Another good example is Nest, whose home products are designed to learn from user behaviour. I’ve written a few posts on wearable devices and wearable trends to look out for.

“The Human API” – Ben ultimately envisages a ‘Human API’ which encapsulates all your personal data, irrespective of the underlying data source (e.g. email, browse history, search, etc. – see Fig. 3 below). I’ve been trying to visualise an API of all my personal data (e.g. “went to a Danny Brown gig last month, purchased ‘The Mindful Leadership’ on my Kindle and checked in with my Oyster card in north London this morning”) and how a brand or other 3rd party would tap into this data set. This concept provides both opportunities (e.g. fully personalised experiences) as well as risks (e.g. what happens if my Human API falls into the wrong hands?).
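A toy sketch helps make the “Human API” concrete: one call that merges a person’s data streams regardless of source, with a scopes mechanism so a third party only sees what it has been granted. The sources, fields and function names below are all my own hypothetical assumptions, not anything Ben proposed in detail:

```python
# Hypothetical "Human API": one endpoint over many personal data streams,
# gated by scopes so third parties only see what they were granted.
# Sources and fields are invented for illustration.

SOURCES = {
    "calendar":  lambda: [{"event": "gig", "when": "last month"}],
    "purchases": lambda: [{"item": "ebook", "when": "last week"}],
    "transport": lambda: [{"checkin": "north London", "when": "this morning"}],
}

def human_api(person_id, scopes):
    """Return only the streams this caller has been granted access to."""
    return {s: SOURCES[s]() for s in scopes if s in SOURCES}

# a brand granted only calendar + transport never sees purchase history
granted = human_api("me", ["calendar", "transport"])
print(sorted(granted))   # ['calendar', 'transport']
```

The scopes gate is also where the risk question lives: whoever controls grant decisions effectively controls the Human API, so permissioning would matter at least as much as aggregation.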

Connecting data sets and devices – I strongly believe that the next frontier in digital development is the connection of different devices and the connection of a user’s various data sets. The possibilities are endless, but I reckon it will take a while to properly connect personal devices and data, thus creating a ‘personal platform API’ similar to the “Human API”, as mentioned by Ben in his talk at SXSW.

Data shouldn’t replace our intuition – I personally prefer using the term “data informed” over the more common “data driven” since I feel that there are some strong limitations to a purely data-driven approach (I’ve blogged about these constraints previously). In his talk, Ben stressed the importance of understanding and interpreting personal data and using data as a source for decision making. However, Ben was keen to stress that “self-tracking must feed our intuition. Not replace it.”

Main learning point: Ben Essen has got a lot of interesting and thought-provoking insights around the topic of the “quantified self”. We are moving steadily in the direction of a society where a lot of our behaviours, mood states and activities can be or have already been quantified. The idea of a quantified self and a “Human API” will in my opinion truly materialise once we all get smarter about how we connect different devices and data sets. In the meantime, I suggest looking into some of Ben’s observations and reservations around self-tracking and having a think about how we can move up “Essen’s Hierarchy of Quantified Self”.

One of the things I love about the web is the ease and the speed with which it creates transparency and the way it forces institutions to open up and share. A great example is WikiLeaks, where the recent release of secret cables from US diplomats caused a massive media and diplomatic storm. It draws attention to the fairly recent phenomenon of “open data” which I learned about.

I guess “open data” is all about opening up government data, providing the general public with easy access and enabling them to use this data in whichever way they like. The information thus shared could potentially vary from the US involvement in Iraq and Afghanistan to understanding how local councils are spending tax payers’ money.

Open data ‘champions’ such as Tim Berners-Lee, Nigel Shadbolt and Heather Brooke continue to make a strong case for all public bodies to open up their data. Not only is there a growing focus on transparency and accountability of public bodies but also a large increase in the number of people with the technical skills to make good use of these public data.

As a result, both the UK government (through data.gov.uk) and the US government (through its “Open Government” initiative) have been directing a lot of effort towards making government much more accessible to the public. I’m particularly interested in the (re)use of publicly available government data, with people creating useful applications which clearly serve a greater good. Good examples are a London cycle hire app and FixMyStreet, which enables people to “view, report or discuss local problems”.
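To show how low the barrier to this kind of reuse is, here’s a minimal example that summarises a council-spending dataset. The CSV below is made up, though real datasets on data.gov.uk are published in similar CSV shapes; the column names here are my assumptions:

```python
import csv
import io

# Made-up council spending data in the kind of CSV shape open data
# portals publish; column names are illustrative assumptions.
RAW = """department,supplier,amount
Transport,RoadCo,12000
Transport,SignalsLtd,3000
Parks,GreenCo,4500
"""

def spend_by_department(raw_csv):
    """Total spend per department from a raw CSV string."""
    totals = {}
    for row in csv.DictReader(io.StringIO(raw_csv)):
        totals[row["department"]] = (
            totals.get(row["department"], 0) + float(row["amount"])
        )
    return totals

print(spend_by_department(RAW))   # {'Transport': 15000.0, 'Parks': 4500.0}
```

A few lines like these are the seed of every “where does my council tax go?” app: the hard part is rarely the code, it’s getting the data released in the first place.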

The opening up of previously ‘hidden’ data is an exciting recent development. The main things I’ve learned about open data thus far:

Open data is inevitable – The web has both caused and facilitated an increase in government accountability and a general need for transparency.

How will the data be used and reused? It will be interesting to see how the datasets released by governments and public bodies will be (re)used by the public. There are some interesting examples out there of apps that technology-savvy people have created and which make good use of publicly available data.

Main learning point: I guess the main challenge with open data is to convert raw and seemingly boring public datasets into accessible, useful applications. Ultimately, the more people (re)use these data, the greater the transparency, which in turn should open up even more data.