The Diversity Blog – SaaS, Cloud & Business Strategy
http://www.diversity.net.nz
Thoughts on the Future of Business and User-Centered Technology

That was quick: Crafty Collison’s Creative thing gets CNCF cred
Thu, 15 Mar 2018 15:30:02 +0000

Derek Collison is, famously, one of the creators of Cloud Foundry, the platform as a service (PaaS) initiative that is seemingly taking the world by storm. Collison left VMware, the company where Cloud Foundry first saw the light of day, and headed off to found Apcera, a company in a related space. That journey was less successful: telco vendor Ericsson “invested” in (essentially acquired) the company and, as is often the case with these large corporates, the technology somewhat withered on the vine.

But Collison wasn’t done and jumped ship to found Synadia Communications, a company in (yes, there is a theme here) a related space. Synadia was created to commercialize NATS, an open source messaging system that Collison first created, way back, as the messaging control plane for Cloud Foundry. NATS continued on under the Apcera guise and, one assumes, as part of his leaving Apcera, Collison got to take NATS along for the ride.

Over its history, NATS has been re-conceived a few times; it was originally written in Ruby before being ported to Go. Under the MIT open source license (and, more recently, Apache 2.0), NATS is a family of open source components that are tightly integrated but can be deployed independently. NATS is based on a client-server architecture, with servers that can be clustered to operate as a single entity – clients connect to these clusters to exchange data encapsulated in messages. NATS consists of:

A connector framework – a pluggable Java-based framework to connect NATS and other services.
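The subject-based publish/subscribe model at the heart of that client-server architecture can be sketched with a minimal in-process stand-in. To be clear, this is an illustration of the messaging pattern, not the real NATS client API, and the subject names are invented; real NATS subjects use dot-separated tokens where `*` matches exactly one token, which `fnmatch` only roughly approximates:

```python
from collections import defaultdict
from fnmatch import fnmatch

class Broker:
    """A toy stand-in for a NATS server: routes messages by subject."""
    def __init__(self):
        self.subscriptions = defaultdict(list)  # pattern -> [callbacks]

    def subscribe(self, pattern, callback):
        # NATS matches '*' against a single dot-separated token;
        # fnmatch is a looser approximation, fine for this sketch.
        self.subscriptions[pattern].append(callback)

    def publish(self, subject, payload):
        # Deliver to every subscription whose pattern matches the subject.
        for pattern, callbacks in self.subscriptions.items():
            if fnmatch(subject, pattern):
                for cb in callbacks:
                    cb(subject, payload)

broker = Broker()
received = []
broker.subscribe("orders.*", lambda subj, msg: received.append((subj, msg)))
broker.publish("orders.created", b"order 42")
broker.publish("billing.paid", b"ignored")  # no matching subscriber
```

Clients in the real system connect to a clustered set of servers rather than an in-process object, but the decoupling is the same: publishers know subjects, not subscribers.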

Anyway, NATS was getting a fair amount of traction in the open source, cloud-native world and this didn’t go unnoticed by the Cloud Native Computing Foundation (CNCF). The CNCF took a look, decided it would be complementary to the thousands (not quite, but there are a lot) of other projects they incubate and is this morning announcing the acceptance of NATS as an incubation-level hosted project, alongside Kubernetes, Prometheus, OpenTracing, Fluentd, Linkerd, gRPC, CoreDNS, containerd, rkt, CNI, Envoy, Jaeger, Notary, TUF, Rook, and Vitess.

Somewhat interestingly, NATS is actually a more mature project than many others on the CNCF roster. NATS has been around, in different guises, for seven years and also fulfills a very real requirement within the container-based/cloud-native world: that of messaging or, as the technical lingo puts it, InterProcess Communication (IPC). In talking about the benefits that NATS brings to the world generally, and the CNCF world in particular, Collison says that:

While most messaging systems provide a mechanism to persist messages and ensure message delivery, NATS does this through log based streaming – which we’ve found to be an easier way to store and replay messages. NATS is a simple yet powerful messaging system written to support modern cloud native architectures. Because complexity does not scale, NATS is designed to be easy to use while acting as a central nervous system for building distributed applications.

In application, NATS Streaming subscribers can retrieve messages published while they were offline, or replay a series of messages. Streaming inherently provides a buffer in the distributed application ecosystem, increasing stability. This allows applications to offload local message caching and buffering logic onto NATS and ensures a message is never lost. The benefits that NATS brings have seen it used across a variety of use cases: microservice architectures, cloud-native applications and IoT messaging. It is being adopted by a large number of organizations, including such marques as Capital One, Comcast, General Electric (GE), and HTC.
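The log-based streaming behaviour described above – subscribers catching up on messages published while they were offline, or replaying a series of messages – boils down to an append-only log plus per-subscriber offsets. A minimal sketch of the pattern (not the NATS Streaming API itself):

```python
class StreamLog:
    """Append-only message log; subscribers replay from any offset."""
    def __init__(self):
        self.log = []

    def publish(self, message):
        self.log.append(message)

    def replay(self, from_offset=0):
        # A subscriber that was offline simply reads from its last
        # acknowledged offset; nothing published meanwhile is lost.
        return self.log[from_offset:]

stream = StreamLog()
stream.publish("event-1")
stream.publish("event-2")  # subscriber goes offline here
stream.publish("event-3")

# The subscriber reconnects having processed only the first message:
missed = stream.replay(from_offset=1)
```

Because the log, not the subscriber, is the source of truth, applications can drop their own local caching and buffering logic, which is the stability benefit the paragraph above describes.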

MyPOV

A logical addition to the CNCF world since NATS has strong existing synergy with other CNCF projects – used heavily in conjunction with projects like Kubernetes, Prometheus, gRPC, Fluentd, Linkerd, and containerd. While many will worry about the growing breadth of the CNCF stable, to date the projects invited into the fold have been logical additions. So it is with NATS.

Cloudian slurps up Infinity Storage to broaden its appeal
Thu, 15 Mar 2018 10:00:02 +0000

News this morning from object storage vendor Cloudian which, in an effort to further build out its product offering and customer base in order to create an attractive IPO candidate, is announcing its acquisition of Infinity Storage, a Milan-based company working in the software-defined file storage space.

With the acquisition, Cloudian gains the ability to offer a broader storage story, one that encompasses both file and object-based approaches. Add to that the fact that Cloudian’s offering is built to be compatible with the de facto cloud standard – the Amazon Web Services S3 API – and you have a smart, forward-looking offering.

It also means that Cloudian can now offer both file and object-based storage and thereby consolidate the different storage requirements that many of their customers have. Cloudian suggests that its simple management approach can reduce TCO by 70% when compared to traditional multi-silo NAS systems.

While Infinity Storage wasn’t well-known, its founder was. Caterina Falchi was an inventor of the write-once-read-many (WORM) file system that provides jukebox file management and transparent access to data within this protected environment. WORM was originally designed to preserve data integrity for regulatory compliance and now also plays a vital role in protecting data from corruption caused by malware or ransomware. Post-acquisition, Falchi continues on with Cloudian as vice president of file technologies.
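The WORM guarantee is simple to state: once data is committed it can be read freely but never modified or deleted. A few lines of Python capture the semantics (a hypothetical class for illustration only, not Infinity Storage’s or Cloudian’s implementation):

```python
class WormStore:
    """Write-once-read-many: each key is immutable after first write."""
    def __init__(self):
        self._objects = {}

    def write(self, key, data):
        if key in self._objects:
            # The write-once rule: any rewrite attempt is rejected.
            # This is what defeats ransomware trying to encrypt data
            # in place, and what preserves records for compliance.
            raise PermissionError(f"{key} is WORM-protected")
        self._objects[key] = bytes(data)

    def read(self, key):
        return self._objects[key]

store = WormStore()
store.write("audit/2018-03.log", b"compliance record")
try:
    store.write("audit/2018-03.log", b"tampered!")
    overwritten = True
except PermissionError:
    overwritten = False
```

A production WORM system enforces this below the filesystem API (often with retention periods), but the contract an attacker or auditor sees is the same.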

Of her company’s history, and its Cloudian-flavored future, Falchi has this to say:

For more than a decade, Infinity Storage software has helped enterprise customers simplify file management with enterprise-class features that provide a familiar user experience on next-generation storage platforms. While launching HyperFile with Cloudian, we immediately recognized that our company cultures and technologies meshed perfectly. We are genuinely thrilled to be joining the Cloudian team.

MyPOV

Increasingly organizations are looking to combine their various storage needs under a single platform vendor. The cost and management implications of various standalone products have become too great to continue with the status quo. Add to that the fact that unstructured data is creating a veritable tsunami of storage needs and it makes sense that scale-out platforms like Cloudian are increasingly attractive.

Of course, Cloudian isn’t alone, and other vendors such as Scality, SwiftStack, and Rubrik are, to a greater or lesser extent, going after the same prize. That said, the prize is a big one; indeed, Gartner predicts that by 2021, more than 80% of enterprise data will be stored in scale-out storage systems in enterprise and cloud data centers, up from 30% today – that’s a whole lot of storage to sell to these large organizations, and Cloudian is gunning for a slice of that pie.

MapAnything’s new look aims to out-Google Google Maps
Wed, 14 Mar 2018 16:00:39 +0000

Google Maps is an awesome tool and I love it to bits, but sometimes good isn’t quite good enough. An example of when Google Maps (and, to be fair, I’d include other mapping solutions in here as well; it’s just that personally I use the Goog’s product) falls down is when I have various appointments to keep and want to navigate between them all. Sure, it’s not up there with cancer or world hunger, but cutting and pasting between calendar entries is kind of a pain (and, yes, I totally realize just how much of a first world problem this is!)

Anyway, the tech industry is predicated on spending 99% of its time fixing first world problems, and MapAnything’s new UI aims to solve this particular one. MAX, presumably named in recognition of the fact that it will be able to navigate the eponymous mad character out of the Thunderdome, is a new interface designed to help field-based workers (or those who have to spend time in the field) plan their day directly from their calendar, rather than from the navigation system they might use.

In terms of who they are, MapAnything was founded back in the dark ages (2009, to be precise) to help with all things geo-productivity. It boasts of over 1,400 customers globally, who range in size from small businesses to large enterprises. MapAnything has been given a tick of approval by a couple of big boys – both Salesforce and ServiceNow regard them as a partner (since they are, after all, additive to those two companies’ own products.)

MapAnything, like any good tech company worth its salt, has invented a few buzzwords for what it does. It’s not about navigation; rather, it is “the leader in Location of Things” (sigh). Anyway, the new UI has been designed in consultation with the myriad customers that MapAnything already has.

In addition to a UX overhaul, customers are now able to build and architect routes using a new scheduling tool that pulls data directly from calendars in real-time. Until now, there hasn’t been any geo/mapping solution for Salesforce with such an integrated view of users’ already scheduled events and the events they’d like to fit into their day. With the new Schedule view:

A Sales rep can see their existing Salesforce Calendar events on the map and then use travel time to fill their day with other clients in the area

A Sales Manager can view their entire team’s schedule to coach them on where to best spend their time

Field Service Organizations can more effectively dispatch from a combined Map and Gantt view

It’s a simple-sounding but deceptively complex task, and one that will genuinely save time for users. And allowing service staff and salespeople to save time and focus more on sales and billable hours is an attractive proposition. So attractive that MapAnything has raised well over $30 million to build its business – there’s money in A to B, it seems. MapAnything’s CEO, John Stewart, is naturally ebullient about the new UI:

Our new MapAnything X interface design is a result of data-driven insights from our customers and the feedback of actual sales and service reps that know what they need to succeed. The update is a direct response to what our customers said would most improve their own work, and we’re happy to provide them with the tools to achieve ‘MAX’ productivity.

But of course, he’d say that.

MyPOV

I actually like what MapAnything is doing – it’s a fundamentally useful tool for a huge variety of different people. Key to the company’s success will be differentiating their product and ensuring that the software vendors themselves don’t decide to build out this functionality. Time will tell.

An epic Q&A on Microsoft’s Azure Stack offering – Part two
Tue, 13 Mar 2018 16:05:03 +0000

I’ve broken this post into two, to make it easier to digest. Today we’re all about what Azure Stack isn’t, some advice for partners nervous about the future, and a look to the future for Azure Stack.

I’ve been excited about Microsoft’s plans to offer an on-premises version of its cloud infrastructure offering for the longest time. While there are many cases of organizations that are completely in the public cloud, for every one of those there are literally hundreds of organizations whose needs are subtly different – organizations with a geographic, connectivity or regulatory reason to retain at least some of their infrastructure footprint on premises. And so the news, all those years ago, that Microsoft would offer some kind of on-premises cloud was both fascinating and exciting.

It’s fair to say that the process hasn’t exactly been plain sailing, and Microsoft has had a few goes at getting this right. As I’ve written before, I’m also a little worried that there isn’t more clarity about exactly what Azure Stack isn’t.

It is for this reason that I took the time to engage in a lengthy (apologies in advance!) question and answer session with Natalia Mackevicius, the Director of Program Management for Azure Stack. I wanted to get deep into the thinking behind Azure Stack, the direction they’re taking it, and exactly what it is (and, perhaps more importantly, isn’t).

And so, without any further ado, here goes my epic Q&A.

I wrote a post challenging Microsoft to be more emphatic about what Azure Stack isn’t. I perceive that there are lots of people thinking Azure Stack will be the answer to all their problems when, in some cases at least, it won’t. Can you help us all to understand what Azure Stack isn’t – and why that’s important?

When NIST first introduced us to the categories of public, private and hybrid cloud, that was a major step forward in evolving the utility model of computing, but the industry has come a long way since then.

For some customers, the expectation arose that they would be able to create and sustain “alternate” clouds to the public providers. There are very few who have been successful creating a general purpose private cloud. More importantly, most attempts at private cloud result in the realization that sustaining the rate of innovation required to keep that private cloud available, useful, and relevant is a daunting endeavor.

Azure Stack is not an alternative to Azure. Rather, it’s an extension — a way to leverage a company’s existing assets, such as oil rigs, factories or shipping vessels, and integrate them with the capabilities and ecosystem of a major public cloud provider.

There were also some who expected that private cloud would improve operational efficiencies in the same way that virtualization had for the client-server generation of applications. Azure Stack is not an incremental improvement to virtualization.

Most customers don’t need another way to deploy and manage the last thirty years of solutions. In fact, many are actively auditing their IT portfolios to determine how to get out of running legacy systems, so that they can focus on building business solutions based on cloud tools and techniques for the next thirty years.

As they go forward, and invest in cloud native applications, those can run on Azure Stack as needed.

And for those partners who see it as a way to avoid disruption in a world increasingly aligned with the public cloud, what advice do you have for them?

The disruption of cloud is in large part due to the way that it unlocks truly transformative innovation for customers; it fundamentally changes the rules of the last thirty years of IT that business practices are built around. In fact, almost every Azure Stack partner we work with is rethinking their own business models to account for the new rules, such as traditional hosters who are transitioning to a managed services model that’s easier to tie to usage style computing.

I think we’ve come a long way by helping them understand that Azure Stack is an extension of Azure that enables partners to open new lines of business for their Azure practices across the whole of their customers’ business assets.

Many partners understand that they have a massive business opportunity to help customers focus their IT resources on delivering business value through innovation, rather than simply managing costs and risks for heritage systems.

Awesome. With that out of the way, and with a deeper understanding about what Azure Stack isn’t, can you spend some time talking about the benefits you can deliver when Azure Stack is used in the ways it was intended?

For many established companies, there’s a tremendous amount of technical debt that has been built up and is holding back application innovation.

With a consistent platform across Azure and Azure Stack, companies can get beyond their debt by investing in new people skills (for both development and operations), modern application development architectures and processes, as well as operational standards that work the same way wherever they need them.

It gives them a way to not have to split up their IT investments into the different technology silos every time a business need or opportunity arises.

Crystal ball time. We’re sitting here in three or five years. What things do you think Azure Stack will be used for and how will the most progressive companies use it?

I have heard Satya say that computing is going to get more distributed and I agree with that. Azure Stack is a major component of that process.

Customers are going to initially build a solution for Azure and then deploy the exact same thing to Azure Stack or vice-versa.

But soon after that, and we are already seeing customers thinking this way, we’ll see single applications distributed across clouds the same way that many applications are distributed across servers today.

A potential example could be that a customer has a single server or device deployed to many locations, each transmitting IoT data to a regional hub where Azure Stack is running. Azure Stack is used to preprocess the data, meeting some kind of regulatory requirement, and then transmits the transformed data to Azure, where the data from each region is used to train a Machine Learning model. That model can then be brought back down and used to score data in the regional Azure Stack and power a line of business application.

As this model evolves, I think we’ll see new patterns, practices and architectures develop for both applications and data that push the hybrid cloud definition even further.
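The hybrid pattern Mackevicius describes – preprocess data regionally on Azure Stack, train a model centrally in Azure, then push the model back out for local scoring – can be sketched abstractly. The function names and the toy threshold “model” below are hypothetical; a real deployment would use actual Azure and Azure Stack services, and the point here is only the division of labour:

```python
def edge_preprocess(readings):
    """Runs on the regional hub (the Azure Stack role): raw values stay
    in-region to satisfy policy; only aggregates are forwarded."""
    return {"mean": sum(readings) / len(readings), "count": len(readings)}

def cloud_train(regional_aggregates):
    """Runs in the public cloud (the Azure role): combine aggregates
    from every region into a single toy anomaly-threshold 'model'."""
    total = sum(a["mean"] * a["count"] for a in regional_aggregates)
    count = sum(a["count"] for a in regional_aggregates)
    global_mean = total / count
    return {"threshold": global_mean * 1.5}  # shipped back to each edge

def edge_score(model, reading):
    """Back on Azure Stack: score new data locally, even if the link
    to the public cloud is slow or temporarily unavailable."""
    return reading > model["threshold"]

# Two regions preprocess locally; the cloud trains on aggregates only:
region_a = edge_preprocess([10, 12, 14])
region_b = edge_preprocess([9, 11])
model = cloud_train([region_a, region_b])
```

The design choice worth noticing is that raw data never crosses the regional boundary – only aggregates go up and only the model comes down, which is exactly how the latency and compliance constraints in the answer above are satisfied.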

Talkie Twilio Takes on Terribly Toxic Traditional Contact Centers
Mon, 12 Mar 2018 19:21:16 +0000

It’s been interesting watching communications vendor Twilio since its IPO a couple of years ago. The company, which arguably invented the idea of communications as a developer tool, seems to have mastered the listed-company requirement of constant iteration and product development, and the markets seem to have rewarded the company, and its founder and CEO, Jeff Lawson, accordingly. Twilio has, over the years, grown to the point where over two million developers use its platform to build communications (voice, SMS, video etc.) into their own products.

Twilio is today ramping things up a notch with the announcement of Flex, Twilio’s take on what a contact center should look like. Flex is built on top of Twilio’s existing platform and aims to give both individual developers, as well as Twilio partners, a contact center tool designed to scale up to a massive 50,000 agents.

With this move Twilio would seem to be moving beyond modular communications components and into the application space. This is a big move for the company and has the potential to greatly broaden its footprint and the awareness of its services within large enterprises. And this certainly is a space in need of innovation – while other application areas have seen a dose of innovation (hell, even boring old ERP and accounting systems have gotten more flexible over recent years), call center software seems to be stuck in the dark ages. Pretty much the only innovation open to large enterprises has been to move their call centers to the Far East to cut costs – there is, however, a whole heap of technology innovation that can make it more efficient and effective to offer a call center product.

Of course, Twilio’s platform has already been used by organizations to rethink their own call centers – companies such as ING Bank, Zillow, Simply Business and National Debt Relief have built customized contact center solutions by leveraging Twilio’s individual APIs. With Flex, all companies will be able to essentially leverage a call center out of the box. The challenge for Twilio here was to make this instantly deployable while still retaining flexibility – how to make this a packaged solution with the customization of a modular stack? Twilio believes it has resolved those challenges and, via the briefing materials, suggests that Flex allows organizations to:

Programmatically customize any user interface: While Flex user interfaces work out of the box, they are designed to be customized at every point of the contact center journey. Businesses can customize customer-facing components like click-to-call or click-to-chat, add entirely new channels or integrate reporting dashboards to display agent performance or customer satisfaction.

Integrate any application: Flex can integrate with any third-party applications that are critical to the business including customer relationship management (CRM) from Salesforce or Zendesk, workforce management (WFM), workforce optimization (WFO), reporting, analytics and data store.

Building the all-important ecosystem

Of course, building a product and selling it are two different things and while developer tools have more of a direct-sales motion, big back-office enterprise systems rely on a channel to go to market. To this end, Twilio is building a broad ecosystem of partners – from software vendors to technology partners to the system integrators who will get Flex up and running. As Ryan Nichols, General Manager of Zendesk Talk puts it:

In the customer experience space, customer needs are not one size fits all — there is build, buy and many variations in between. The partnership between Twilio and Zendesk is a powerful one because together, we are able to provide businesses choices in how they can build experiences for their customers. We look forward to continuing to partner with Twilio in the future.

Alongside Zendesk, Twilio is partnering with Serenova in the software space. System integrators such as Tech Mahindra and Perficient will help customers build the exact call center solution they need while Twilio Marketplace partners including IBM Watson, Ytica, and Verint will provide customers with one-click integrations for capabilities like sentiment analysis, workforce optimization, workforce management, analytics, reporting, and storage.

MyPOV

What’s not to like about an entirely new revenue stream for Twilio? What is really interesting here is that this is arguably the first time that Twilio has built a product designed not for developers primarily, but as an all-in-one back office system. It will be interesting to see the market adoption they achieve.

An epic Q&A on Microsoft’s Azure Stack offering – Part one
Mon, 12 Mar 2018 16:04:34 +0000

I’ve broken this post into two, to make it easier to digest. Today we’re all about the background to Azure Stack and why now is the right time for this product.

I’ve been excited about Microsoft’s plans to offer an on-premises version of its cloud infrastructure offering for the longest time. While there are many cases of organizations that are completely in the public cloud, for every one of those there are literally hundreds of organizations whose needs are subtly different – organizations with a geographic, connectivity or regulatory reason to retain at least some of their infrastructure footprint on premises. And so the news, all those years ago, that Microsoft would offer some kind of on-premises cloud was both fascinating and exciting.

It’s fair to say that the process hasn’t exactly been plain sailing, and Microsoft has had a few goes at getting this right. As I’ve written before, I’m also a little worried that there isn’t more clarity about exactly what Azure Stack isn’t.

It is for this reason that I took the time to engage in a lengthy (apologies in advance!) question and answer session with Natalia Mackevicius, the Director of Program Management for Azure Stack. I wanted to get deep into the thinking behind Azure Stack, the direction they’re taking it, and exactly what it is (and, perhaps more importantly, isn’t).

And so, without any further ado, here goes my epic Q&A.

Hi Natalia, can you start by giving readers a quick introduction to who you are and what your role is within Microsoft?

I am Natalia Mackevicius, the Director of Program Management for Azure Stack. For the past 2.5 years, I’ve been working on Microsoft’s hybrid cloud strategy to bring the value of Azure to customers’ datacenters with Azure Stack. I’ve been at Microsoft for 15 years. Prior to Azure Stack, I was the Director for the Partner and Customer Ecosystem.

So Azure Stack sounds like an interesting product offering, and one which really bridges the gaps that most real-world customers have between workloads which can run in the public cloud, and those that need some private infrastructure. Can you give us a sense of the vision you have for Azure Stack?

Many companies have enjoyed huge competitive advantages from large-scale assets – like brick-and-mortar facilities or large-scale equipment. That said, businesses are seeing nimble startups and traditional competitors take advantage of the next generation of computing to outpace them with innovation. That puts many of our customers in a position to do two things at once – begin the process of a technology infusion to business assets and, at the same time, build the next generation of line of business applications.

Azure enables them to change the way they do business, and work to move the center of gravity of their IT investments to the cloud. But there are still durable scenarios that require more customization than what is possible in Azure. Azure Stack is a key component of extending the Azure platform for digital transformation on-premises to complete those scenarios.

We see the durable scenarios for Azure Stack clustered around two points of view on data: either 1) there is a latency problem when moving data between locations, or 2) policy requirements have established rules around the management of the data itself.

Lastly, we see customers who have assets on premises that over time they want to move to the public cloud – typically, large scale systems of record, such as mainframes, that require real-time processing. Customers are looking to modernize those applications, and over time move them to the public cloud.

For cases where it isn’t best for an application to bring the data to Azure services in Microsoft facilities, Azure Stack is a vessel for customers to bring Azure services to the data in their own facilities.

Why is the time right for a product like Azure Stack? What is it about cloud adoption rates and trends that make you think that it is the right product, at the right time?

Every time a new platform emerges, in this case cloud, there’s an explosion of different tools and techniques competing to take full advantage of it. At the same time, vendors are continually evaluating and refining what the platform does well and how it needs to evolve. Over time, the architectures, tools, and platform come into focus. This shift has taken longer with cloud due to the magnitude of the disruption. But it’s still the same pattern.

We’re entering the phase with Azure, and with the industry in general, where cloud application architectures and tools are becoming more established. Azure Stack represents the recognition that the public cloud-only approach was missing out on key durable customer scenarios – improved latency and compliance requirements, for example – that require the ability to run cloud applications on-premises.

Microsoft is in the unique position to deliver something to help take the next step forward. One of the things that makes Azure Stack distinctly different from other attempts at Private Cloud, is that it brings the Azure application platform to customers as a coherent whole, not a random assortment of parts that a customer must assemble.

European open banking rules and some predictions for the future of financial data
Thu, 08 Mar 2018 23:34:59 +0000

The European Union, for all its ills, can be seen as a good leading indicator of global regulatory trends. While it is a generalization, the EU’s desire to protect its citizens means that regulations that could seem draconian in more liberal regions are put into place. And, over time, these protective regulations become the norm and pop up in other places. Whether it is individual privacy, the right to be forgotten or open data – Europe leads the way and other places follow.

So it is interesting to look at a new piece of European legislation that came into effect a couple of months ago. The second Payment Services Directive (PSD2) is all about the connection of different financial data streams – in particular, payments and data services – the idea being that a more level playing field will be created, allowing anyone to use customer data to drive innovation and, ultimately, fulfill customer needs.

This move is, like regulations out of the EU in other areas, an example of what will become a global approach. The Europeans tend to be ahead of the rest of the world in developing these rules, and their experiences can be a good lesson for other geographies who will undoubtedly be forced to wrangle the same issues.

The technology barriers

It may come as a surprise (to, likely, almost no one) but banks’ technology footprint tends to be fairly archaic. Banks tend to innovate in the customer-facing aspects of their business, but more often than not their core backend systems are monolithic legacy stacks that are the antithesis of open, interoperable or readily integrated. As such, it is actually a difficult challenge for banks to build the level of openness that the standards dictate. Indeed, commenting on this very issue, Hans Tesselaar, executive director of the Banking Industry Architecture Network (BIAN), a not-for-profit fintech industry body, said:

The real challenge for banks is that they have to connect to their complex back office environment before they are able to work on implementing open APIs. This is something that fintechs do not have to do, resulting in a much faster and more efficient integration process than banks… API is a short acronym, so there’s the potential for misconceptions that these interfaces are somehow already ‘packaged up’ behind the scenes, ready to be declared ‘open’ for use. The reality is much more complex, due to banks’ tangled up and archaic core infrastructure.

Standards? You can’t handle the standards

As if integration wasn’t enough of a roadblock, we also have the issue that the creation of APIs is largely unregulated and, as such, no standard way of describing a banking API exists. There is no lingua franca that everyone speaks.

While I’ve long been an opponent of standards being introduced too early, the somewhat artificial acceleration that legislated open data creates means that a corresponding open standard is probably required.
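The lack of a lingua franca is easy to demonstrate: every bank describes the same transaction with its own field names and conventions, so integrators end up writing per-bank adapters. A minimal sketch in Python – the bank names and payload fields here are entirely invented for illustration:

```python
# Hypothetical sketch: without a common standard, each bank describes the same
# transaction differently, and every consumer must normalize per bank.
# All bank names and field names below are invented for illustration.

def normalize_transaction(bank: str, raw: dict) -> dict:
    """Map a bank-specific payload onto one internal shape."""
    if bank == "bank_a":
        return {
            "amount_cents": int(round(raw["amt"] * 100)),  # bank A sends float euros
            "currency": raw["ccy"],
            "booked_on": raw["bookingDate"],
        }
    elif bank == "bank_b":
        return {
            "amount_cents": raw["amountMinorUnits"],  # bank B sends integer cents
            "currency": raw["currencyCode"],
            "booked_on": raw["valueDate"],
        }
    raise ValueError(f"No adapter for {bank}")

# One logical transaction, two incompatible wire formats:
a = normalize_transaction("bank_a", {"amt": 12.5, "ccy": "EUR", "bookingDate": "2018-03-01"})
b = normalize_transaction("bank_b", {"amountMinorUnits": 1250, "currencyCode": "EUR", "valueDate": "2018-03-01"})
assert a == b
```

Multiply that adapter by every bank and every API consumer in the market, and the case for a common open standard makes itself.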

The broader issues around financial data

In the UK, accounting vendor Xero has seen the writing on the wall and is playing an active part in thinking about what open banking means for accounting data. In an interview, Xero’s director of all things banking and fintech, Edward Berks, shared some thoughts about what open banking means for the future of accounting and, by extension, the future of all financial service providers.

Berks spends a lot of time talking about the benefit that small businesses get from open banking data. It is worth bearing in mind that Xero was one of the first SMB-focused accounting solutions to integrate bank feeds – back then, bank integrations were slow, laborious, and hard to get over the line. Open banking regulation will change this.

But beyond the banks feeding data to the accounting solutions, there is a return flow of data which, arguably, is going to mean much more in terms of fintech disruption. Berks talks about the credit-decisioning process. At the moment, applying to a bank for credit is a painful process involving the exporting of data and many paper forms. In a world where accounting data flows both ways between banks and the small business, this process can be automated to the point where banks can have automated decisioning processes based on analysis of customer data in real time.
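As a toy sketch of what that automated decisioning might look like once accounting data flows to the bank in real time – the thresholds, field meanings, and affordability rule below are invented for illustration, not any bank’s actual model:

```python
# Hypothetical credit-decisioning rule driven by live accounting figures.
# The affordability heuristic and all thresholds are invented for illustration.

def credit_decision(monthly_revenue_cents: int,
                    monthly_costs_cents: int,
                    requested_cents: int) -> str:
    """Score a credit application from the applicant's accounting data."""
    surplus = monthly_revenue_cents - monthly_costs_cents
    if surplus <= 0:
        # Business is not cash-flow positive: decline automatically.
        return "decline"
    if requested_cents <= surplus * 24:
        # Crude affordability rule: repayable within 24 months of surplus.
        return "approve"
    # Large request relative to surplus: escalate to a human underwriter.
    return "refer_to_human"
```

The point is not the rule itself, but that with two-way data flow a decision like this can run in seconds rather than weeks of exported spreadsheets and paper forms.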

Who monetizes, and who owns the data?

All of these conceptual opportunities are very interesting, but they do bring up an age-old question: that of data access and monetization. In Europe, the decision has been made for the ecosystem. Customers own their data, and if they want to provide access to that data to a bank, accounting vendors have to allow them to do so without “clipping the ticket.”

In less regulated markets, however (which means everywhere in the world except for Europe), the lines are not so clear, and there is a lack of clarity about who owns the data and who has access to it. If we look at the credit-decisioning process that Berks references above, there are three obvious parties: the customer, the bank, and the accounting software provider. If the customer is paying for their access to accounting software, one would be forgiven for thinking that the default position is that they can offer that data to whomever they like. The reality is somewhat different, and accounting software vendors may suggest that, since they provide the platform upon which the accounting data is processed, the data is, at least to an extent, their own IP.

From this perspective, it would make sense that software vendors would want to monetize the provision of this data to banks, regardless of the fact that all of this data relates to a single entity, the end customer.

Which way is the puck traveling?

You’d have to be brave to suggest that anything but the European model – that of customer data belonging to the customer and available for them to use as they wish – will become the norm going forward. To get to that point, however, there are a few technical hurdles to overcome. Integrating financial data into both legacy and newer operational systems is a long, slow process, and this “plumbing” aspect of financial data should not be underestimated.

At the same time, there will no doubt be some positioning by the various parties, all keen to clip the ticket as much, and for as long, as they can. Expect to hear of many examples where arguments arise over access to customer data, caused in part by various parties in this relationship wanting to monetize the flow of that data.

Over time, however, things will resolve. History has shown that openness will become the norm and that any attempts to stop that openness for commercial gain will be quashed quickly. There’s a lot of amazing stuff that you can do once financial data can be moved quickly and easily; I’m looking forward to seeing that day eventuate.

Dogging a log, loggy: Datadog drops logs into GA

It’s all fair in love, war, monitoring and log management, as Datadog announced that its log management solution is now generally available to customers. What that means is that, from today, all Datadog customers will be able to correlate their log data with the underlying infrastructure and application states. All of which should lead to quicker diagnosis of faults and a reduced time to resolution.

The key thing here for customers is that all existing Datadog solutions will now have the added sprinkling of log data fairy dust, meaning that, as well as unicorns and rainbows, those customers who formerly used a standalone log management offering (cough, Splunk) should have most of their use cases ticked off by the single platform.

As Datadog sees it (and I tend to agree) there are three pillars of observability within cloud applications: infrastructure metrics, application traces, and event logs. As I have been banging on about for quite some time now, I see a real convergence in the space – the application monitoring vendors are all moving to include infrastructure monitoring, while those who traditionally had an infrastructure-centric view are rapidly backfilling application monitoring into the mix. Both of these distinct groups also see the log capture and analysis opportunity as a big one.
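Stripped of vendor specifics, “correlating logs with infrastructure state” amounts to joining log events and metric samples on host and time window. A generic sketch – this is plain Python, not Datadog’s actual API, and the field names are invented for illustration:

```python
# Toy correlation of a log event with nearby metric samples: match on the
# same host within a time window. Field names are invented for illustration.

def metrics_around(event: dict, samples: list, window_s: int = 60) -> list:
    """Return metric samples on the event's host within window_s seconds."""
    return [
        s for s in samples
        if s["host"] == event["host"] and abs(s["ts"] - event["ts"]) <= window_s
    ]

event = {"host": "web-1", "ts": 1000, "msg": "HTTP 500 spike"}
samples = [
    {"host": "web-1", "ts": 980, "cpu": 0.97},   # same host, in window
    {"host": "web-1", "ts": 2000, "cpu": 0.20},  # same host, too late
    {"host": "db-1", "ts": 990, "cpu": 0.40},    # different host
]
nearby = metrics_around(event, samples)
# Only the web-1 sample at ts=980 is correlated with the error spike.
```

Doing this join inside one platform, rather than by eyeballing two dashboards, is the productivity gain the announcement is selling.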

How do they price?

Datadog’s pricing for ingesting and managing logs is based on a value-based model. What this means is that there is a low upfront cost for total logs ingested, encouraging customers to slurp up as much data as possible. Customers then take advantage of the filtering capabilities to decide which logs they wish to index, and which they wish to archive. The upshot is that the pricing model should work well across the continuum of organizations, from the smallest to much larger ones.
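As a rough illustration of how that model plays out, here is a toy cost function – the per-GB rates are invented for illustration and are not Datadog’s actual prices:

```python
# Toy model of value-based log pricing: ingestion is cheap, indexing is where
# the cost sits. The per-GB rates are invented, not Datadog's actual prices.

def monthly_log_cost(gb_ingested: float, gb_indexed: float,
                     ingest_per_gb: float = 0.10,
                     index_per_gb: float = 1.70) -> float:
    """Monthly cost = cheap ingest on everything + premium on what's indexed."""
    assert gb_indexed <= gb_ingested, "cannot index more than was ingested"
    return gb_ingested * ingest_per_gb + gb_indexed * index_per_gb

# Ingest everything, index only the interesting 10%:
cost = monthly_log_cost(gb_ingested=1000, gb_indexed=100)
```

With a rate structure like this, the incentive is exactly as described: send everything, and be selective only about what gets indexed.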

Fabien Jallot, the head of DevOps at 24 Sèvres, would seem to be a real believer, explaining that:

We have various Amazon Web Services (AWS) solutions, containers, servers and applications to monitor. Infrastructure metrics and logs often work together to tell the full story of what’s happening within an application, so we find ourselves continuously jumping from Datadog to another third party log management solution to connect the dots – an integrated solution native to Datadog is a tremendous gain of productivity.

MyPOV

I’ve always thought that a combined solution offering application and infrastructure monitoring alongside log analytics was the best approach. It is an approach that pretty much everyone, no matter which side of the problem they originated from, seems to be coming to.

Time will tell how Datadog fares in an increasingly competitive environment.

Bringing performance management beyond the finance team, Host wants to be… the most
Host Analytics is a vendor in the so-called enterprise performance management (EPM) space. For those unaccustomed to the world of the enterprise, it works like this: few people actually do anything, while many, many more people spend their time planning and reporting. It’s a sad fact of life that what constitutes “work” in a large enterprise would be scoffed at by those who spend their lives working on a building site, in a hospital, or in a school.

My thoughts on the efficiency and efficacy of the enterprise aside, all of that planning and reporting requires a tool. And in this, organizations have a couple of choices. First, they can do what they’ve done since the dawn of the IT age and leverage a spreadsheeting tool such as Excel. This is where many finance pros like to play and they spend their days geeking out on pivot tables and monstrous spreadsheets.

The other option is to leverage an EPM solution – and large platform vendors (such as Oracle and SAP), as well as pureplay companies (Host Analytics, Anaplan, Adaptive Insights etc), are happy to offer these tools. For its part, Host boasts of over 1,000 customers in 90 countries including such marques as Bose, Boston Red Sox, FitBit, La-Z-Boy, Mayo Clinic, NPR, OpenTable, Peet’s Coffee & Tea, Pinterest, Swissport, TOMS Shoes, and Vitamin Shoppe.

Democratization

But if you’re a vendor trying to displace a tool which is a favorite of the main users of your class of software, you have a real challenge at hand. And one way to mitigate that challenge is to democratize the solution. Essentially what this entails is ensuring that less skilled individuals can suddenly leverage your class of software, thus opening up an entirely new market. The rise of low-code and no-code development solutions is an analogous situation.

In an attempt to democratize EPM, Host is announcing a new offering, “Project Orion” that is aimed at offering an EPM tool which is specifically designed for business users. Now in beta status, Orion is intended for general availability in the second quarter of this year.

In thinking about how to create Orion, Host Analytics spent time pondering the advent of EPM and came to its own conclusion: that for decades, EPM vendors have seen the end-user interface as a configuration problem – designing the system for the finance professional and then re-configuring it or “dumbing it down” for the business user. Instead, Host took a bottom-up approach to EPM.

So, what does it do?

Does it do what it says on the box? You be the judge. Initially, Project Orion features:

A “consumer-grade” user interface. A single interface for planning, budgeting, and forecasting

A task-oriented design that guides business users through the most mission-critical EPM tasks. Don’t call it “dumbed down”; call it simplified and helpful

One eye still on the finance role. Orion doesn’t alienate the existing finance users – there is enough customization available to keep them happy

A win/win?

Host is bullish and suggests that both sides of the house win with Orion: finance wins by driving higher engagement, collaboration, and accountability in planning and budgeting while maintaining control over the budgeting process. Business users win by gaining real-time visibility into their budgets, as well as attaining more effective and efficient use of resources.

Whether this will come to bear has as much to do with elitism (I can imagine finance individuals scoffing at a simplified solution set) and turf protection as it does with anything else. For now, however, Project Orion is an interesting addition to the EPM space.

Teeny weeny baby steps: Getting Hippie with DocSpera for better surgical outcomes
Anyone who has worked in a medical setting, or who has had the misfortune of having surgery, will be well aware of just how antiquated the systems and processes behind surgical procedures are. Whereas other industries have deeply embraced digitalization and paper-based forms are fading into distant memory, the surgical arena is one where duplication, manual processes, and screeds of paper are still the norm.

So it is cool to hear about the baby steps that are occurring to help surgical teams modernize what they do. A case in point is a recently announced partnership between DocSpera and the Anterior Hip Foundation.

The Anterior Hip Foundation (AHF) is a non-profit organization that focuses on the advancement of anterior approach hip surgery. Essentially, anything related to hip surgery from the front is within the ambit of AHF’s interest. Device innovations, educational programs, and technology all contribute to better outcomes. For its part, DocSpera is a company, founded by surgeons and technologists, that is building software tools to increase successful outcomes from surgery and post-operative care and recovery.

Put the two together and you now have a new web portal designed to encourage communication and collaboration across the surgical community. Operating behind the AHF firewall, all AHF members will be able to access a community education and skill-building tool. AHF seems to be particularly enamored with the tool, potentially an indication of just how behind-the-times this industry is. AHF Vice President Charles DeCook, MD, said:

The AHF is always seeking innovative ways to deliver learning opportunities to orthopedic physicians and staff, and DocSpera’s unique and innovative platform capabilities were instrumental to building and launching our new web portal. Our AHF Members can now enjoy immediate access to a password protected, HIPAA-compliant portal to ask questions, download information, view surgical tips, and participate in near real-time peer to peer dialogue.

MyPOV

I’m a little bit torn here. On the positive side, disseminating information and ensuring that everyone involved in anterior hip surgery has access to the latest research, collegial feedback and a forum for discussion is a great outcome. On the other hand, this isn’t exactly rocket science. Essentially DocSpera is giving AHF functionality that enterprise social vendors such as Box and Jive offered nearly a decade ago.

You’d have thought, seeing the plethora of machine learning, virtual reality and 3D printing innovations going on, that many of those would be applied to this sector to deliver a genuinely changed game. But, alas, the wheels of progress move fairly slowly in the medical world.

Don’t get me wrong, more collaboration and community dialog is always good but… the world really should have moved on from these tiny first steps.