Author: Ron Miller

Sonatype helps enterprises identify and remediate vulnerabilities in open source library dependencies and release more secure code. Today, the company announced a free tool called DepShield that offers a basic level of protection for GitHub developers.

The product actually comes in two parts. For starters, Sonatype has a database of open source dependency vulnerabilities called OSS Index. The company gathers this information from a variety of public sources, says Sonatype CEO Wayne Jackson. While it isn’t as highly curated as the company’s commercial offerings, it does offer a layer of protection that most individual developers or small shops wouldn’t normally have access to.

After a developer installs DepShield, it checks each code commit in GitHub against the known vulnerabilities in the OSS Index and offers recommendations on how to proceed. The company’s commercial offerings include a policy engine to automate remediation. The free version simply lets developers know if there are issues, and they can go back and fix them if need be.
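In practice, a check like this amounts to expressing each dependency as a package coordinate and querying the vulnerability index for it. A minimal sketch in Python, assuming OSS Index's public component-report REST endpoint (the URL, payload shape and response fields here are assumptions for illustration, not specifics confirmed by Sonatype):

```python
import json
import urllib.request

OSS_INDEX_URL = "https://ossindex.sonatype.org/api/v3/component-report"  # assumed endpoint

def build_report_request(coordinates):
    """Build the JSON payload the index is assumed to expect: a list of package URLs."""
    return json.dumps({"coordinates": coordinates}).encode("utf-8")

def check_dependencies(coordinates):
    """POST the dependency list and return components that report vulnerabilities."""
    req = urllib.request.Request(
        OSS_INDEX_URL,
        data=build_report_request(coordinates),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return [c for c in json.load(resp) if c.get("vulnerabilities")]

# Example coordinate in Package URL (purl) form:
# check_dependencies(["pkg:pypi/django@1.11.1"])
```

A DepShield-style bot would run a check like this against each commit's dependency manifest and flag any coordinate that comes back with known vulnerabilities.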

“What DepShield and OSS Index are doing is allowing the developers at the front lines to be able to see what’s happening inside their applications and fix the vulnerabilities directly,” Jackson said.

Vulnerability listed in OSS Index. Screenshot: Sonatype

As for the differences between the commercial and free products, Jackson says it’s a matter of scale. “The way you manage a single application or handful of applications as a developer is different than how you might approach it if you’re a CISO or a governance organization for thousands of applications,” he explained. The latter requires a higher level of automation than the former because of the sheer number of applications involved.

DepShield offers the 28 million developers using GitHub access to a baseline level of protection by identifying a set of known vulnerabilities in their applications before they make them public. Jackson says that GitHub’s role is evolving. Today, it’s not only a tool for committing code; it has also become a place to do issue tracking and code reviews, and he believes that, as such, a product like DepShield is a natural fit.

Known issues list DepShield. Screenshot: Sonatype

DepShield is available starting today in the Security section of the GitHub Marketplace and developers can download and install it for free.

Federacy, a member of the Y Combinator Summer 2018 class, has a mission to make bug bounty programs available to even the smallest startup.

Traditionally, bug bounty programs from players like BugCrowd and HackerOne have been geared toward larger organizations. While these certainly have their place, founders William and James Sulinski, who happen to be twins, felt there was a gap in the marketplace, where smaller organizations were being left out of what they considered to be a crucial service. They wanted to make bug bounty programs and the ability to connect with outside researchers much more accessible, so they built Federacy.

“We think that we can make the biggest impact by making the platform free to set up and incredibly simple for even the most resource-strapped startup to extract value. In doing so, we want to expand bug bounties from probably a few hundred companies currently — across BugCrowd, HackerOne, etc. — to a million or more in the long run,” William Sulinski told TechCrunch.

That’s an ambitious long-term goal, but for now, they are just getting started. In fact, the brothers only began building the platform when they arrived at Y Combinator a couple of months ago. Once they built a working product, they started by testing it on the members of their cohort, using knowledgeable friends as security researchers.

They made the service public for the first time just last week on Hacker News and report more than 120 sign-ups already. Their goal is 1,000 sign-ups by year’s end, which William claims would make them the largest bug bounty platform out there by customer count.

Screenshot: Federacy

For now, they are vetting every researcher they bring on the platform. While they realize this approach probably won’t be sustainable forever, they want to control access at least for the early days while they build the platform. They plan to be especially attentive to the researchers, recognizing the value they bring to the ecosystem.

“It’s really important to treat researchers with respect and be attentive. These people are incredibly smart and valuable and are often not treated well. A big thing is just being responsive when they have a report,” Sulinski explained.

Screenshot: Federacy

As for the future, the brothers hope to keep building out the program and developing the platform. One idea they have is getting a fee should a client build a relationship with a particular researcher and want to contract with that individual. They also plan to take a small percentage of each bounty for revenue.

Unlike more typical YC participants, the brothers are a bit older, in their mid-thirties, with more than 20 years of professional experience under their belts. Brother James was director of engineering at MoPub, a mobile ad platform that Twitter acquired for $350 million in 2013. Earlier he helped build infrastructure for drop.io, a file-sharing site that Facebook acquired in 2010. As for William, he was CEO of AccelGolf and Pistol Lake, and founding member and project lead at Shareaholic.

In spite of their broad experience, the brothers have valued the practical advice Y Combinator has provided for them and found the overall atmosphere inspiring. “It’s hard not to be in awe of the incredible things that people have built in this program,” William said.

IBM and shipping giant Maersk have been working together for the last year on a blockchain-based shipping solution called TradeLens. Today they moved the project from beta into limited availability.

Marie Wieck, GM for IBM Blockchain, says the product provides a way to digitize every step of the global trade workflow, transforming it into a real-time communication and visual data-sharing tool.

TradeLens was developed jointly by the two companies with IBM providing the underlying blockchain technology and Maersk bringing the worldwide shipping expertise. It involves three components: the blockchain, which provides a mechanism for tracking goods from factory or field to delivery, APIs for others to build new applications on top of the platform these two companies have built, and a set of standards to facilitate data sharing among the different entities in the workflow such as customs, ports and shipping companies.

Wieck says the blockchain really changes how companies have traditionally tracked shipped goods. While many of the entities in the system have digitized the process, the data they have has been trapped in silos, and previous attempts at sharing, like EDI, have been limited. “The challenge is they tend to think of a linear flow and you really only have visibility one [level] up and one down in your value chain,” she said.

The blockchain provides a couple of obvious advantages over previous methods. For starters, she says it’s safer because the data is distributed, with digital encryption built in. The greatest advantage, though, is the visibility it provides. Every participant can check any aspect of the flow in real time, or an auditor or other authority can easily track the entire process from start to finish by clicking on a block in the blockchain instead of requesting data from each entity manually.
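The tamper evidence behind that visibility comes from how blocks are linked: each entry carries a hash of its predecessor, so altering an earlier record invalidates every later one. A toy Python sketch of the idea (illustrative only, not TradeLens's actual data model):

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, event):
    """Link a new shipping event to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "event": event})
    return chain

def verify_chain(chain):
    """Recompute every link; a tampered block breaks all links after it."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
for event in ("goods packed at factory", "cleared customs", "loaded on vessel"):
    append_block(chain, event)

assert verify_chain(chain)
chain[0]["event"] = "goods packed (altered)"   # tampering with an early record...
assert not verify_chain(chain)                 # ...is immediately detectable
```

This is what lets an auditor trust the whole flow from a single block: each link vouches for everything before it.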

While she says it won’t entirely prevent fraud, it does help reduce it by putting more eyeballs onto the process. “If you had fraudulent data at start, blockchain won’t help prevent that. What it does help with is that you have multiple people validating every data set and you get greater visibility when something doesn’t look right,” she said.

As for the APIs, she sees the system becoming a shipping information platform. Developers can build on top of that, taking advantage of the data in the system to build even greater efficiencies. The standards help pull it together and align with APIs, such as providing a standard Bill of Lading. They are starting by incorporating existing industry standards, but are also looking for gaps that slow things down to add new standard approaches that would benefit everyone in the system.
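The value of a shared standard is that every party emits and validates the same document shape before it enters the workflow. A hypothetical Python check (the field names are illustrative, not TradeLens's actual Bill of Lading schema):

```python
# Fields a standardized Bill of Lading might require (hypothetical schema)
REQUIRED_FIELDS = {
    "shipper",
    "consignee",
    "port_of_loading",
    "port_of_discharge",
    "cargo_description",
}

def validate_bill_of_lading(doc):
    """Return the set of missing required fields; an empty set means the document conforms."""
    return REQUIRED_FIELDS - doc.keys()

bol = {
    "shipper": "Acme Textiles",
    "consignee": "Nordic Imports",
    "port_of_loading": "Shanghai",
    "port_of_discharge": "Rotterdam",
    "cargo_description": "40ft container, cotton garments",
}
assert validate_bill_of_lading(bol) == set()
```

An API gateway in the platform could reject non-conforming documents at the door, so every downstream participant can rely on the same shape.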

So far, the companies have 94 entities in 300 locations around the world using TradeLens, including customs authorities, ports, cargo shippers and logistics companies. They are opening the program to limited availability today with the goal of a full launch by the end of this year.

Wieck ultimately sees TradeLens as a way to facilitate trade by building in trust, the end goal of any blockchain product. “By virtue of already having an early adopter program, and having coverage of 300 trading locations around the world, it is a very good basis for the global exchange of information. And I personally think visibility creates trust, and that can help in a myriad of ways,” she said.

Evolute, a three-year-old startup out of Mountain View, officially launched the Evolute platform today with the goal of helping large organizations migrate applications to containers and manage those containers at scale.

Evolute founder and CEO Kristopher Francisco says he wants to give all Fortune 500 companies access to the same technology that big companies like Apple and Google enjoy because of their size and scale.

“We’re really focused on enabling enterprise companies to do two things really well. The first thing is to be able to systematically move into the container technology. And the second thing is to be able to run operationally at scale with existing and new applications that they’re creating in their enterprise environment,” Francisco explained.

While there are a number of sophisticated competing technologies out there, he says that his company has come up with some serious differentiators. For starters, getting legacy tech into containers has proven a time-consuming and challenging process. In fact, he says manually moving a legacy app and all its dependencies to a container has typically taken three to six months per application.

He claims his company has reduced that process to minutes, putting containerization within reach of just about any large organization that wants to move their existing applications to container technology, while reducing the total ramp-up time to convert a portfolio of existing applications from years to a couple of weeks.
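A migration pipeline like the one Francisco describes essentially inspects an application's runtime and dependencies and generates the container definition a human would otherwise write by hand. A heavily simplified Python sketch (Evolute's actual pipeline is not public; every name and field here is illustrative):

```python
def generate_dockerfile(app):
    """Emit a Dockerfile for a legacy app from a simple description of it."""
    lines = [f"FROM {app['base_image']}"]
    for pkg in app.get("system_packages", []):
        # Install each detected OS-level dependency (Debian/Ubuntu syntax assumed)
        lines.append(f"RUN apt-get install -y {pkg}")
    lines.append(f"COPY {app['source_dir']} /app")
    lines.append(f'CMD ["{app["entrypoint"]}"]')
    return "\n".join(lines)

# A hypothetical legacy app as discovered by an automated scan:
legacy_app = {
    "base_image": "ubuntu:18.04",
    "system_packages": ["openjdk-8-jre"],
    "source_dir": "billing-service/",
    "entrypoint": "/app/run.sh",
}
print(generate_dockerfile(legacy_app))
```

The hard part of the real problem, of course, is the discovery step: working out which packages, files and configuration a legacy app actually depends on before a definition like this can be emitted.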

Evolute management console. Screenshot: Evolute

The second part of the equation is managing the containers. Francisco acknowledges that there are other platforms for running containers in production, including Kubernetes, the open source container orchestration tool, but he says his company’s ability to manage containers at scale separates it from the pack.

“In the enterprise, the reason that you see the [containerization] adoption numbers being so low is partially because of the scale challenge they face. In the Evolute platform, we actually provide them the native networking, security and management capabilities to be able to run at scale,” he said.

The company also announced that it has been invited to join the Chevron Technology Ventures’ Catalyst Program, which provides support for early-stage companies like Evolute. This could help push Evolute to business units inside Chevron looking to move into containerization technology and be a big boost for the startup.

The company has been around since 2015 and boasts several other Fortune 500 companies beyond Chevron as customers, although it is not in a position to name them publicly just yet. The company has five full-time employees and has raised $500,000 in seed money across two rounds, according to data on Crunchbase.

SessionM announced a $23.8 million Series E investment led by Salesforce Ventures. A bushel of existing investors including Causeway Media Partners, CRV, General Atlantic, Highland Capital and Kleiner Perkins Caufield & Byers also contributed to the round. The company has now raised over $97 million.

At its core, SessionM aggregates loyalty data for brands to help them understand their customers better, says company co-founder and CEO Lars Albright. “We are a customer data and engagement platform that helps companies build more loyal and profitable relationships with their consumers,” he explained.

Essentially that means they are pulling data from a variety of sources and helping brands offer customers more targeted incentives, offers and product recommendations. “We give [our users] a holistic view of that customer and what motivates them,” he said.

Screenshot: SessionM (cropped)

To achieve this, SessionM takes advantage of machine learning to analyze the data stream and integrates with partner platforms like Salesforce, Adobe and others. This certainly fits in with Adobe’s goal to build a customer service experience system of record and Salesforce’s acquisition of Mulesoft in March to integrate data from across an organization, all in the interest of better understanding the customer.

When it comes to using data like this, especially with the advent of GDPR in the EU in May, Albright recognizes that companies need to be more careful with data, and that the regulation has really heightened the sensitivity around stewardship for all data-driven businesses like his.

“We’ve been at the forefront of adopting the right product requirements and features that allow our clients and businesses to give their consumers the necessary control to be sure we’re complying with all the GDPR regulations,” he explained.

The company was not discussing valuation or revenue. Its most recent round prior to today’s announcement was a Series D in 2016 for $35 million, also led by Salesforce Ventures.

SessionM, which was founded in 2011, has around 200 employees with headquarters in downtown Boston. Customers include Coca-Cola, L’Oreal and Barney’s.

Okta, the cloud identity management company, announced today it has purchased a startup called ScaleFT to bring the Zero Trust concept to the Okta platform. Terms of the deal were not disclosed.

While Zero Trust isn’t exactly new to a cloud identity management company like Okta, acquiring ScaleFT gives them a solid cloud-based Zero Trust foundation on which to continue to develop the concept internally.

“To help our customers increase security while also meeting the demands of the modern workforce, we’re acquiring ScaleFT to further our contextual access management vision — and ensure the right people get access to the right resources for the shortest amount of time,” Okta co-founder and COO Frederic Kerrest said in a statement.

Zero Trust is a security framework that acknowledges work no longer happens behind the friendly confines of a firewall. In the old days before mobile and cloud, you could be pretty certain that anyone on your corporate network had the authority to be there, but as we have moved into a mobile world, it’s no longer a simple matter to defend a perimeter when there is effectively no such thing. Zero Trust means what it says: you can’t trust anyone on your systems and have to provide an appropriate security posture.
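In code terms, Zero Trust swaps “is this request coming from inside the network?” for a per-request evaluation of identity, device and privilege. A schematic Python sketch of that shift (illustrative only, not Okta's or ScaleFT's actual logic):

```python
def authorize(request):
    """Grant access only when every per-request signal checks out.

    Note what is absent: network origin is deliberately not a signal.
    """
    return (
        request.get("user_authenticated", False)      # a valid identity assertion
        and request.get("device_trusted", False)      # a managed, healthy device
        and request.get("resource") in request.get("allowed_resources", ())  # least privilege
    )

# Being "on the corporate network" buys nothing by itself:
assert not authorize({"on_corporate_network": True})

# A fully verified request is granted, wherever it comes from:
assert authorize({
    "user_authenticated": True,
    "device_trusted": True,
    "resource": "payroll-db",
    "allowed_resources": ("payroll-db",),
})
```

Real systems layer on short-lived credentials and continuous re-evaluation, which is the “shortest amount of time” aspect Kerrest describes.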

The idea was pioneered by Google’s “BeyondCorp” principles and the founders of ScaleFT are adherents to this idea. According to Okta, “ScaleFT developed a cloud-native Zero Trust access management solution that makes it easier to secure access to company resources without the need for a traditional VPN.”

Okta wants to incorporate the ScaleFT team and, well, scale their solution for large enterprise customers interested in developing this concept, according to a company blog post by Kerrest.

“Together, we’ll work to bring Zero Trust to the enterprise by providing organizations with a framework to protect sensitive data, without compromising on experience. Okta and ScaleFT will deliver next-generation continuous authentication capabilities to secure server access — from cloud to ground,” Kerrest wrote in the blog post.

ScaleFT CEO and co-founder Jason Luce will manage the transition between the two companies, while CTO and co-founder Paul Querna will lead strategy and execution of Okta’s Zero Trust architecture. CSO Marc Rogers will take on the role of Okta’s Executive Director, Cybersecurity Strategy.

The acquisition allows Okta to move beyond purely managing identity into broader cybersecurity, at least conceptually. Certainly Rogers’ new role suggests the company could have other ideas to expand further into general cybersecurity beyond Zero Trust.

ScaleFT was founded in 2015 and has raised $2.8 million over two seed rounds, according to Crunchbase data.

While serverless computing isn’t new, it has reached an interesting place in its development. As developers begin to see the value of serverless architecture, a whole new startup ecosystem could begin to develop around it.

Serverless isn’t exactly serverless at all, but it does enable a developer to set event triggers and leave the infrastructure requirements completely to the cloud provider. The vendor delivers exactly the right amount of compute, storage and memory and the developer doesn’t even have to think about it (or code for it).

That sounds ideal on its face, but as with every new technology, for each solution there is a set of new problems and those issues tend to represent openings for enterprising entrepreneurs. That could mean big opportunities in the coming years for companies building security, tooling, libraries, APIs, monitoring and a whole host of tools serverless will likely require as it evolves.

Building layers of abstraction

In the beginning we had physical servers, but there was lots of wasted capacity. That led to the development of virtual machines, which enabled IT to take a single physical server and divide it into multiple virtual ones. While that was a huge breakthrough for its time, one that helped launch successful companies like VMware and paved the way for cloud computing, it was only the beginning.

Then came containers, which really began to take off with the development of Docker and Kubernetes, two open source platforms. Containers enable the developer to break down a large monolithic program into discrete pieces, which helps it run more efficiently. More recently, we’ve seen the rise of serverless or event-driven computing. In this case, the whole idea of infrastructure itself is being abstracted away.

Photo: shutterjack/Getty Images

While it’s not truly serverless, since you need underlying compute, storage and memory to run a program, it is removing the need for developers to worry about servers. Today, so much coding goes into connecting the program’s components to run on whatever hardware (virtual or otherwise) you have designated. With serverless, the cloud vendor handles all of that for the developer.

All of the major vendors have launched serverless products, with AWS Lambda, Google Cloud Functions and Microsoft Azure Functions all offering a similar approach. But it has the potential to be more than just another way to code. It could eventually shift the way we think about programming and its relation to the underlying infrastructure altogether.

It’s important to understand that we aren’t quite there yet, and a lot of work still needs to happen for serverless to really take hold, but it has enormous potential to be a startup feeder system in coming years and it’s certainly caught the attention of investors looking for the next big thing.

Removing another barrier to entry

Tim Wagner, general manager for AWS Lambda, says the primary advantage of serverless computing is that it allows developers to strip away all of the challenges associated with managing servers. “So there is no provisioning, deploying, patching or monitoring — all those details at the server and operating system level go away,” he explained.

He says this allows developers to reduce the entire coding process to the function level. The programmer defines the event or function and the cloud provider figures out the exact amount of underlying infrastructure required to run it. Mind you, this can be as little as a single line of code.
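The “function level” is literal: on a platform like AWS Lambda, the unit of deployment is a handler that receives the triggering event, and everything beneath it is provisioned by the provider. A minimal Python handler in Lambda's documented event/context shape:

```python
def handler(event, context):
    """Respond to a triggering event; the provider supplies all the infrastructure."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}"}
```

Locally, the same function can be exercised by passing a plain dict as the event; in production, the platform constructs the event from whatever trigger fired, whether an HTTP request, a file upload or a queue message.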

Colin Anderson/Getty Images

Sarah Guo, a partner at Greylock Partners who invests in early-stage companies, sees serverless computing as offering a way for developers to concentrate on just the code by leaving the infrastructure management to the provider. “If you look at one of the amazing things cloud computing platforms have done, it has just taken a lot of the expertise and cost that you need to build a scalable service and shifted it to [the cloud provider],” she said. Serverless takes that concept and shifts it even further by allowing developers to concentrate solely on the user’s needs without having to worry about what it takes to actually run the program.

Survey says…

Cloud computing company Digital Ocean recently surveyed more than 4,800 IT pros, of which 55 percent identified themselves as developers. When asked about serverless, nearly half of respondents reported they didn’t fully understand the serverless concept. On the other hand, they certainly recognized the importance of learning more about it, with 81 percent reporting that they plan to do further research this year.

When asked if they had deployed a serverless application in the last year, not surprisingly about two-thirds reported they hadn’t. This was consistent across regions, with India reporting a slightly higher rate of serverless adoption.

Graph: Digital Ocean

Of those using serverless, Digital Ocean found that AWS was by far the most popular service with 58 percent of respondents reporting Lambda was their chosen tool, followed by Google Cloud Functions with 23 percent and Microsoft Azure Functions further back at 10 percent.

Interestingly enough, one of the reasons that respondents reported a reluctance to begin adopting serverless was a lack of tooling. “One of the biggest challenges developers report when it comes to serverless is monitoring and debugging,” the report stated. That lack of visibility, however, could also represent an opening for startups.

Creating ecosystems

The thing about abstraction is that it simplifies operations on one level, but it also creates a new set of requirements, some expected and some that might surprise us as this new way of programming scales. The lack of tooling could potentially hinder development, but more often than not, when necessity calls, it stimulates the development of a new set of instrumentation.

This is certainly something that Guo recognizes as an investor. “I think there is a lot of promise as we improve a bunch of things around making it easier for developers to access serverless, while expanding the use cases, and concentrating on issues like visibility and security, which are all [issues] when you give more and more control of [the infrastructure] to someone else,” she said.

Photo: shylendrahoode/Getty Images

Ping Li, general partner at Accel, also sees an opportunity here for investors. “I think the reality is that anytime there’s a kind of shift from a developer application perspective, there’s an opportunity to create a new set of tools or products that help you enable those platforms,” he said.

Li says the promise is there, but it won’t happen right away because there needs to be a critical mass of developers using serverless methodologies first. “I would say that we are definitely interested in serverless in that we believe it’s going to be a big part of how applications will be built in the future, but it’s still in its early stages,” Li said.

S. Somasegar, managing director at Madrona Venture Group, says that even as serverless removes complexity, it creates a new set of issues, which in turn creates openings for startups. “It is complicated because we are trying to create this abstraction layer over the underlying infrastructure and telling the developers that you don’t need to worry about it. But that means, there are a lot of tools that have to exist in place — whether it is development tools, deployment tools, debugging tools or monitoring tools — that enable the developer to know certain things are happening when you’re operating in a serverless environment.”

Beyond tooling

Having that visibility in a serverless world is a real challenge, but it is not the only opening here. There are also opportunities for trigger or function libraries, or for companies akin to Twilio or Stripe, which offer easy API access to a set of functionality without requiring particular expertise in areas like communications or payment gateways. There could be analogous needs in the serverless world.

Companies are beginning to take advantage of serverless computing to find new ways of solving problems. Over time, we should begin to see more developer momentum toward this approach and more tools develop.

While it is early days, as Guo says, it’s not as though developers love running infrastructure; it’s just been a necessity. “I think [it] will be very interesting. I just think we’re still very early in the ecosystem,” she said. Yet the potential is certainly there: if the pieces fall into place and programmer momentum builds around this way of developing applications, it could really take off, and a startup ecosystem could follow.

While serverless computing isn’t new, it has reached an interesting place in its development. As developers begin to see the value of serverless architecture, a whole new startup ecosystem could begin to develop around it.

Serverless isn’t exactly serverless at all, but it does enable a developer to set event triggers and leave the infrastructure requirements completely to the cloud provider. The vendor delivers exactly the right amount of compute, storage and memory and the developer doesn’t even have to think about it (or code for it).

That sounds ideal on its face, but as with every new technology, for each solution there is a set of new problems and those issues tend to represent openings for enterprising entrepreneurs. That could mean big opportunities in the coming years for companies building security, tooling, libraries, APIs, monitoring and a whole host of tools serverless will likely require as it evolves.

Building layers of abstraction

In the beginning we had physical servers, but there was lots of wasted capacity. That led to the development of virtual machines, which enabled IT to take a single physical server and divide it into multiple virtual ones. While that was a huge breakthrough for its time, helped launch successful companies like VMware and paved the way for cloud computing, it was the only beginning.

Then came containers, which really began to take off with the development of Docker and Kubernetes, two open source platforms. Containers enable the developer to break down a large monolithic program into discrete pieces, which helps it run more efficiently. More recently, we’ve seen the rise of serverless or event-driven computing. In this case, the whole idea of infrastructure itself is being abstracted away.

Photo: shutterjack/Getty Images

While it’s not truly serverless, since you need underlying compute, storage and memory to run a program, it is removing the need for developers to worry about servers. Today, so much coding goes into connecting the program’s components to run on whatever hardware (virtual or otherwise) you have designated. With serverless, the cloud vendor handles all of that for the developer.

All of the major vendors have launched serverless products with AWS Lambda, Google Cloud Functions and Microsoft Azure Functions all offering a similar approach. But it has the potential to be more than just another way to code. It could eventually shift the way we think about programming and its relation to the underlying infrastructure altogether.

It’s important to understand that we aren’t quite there yet, and a lot of work still needs to happen for serverless to really take hold, but it has enormous potential to be a startup feeder system in coming years and it’s certainly caught the attention of investors looking for the next big thing.

Removing another barrier to entry

Tim Wagner, general manager for AWS Lambda, says the primary advantage of serverless computing is that it allows developers to strip away all of the challenges associated with managing servers. “So there is no provisioning, deploying patching or monitoring — all those details at the the server and operating system level go away,” he explained.

He says this allows developers to reduce the entire coding process to the function level. The programmer defines the event or function and the cloud provider figures out the exact amount of underlying infrastructure required to run it. Mind you, this can be as little as a single line of code.

Colin Anderson/Getty Images

Sarah Guo, a partner at Greylock Partners, who invests in early stage companies sees serverless computing as offering a way for developers to concentrate on just the code by leaving the infrastructure management to the provider. “If you look at one of the amazing things cloud computing platforms have done, it has just taken a lot of the expertise and cost that you need to build a scalable service and shifted it to [the cloud provider],” she said. Serverless takes that concept and shifts it even further by allowing developers to concentrate solely on the user’s needs without having to worry about what it takes to actually run the program.

Survey says…

Cloud computing company Digital Ocean recently surveyed over 4800 IT pros, of which 55 percent identified themselves as developers. When asked about serverless, nearly half of respondents reported they didn’t fully understand the serverless concept. On the other hand, they certainly recognized the importance of learning more about it with 81 percent reporting that they plan to do further research this year.

When asked if they had deployed a serverless application in the last year, not surprisingly about two-thirds reported they hadn’t. This was consistent across regions with India reporting a slightly higher rate of serverless adoption.

Graph: Digital Ocean

Of those using serverless, Digital Ocean found that AWS was by far the most popular service with 58 percent of respondents reporting Lambda was their chosen tool, followed by Google Cloud Functions with 23 percent and Microsoft Azure Functions further back at 10 percent.

Interestingly enough, one of the reasons that respondents reported a reluctance to begin adopting serverless was a lack of tooling. “One of the biggest challenges developers report when it comes to serverless is monitoring and debugging,” the report stated. That lack of visibility, however could also represent an opening for startups.

Creating ecosystems

The thing about abstraction is that it simplifies operations on one level, but it also creates a new set of requirements, some expected and some that might surprise as a new way of programming scales. This lack of tooling could potentially hinder the development, but more often than not when necessity calls, it can stimulate the development of a new set of instrumentation.

This is certainly something that Guo recognizes as an investor. “I think there is a lot of promise as we improve a bunch of things around making it easier for developers to access serverless, while expanding the use cases, and concentrating on issues like visibility and security, which are all [issues] when you give more and more control of [the infrastructure] to someone else,” she said.

Photo: shylendrahoode/Getty Images

Ping Li, general partner at Accel also sees an opportunity here for investors. “I think the reality is that anytime there’s a kind of shift from a developer application perspective, there’s an opportunity to create a new set of tools or products that help you enable those platforms,” he said.

Li says the promise is there, but it won’t happen right away because there needs to be a critical mass of developers using serverless methodologies first. “I would say that we are definitely interested in serverless in that we believe it’s going to be a big part of how applications will be built in the future, but it’s still in its early stages,” Li said.

S. Somasegar, managing director at Madrona Venture Group, says that even as serverless removes complexity, it creates a new set of issues, which in turn creates openings for startups. “It is complicated because we are trying to create this abstraction layer over the underlying infrastructure and telling the developers that you don’t need to worry about it. But that means there are a lot of tools that have to exist in place — whether it is development tools, deployment tools, debugging tools or monitoring tools — that enable the developer to know certain things are happening when you’re operating in a serverless environment.”

Beyond tooling

Having that visibility in a serverless world is a real challenge, but it is not the only opening here. There are also opportunities for trigger and function libraries, or for companies akin to Twilio or Stripe, which offer easy API access to functionality that would otherwise require particular expertise, such as communications or payment gateways. There could be analogous needs in the serverless world.

Companies are beginning to take advantage of serverless computing to find new ways of solving problems. Over time, we should begin to see more developer momentum toward this approach and more tools emerge.

While it is early days, as Guo says, it’s not as though developers love running infrastructure; it’s just been a necessity. “I think [it] will be very interesting. I just think we’re still very early in the ecosystem,” she said. Yet the potential is certainly there: if the pieces fall into place and programmer momentum builds around this way of developing applications, it could really take off, with a startup ecosystem following.

Regardless of what you may think of Facebook as a platform, it runs a massive operation, and when you reach that level of scale, you have to get creative in how you handle every aspect of your computing environment.

On a system running thousands of services, engineers quickly reach the limits of human ability to track information; checking logs and analytics by hand becomes impractical and unwieldy. This is a perfect scenario for machine learning, and that is precisely what Facebook has done.

The company published a blog post today about a self-tuning system it has dubbed Spiral. What the system does is essentially flip the idea of system tuning on its head: instead of looking at some data and coding what you want the system to do, you teach the system the right way to do it, and it does it for you, using a massive stream of data to continually teach the machine learning models how to push the systems to be ever better.

In the blog post, the Spiral team described it this way: “Instead of looking at charts and logs produced by the system to verify correct and efficient operation, engineers now express what it means for a system to operate correctly and efficiently in code. Today, rather than specify how to compute correct responses to requests, our engineers encode the means of providing feedback to a self-tuning system.”

They say that coding in this way is akin to declarative code, like using SQL statements to tell the database what you want it to do with the data, but the act of applying that concept to systems is not a simple matter.
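To make that analogy concrete, here is a generic sketch of imperative versus declarative code. The table, column names and data are invented for illustration; this is not Spiral’s code:

```python
# Generic illustration of imperative vs. declarative code -- not Spiral itself.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE request_log (service TEXT, latency_ms REAL)")
conn.executemany(
    "INSERT INTO request_log VALUES (?, ?)",
    [("search", 120.0), ("search", 80.0), ("feed", 300.0)],
)

# Imperative: spell out HOW to compute the answer, step by step.
totals, counts = {}, {}
for service, latency in conn.execute("SELECT service, latency_ms FROM request_log"):
    totals[service] = totals.get(service, 0.0) + latency
    counts[service] = counts.get(service, 0) + 1
imperative = {s: totals[s] / counts[s] for s in totals}

# Declarative: state WHAT you want; the database plans the computation.
declarative = dict(conn.execute(
    "SELECT service, AVG(latency_ms) FROM request_log GROUP BY service"
))

assert imperative == declarative  # same answer, very different code
```

The declarative form captures the point the Spiral team is making: you state the result you want and let the engine decide how to compute it, rather than specifying every step yourself.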

“Spiral uses machine learning to create data-driven and reactive heuristics for resource-constrained real-time services. The system allows for much faster development and hands-free maintenance of those services, compared with the hand-coded alternative,” the Spiral team wrote in the blog post.

Consider the sheer number of services running on Facebook, and the number of users trying to interact with those services at any given time: keeping them tuned requires sophisticated automation, and that is what Spiral provides.

The system takes the log data and processes it through Spiral, which is connected with just a few lines of code, then sends commands back to the server based on the declarative coding statements written by the team. To ensure those commands are continually fine-tuned, the data is also sent from the server back to a model for further adjustment, in a lovely virtuous cycle. This process can be applied locally or globally.
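As a rough illustration of the pattern being described (the engineer writes the feedback rule; a model tunes a parameter from streaming observations), here is a minimal, hypothetical sketch. The cache-admission scenario, the names and the update rule are all invented; Spiral itself is internal to Facebook and far more sophisticated:

```python
# Illustrative sketch only -- not Facebook's Spiral code or API.
# The engineer writes a feedback function saying what "correct" means;
# a tiny online model tunes an admission threshold from streamed samples.

def feedback(item_was_reused: bool, was_cached: bool) -> bool:
    """Declarative feedback: caching was 'correct' iff the item was reused."""
    return item_was_reused == was_cached

class SelfTuningCache:
    def __init__(self, threshold: float = 0.5, lr: float = 0.05):
        self.threshold = threshold  # admit items scoring above this
        self.lr = lr                # step size for each correction

    def should_cache(self, score: float) -> bool:
        return score > self.threshold

    def observe(self, score: float, item_was_reused: bool) -> None:
        """Apply the feedback signal to correct the threshold."""
        cached = self.should_cache(score)
        if not feedback(item_was_reused, cached):
            # Wrong decision: move the threshold in the corrective direction.
            direction = -1.0 if item_was_reused else 1.0
            self.threshold += self.lr * direction
```

The division of labor mirrors the quote above: `feedback` is the part the engineer expresses in code, and the update loop is the machinery’s job, running continuously as new observations stream in.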

The tool was developed by a team in Boston and is only available internally at Facebook. It took a lot of engineering to make it happen, the kind of scope that only Facebook could apply to a problem like this (mostly because Facebook is one of the few companies that would actually have a problem like this).

In the age of digital transformation, it’s important to understand your business processes and find improvements quickly, but it’s not always easy to do without bringing in expensive consultants to help. Celonis, a New York City enterprise startup, created a sophisticated software solution to help solve this problem, and today it announced a $50 million Series B investment from Accel and 83North on a $1 billion valuation.

It’s not typical for an enterprise startup to have such a lofty valuation so early in its funding cycle, but Celonis is not a typical enterprise startup. It launched in 2011 in Munich with this idea of helping companies understand their processes, which they call process mining.

“Celonis is an intelligent system using logs created by IT systems such as SAP, Salesforce, Oracle and NetSuite, and automatically understands how these processes work and then recommends intelligently how they can be improved,” Celonis CEO and co-founder Alexander Rinke explained.

The software isn’t magic, but helps customers visualize each business process, and then looks at different ways of shifting how and where humans interact with the process or bringing in technology like robotics process automation (RPA) when it makes sense.

Celonis process flow. Photo: Celonis

Rinke says the software doesn’t simply find a solution and call it done; it runs a continuous loop of searching for ways to help customers operate more efficiently. That doesn’t have to mean one big change; it often involves lots of incremental ones.

“We tell them there are lots of answers. We don’t think there is one solution. All these little things don’t execute well. We point out these things. Typically we find it’s easy to implement,” he said.

Screenshot: Celonis

It seems to be working. Customers include the likes of ExxonMobil, 3M, Merck, Lockheed Martin and Uber. Rinke reports deals are often seven figures. The company has grown an astonishing 5,000 percent in the past four years, and 300 percent in the past year alone. What’s more, it has been profitable every year since it started. (How many enterprise startups can say that?)

The company currently has 400 employees, but unlike most Series B recipients, it isn’t looking to this money to grow operationally. It wanted the money for strategic purposes, so that if the opportunity came along to make an acquisition or expand into a new market, it would be in a position to do so.

“I see the funding as a confirmation and commitment, a sign from our investors and an indicator about what we’ve built and the traction we have. But for us it’s more important, and our investors share this, what they really invested in was the future of the company,” Rinke said. He sees an ongoing commitment to helping his customers as far more important than a billion-dollar valuation.