Pulumi seeks to “Program the Cloud”. They provide infrastructure for serverless containers. For an introductory overview of Pulumi, see this report from GeekWire: It didn’t take long for the ex-Microsoft engineers behind Pulumi to prove their software development platform for multicloud computing was worthy of additional investment. Pulumi launched its SaaS multicloud app deployment service, which could help businesses […]

On this newest episode of The New Stack Makers podcast, TNS founder Alex Williams is joined by Stackery CEO Nate Taggart, at ServerlessConf San Francisco, to discuss the makings of Stackery, and how it has standardized on top of the AWS Serverless Application Model to the benefit of both its developers and enterprise customers. Prior […]

This is a nice example. The only limitation I see is that it is really hard for a machine to distinguish a € 1 cent coin from a € 2 cent coin, especially from the back side alone. In the second picture you can see that the 1 cent coin is detected as a 2 cent coin. Maybe you need to add some other information, like the relative/absolute size, to weigh the result of Watson VR in a final step? I'm working on a similar application for iOS with a serverless Python backend and am facing that problem now.

TX-Dallas: We’re hiring an AWS Apps Developer to build serverless architecture platforms. Azure experience may substitute for AWS depending on your projects.
Required Experience/Qualifications:
- Bachelor’s or Graduate Degree in Computer Science, Engineering, or similar
- Agile IT environment experience
- 3+ years of designing AWS cloud-based applications (Azure experience may be considered)
- Serverless architecture

The internet went down on February 28, 2017. Or at least that's how it seemed to some users as sites and apps like Slack and Medium went offline or malfunctioned for four hours. What actually happened is that Amazon's enormously popular S3 cloud storage service experienced an outage, affecting everything that depended on it.

It was a reminder of the risks when too much of the internet relies on a single service. Amazon gives customers the option of storing their data in different "availability regions" around the world, and within those regions it has multiple data centers in case something goes wrong. But last year's outage knocked out S3 in the entire North Virginia region. Customers could of course use other regions, or other clouds, as backups, but that involves extra work, including possibly managing accounts with multiple cloud providers.

A San Francisco-based startup called Netlify wants to make it easier to avoid these sorts of outages by automatically distributing its customers’ content to multiple cloud computing providers. Users don't need accounts with Amazon, Microsoft Azure, Rackspace, or any other cloud company; Netlify maintains relationships with those services. You just sign up for Netlify, and it handles the rest.

You can think of the company's core service as a cross between traditional web hosting providers and content delivery networks, like Akamai, that cache content on servers around the world to speed up websites and apps. Netlify has already attracted some big tech names as customers, often to host websites related to open source projects. For example, Google uses Netlify for the website for its infrastructure management tool Kubernetes, and Facebook uses the service for its programming framework React. But Netlify founders Christian Bach and Mathias Biilmann don't want to just be middlemen for cloud hosting. They want to fundamentally change how web applications are built, and put Netlify at the center.

Traditionally, web applications have run mostly on servers. The applications run their code in the cloud, or in a company's own data center, assemble a web page based on the results, and send the result to your browser. But as browsers have grown more sophisticated, web developers have begun shifting computing workloads to the browser. Today, browser-based apps like Google Docs or Facebook feel like desktop applications. Netlify aims to make it easier to build, publish, and maintain these types of sites.

Back to the Static Future

Markus Seyfferth, the COO of Smashing Media, was converted to Netlify's vision when he saw Biilmann speak at a conference in 2016. Smashing Media, which publishes the web design and development publication Smashing Magazine and organizes the Smashing Conference, was looking to change the way it managed its roughly 3,200-page website.

Since its inception in 2006, Smashing Magazine had been powered by WordPress, the content management system that runs about 32 percent of the web, according to technology survey outfit W3Techs, along with e-commerce tools to handle sales of books and conference tickets and a third application for managing its job listing site. Using three different systems was unwieldy, and the company's servers struggled to handle the site’s traffic, so Seyfferth was looking for a new approach.

When you write or edit a blog post in WordPress or similar applications, the software stores your content in a database. When someone visits your site, the server runs WordPress to pull the latest version from the database, along with any comments that have been posted, and assembles it into a page that it sends to the browser.

Building pages on the fly like this ensures that users always see the most recent version of a page, but it's slower than serving prebuilt "static" pages that have been generated in advance. And when lots of people are trying to visit a site at the same time, servers can bog down trying to build pages on the fly for each visitor, which can lead to outages. That leads companies to buy more servers than they typically need; what’s more, servers can still be overloaded at times.

"When we had a new product on the shop, it needed only a couple hundred orders in one hour and the shop would go down," Seyfferth says.

WordPress and similar applications try to make things faster and more efficient by "caching" content to reduce how often the software has to query the database, but it's still not as fast as serving static content.

Static content is also more secure. Using WordPress or similar content managers exposes at least two "attack surfaces" for hackers: the server itself, and the content management software. By removing the content management layer, and simply serving static content, the overall "attack surface" shrinks, meaning hackers have fewer ways to exploit software.

The security and performance advantages of static websites have made them increasingly popular with software developers in recent years, first for personal blogs and now for the websites for popular open source projects.

In a way, these static sites are a throwback to the early days of the web, when practically all content was static. Web developers updated pages manually and uploaded pre-built pages to the web. But the rise of blogs and other interactive websites in the early 2000s popularized server-side applications that made it possible for non-technical users to add or edit content, without special software. The same software also allowed readers to add comments or contribute content directly to a site.

At Smashing Media, Seyfferth didn't initially think static was an option. The company needed interactive features, to accept comments, process credit cards, and allow users to post job listings. So Netlify built several new features into its platform to make a primarily static approach more viable for Smashing Media.

The Glue in the Cloud

Biilmann, a native of Denmark, spotted the trend back to static sites while running a content management startup in San Francisco, and started a predecessor to Netlify called Bit Balloon in 2013. He invited Bach, his childhood best friend who was then working as an executive at a creative services agency in Denmark, to join him in 2015 and Netlify was born.

Initially, Netlify focused on hosting static sites. The company quickly attracted high-profile open source users, but Biilmann and Bach wanted Netlify to be more than just another web-hosting company; they sought to make static sites viable for interactive websites.

Open source programming frameworks have made it easier to build sophisticated applications in the browser. And there's a growing ecosystem of services, like Stripe for payments, Auth0 for user authentication, and Amazon Lambda for running small chunks of custom code, that make it possible to outsource many interactive features to the cloud. But these types of services can be hard to use with static sites, because some sort of server-side application is often needed to act as a middleman between the cloud and the browser.

Biilmann and Bach want Netlify to be that middleman, or as they put it, the "glue" between disparate cloud computing services. For example, they built an e-commerce feature for Smashing Media, now available to all Netlify customers, that integrates with Stripe. It also offers tools for managing code that runs on Lambda.

Smashing Media switched to Netlify about a year ago, and Seyfferth says it's been a success. It's much cheaper and more stable than traditional web application hosting. "Now the site pretty much always stays up no matter how many users," he says. "We'd never want to look back to what we were using before."

There are still some downsides. WordPress makes it easy for non-technical users to add, edit, and manage content. Static site software tends to be less sophisticated and harder to use. Netlify is trying to address that with its own open source static content management interface called Netlify CMS. But it's still rough.

Seyfferth says for many publications, it makes more sense to stick with WordPress for now because Netlify can still be challenging for non-technical users.

And while Netlify is a developer darling today, it's possible that major cloud providers could replicate some of its features. Google already has a service called Firebase Hosting that offers some similar functionality.

For now, though, Bach and Biilmann say they're just focused on making their serverless vision practical for more companies. The more people who come around to this new approach, the more opportunities there are not just for Netlify, but for the entire new ecosystem.


In this third part of my series on Azure Function development I will cover a number of development concepts and concerns. These are just some of the basics. You can look for more posts coming in the future that will cover specific topics in more detail.

General Development

One of the first things you will have to get used to is developing in a very stateless manner. Any other .NET application type has a class at its base. Functions, on the other hand, are just what they say: a method that runs within its own context. Because of this, you don’t have anything resembling a global or class-level variable. This means that if you need something like a logger in every method, you have to pass it in.

[Update 2016-02-13] The above information is not completely correct. You can implement function-global variables by defining them as private static.
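As a minimal sketch of that update (names are hypothetical, the method is wrapped in a class so it compiles standalone, and `Action<string>` stands in for the runtime's `TraceWriter`):

```csharp
using System;

// Hypothetical Run.csx sketch: a private static field acts as a
// function-"global" variable. It persists only while the host instance
// stays warm, so treat it as a cache, never as reliable state.
public static class CounterFunction
{
    private static int _invocationCount = 0;

    public static int Run(string input, Action<string> log)
    {
        _invocationCount++;
        log($"Invocation #{_invocationCount}: {input}");
        return _invocationCount;
    }
}
```

Because the runtime may recycle or scale out host instances at any time, anything you truly need to persist should go to storage, not a static.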

You may find that it makes sense to create classes within your function, either as DTOs or to make the code more manageable. Start by adding a .csx file in the files view pane of your function. The same coding techniques and standards apply as in your Run.csx file; otherwise, develop the class as you would any other .NET class.
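For example (file and member names are hypothetical), a simple DTO kept in its own .csx file that the main function pulls in with a `#load` directive:

```csharp
// OrderDto.csx (hypothetical file name): a plain DTO kept in its own
// file to keep Run.csx manageable. In Run.csx you would reference it with:
//   #load "OrderDto.csx"
public class OrderDto
{
    public string Id { get; set; }
    public decimal Total { get; set; }
}
```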

In the previous post I showed how to create App Settings. If you took the time to create them, you are going to want to be able to retrieve them. The GetEnvironmentVariable method of the Environment class gives you the same capability as using AppSettings from ConfigurationManager in traditional .NET applications.

System.Environment.GetEnvironmentVariable("YourSettingKey")
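Building on that call, a small helper sketch (the helper name is hypothetical) adds a fallback for missing keys, mirroring the old `ConfigurationManager.AppSettings["key"]` pattern:

```csharp
using System;

// Hypothetical wrapper around GetEnvironmentVariable: returns the app
// setting if present, otherwise a caller-supplied default.
public static class AppSettings
{
    public static string Get(string key, string fallback = null)
        => Environment.GetEnvironmentVariable(key) ?? fallback;
}
```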

A critical coding practice for functions that use perishable resources, such as queues, is to make sure that if you catch and log an exception, you rethrow it so that your function fails. This causes the queue message to remain on the queue instead of being dequeued.
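A sketch of that pattern (names are hypothetical; `Action<string>` stands in for the runtime's `TraceWriter`):

```csharp
using System;

// Hypothetical queue-triggered sketch: log the failure, then rethrow so
// the runtime marks the invocation as failed and the message is not
// silently dequeued; it remains available for retry.
public static class OrderQueueFunction
{
    public static void Run(string message, Action<string> log)
    {
        try
        {
            Process(message);
        }
        catch (Exception ex)
        {
            log($"Failed to process '{message}': {ex.Message}");
            throw; // swallowing here would lose the message
        }
    }

    private static void Process(string message)
    {
        if (string.IsNullOrWhiteSpace(message))
            throw new ArgumentException("empty queue message");
    }
}
```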

Debugging

It can be hard to read the log when the function is running at full speed, since instances run in parallel but report to the same log. I would suggest adding the process ID to your TraceWriter log messages so that you can correlate them.
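One way to do that is a small formatting helper (name hypothetical) that prefixes every line before it is handed to the logger:

```csharp
using System.Diagnostics;

// Hypothetical helper: prefix each log line with the process ID so
// interleaved lines from parallel instances can be pulled apart later.
public static class CorrelatedLog
{
    public static string Format(string message)
        => $"[PID {Process.GetCurrentProcess().Id}] {message}";
}
```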

Even more powerful is the ability to remote debug functions from Visual Studio. To do this, open your Server Explorer and connect to your Azure subscription. From there you can drill down to the Function App in App Services and then to the run.csx file in the individual function. Once you have opened the code file and placed your breakpoints, right-click the function and select Attach Debugger. From there it acts like any other Visual Studio debugging session.

Race Conditions

I wanted to give special attention to this subject. As with any highly parallel/asynchronous processing environment, you will have to make sure that you take into account any race conditions that may occur. If at all possible, keep the functionality that you create to unrelated pieces of data. If it is critical that items in a queue, blob container, or table storage are processed in order, then Azure Functions are probably not the right tool for your solution.

Summary

Azure Functions are one of the most powerful units of code available. Hopefully this series gives you a starting point for your adventure into serverless applications and you can discover how they can benefit your business.

The latest buzzword is serverless applications, and Azure Functions are Microsoft’s offering in this space. As with most new cloud products, Azure Functions are still evolving and can therefore be challenging to develop with. Documentation was still being worked on at the time I wrote this, so here are some things that I have learned while implementing them.

There is a lot to cover here so I am going to break this topic into a few posts:

Creating and Binding

Settings and References

Coding Concerns

Creating A New Function

The first thing you are going to need to do is create a Function App. This is an App Service product that serves as a container for your individual functions. The easiest way I’ve found to start is to go to the main add (+) button on the Azure Portal and then search for Function App.

Click on Function App and then the Create button when the Function App blade comes up. Fill in your app name, remembering that this is a container and not your actual function. As with other Azure features, you need to supply a subscription, resource group, and location. Additionally, for a Function App you need to supply a hosting plan and storage account. If you want to take full advantage of Function App scaling and pricing, leave the default Consumption Plan; this way you only pay for what you use. If you choose App Service Plan, you will pay for it whether your function is actually processing or not.

Once you click Create, the Function App will start to deploy. At this point you can create your first function in the Function App. Once you find your Function App in the list of App Services, it will open the blade shown below. It offers a quick start page, but I quickly found that it didn’t give me the options I needed beyond a simple “Hello World” function. Instead, press the New Function link at the left. You will be offered a list of trigger-based templates, which I will cover in the next section.

Triggers

Triggers define the event source that will cause your function to be executed. While there are many different triggers, with more being added all the time, the most common ones are included under the core scenarios. In my experience the most useful are timer-, queue-, and blob-triggered functions.

Queues and blobs require that a connection to a storage account be defined. Fortunately, this is created with a couple of clicks and can be shared between triggers and bindings, as well as between functions. Once you have that, you simply enter the name of the queue or blob container and you are off to the races.

When it comes to timer-dependent functions, the main topic you will have to become familiar with is cron schedule expressions. If you come from a Unix background, or have been working with the more recent timer-based WebJobs, this won’t be anything new. Otherwise, the simplest thing to remember is that each time increment is defined by a division (slash) expression.
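For example, Azure Functions timer triggers use a six-field cron format, {second} {minute} {hour} {day} {month} {day-of-week}, so a timer that fires every five minutes could be declared like this in function.json (the parameter name is illustrative):

```json
{
  "bindings": [
    {
      "type": "timerTrigger",
      "direction": "in",
      "name": "myTimer",
      "schedule": "0 */5 * * * *"
    }
  ]
}
```

Here `*/5` in the minute field is the division expression: fire whenever the minute is evenly divisible by five.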

In the case of queue triggers, the parameter that is automatically added to the Run method signature will contain the contents of the queue message as a string. Similarly, most trigger types have a parameter that passes values from the triggering event.
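As a sketch (names are hypothetical and the method is wrapped in a class so it compiles standalone): if the trigger's "name" property in function.json is `myQueueItem`, the runtime binds the raw message body to the matching string parameter of Run.

```csharp
using System;

// Hypothetical queue-triggered Run.csx: the runtime passes the queue
// message body in the parameter whose name matches the trigger binding.
public static class QueueTriggerSketch
{
    public static string Run(string myQueueItem, Action<string> log)
    {
        log($"Queue trigger processed: {myQueueItem}");
        return myQueueItem.ToUpperInvariant();
    }
}
```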

Input and Output Bindings

Some of the function templates include an output binding. If none of these fit your needs or you just prefer to have full control you can add a binding via the Integration tab. The input and output binding definitions end up in the same function.json file as the trigger bindings.

The one gripe I have with these bindings is that they connect to a specific entity at the beginning of your function. I would find it preferable to bind to the parent container of whatever source you are binding to and have a set of standard commands available for normal CRUD operations.

Let’s say that you want to load an external configuration file from blob storage when your function starts. The path shown below specifies the container and the blob name. The default format shows a variable, “name”, as the blob name. This needs to be a variable that is available and populated when the function starts, or an exception will be thrown. As for your storage account, specify it by clicking the “new” link next to the dropdown and picking the storage account from those you have available. If you specified a storage account while defining your trigger and it is the same as your binding’s, it can be reused.

The convenient thing about blob bindings is that they are bound as strings, so for most scenarios you don’t have to do anything else to leverage them in your function. You will have to add a string parameter to the function’s Run method that matches the name in the blob parameter name text box.
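Tying that together, a blob input binding for the configuration-file scenario above might look like this in function.json (names, path, and connection setting are all illustrative); the blob's content then arrives in a matching `string configBlob` parameter on Run:

```json
{
  "type": "blob",
  "direction": "in",
  "name": "configBlob",
  "path": "config/{name}",
  "connection": "MyStorageConnection"
}
```

Note that `{name}` must be supplied by the trigger (for example, a matching token in the trigger's own path or message) or the binding will fail at startup, as described above.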

Summary

That should give you a starting point for getting the shell of your Azure Function created. In the next two posts I will add settings, assembly references and some tips for coding your function.

As more organizations embrace hybrid cloud – with more than 50 percent claiming a hybrid cloud setup – and serverless, now used by close to a third of organizations, they lack the tools and specialization to keep up, according to Alcide. 75 percent of respondents expect to see an increase in the number of security tools they rely on in the next year, while over half say they still manually configure security policies. The resulting complexity …

Senior Software Engineer - REACT, NodeJS, React Native Senior Software Engineer - REACT, NodeJS, React Native - Skills Required - REACT, React Native, IOS, Android, AWS, NodeJS, Serverless, JavaScript
If you are a Senior Software Engineer with experience, please read on!
Based in the Big Apple, we are bringing health and wellness to the modern day user! With our recent Series A funding, we are doubling our team size and will continue to grow our business with potential new launches in the coming year! Our product is meant to have a positive impact on our user's lives and is deployed worldwide!
**What You Will Be Doing**
You will be working closely with our product, data, and design teams while leading our mid-level developers to build out our future products.
**What You Need for this Position**
More Than 4 Years of Experience and Knowledge of:
- JavaScript
- REACT
- NodeJS
- AWS
Nice to Have:
- React Native
- Android
- IOS
- Serverless
**What's In It for You**
- Competitive Compensation of $130-180k DOE
- Unlimited Vacation
- 100% Medical, Dental, Vision
- Standing desks so you aren't sitting all day!
- Take a break with company game nights!
- Need a pick me up? Free snacks in the office!
So, if you are a Senior Software Engineer with experience, please apply today!
Applicants must be authorized to work in the U.S.
**CyberCoders, Inc is proud to be an Equal Opportunity Employer**
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law.
**Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire.
*Senior Software Engineer - REACT, NodeJS, React Native*
*NY-New York*
*MT1-1493010*

Go Lang Developer - National TELECOMM Company Go Lang Developer - National TELECOMM Company - Skills Required - Go, .NET, JavaScript, C#, NODE, NodeJS, Node.js, Software engineers, Software Developer, API
If you are a Go Lang Developer with experience, please read on!
Title: Go Lang Developer
Location: Downtown San Francisco
Salary: Negotiable | Depending on experience
Based in downtown San Francisco, CA, we are a growing TELECOMM company making a huge impact in our industry. You will be responsible for maintaining and enhancing our suite of websites and APIs, utilizing a wide range of technologies and methodologies supporting monolithic, microservice, and serverless-based solutions.
**What You Will Be Doing**
- Design, develop and unit test web-based software
- Follow best practice coding standards
- Produce quality code with unit tests and documentation
- Design activities, pull request reviews, code reviews, demos to other engineers
- Communicate technical concepts, including architecture
**What You Need for this Position**
- Go
- JavaScript/Node.js
- REST API
**What's In It for You**
- Competitive base salary and overall compensation package
- Full benefits: Medical, Dental, Vision
- 401 (K) with generous company match
- Generous Paid time off (PTO)
- Vacation, sick, and paid holidays
- Life Insurance coverage
1. Apply directly to this job opening here!
Or
2. E-mail directly for more information to James@CyberCoders.com
Applicants must be authorized to work in the U.S.
**CyberCoders, Inc is proud to be an Equal Opportunity Employer**
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law.
**Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire.
*Go Lang Developer - National TELECOMM Company*
*CA-San Francisco*
*JT7-1492834*

Go Lang Developer - National TELECOMM Company Go Lang Developer - National TELECOMM Company - Skills Required - Go, .NET, JavaScript, C#, NODE, NodeJS, Node.js, Software engineers, Software Developer, API
If you are a Go Lang Developer with experience, please read on!
Title: Go Lang Developer
Location: Golden, CO
Salary: Negotiable | Depending on experience
Based in Golden, CO, we are a growing TELECOMM company making a huge impact in our industry. You will be responsible for maintaining and enhancing our suite of websites and APIs, utilizing a wide range of technologies and methodologies supporting monolithic, microservice, and serverless-based solutions.
**What You Will Be Doing**
- Design, develop and unit test web-based software
- Follow best practice coding standards
- Produce quality code with unit tests and documentation
- Design activities, pull request reviews, code reviews, demos to other engineers
- Communicate technical concepts, including architecture
**What You Need for this Position**
- Go
- JavaScript/Node.js
- REST API
**What's In It for You**
- Competitive base salary and overall compensation package
- Full benefits: Medical, Dental, Vision
- 401 (K) with generous company match
- Generous Paid time off (PTO)
- Vacation, sick, and paid holidays
- Life Insurance coverage
1. Apply directly to this job opening here!
Or
2. E-mail directly for more information to James@CyberCoders.com
Applicants must be authorized to work in the U.S.
**CyberCoders, Inc is proud to be an Equal Opportunity Employer**
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law.
**Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire.
*Go Lang Developer - National TELECOMM Company*
*CO-Golden*
*JT7-1492918*

Application Developer (AWS/JAVA) Application Developer (AWS/JAVA) - Skills Required - AWS, Amazon, Java, Lambda
If you are an Application Developer with (AWS/JAVA) experience, please read on!
**What You Will Be Doing**
Design and build new backend services on AWS to support our platform
100% hands-on coding
Design and build new front-end features
API design and development
Maintain and improve the performance of existing software
Write tests for existing and created code to ensure compatibility and stability
**What You Need for this Position**
More Than 5 Years of experience and knowledge of:
Bachelor's Degree in Computer Science or equivalent experience
Solid understanding of computer science fundamentals, data structures, algorithms, distributed systems, and asynchronous or event-driven architectures
4+ years of current JAVA coding experience
Experience coding and testing applications that use AWS services components such as EC2, API Gateway, Lambda, S3, EBS, RDS, SQS
Experience with microservice architectures, asynchronous frameworks, caching and server side concepts
Ability to multi-task easily and juggle priorities in a fast-paced environment
Familiarity with source control, Git and working with complex branching
Ability to rapidly design, prototype, and iterate to solve problems and fix bugs
Desired qualifications
Coding applications on AWS using Java is a MUST
AWS Lambda, Serverless on Java and/or node.js
AWS Developer Certifications is a big plus
Experience working with large file processing in the 10 GB to 100 GB range
Exposure to scientific software or biological research is a bonus
Nice to have: front-end web development experience using a javascript framework, Ruby on Rails or similar, etc.
So, if you are an Application Developer with (AWS/JAVA) experience, please apply today!
Applicants must be authorized to work in the U.S.
**CyberCoders, Inc is proud to be an Equal Opportunity Employer**
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law.
**Your Right to Work** – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire.
*Application Developer*
*CA-Palo Alto*
*HT1-1492959*

Tel Aviv, November 6, 2018: Alcide, provider of the most comprehensive full-stack cloud-native security platform, today released the findings of a new industry report, 2018 Report: The State of Securing Cloud Workload, based on responses from close to 350 security, DevOps, and IT leaders. The report reveals that as more organizations embrace hybrid cloud, with more than 50 percent claiming a hybrid cloud setup, and serverless, now used by close to a third of organizations, they lack the tools and specialization to keep up. 75 percent of respondents expect to see an increase in the number of security tools they rely on in the next year, while over half say they still manually configure security policies. The resulting complexity has the potential to slow critical business functions in the absence of an integrated security approach to distributed cloud environments.

According to a recent report from 451 Research:

The pace of innovation in cloud-native environments places a significant burden on traditional security practices. Not only is there a need to support new technology options quickly (moving from traditional virtual machines to containers, serverless, and newer constructs such as service mesh), but there is also a difference in how security and DevOps teams consider their needs and workflows.

Alcide’s report, conducted in August 2018 in conjunction with Informa Engage, reinforces the idea that new practices and technologies are disrupting traditional security practices, with findings including:

Cloud complexity is increasing, with hybrid cloud as the new infrastructure normal:
- While virtual machines (VMs) remain the most common cloud computing environment (83%), containers (37%), serverless (28%), and service mesh (21%) are gaining traction.
- Hybrid and multi-cloud approaches now make up more than three-quarters of all configurations (77%).
Serverless is running in production, and its users are the most bullish about its security:
- Despite some security concerns, the majority (57%) of serverless users are currently running it in both production and development.
- The majority of those currently using serverless have a high degree of confidence in its security, while one-third (32%) express a lack of confidence in the security of their environments.
As cloud infrastructure complexity grows, security becomes a shared responsibility with DevOps:
- Fewer than half of organizations (45%) now have a dedicated security team responsible for the cloud, with 35% of all organizations now using either a DevOps team or a dedicated DevSecOps team for security.
Hybrid cloud complexity pushes Dev, Sec, and Ops teams to look for more tools to secure their distributed environments:
- Three-quarters (75%) expect to increase the number of tools in use over the next twelve months, with no one expecting to retire any tools currently in use.
- One-third of organizations report using more than five tools for cloud security.
The proliferation of cloud security tools leaves the enterprise vulnerable and points to the need for intelligent policy automation:
- More than half (60%) of organizations rely on manual configuration of security policies for their apps, while almost all organizations (90%) rely on multiple individuals to configure and set policy rules.

“Our report validates what we’ve seen with our own customers: modern organizations are striving for a consolidated security approach that will support business velocity and tackle the challenges associated with the overhead of multiple tools in use,” said Karine Regev, VP of Marketing at Alcide. “Modern teams can’t assume that emerging technologies like serverless are secure, and they need practical and uniform enforcement and management of security policies to control disparate cloud-native services, infrastructure, and environments.”

This report, designed to be a primer on the current state of the DevOps tools market, isn’t meant to be a definitive guide to every DevOps tool available. We hope to set the DevOps toolset baseline and clear up confusion by providing an overview of the tool categories that currently are … Read More

Bluefin are recruiting a number of Software Developers for a large consultancy here in Melbourne. The Developers will be based on site at one of the large enterprise organisations. The project is about enhancing/rebuilding an existing Oracle system that is monolithic in nature and has extreme constraints around performance, capability, and agility (time to market). The idea is to build external components in AWS, using serverless technology, that integrate with it. Part of the project will be to migrate the existing relational database into AWS, possibly moving it to a NoSQL structure if it makes sense to do so. Software Developers are required for this, preferably with some DevOps experience. Some of the tech involved is as follows:
AWS: Lambda, DynamoDB, CloudFormation, CloudWatch, SNS
Other: Kafka, Splunk, NodeJS, Java, API (RESTful) development, NoSQL, Spring Boot

You can host a serverless function in Azure in two different modes: the Consumption plan and the Azure App Service plan. The Consumption plan automatically allocates compute power when your code is running. Your app is scaled out when needed to handle load, and scaled down when code is not running, so you don’t pay for idle capacity. Read more: Reducing Azure Functions Cold Start Time.
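The pay-per-use distinction can be made concrete with a little arithmetic. The sketch below compares a spiky, low-traffic workload under each plan; the rates and workload numbers are placeholders for illustration, not current Azure pricing:

```python
# Illustrative cost sketch: a Consumption-style plan bills per execution and per
# GB-second of memory time, while a dedicated App Service-style plan bills a flat
# hourly rate whether or not code runs. All rates below are placeholders.

PER_MILLION_EXECUTIONS = 0.20   # $ per 1M executions (placeholder rate)
PER_GB_SECOND = 0.000016        # $ per GB-second (placeholder rate)

def consumption_cost(executions, avg_seconds, memory_gb):
    """Pay only for what runs: executions plus memory-time actually consumed."""
    exec_cost = executions / 1_000_000 * PER_MILLION_EXECUTIONS
    gb_seconds = executions * avg_seconds * memory_gb
    return exec_cost + gb_seconds * PER_GB_SECOND

def app_service_cost(hourly_rate, hours=730):
    """Flat monthly cost for a dedicated plan, independent of traffic."""
    return hourly_rate * hours

# A spiky, low-traffic workload strongly favors the pay-per-use plan:
spiky = consumption_cost(executions=100_000, avg_seconds=0.5, memory_gb=0.125)
dedicated = app_service_cost(hourly_rate=0.10)
print(f"consumption: ${spiky:.2f}/mo vs dedicated: ${dedicated:.2f}/mo")
```

The trade-off runs the other way for steady high-volume traffic, and the dedicated plan also avoids cold starts, since instances stay warm.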

As more organizations embrace hybrid cloud – with more than 50 percent claiming a hybrid cloud setup – and serverless, now used by close to a third of organizations, they lack the tools and specialization to keep up, according to Alcide. 75 percent of respondents expect to see an increase in the number of security tools they rely on in the next year, while over half say they still manually configure security policies. The resulting complexity …

In today’s business environment, with the rapidly increasing volume of data and the growing pressure to respond to events in real-time, organizations need data-driven strategies to gain valuable insights faster and increase their competitive advantage. To meet these big data challenges, you need a massively scalable distributed streaming platform that supports multiple producers and consumers, connecting data streams across your organization. Apache Kafka and Azure Event Hubs provide such distributed platforms.

How is Azure Event Hubs different from Apache Kafka?

Apache Kafka and Azure Event Hubs are both designed to handle large-scale, real-time stream ingestion. Conceptually, both are distributed, partitioned, and replicated commit log services. Both use partitioned consumer models with a client-side cursor concept that provides horizontal scalability for demanding workloads.
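The shared model described above can be sketched in a few lines of Python, assuming a toy in-memory log rather than a real client API (`PartitionedLog` and `Consumer` are illustrative names, not part of either product):

```python
# Minimal sketch of a partitioned, append-only commit log where each consumer
# keeps its own cursor (offset) per partition. The broker-side structure stores
# no per-consumer state, which is what makes horizontal scaling cheap.

class PartitionedLog:
    def __init__(self, partitions=4):
        self.partitions = [[] for _ in range(partitions)]

    def append(self, key, event):
        """Hash the key to a partition, preserving per-key ordering."""
        p = hash(key) % len(self.partitions)
        self.partitions[p].append(event)
        return p

class Consumer:
    def __init__(self, log):
        self.log = log
        self.cursors = [0] * len(log.partitions)  # client-side offsets

    def poll(self, partition):
        """Read everything past our cursor, then advance it."""
        events = self.log.partitions[partition][self.cursors[partition]:]
        self.cursors[partition] += len(events)
        return events

log = PartitionedLog(partitions=2)
p = log.append("order-42", "created")
log.append("order-42", "paid")

consumer = Consumer(log)
print(consumer.poll(p))  # both events, in per-key order
print(consumer.poll(p))  # empty: this consumer's cursor has advanced
```

Because the cursor lives with the client, two consumers can replay the same partition independently, which is how both Kafka consumer groups and Event Hubs consumer groups achieve fan-out.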

Apache Kafka is an open-source streaming platform which is installed and run as software. Event Hubs is a fully managed service in the cloud. While Kafka has a rapidly growing, broad ecosystem and has a strong presence both on-premises and in the cloud, Event Hubs is a cloud-native, serverless solution that gives you the freedom of not having to manage servers or networks, or worry about configuring brokers.

Announcing Azure Event Hubs for Apache Kafka

We are excited to announce the general availability of Azure Event Hubs for Apache Kafka. With Azure Event Hubs for Apache Kafka, you get the best of both worlds: the ecosystem and tools of Kafka, along with Azure’s security and global scale.

This powerful new capability enables you to start streaming events from applications using the Kafka protocol directly into Event Hubs, simply by changing a connection string. Enable your existing Kafka applications, frameworks, and tools to talk to Event Hubs and benefit from the ease of a platform-as-a-service solution; you don’t need to run ZooKeeper or manage and configure your clusters.
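As a sketch, a Kafka client that already talks to a broker typically needs only its connection properties repointed at an Event Hubs namespace over SASL/PLAIN on port 9093. The namespace name and the elided key values below are placeholders:

```properties
# Point an existing Kafka client at an Event Hubs namespace (placeholder names).
bootstrap.servers=mynamespace.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="$ConnectionString" \
  password="Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=...";
```

The literal string `$ConnectionString` is used as the username, with the namespace connection string supplied as the password; the rest of the producer or consumer code is unchanged.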

Event Hubs for Kafka also allows you to easily unlock the capabilities of the Kafka ecosystem. Use Kafka Connect or MirrorMaker to talk to Event Hubs without changing a line of code. Find the sample tutorials on our GitHub.

This integration not only lets you talk to Azure Event Hubs without changing your Kafka applications; it also lets you leverage the powerful and unique features of Event Hubs. For example, seamlessly send data to Blob storage or Data Lake Storage for long-term retention or micro-batch processing with Event Hubs Capture. Easily scale from streaming megabytes of data to terabytes while keeping control over when and how much to scale with Auto-Inflate. Event Hubs also supports Geo-Disaster Recovery. Event Hubs is deeply integrated with other Azure services like Azure Databricks, Azure Stream Analytics, and Azure Functions, so you can unlock further analytics and processing.

Event Hubs for Kafka supports Apache Kafka 1.0 and later through the Apache Kafka protocol, which we have mapped to our native AMQP 1.0 protocol. In addition to providing compatibility with Apache Kafka, this protocol translation allows other AMQP 1.0-based applications to communicate with Kafka applications. JMS-based applications can use Apache Qpid™ to send data to Kafka-based consumers.

Overview: Aaron talks with Lee Eason (@leejeason; Director of DevOps at Ipreo and the co-founder of Tekata.io) at All Things Open about his DevOps transformation for all of the organization’s 30+ products and 65+ scrum teams leading to a dramatic reduction in manual work and an increase in quality and customer satisfaction across the board.