Thoughts from Technology Artisans


At the end of February, five Siilis took off from Helsinki to visit Berlin, the ClojureD conference to be exact. Two tracks, plenty of talks, and a focus on Clojure. This is an overview of the talks we attended, and because Joy Clark had awesome notes on the talks too, I asked permission to share her notes here as well. Big thumbs up for the visual content!

Teaching Clojure

For me, the first talk of the day was about teaching. We have run our junior program with Clojure for two years now, so new views on how others approach teaching in this domain were interesting. Mike Sperber shared his perspective on systematic thinking and how it affects the way we build software.

Maria: a beginner-friendly coding environment for Clojure

The next talk was about Maria by Dave Liepmann: a simple, really beginner-friendly environment for learning the fundamentals of Clojure. His talk included quite a few demos of how quickly one can get going with coding.

Fast results are important when you start something new, and Maria responds to your code and shows results immediately. Of course, once you start doing bigger and more advanced things, your pace of getting things done and ready will be different.

Writing tests that suck less

The last talk before lunch was about testing, by Torsten Mangner. This talk dived a little deeper into the fundamentals of testing. The talk began with a confession that gave us the baseline.

“Writing unit tests in Clojure is easy, since testing pure functions is trivial. But the more high-level our tests become, the more they have to deal with the state and side-effects of your application. It becomes harder and harder to write proper tests. And worst of all: they are harder and harder to understand and to maintain, greatly diminishing the value of those tests as a documentation of your software.”

A Dynamic, Statically Typed Contradiction

After lunch we had the pleasure of diving into mathematics, as Andrew Mcveigh talked about lambda calculus and its application in a Hindley-Milner based type system, and how it can be mapped onto Clojure. This solution can also be used to type-check a subset of Clojure code.

Unfortunately my attention was fully occupied hanging on with the maths, and Joy and I had gone our separate ways, so no good pictures from this talk. Sorry!

Onyx, virtual machines, jokes and afterparty

Next up for me was Vijay talking about Onyx and its uses for data crunching: an overall look at masterless systems, stream and batch processing, flow control, and the principles of Clojure.

After coffee we got the pleasure of seeing a walkthrough of how to implement Clojure on a new virtual machine. Heavy stuff and a great idea.

The last official talk before the lightning talks (I had to skip those because I was hungry and needed a Döner 😉 ) was called defjoke: live coding a fully compliant spec for humour.

All talks were taped and will be available soon at http://clojured.de/ so check it out. Other great photos and one-liners can be found at https://twitter.com/clojuredconf

Thanks from Siili to the great organisers and speakers. Hoping to see you soon on other adventures!

I remember last year binge-watching the Serverlessconf recordings online thinking, "If there is one conference I want to go to, this is the one!" The talks were so utterly good that I felt bad for not having witnessed them live. I kept looking out for upcoming Serverlessconfs, but they seemed to be nowhere near me. Then they suddenly announced an event in Paris and I went crazy. Let's do this!

On day #1 there were multiple workshops on various technology stacks: AWS, Microsoft Azure, Google Cloud, OpenFaaS and IBM Cloud Functions.

I chose a workshop organized by A Cloud Guru, where we built a YouTube-like video service website with AWS API Gateway, Lambda, Elastic Transcoder, Google Firebase and Auth0.

The present and future of Serverless observability | Yan Cui

The first presentation of the day was about the present and future of serverless observability. Yan Cui was pleasant to listen to, with a very fluent presentation style and insightful observations. He also pointed out an interesting whitepaper, the Serverless Application Lens for the AWS Well-Architected Framework.

The document covers common serverless application scenarios. I encourage you to read it to understand the best practices and strategies AWS recommends when designing architectures for serverless applications.

The next presentation was given by Subbu Allamaraju from Expedia Inc. Expedia is an American travel company that owns some of the most popular travel sites, like Hotels.com and Trivago. He approached the concept from an interesting perspective, highlighting the benefits of the serverless paradigm and, in contrast, common counter-arguments like today's limitations and constraints, personal perspectives on cloud, portability, lock-in, cost, etc. He highlighted how past habits, culture and legacy can really slow down adoption.

The presentation provided ideas on how to tackle these blockers on the journey towards a platform of the future, where functions and events fit, and how to prepare one's colleagues for that future.

Bret McGowen & Monika Nawrot showed demos around the Google Cloud offering. Monika explained how Google's serverless offering has grown over the years and gave a Google Cloud Functions demo with a thumbnail creation function, storage & logging. She also highlighted that a Functions Emulator is available for debugging functions.

Bret McGowen went quickly through some of the best Backend as a Service (BaaS) offerings out there and then focused on Google Firebase. He explained that the Realtime Database was the first Firebase feature, but there are many more built-in features like Cloud Storage, Functions, Hosting, etc.

He also made a bold statement which people seemed to find quite amusing: "More serverless than serverless!" =) . It sounded like he was selling artificial sweetener by claiming "Sweeter than sugar", and it triggered my bullshit detector. Luckily the demos totally convinced everybody as he showed Firebase's realtime update capability. Apparently Firebase also handles situations where a client connection drops: the data syncs instantly when the client gets back online. All the Firebase features can be used directly in client-side code via an SDK. The sugar on top was BigQuery, which could query strings across pretty much the entire universe in mere seconds.

Kassandra Perch talked about visualizing serverless architectures and what a healthy serverless app looks like. Her demo included inspecting, for example, error rates and counting cold starts. She showed how to include the IOpipe library, trace and profiler in a Lambda function and how to use them. Interestingly, IOpipe wraps the whole Lambda handler. The recommended tool has the slogan "See inside your Lambda functions". You can find it here: https://www.iopipe.com/
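Wrapping the whole handler is a common instrumentation pattern, and it can be sketched in plain Python without the IOpipe SDK itself. This is a minimal illustration of the idea, not IOpipe's actual API; a real agent would ship the collected metrics to a backend instead of printing them.

```python
import time

def wrap_handler(handler):
    """Wrap a Lambda handler to record duration and errors,
    similar in spirit to how IOpipe instruments functions."""
    def wrapped(event, context):
        start = time.time()
        try:
            return handler(event, context)
        except Exception:
            # A real observability agent would report the error here.
            raise
        finally:
            duration_ms = (time.time() - start) * 1000
            print(f"handler took {duration_ms:.1f} ms")
    return wrapped

@wrap_handler
def my_handler(event, context):
    return {"statusCode": 200, "body": "ok"}
```

Because the wrapper sees every invocation enter and leave, it can observe cold starts, timings and exceptions without any changes to the handler body.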

Frédéric Lavigne introduced IBM's serverless offering and clarified the difference between IBM Cloud Functions and Apache OpenWhisk: OpenWhisk is open source and is used in the IBM managed service, and the IBM Cloud team contributes to the open source project. Runtimes include Node 6, Node 8, PHP 7.1, Python 3 and Swift. Additionally, OpenWhisk can run any custom code put in a Docker container.

In the demo he defined a smiley face as SVG inside the function code, enabled by the raw HTTP handling setting. Unfortunately, there was some demo effect with the API Gateway as he tried to show how to use web actions and the API Gateway. IBM Cloud Functions has some built-in event sources such as Cloudant, Message Hub and GitHub. Currently they are developing a composer for cloud function flow control.

The presentation included some interesting customer cases, such as DOV-E's ultrasonic mobile device communication. They had a Coca-Cola TV advert trigger an ultrasonic message on the viewer's phone. The viewer could not hear the signal, but they received a message on the phone asking whether they would like to drink some Coca-Cola right there and then.

Event specifications, state of the serverless landscape, and other news from the CNCF Serverless Working Group | Daniel Krook

Daniel Krook had a very informative talk about event specifications, the state of the serverless landscape, and other news from the CNCF Serverless Working Group. CNCF stands for Cloud Native Computing Foundation, which drives the adoption of the new serverless computing paradigm.

They have multiple projects under their wing, such as Kubernetes, Prometheus and Fluentd, to name a few. However, there are currently no serverless projects. There are four active CNCF working groups (Continuous Integration, Networking, Storage, Serverless), and the Serverless group now has a whitepaper available for review. It is hosted on GitHub for collaboration purposes, and they are accepting pull requests to keep up with the fast pace of development in serverless. Please find it here: http://www.github.com/cncf/wg-serverless

Accelerating DevOps with Serverless | Michael H. Oshita

Michael H. Oshita talked about accelerating DevOps with serverless. His excellent demo showcased how to provide automated branch-testing infrastructure for devs without burdening ops. Technically, the setup uses GitHub hooks that trigger CircleCI, provisioning is done with Terraform templates, and Lambda functions are orchestrated with Step Functions. The infrastructure is created on pull request and destroyed when the branch is merged and deleted. See the ecs-deity project here: http://www.github.com/ijin/ecs-deity

End-To-End Serverless | Randall Hunt

The most entertaining talk of the day was a live coding session with technical evangelist Randall Hunt. His opening line was, surprisingly, in French, and the goofball show made everyone crack up. He also asked the audience for suggestions on what to build there on the spot, which is not something most presenters would want to try, I imagine. The randomness didn't seem to bother the viewers, and he managed to finish the live demo just in time, which made the organizers happy. The energetic personality and laid-back attitude combined with relevant content was a great mix, and I hope to see more of this in the future. If there is one recording you should watch, in my opinion it would be this one.

With Great Scalability Comes Great Responsibility | Dana Engebretson

Dana Engebretson gave a pleasant talk telling an interesting story about her journey to a serverless architecture for analytics. Humorous croissant-baking tips and to-the-point analogies made it inspiring and entertaining.

Overall the talks were versatile and excellent. There were also some less technical talks, but nothing really outside the serverless focus, which was good: it didn't seem like the organizers had struggled to find relevant, high-quality presentations for the event. I recommend getting involved in your local cloud computing communities. Serverless development is super-efficient and enjoyable, so if you haven't tried it, I suggest you do so without further ado.

The author Jouni Leino has 15 years of experience in IT. He lives in Stuttgart, Germany, working as a Field Application Engineer in the automotive industry. As Serverless Competence Lead, his mission is to guide customers and colleagues towards a more cost-effective and productive future.

Say what? Cigars and serverless? What on earth could those two have in common? Maybe not much, but bear with me and I'll let you know.

The background

Some time ago I happened to run into a new Finnish open-source sensor beacon platform. I really wanted to give it a try. What could I do with it? Enter cigars! I came up with a requirement and created a user story that I wanted to implement: “As a cigar owner I want to be able to monitor the temperature and humidity of my humidor regardless of my location so that I know when to add purified water into the humidifier“. My current solution required me to open the humidor and check the meter inside it.

The brainstorm

Two important non-functional requirements were wireless communication from the humidor, so I wouldn't have to make any physical modifications to it, and a long battery life, so I wouldn't have to replace the battery too often. Both were met by the chosen sensor.

So what else would I need to make it happen? A place where I could process and save the sensor data and visualize it for devices regardless of their (or my) location. Enter cloud! I also had a couple of Raspberry Pi boards lying around so when the sensors arrived I was good to go.

AWS has an IoT service that can listen to messages from things (as in Internet of Things) you have registered to it. You can then do whatever you wish with those messages: save them into DynamoDB, process them with Lambda, forward them to Kinesis, etc. Just what I needed. Enter serverless! I have to say I was very excited. I had never done any IoT stuff before, so this was going to be a learning experience for me as well.

The solution

The first thing I wanted to do was read the sensor from the RasPi. A little web surfing revealed a small but enthusiastic community around the sensor, and I found a Python script doing exactly what I wanted. The communication technology would be BLE.

The next thing was to connect the RasPi to the AWS IoT service. That was also a no-brainer thanks to the AWS documentation. AWS creates certificates and keys for the thing to be authenticated with, and an endpoint for the messages. AWS processes the incoming messages with rules. A rule defines a query that parses the incoming message and the action(s) to be performed. The thing publishes its messages to a named MQTT topic via the given endpoint, and the rule is a subscriber to the same topic.
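The publishing side can be sketched in a few lines of Python. This is an illustrative sketch, not the script from my setup: the topic name, table and certificate file names are made up, and it assumes the paho-mqtt package as a generic MQTT client (AWS IoT authenticates the thing with X.509 certificates over TLS on port 8883).

```python
import json
import ssl
import time

# Illustrative names; your topic and certificate paths will differ.
SENSOR_TOPIC = "humidor/readings"

def build_payload(temperature, humidity):
    """Shape one sensor reading so an IoT rule's SQL query can pick
    the fields apart (e.g. SELECT temperature, humidity, timestamp)."""
    return json.dumps({
        "temperature": temperature,
        "humidity": humidity,
        "timestamp": int(time.time()),
    })

def publish_reading(endpoint, payload):
    """Publish one reading to the AWS IoT endpoint over MQTT/TLS."""
    import paho.mqtt.client as mqtt  # third-party package, assumed installed
    client = mqtt.Client()
    client.tls_set("root-ca.pem",
                   certfile="thing.cert.pem",
                   keyfile="thing.private.key",
                   tls_version=ssl.PROTOCOL_TLSv1_2)
    client.connect(endpoint, 8883)
    client.publish(SENSOR_TOPIC, build_payload(*json.loads(payload)) if False else payload, qos=1)
    client.disconnect()
```

A rule subscribed to `humidor/readings` then receives each JSON document and can write the parsed fields straight into DynamoDB.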

I chose to have my IoT rule save the data into DynamoDB and to implement a serverless website using S3 and Lambda. S3 is an object storage service that is perfect for hosting static HTML files, and Lambda is a compute service for running code without having to worry about any infrastructure. My Lambda function fetches the data from the DynamoDB table and is called via Ajax from the HTML page through API Gateway.
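The read path can be sketched as a small Lambda handler. This is an assumed shape, not my exact code: the table name `HumidorReadings` and the attribute names are illustrative, and a real table would be queried by key rather than scanned once it grows.

```python
import json

def to_chart_points(items):
    """Turn DynamoDB items (as returned by boto3's Table.scan)
    into [timestamp, value] pairs for the graph library.
    Attribute names here are illustrative."""
    return [[int(i["timestamp"]), float(i["humidity"])] for i in items]

def lambda_handler(event, context):
    # boto3 ships with the AWS Lambda Python runtime.
    import boto3
    table = boto3.resource("dynamodb").Table("HumidorReadings")  # assumed name
    items = table.scan()["Items"]
    return {
        "statusCode": 200,
        "headers": {"Access-Control-Allow-Origin": "*"},  # needed for the Ajax call
        "body": json.dumps(to_chart_points(items)),
    }
```

API Gateway maps the HTTP GET from the page to this handler and returns the body to the Ajax caller, which feeds it to the graph.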

Finally, I wanted the lightest and simplest JavaScript graph library to visualize the sensor data from my humidor. See the screenshot above. A bit of a boring graph, I know. But luckily it is not an EKG!

At the time of writing, the data flows once every 15 minutes from the sensor into DynamoDB, and it is read from there by Lambda whenever the HTML page is loaded. Maybe I'll implement some alarms next?

What did I learn?

First of all, I learned once again that serverless services are extremely fast and easy to use, for example for prototyping. They enable individuals and businesses to do things that would have been impossible, or at least expensive, in the past. They also make it easy to explore new ways of doing things and doing business. My rough cost estimate for this solution is 1-2€ per month after the AWS free tier has been eaten. So it is fast, easy and cheap as well.

Secondly, I learned a lot about how DynamoDB works. There were quite a few tricks along the way. For example, it allows you to point the TTL setting at a field containing epoch seconds as a string, but then it won't do anything: TTL only acts on Number attributes.
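The gotcha is easy to show with two item shapes (attribute names here are illustrative, not the ones from my table): DynamoDB accepts both items without complaint, but TTL silently skips the one whose expiry attribute is a string.

```python
import time

# Expire 30 days from now, as epoch seconds.
expiry = int(time.time()) + 30 * 24 * 3600

item_never_expires = {
    "sensor_id": "humidor-1",       # illustrative key
    "ttl": str(expiry),             # String attribute: TTL ignores it
}

item_expires_correctly = {
    "sensor_id": "humidor-1",
    "ttl": expiry,                  # Number attribute: TTL deletes it after expiry
}
```

Both writes succeed, which is exactly why the mistake is hard to spot: nothing fails, the old items just never disappear.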

The second official Siili CraftCon was held before the summer holidays in 2017. It is an internal craftsmanship conference for all craftsmen and -women at Siili. This time it was half a day and three tracks' worth of pure skillz, with topics such as "how to be a tech lead", "data driven design", "RPA" and more.

I had the pleasure of speaking about serverless architecture to the whole crowd as the closing presentation. Since I am a keen agile/lean fan, I am also totally in love with serverless architecture and everything it has to offer in terms of reacting to changes and feedback, and the ease of implementing new features and trying out new things.

Serverless means that you only need to think about your business logic. Everything else is taken care of by your chosen vendor. All the big-name vendors have their own serverless platforms and services. In this context I am talking about the PaaS (Platform as a Service) and FaaS (Function as a Service) side of serverless.

Some may include SaaS and (m)BaaS solutions in the serverless context. SaaS stands for Software as a Service, and as the name implies, it is software you configure for your needs. Some examples are Google Apps, Dropbox and Slack. (m)BaaS is (mobile) Backend as a Service, and it provides backend services, such as authentication, mainly for mobile applications.

FaaS is a subset of PaaS and means you write your function in your chosen language (or a language supported by your chosen vendor) and deploy it. You also configure how the function is called: it can listen to events or be triggered by an HTTP request via an API gateway, among other ways. Your vendor takes care of scaling it to your needs, and you pay only for execution time. Wikipedia explains FaaS as "a category of cloud computing services that provides a platform allowing customers to develop, run, and manage application functionalities without the complexity of building and maintaining the infrastructure".
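To make the idea concrete, here is about the smallest possible FaaS function, written against AWS Lambda's Python handler signature as one example (other vendors use a similar event-in, response-out shape). The query parameter handling is illustrative.

```python
def handler(event, context):
    """Minimal HTTP-triggered function: API Gateway turns the request
    into the `event` dict and the returned dict into an HTTP response."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

That is the whole deployable unit: no server process, no framework, no infrastructure code. Everything around this function, such as routing, scaling and billing per invocation, is the vendor's job.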

PaaS includes a lot more than just FaaS. Widely available PaaS services include messaging, databases, big data, analytics, file storage, etc. They are all services you launch and configure. You can insert your FaaS function into a PaaS workflow and use all the other available services with it. Again, your vendor takes care of your infrastructure needs like scaling and backups, and you pay for what you use. Wikipedia explains PaaS as "a category of cloud computing services that provides a platform allowing customers to develop, run, and manage applications without the complexity of building and maintaining the infrastructure". See how it differs from FaaS?

In short, serverless means you are responsible only for your code and your data.

At Siili we have a lot of internal serverless development on top of all the fancy stuff we create for customers. We also have cloud sandboxes freely available for all Siilis for learning and trying out serverless stuff.

Back in the day there weren't any tools for continuous integration or delivery (like Jenkins or Ansible). There was only ad-hoc build, manual testing and deployment done by developers from their own computers, and that was surprisingly OK (or maybe they just didn't know the power of CI/CD tools yet). Actually, it's fine even today if we look at the CI/CD pipeline from a Lean perspective: automate only the necessary parts of the value stream to get things done, but remember to measure and improve the processes if needed (by creating a feedback loop and continuously measuring the process in retrospectives).

Sometimes it is better to build your own tools instead of forcing yourself to use ready-made ones. I remember the times before Ansible when I scripted a somewhat vague deployment pipeline in PowerShell to deploy PHP and Java artefacts to a Windows environment, alongside .NET stuff packed into NuGet packages, over TeamCity and Octopus Deploy. I know that was a pretty questionable solution, and I'm sure I won't do it again, but at the time it felt good enough (a compromise), and I learned a lot about building things pretty much from scratch. I found that a complex CI/CD process can be an indicator of massive technical debt or of cultural or process problems. But anyway, it worked, and that was the most important thing: the customer got the solution, and our team delivered working software to production many times a day.

Nowadays CI and CD pipelines can be expressed as code and stored in the VCS instead of managing configurations from the tool's own GUI. Expressing configuration as code is good because, for example, Ansible/Docker/Jenkins configurations are as valuable as the code of the main business logic, and it's fair to manage them with the same processes (pull requests, show-and-tell sessions, testing, etc.). That approach turns the focus onto the actual value and leads the change closer to a DevOps culture.

Another good thing about modern CI is that parallel processing is easier to set up. Today it's possible to spin up a flexible number of Jenkins slaves in Docker containers if needed, or to execute unit, integration and system tests in parallel with various tools like Gradle, Selenium Grid 2 or Robot Framework with pabot (for parallel end-to-end testing). Of course, cloud-based CI services like Travis or CircleCI are pretty handy when a project is suitable for them.

But the basic idea for me is still to see the CI/CD pipeline as part of the value stream. It's all about choosing the right tools for the right case, or crafting your own tools if there's no suitable one available.