The term GraphQL is getting a lot of attention lately. As its name suggests, it’s a query language, just like SQL (Structured Query Language). Unlike SQL however, it’s used not in database interactions, but in web APIs. To explain the benefits it can bring there, we’ll explore GraphQL in a series of 3 blog posts, of which this one is the second:

Context: in that blog post we put GraphQL in its historical perspective and wider context, and discuss its general pros and cons

Syntax: this blog post will offer an introduction and general overview of the GraphQL syntax and how it’s used to collect data by the consumers of your API

With Java and Spring: we’ll discuss 2 prominent Java libraries that enable you to create a GraphQL server API. The first relies on plain Java while the second builds on the Spring framework

Schema

A GraphQL API consists of a single endpoint, with a single schema that defines the data that is available to be queried. For the code examples below, we’ll start off with a hypothetical GraphQL API with the following schema.
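A schema along these lines fits the description that follows (the exact field names, types and the deprecation reason are assumptions for illustration, reconstructed from the discussion below):

```graphql
scalar Date
scalar Url

type Query {
  students: [Person]
  student(id: ID!): Person
}

type Person {
  id: ID
  fullName: String
  firstName: String
  lastName: String
  dateOfBirth: Date
  address: Address
}

type Address {
  street: String
  city: String
  website: Url @deprecated(reason: "No longer maintained")
}
```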

In our example, the root has two sub-objects: students (a list of Person) and student (a single Person).

The student object takes an id argument of type ID, which is required (hence the exclamation mark).

You’ll already have noticed that a GraphQL schema is strongly typed, including the types Person and ID. Of those two, ID is a scalar (a primitive leaf value) just like String, Int, Float, and Boolean. These scalars are built-in, but an API can also define additional custom scalars, such as Date and Url in the example above. These two are of course returned as simple string values, but they have the guarantee from the API that they will always have the correct format (e.g. ISO-8601 standard date format and a valid URL, respectively).

The Person and Address types are custom types, which are composed of scalars and/or other custom types. In the Address definition, you’ll notice the @deprecated directive, which includes a reason for the field being deprecated.

There’s a bit of complexity in such a schema. However, a major benefit of GraphQL is that this schema serves as a contract and simultaneously as concrete documentation, which for most APIs is either missing or requires a substantial amount of effort to create and to maintain.

Queries

Based on the schema we’ve defined above, we can retrieve data from the GraphQL API via queries. These contain a description of the information that you want to retrieve, including the object (or objects) and the fields that you are interested in. Just like the statically typed schema, a query is also very explicit and concrete. Take for instance the query:

{
  students {
    fullName
  }
}

This is probably the simplest kind of query you can perform with GraphQL and it will return all the students and for each student will include their fullName – nothing more and nothing less. Note that the data is returned in a JSON format, and that the data itself is always placed under a data field, which leaves room for other top-level response fields (e.g. meta or errors).
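With invented sample data, such a response might look like this (the student names are made up for illustration):

```json
{
  "data": {
    "students": [
      { "fullName": "Jane Doe" },
      { "fullName": "John Doe" }
    ]
  }
}
```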

Keep in mind that we’ve been querying the students object, which will always return the full list of students. In our example, if you want to fetch only a single student, you need to rely on a different object (student) and pass it an argument.

{
  student(id: 2) {
    firstName
    lastName
  }
}

Just to illustrate that it’s possible, instead of asking for the fullName, we’re now requesting the firstName and lastName. This query will return:

{
  "data": {
    "student": {
      "firstName": "Jane",
      "lastName": "Doe"
    }
  }
}

Finally, it’s important to note that if you for instance need both of the last two queries (i.e. students and student(id: 2)), there is no need to perform separate API calls. You can simply use:
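A single request that reuses the field selections from the earlier examples:

```graphql
{
  students {
    fullName
  }
  student(id: 2) {
    firstName
    lastName
  }
}
```

The response will then contain both results side by side under the data field.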

Mutations

So far we’ve only retrieved information with the API. Usually though, it’s also necessary that the end-user can push or modify data. In GraphQL this is done via mutations. To continue with our example, we’ll add the following to our schema:
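Based on the description that follows, the schema addition would be along these lines (the argument names, types and nullability are assumptions):

```graphql
type Mutation {
  createStudent(firstName: String!, lastName: String!, dateOfBirth: Date!): Person
}
```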

This createStudent mutation allows us to pass a first name, last name and date of birth, and will return a Person. Just as in a regular query, we can specify the information we want to have returned. For example, we can simply ask for the id.
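Such a mutation request, with invented sample values, might look like:

```graphql
mutation {
  createStudent(firstName: "Jane", lastName: "Doe", dateOfBirth: "2001-02-03") {
    id
  }
}
```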

Given that this equates to a method call with input parameters and an output value, the complexity of the query and the effect that it can have is entirely up to the API developer to decide. As such, we’re still left with finding the balance between a single mutation that can do many things versus several mutations that have a single, focused concern.

Conclusion

Since the GraphQL syntax is quite expansive (as you’ll notice from the online specs), we’ve only managed to go over part of it here. That part should however suffice to already accomplish plenty with GraphQL and allow you to leverage much of the benefit that GraphQL can provide. For that, you’ll of course need to have a server application that provides a GraphQL API, and that’s what we’ll cover in the next and last blog post of this series.

Alain Van Hout

Java Software Crafter

Alain Van Hout (36) is a Java Software Crafter with a Master in Science (Biology) and experience in academic research concerning evolutionary biology. Currently, Alain is working on web applications and API integrations at the genomics data analysis provider BlueBee (Mechelen), where he combines his background in biology with his passion for software Craftsmanship.

Breaking away from framework lock-in with Web Components

In this blog post we’ll have a look at how Web Components can help in breaking away from framework lock-in.

The problem

Imagine a company where you have different domains with different teams that are working on different applications using different frameworks (e.g. Angular, React, Vue).

Each domain has a domain manager with their own experience, personality and opinions. Together with a domain architect, they make decisions that impact every single member of the domain. Those decisions define the use of a certain technology, which methodologies and workflows to follow, whether or not the team will work agile, and so on. When working in their own domain bubble, chances are fairly high the team will act like a well-oiled machine. Work will be delivered on time, there will be no discussion about the chosen approach, and everyone, including upper management, will be happy.

This is what we call a siloed approach, and it is something you’ll want to consider breaking out of. A siloed approach doesn’t stimulate the sharing of information between employees of different domains in the same company.

Imagine that upper management has requested the domains to have a bi-weekly get-together to discuss what they are working on and how they are approaching their projects. After a few get-togethers, it becomes clear that there is a huge amount of discrepancy between the different domains, with an impact on budget, productivity and the portability of team members.

One thing that stands out is the list of frameworks in use: Angular, Vue, React, Svelte and even server-side rendered applications with Java and .NET. Every team is writing everything from scratch based on the wireframes they receive or, with any luck, actual designs.

The solution

Say hi to Web Components. Web components are a set of web platform APIs that allow you to create new custom, reusable, encapsulated HTML tags to use in web pages and web apps.

Web Components have been around for quite some time, and today there is no reason not to use them, since every major browser supports them.

You can write your own Web Components from scratch, but this can be quite hard to do. So why not use a third-party library? There are plenty of them, such as Stencil, Polymer, LitElement, and so on.

Personally I’m a big fan of Stencil. Stencil is a compiler that generates Web Components (more specifically, Custom Elements). Stencil combines the best concepts of the most popular frameworks into a simple build-time tool.

Things that make the developer experience great with Stencil:

Built-in dev-server for hot module reloading

Screenshot visual UI diffs

Auto-generate component documentation

When deciding on which third party library to use, it’s important to make a decision that fits your needs. Does the library cover the features you want? Will the Web Components generated just work like a regular HTML element? What is the size of the library and elements created?

Now you might be thinking: “So you are ditching frameworks in favor of using Web Component libraries? Doesn’t that create a lock-in?” Yes and no. You are indeed dependent on a specific library for creating your Web Components, but once generated they can be used in whichever framework you would like. So sharing between projects is perfectly possible.

Conclusion

I really think Web Components are the way to go. Using Web Components doesn’t imply a complete company culture overhaul, since you can start small. Over time, components can be distributed across different domains, teams and projects, which allows them to be more productive. In return, the time and budget saved can be used to create better user experiences and even better services.

The ins and outs of GraphQL: Context

In this blog post we’ll put GraphQL in its historical perspective and wider context, and will discuss its general pros and cons.

The term GraphQL is getting a lot of attention lately. As its name suggests, it’s a query language, just like SQL (Structured Query Language). Unlike SQL however, it’s used not in database interactions, but in web APIs. To explain the benefits it can bring there, we’ll explore GraphQL in a series of 3 blog posts, of which this one is the first:

Context: in this blog post we’ll put GraphQL in its historical perspective and wider context, and will discuss its general pros and cons

Syntax: this blog post will offer an introduction and general overview of the GraphQL syntax and how it’s used to collect data by the consumers of your API

With Java and Spring: we’ll discuss 2 prominent Java libraries that enable you to create a GraphQL server API. The first relies on plain Java while the second builds on the Spring framework

Early web API history

Software applications often have a need to talk to each other, whether it be banking applications that need to collaborate for financial transactions or a single-page web app that needs to retrieve information from its backend server application. Ideally, this is done with a degree of reliability and in a way that’s easy for software developers to implement and maintain.

First defined in 1998 (and published in 2000), Simple Object Access Protocol (SOAP) was a big step forward in that regard, because it provided a uniform way for two applications to communicate, with a clear contract on the format of the data and with increasingly widespread support by programming libraries. Because SOAP is based on official web standards that are maintained by the World Wide Web Consortium (W3C), and inherently involves security functionality, it is a reliable choice for API development. Even so, it involves a lot of protocol and programming ceremony, complicated data structures, not to mention a lot of XML.

REST (short for representational state transfer) originated in Roy Fielding’s PhD dissertation in 2000. Unlike SOAP, it’s an architectural style rather than a protocol. Despite that difference, in practice it offers a viable and more developer-friendly alternative to SOAP, by relying on meaningful and consistent URLs (e.g. /doctors/5/appointments), HTTP verbs (GET, POST, etc), statelessness, and (in practice at least) JSON as the typical structure of the data. By 2010 the popularity of REST had overtaken SOAP (which nevertheless still serves as the backbone for many systems – particularly in finance).

Problems with REST

Because REST emphasizes having separate endpoints for each of your resources (e.g. patients, doctors, appointments, schedules, etc), the number of endpoints can become quite large. That adds a lot of flexibility, but can also mean that your frontend needs to send a large number of requests to fetch all the data for a single page of your webapp.

On top of that, a REST API response for a given endpoint generally has a fixed format and fixed degree of detail, regardless of what information the user of the API does and does not need. As a result, data transfer in a REST API can be suboptimal (or require additional developer effort to make the response format more flexible, e.g. via query parameters).

Enter GraphQL

At least in part due to these issues with REST APIs, in 2012 Facebook started working on an alternative approach, called GraphQL. That work remained internal until it was publicly released in 2015 and finally moved to a separate foundation in 2018, where the GraphQL specification is managed.

In brief, GraphQL is a specification that details how to use a single HTTP call to retrieve data that is tailored to meet the API consumer’s exact needs. The basic trick is to send a POST request whose request body contains all the information the server needs to return all the data the consumer needs, and only the data that the consumer needs.

Because it is (based on) a specification, GraphQL provides predictability and reliability more similar to SOAP, while its JSON-esque syntax and lack of complex ceremony allow for more developer convenience. Furthermore, because a GraphQL API consists of a single endpoint and allows the developer to specify exactly what data should and should not be returned, it also avoids many of the issues with REST.

As is evident from its name, it is a query language. And like SQL, it builds on a schema, has the equivalent of tables and where-clauses, and allows you to declaratively specify exactly what information you want to retrieve. It also distinguishes between queries, which return information (like SQL select statements), and mutations, which modify data (somewhat like SQL update statements, but potentially more powerful).

GraphQL caveats

Although GraphQL offers a useful alternative to e.g. REST, this does not mean that it can entirely replace REST, that it’s always the optimal choice or that it does not have its own constraints and caveats.

The first thing to note is that GraphQL adds quite some complexity. In larger web APIs that serve many different consumers, that complexity has a high return on investment, but in smaller APIs it may be more prudent to stick with REST. Similarly, when the API provider and API consumer are tightly coupled (e.g. for a small internal application), you may not need the flexibility that GraphQL offers.

Another issue stems from the fact that GraphQL always uses the HTTP POST verb, regardless of whether you’re requesting data or performing an update, because you need to pass your query via the request body. As a result, the well-established HTTP caching features are not in play when using GraphQL (although workarounds do exist).

Furthermore, despite being statically typed, GraphQL still lacks (stringent) conventions with regard to how errors are returned (besides using an error field). This means that you still need to provide API documentation and/or that the API consumer is required to parse and loosely interpret the error content. Contrast this to REST, which builds on the HTTP spec and its well-defined and widely used range of status codes.

Similarly (and again contrary to REST), the specification does not offer a way to deal with other content types (i.e. mimetypes; for example when uploading or downloading files), because GraphQL depends on JSON-like queries and JSON responses.

Despite these constraints, the real take-home message here is to choose your tool based on the job at hand. Some use cases may not call for GraphQL, but in others it will definitely shine.

Conclusion

It should be clear that GraphQL has a lot of promise, learning from and building on the best parts of its predecessors. In the next part of this series, we’ll go into the syntax of GraphQL, to be used by a frontend application or any other API consumer that requires tailor-made complex data.

]]>https://www.continuum.be/2020/03/03/the-ins-and-outs-of-graphql-context/feed/0To pair or not to pair…https://www.continuum.be/2020/02/28/to-pair-or-not-to-pair/
https://www.continuum.be/2020/02/28/to-pair-or-not-to-pair/#respondFri, 28 Feb 2020 10:35:57 +0000https://www.continuum.be/?p=29155Is pair development the new way to do development, is it the goose with the golden eggs, or is it just another way to write software…

Is pair development the new way to do development, is it the goose with the golden eggs, or is it just another way to write software…

Well, I think, as always, that the truth lies somewhere in between. But first things first: pair development, for me at least, goes beyond development alone. It is a technique that can be used for lots of things: for development, for designing, for infrastructure changes, even for activities way beyond the scope of an IT department. I’m thinking about activities like cooking or crafting, as long as there is a creative aspect involved. In all those cases, two minds working together will provide far better results than single minds working in separation.

What is pair development?

But what is pair development exactly? Well, let’s take the example of the developer. In that case we need two developers (obviously), one computer and one clearly defined piece of software to work on. The first developer takes a seat at the keyboard and does the actual coding (the driver), while the second developer observes, checks the code and keeps the bigger picture in mind (the observer).

Advantages of pair development

The duality between the low-level view of the driver and the high-level view of the observer makes it easier to spot mistakes and correct them early, which makes code reviews unnecessary and speeds up the production readiness of the product. The driver and observer switch roles on a regular basis to keep them focused and to prevent fatigue.

Another advantage, besides the early spotting of bugs, is the fact that both developers immediately have the same level of knowledge of the piece of software, meaning there is no need for costly knowledge-sharing sessions. This leads to a climate where developers can challenge their pair’s knowledge, and where they can challenge each other by writing tests to break their pair’s code. In the next iteration, when the developers switch roles, the issue gets (hopefully!) fixed and a new, possibly even harder test is written.

Pair development leads, for different reasons, to a higher velocity. Teams can deliver more value in the same amount of time. In other words, pair programming leads to faster returns on the time invested by the developers, which in turn leads to a shorter development time. The shorter development time, in combination with the lower number of bugs, will hopefully increase the trust of management. Higher trust can lead to a company that is more open to pair development, or even to the introduction of pair development principles in completely different departments like HR, housing or even management itself.

How to facilitate pair development?

To create a fertile atmosphere for pair development, not only does management have to trust the developers, but the developers should also trust their pairs. Pairs typically spend a lot of time in close alignment. They should know each other’s ins and outs, they should be open with each other and respect their pair’s privacy at the same time. Only that way can pair developers be vulnerable, drop their egos and start working together as a close and trusting combo.

In such a combo, partners should respect each other, and the basics are, as always, the most important. Pairs should use gentle language, and they should not interrupt each other. It is important to take the time to listen to each other and to explain things when they are not clear, even more than once. And perhaps most important of all: nobody owns the code and (especially!) nobody created the bugs…

I also think it is very important that pair development remains a choice for the developer; it should never be mandatory. Some developers won’t thrive in pair development and will work better in separation. That does not mean that we cannot try to convince these developers of the benefits, but in the end the developer has to decide for himself or herself.

Conclusion

Is pair development the solution to everything? I don’t think it will work in all cases, but it is a very satisfactory way of working that can lead us a big step in the right direction…

The 2019 edition of Devoxx was a fun one, with some great talks. For me personally, the talk by Martijn Verburg, ‘Cloud Native Diabolical Developer’, really stood out. He was the CEO of JClarity and is now the Principal Engineering Group Manager (Java) at Microsoft. He is very charismatic, and during his session he challenged everything about cloud native.

It was a bit of a controversial talk: is cloud native really the way to go?

Before we continue, I want to point out that the rest of this blog post reflects the speaker’s opinion, not necessarily my own. However, I found it very interesting to look at the cloud native movement from a different perspective.

Cloud native break down

DevOps

These days software programmers are asked to be full stack developers. In recent years programmers were not only asked to be able to write backend and frontend code, but also to be a DevOps engineer. Everybody wants full stack developers! You have to be an expert in Java, an expert in JVM garbage collection, an expert in Docker and Kubernetes, know some shell scripting, some command line for your particular cloud vendor, package managers, some React or Angular for the frontend, some basic networking/firewalls/ports, service meshes, circuit breakers… and the list goes on.

There are a lot of tools that DevOps engineers can utilize these days (Puppet, Terraform, Chef, …) all with their specific commands and flows. The guidelines stipulate that a DevOps engineer should be in the same team as the developers. However, in practice there are mostly DevOps teams. According to our speaker – Martijn Verburg – this is not necessarily a bad thing. You can’t be an expert in every area, so it is good to have specialists around.

Continuous Delivery

The reason why we have DevOps is to do continuous delivery. Instead of doing deploys manually, you now have pipelines that build your code somewhere on a server. It runs performance tests, integration tests, E2E tests, … If all goes well, your pull request is accepted. If some tests fail, you have to fix them. If someone has pushed their code by then, you fix merge conflicts before going through the pipeline again, hoping to succeed this time.

Many teams have big monitors to follow the statuses of all the different build pipelines. However, it is very rare to see all the pipelines turn green on that big screen monitor. Sometimes teams even take a sprint to fix these broken builds, just to see them red again after a few weeks.

Microservices

Microservices have been the go-to architecture for many new projects. But microservices very easily become entangled with one another if you are not careful. They become hard to debug, and versioning becomes very important. When performance issues arise, many fall back to sharing the same data store across multiple microservices to solve the issue. But then the isolation aspect of the microservice architecture no longer holds up.

Containers

Containers are yet another layer of abstraction. We, as developers, write Java code, which compiles to Java byte code, which gets (if you are lucky) hot-spotted to machine code. It then runs in a container, which runs in a Kubernetes pod, and that pod runs on a VM somewhere. The VM runs on an operating system that actually runs on a piece of hardware. Needless to say, everything gets more complicated, and performance-wise it doesn’t necessarily get any better.

Docker and Kubernetes each have a lot of commands that you need to learn and master. For Kubernetes it is even best that you learn Helm as well. Kubernetes can do wonderful things, if you get it right. But there is a steep learning curve and there are many pitfalls (readinessProbe vs livenessProbe, rolling upgrades, …).

Conclusion

We used to write code that got us to the moon without all the latest tools, microservices and the Agile movement. You have to ask yourself: do you really need all these tools and architectures? They make things more complex. Not all projects are Netflix or Twitter. Sometimes you just need a monolith. Build for what you need, not what you want, or, as he calls it, you end up with Résumé Driven Development.

Supersonic Subatomic Java

For the past several years, I’ve been fascinated by the microservices movement: the architecture, the promises and how to achieve them. I’ve experimented with frameworks and tools like KumuluzEE, Javalin, Micronaut, Spring Boot 2, MicroProfile and with the early releases of Quarkus.

Quarkus version 1.0 has just been released and I think that it’s time for a proper introduction.

Why Quarkus?

The migration of a monolithic architecture to a microservice architecture will offer you more agility, scalability and faster business reactivity. You can implement new features more efficiently and more quickly without impacting other services.

What is Quarkus?

Quarkus is designed with the emphasis on Cloud Native, Microservices and Serverless applications. You can use Quarkus for monolithic applications, but it’s not designed for that use case. It offers significant runtime efficiencies, like faster startup (tens of milliseconds), low memory utilization, and a smaller application and container image footprint. Quarkus brings you the following 4 benefits, which we will dig into below:

Developer Joy

Supersonic Subatomic Java

Unifies Imperative and Reactive coding

Best of Breed Frameworks and Standards

1. Developer Joy

The platform offers a reload of your application in the blink of an eye. This means that for any change that you make to your code or configuration, the application is immediately reloaded. No need for a new build and no need to restart your application.

Quarkus offers the opportunity to unify all of your configuration in a single file. This can include items like database configuration, properties for different environments or library-specific properties. As a result, there is no longer a need for multiple configuration files.

Generating a native image for GraalVM can be quite a hassle. Quarkus, however, simplifies this for you tremendously. All you need is the following Maven command: mvn package -Pnative.

2. Supersonic Subatomic Java

Supersonic refers to the startup time of your application, while Subatomic refers to its total memory usage. As mentioned above, Quarkus brings the former down to tens of milliseconds while also keeping the latter low.

3. Unifies Imperative and Reactive coding

Quarkus gives you the ability to write imperative and/or reactive coding in a single platform. This is possible due to the fact that Quarkus uses Vert.x and Netty, which form the reactive core of the framework. You can find detailed information about this architecture by reading this blog post.

4. Best of Breed Frameworks and Standards

The Quarkus platform offers you the most used frameworks such as Eclipse MicroProfile, JPA / Hibernate, JAX-RS / RESTEasy, Eclipse Vert.x, Netty and more. Quarkus also includes an extension framework that third-party frameworks can use to extend it. The Quarkus extension framework reduces the complexity for making third-party frameworks run on Quarkus and compile to a GraalVM native binary.

How does Quarkus work?

The startup time of a framework is determined by several processes that traditionally run every time the application boots.

Quarkus tries to move as much of these processes as possible from startup to build time. Because of that, a lot of the work only has to be done once, not at every startup, and fewer classes need to be loaded. This has a positive impact on memory consumption and startup time.

To go even further, you can make use of native compilation for GraalVM. This process eliminates as much ‘dead’ code as possible, further decreasing the size, memory consumption and startup time of your application.

Conclusion

As you’ve seen, Quarkus promises faster startup and lower memory utilization than traditional Java applications. It offers an effective solution for serverless, microservices, containers, Kubernetes, FaaS and the cloud because of its container-first approach. Moreover, you can use imperative and reactive coding within a single platform. Quarkus supports the most used frameworks and offers an extension framework that those frameworks can use to become compatible with Quarkus and compilable to a GraalVM native executable.

During my visit to Spring IO 2019, I joined in on a presentation on the OWASP top 10 security issues and what tools there are to check for those. It was a very interesting presentation and it also made me realize that security in applications is something I do not know much about, not yet anyway. At another talk, one of the presenters demonstrated KeyCloak as an identification provider and management tool, which brings us to this blog post.

Why?

On its website, the company calls itself a provider of open source identity and access management for modern applications and services. This is quite a promise, but it can also be used for existing, more traditional applications. Since 2013 they have provided a Java-based Authentication and Authorization Server. The project started as part of the JBoss community and has since been taken in by Red Hat. Another nice feature is that it’s open source, so if needed you can extend it as you require.

At the first run, you are asked for an admin user and password. Afterwards, you can click on the link to the main admin console where you can set up a security realm with several users.

Out of the box, a demo realm named acme is defined. For this demo I’ll aptly rename it to ContinuumSecure.

In the past, implementing a user sign-in screen and keep-logged-in functionality was always a long and difficult task, with lots of stress and problems along the way. KeyCloak provides numerous possibilities out of the box. Just by flipping a few toggles on the login tab, you can: add a user registration link, allow a user to log in with their email address instead of the username, allow a user to change their password if they forgot it, and even force the login procedure to require SSL. You can even add themes to KeyCloak to style the login page in the style of the application you are integrating with.

Where implementing such functionality yourself would take several weeks, it now only takes minutes, which is really impressive.

Another nice feature is that once a user is logged in, you can provide a link that allows them to update their account without intervention from a developer or administrator, enable two-factor authentication for the application, or view all the applications to which they have access. And all of that is customizable to match the rest of your application.

To connect your application to the realm defined in KeyCloak, you need to add a client for the application in KeyCloak. In the admin console this can be done in the second menu item on the left-hand side of the console aptly named Clients.

On the right you see the button to add a new client for your application. Click it and a new window opens to allow configuration of the client.

You can either import a client configuration or configure it yourself. Add a name (client ID) and select the client protocol, which can be OpenID Connect or SAML. After saving the new client, you can configure it further.

This allows you to configure several application URLs, to enable proper redirecting after authentication and so on.

On the second tab, you can create a number of roles for that specific client which you can later assign to a number of users depending on your use case. Roles can also be defined on a realm level and be used across different clients.

KeyCloak also allows you to register different identity providers such as Google, Facebook, GitHub and others.

Furthermore, user federation can also be configured to connect to an existing Kerberos or LDAP setup within your company.

The last part I want to show you is the user management. With a few clicks, you can also manually set up user groups and accounts using the lower part of the left-hand menu.

And configure which actions they must complete the first time they log in, like completing their profile, updating their password or validating their email address.

After creation, you see all the users and several administration features:

This is just a small selection of the features KeyCloak provides out of the box. If that is not enough for you, KeyCloak is open source software, so you can always add your own features.

SSO using KeyCloak

SSO, or Single Sign-On/out, means that once you have authenticated, you are authenticated for all applications that use the same KeyCloak server. KeyCloak supports implementing this using both OpenID Connect 1.0 and SAML 2.0. It also provides single-logout functionality, letting you log out once for all applications.

To integrate this into applications, KeyCloak provides adapters supporting the different standards and specific implementations such as Spring Boot, Spring Security, Angular, NodeJS, Tomcat, …

KeyCloak also provides reverse proxies. That way you don’t even need to integrate your application with it: just place it behind a protective shield and access it via redirection.

To integrate KeyCloak with a Spring application using Spring security just complete the following steps:
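A minimal sketch of what such a configuration might look like in application.properties, using the (Spring Boot) Keycloak adapter properties. The realm name, server URL and client ID below are assumptions for this demo:

```properties
# Keycloak Spring Boot adapter configuration (example values)
keycloak.realm=ContinuumSecure
keycloak.auth-server-url=http://localhost:8080/auth
keycloak.resource=demo-app
keycloak.public-client=true
```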

The value we specify in keycloak.resource matches the client we named in the admin console.

Next, we add our dependency for Spring security. There is a Keycloak Spring Security Adapter and it’s already included in our Spring Boot Keycloak Starter dependency. We’ll now see how to integrate Spring Security with Keycloak.

RxJS mapping operators

When do you use which RxJS mapping operator?

Anyone who has built an application with Angular or who has used RxJS knows the different RxJS operators, and in most cases the usual suspect: the ‘map’ operator. But when do you use which mapping operator? You can use the simple ‘map’ operator, but also the more advanced mergeMap, concatMap, exhaustMap and switchMap operators. Explanations on the internet are not always clear, and everyone has their own idea of when to use which operator.

In the code sample below, you can see a simple ‘map’ operator used for mapping the strings from an observable to a more extended string:

This works just fine on simple values or objects, but things get a little bit more complicated when dealing with more complex things…

Mapping failure

When calling an API, or when you’re connected via a WebSocket, and you want to directly use the observable without subscribing, you’ll need to map these values and return their inner observable. You could try to do that by again using the simple ‘map’ operator, but this results in the following code sample:

Since the map operator doesn’t do any ‘flattening’ of the inner observable, we just get a returned observable wrapped in another observable, which is useless to work with.

Advanced mapping

For these more advanced operations, RxJS provides mapping operators that do both mapping and flattening. Flattening is the process you would typically do in the subscribe on an observable: you subscribe on an outer observable, and in that subscribe you subscribe again on the (inner) observable emitted by the outer observable. You could do this with the simple ‘map’ operator, by literally following the sentence above, as you can see in the code sample below:

However, doing a subscribe in another subscribe is bad practice and can cause memory leaks in your application. To fix this problem, you can use the more advanced mapping operators that do the mapping + flattening all in one operator.

Fixing the code with a mergeMap operator would look like this:

The mergeMap operator first maps the values of the outer observable, then flattens the returned inner observables so that we can use them in our outer subscription.

The difference between the advanced mapping operators

As mentioned before, there are several advanced mapping operators: mergeMap, concatMap, exhaustMap and switchMap. In general terms, every advanced mapping operator does the same thing, mapping plus flattening of an observable, but each operator has its own distinct outcome.

Imagine yourself trying to get a coffee in a coffee bar. You and another person are waiting for the bar to open. The barista is getting everything ready and is going to open the bar. From here on the mapping operators will decide how you and the other person will create the queue for getting that coffee.

MergeMap: As soon as the bar opens you and the other person are running to the barista in chaos and trying to get there first. There is no clear queue and you two could potentially order your coffee at the exact same time.

ConcatMap: As soon as the bar opens, you and the other person form a neat queue and get served by the barista in the order you’re queueing. First come, first served, no matter how long the queue gets. This could cause the problem that the queue gets very long and people at the end of it have to wait a very long time for their coffee.

ExhaustMap: As soon as the bar opens, you’re standing in a fair queue with the other person behind you. However, your order takes too long for the person behind you, and they leave the bar without their cup of coffee.

SwitchMap: As soon as the bar opens, you order a coffee. A few seconds later the other person arrives and jumps in front of you, ordering his coffee. Unfortunately, your order gets cancelled by the barista, and the other person gets his coffee, unless yet another person arrives, which would cancel his order as well.

Conclusion

All advanced mapping operators work the same way (mapping plus flattening), but can have different outcomes depending on the context or situation.

mergeMap: alias for flatMap. Combine the results of two HTTP calls into a single result set.

concatMap: run an operation on each entry in a queue, and respect the order of the queue.

exhaustMap: when a login HTTP call takes some time, we don’t care if the user keeps retrying. We only try again when we’re done.

switchMap: when listening to a mouse move event, we only care about the current position. Older position values should be ignored.

Good use cases for every operator (these are only a general rule of thumb):

mergeMap: delete operations since the order does not matter and the end result should be that everything is deleted.

concatMap: update or create operations, since these can be time-sensitive and every operation should be handled no matter how long it takes.

exhaustMap: non-parameterized queries, since the results will probably be the same.

switchMap: parameterized queries, since a delayed query would contain more recent query params, which leads to the more relevant information you’re asking for.

Introduction to reactive programming with Spring

This article, written by Crafter Ward, provides an answer to the following questions: what is reactive programming and what does reactive programming solve?

What is reactive programming?

Reactive programming is all about dealing with asynchronous data streams and the propagation of change: modifications are applied to the execution environment in a specific order.

Let’s have a look at a real life example to explain this: Alain wants to spend the evening with his colleague Tim. They want to eat pizza and watch the Game of Thrones finale. Let’s outline his available options:

The synchronous approach: Alain finishes work. He goes to the pizzeria, orders pizza and waits till it is done. When he gets the pizza, he then picks up Tim and finally makes it home and starts the final episode.

The asynchronous approach: Alain orders pizza online, phones Tim, invites him to come over. He heads home, then the pizza gets delivered. He starts watching the episode while eating the pizza. He does not wait for Tim to show up before starting to eat or start watching the episode.

The reactive approach: Alain orders pizza, phones Tim, invites him to come over, heads home. The pizza gets delivered. This time, Alain waits for Tim to arrive and after Tim arrives, he starts the episode and they eat pizza together.

The synchronous approach takes way too long. He has to go to the pizzeria, wait for the pizza to be ready (during which he can’t do anything else) and leave with the pizza. Alain would probably have wanted to cancel the whole thing before getting home. Had Alain used the asynchronous approach, he would have eaten the whole pizza and watched the episode before Tim even arrived. Tim would not like this.

The only approach that makes sense is the reactive approach. Alain waits till all the asynchronous actions are completed and then proceeds with further actions.

What does reactive programming solve?

Let’s say there is a publisher of data and a consumer of that data. We have an application that deals with data at large scale and should remain resilient in the face of incoming data, errors in the system and slowdowns in the system.

Should the publisher keep feeding the consumer data if the consumer can’t handle the speed and gets overloaded? Should the user expect the program to remain unresponsive until the consumer catches up? Should the publisher know about a crash of the consumer, or should it just keep passing data that will never be used?

Reactive systems make these problems top priority. So reactive programming makes systems react to changes in data flows.

Reactive Manifesto

The reactive manifesto is a document that defines the core principles of reactive programming. It was first published in 2013. It is the bible for the programmers of the reactive programming religion and a must read for everyone starting with this. You should read it to understand what the principles are and what it is all about.

The 4 main principles:

Responsive

A responsive system is quick to react to all users in order to ensure a consistently positive user experience.

Resilient

A resilient system applies proper design and architecture principles in order to ensure responsiveness.

Elastic

The system stays responsive under varying workload, making the best use of the resources available to it.

Message driven

Reactive systems rely on asynchronous message-passing to establish a boundary between components that ensures loose coupling, isolation and location transparency.

Reactive streams

Reactive streams define a set of interfaces of how we might deal with reactive streaming situations. There are four main interfaces:

Publisher – a producer of values that may eventually arrive.

Subscriber – listens to what is published by the publisher.

Subscription – the link through which a publisher communicates with a subscriber.

Processor – a combination of a publisher and a subscriber that allows data to be processed.

A publisher is setting up the potential to publish information, but until you get a subscriber attached to it, it is not necessarily publishing that information. When we look at the imperative way, we start the application and it starts doing stuff. This is totally different in the reactive way, where it gets ready to do stuff and it waits for the signal to consume the results.
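The four interfaces can be sketched in TypeScript like this (the actual specification defines them in Java; the toy Publisher below only illustrates the "nothing happens until you subscribe and request" behaviour, and is not a spec-compliant implementation):

```typescript
// Sketch of the four Reactive Streams interfaces
interface Subscription {
  request(n: number): void; // back-pressure: the subscriber asks for n more items
  cancel(): void;
}
interface Subscriber<T> {
  onSubscribe(s: Subscription): void;
  onNext(value: T): void;
  onError(err: Error): void;
  onComplete(): void;
}
interface Publisher<T> {
  subscribe(sub: Subscriber<T>): void;
}

// Toy Publisher: emits nothing until someone subscribes and requests data
const numbers: Publisher<number> = {
  subscribe(sub) {
    const source = [1, 2, 3];
    let index = 0;
    sub.onSubscribe({
      request(n) {
        while (n-- > 0 && index < source.length) sub.onNext(source[index++]);
        if (index === source.length) sub.onComplete();
      },
      cancel() { index = source.length; },
    });
  },
};
```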

The intention behind Reactive Streams was that real-world implementations would build on this basic specification. One of those implementations is Project Reactor.

Project Reactor

Project Reactor translates reactive streams into a framework that you can use. It was started in November 2015 and forms the basis of reactor support in Spring.

Key concepts: Reactive Streams defines a single Publisher, but Project Reactor decided to have two specialized publishers:

Flux – a Publisher of 0 to N elements

Mono – a Publisher of 0 to 1 element

This decision was made because not everything is a Flux. Sometimes we know we are expecting a single value, or no value at all. So for code optimization, and to be able to work with this tool in the real world, it can be handy to narrow things down to a Mono and use Mono-specific methods.

R2DBC

R2DBC (Reactive Relational Database Connectivity) is an endeavor to bring a reactive API to SQL databases. It was first announced at SpringOne Platform 2018. It is an incubator to integrate relational databases using a reactive driver.

Key concepts:

Reactive Streams – it is founded on Reactive Streams, providing a fully reactive, non-blocking API.

Relational Databases – engages SQL databases with a reactive API, something not possible with the blocking nature of JDBC.

Scalable Solutions – makes it possible to move from the classic one-thread-per-connection approach to a more powerful, more scalable approach.

There’s more

I only talked about the basic concepts of reactive programming. There is much more to be found on this interesting subject, and it has a lot of potential for the future. I encourage everyone to start learning more about reactive programming. A good place to start would be the Spring documentation and a blog post by Matt Raible of Okta.