Exchange Support: Kyber

We've added an easy way for logged-in users to access Kyber Network. Logged-in users will now see a new tab in their profile, which takes them to the brand-new exchange section of the platform.

DevOps: Docker builds

We updated our pipelines to automatically push new builds of our main branch to Docker Hub. This gives us a way to continuously deploy new versions as they get developed. Basically, you'll get updates faster: release times go down, and smaller, more frequent releases should introduce fewer bugs than big packages would.

If you want to run your own version of Cindercloud - which you totally can, just credit us - you can use the images found on Docker Hub: https://hub.docker.com/r/cindercloud/

DevOps: Moved to a new server

Although we haven't talked much about our server architecture before (we promise we'll write a separate blog post on the matter), we wanted to mention that we moved to a new system. Our old system wasn't fast enough when it came to indexing historic transactions. At this point, we're reindexing each and every historic transaction into a new database, so in the coming days transaction history might be incomplete.

Start of Trezor Integration

Trezor will be one of the first hardware wallets we support. We haven't deployed any of this functionality yet, but expect to release it in the coming days/weeks.

Quick access to supported ERC20 tokens

At the bottom of every supported ERC20 token page, we've added a small section where you can access the contract. At this point, only constant functions are available.

General Updates

We've received our KeepKey, so we'll start on the integration as soon as possible, if it proves feasible (we're still not sure there's currently a JavaScript bridge).

Once QuickNode has added their full-archive, tracing nodes, we'll use them to index all internal transactions and to be able to give nice traces of your transactions.

As always, if you find any bugs, report them to us on GitHub. Until next week!

In this section, we'll be talking about all development updates of the platform.

Access a contract

In short:
A new section has been created which allows a user to access the constant functions of a smart contract.

In the first version of this new development, we've made it possible for a user to access a smart contract. Although the page says you can access state-modifying functions when logged in with a web3 provider, we haven't implemented that feature yet. Expect to be able to access state-modifying functions in the near future though!

Mnemonic Support

General Updates

Basically, these types of blog posts are part of our general update.

From now on, we'll be writing down all of our development updates, as well as other updates (partnerships, ideas, future integrations, etc.) in these short, weekly posts. The reason for this is that quite a few people from the community want to know when we come up with new features. Change is hard for some users, so reaching out to them with what we changed might help alleviate possible confusion.

Not many people realise it at this point, but all funds sitting in Parity standard multisig wallets are frozen, never to be accessed again. It takes nearly half a day before the news hits the public. But when it does, it hits hard.

The Parity Statements

We started digging on Etherscan to find the root cause of the problem, to see if anything could be reverted or saved.

It was this transaction that clarified what happened. During Parity's code refactoring - after the disaster that happened in July 2017 - developers at Parity Technologies extracted common code from the standard multisig wallets and put it in a single, already-deployed library contract. This way, all new multisig contracts could delegate calls to this contract, and users could save quite some gas.

It seems however that the code wasn't rigorously tested and contained a bug. A rather big bug to be honest.

Instead of keeping the initWallet code in each individual wallet, it was also extracted into the library.

In short, initWallet could also be executed on the WalletLibrary contract itself, the one library all of these multisig wallets use.

Someone did exactly that, which converted the library into an actual multisig wallet.

Moments later, the same person called the kill function.

The library is now killed, making it useless. All calls to this contract will fail. Well, almost all calls. The multisig contracts have effectively been reduced to:

contract Wallet {
    function () payable {
        Deposit(...);
    }
}

Funds are locked up, never to be returned again (*).

Now what?

At this point, there's not much we can do. All users that have (or had) a standard multisig wallet by Parity were affected, and all of their funds are locked up.

As you can see in the image below, the wallet library address is hardcoded without a way to overwrite it.

The White Hat Group compiled a list of all known affected wallets. If you are using a multisig wallet, we advise you to check whether you are among the victims.

We at FundRequest were lucky to have been using the geth-multisig, so our funds are unaffected. We're deeply sorry for all the people that lost their funds.

Hours after the actual event, reddit started flooding with people screaming that they either:

want a hard fork to save the funds

are very much opposed to a new hard fork

want a hard fork combined with the planned constantinople release

At the time of writing, no official word has come from the Ethereum Foundation about any hard fork, so it's all absolutely speculative. However, what would your stance on this matter be?

Are you pro? Opposed? Is code law, or is there a gray area? We'd like to know.

MyEtherWallet is looking for an Ethereum/MyEtherWallet expert to help with support & education. So if you can provide support or clarification in a courteous and professional manner and you have some extra time, you should contact them.

The Ethereum developers posted a new roundup on their blog. It's worth a read to see what's coming up in the very near future.

Hi crypto fans, and welcome to the very first installment of This Week in Crypto. This weekly series of small posts will recap interesting blog posts, technology announcements and discussions in the cryptocurrency sphere.

Navigate to your contract

Navigate to Verify and Publish to go to the correct verification page.

We'll need to add some extra parameters, which can only be enabled by pointing your URL to a different endpoint. Change "contractVerify" to "contractVerify2" in your URL.

Fill in the correct values

Next up, you'll need to fill in the correct values in the form. Enter the name of the contract you wish to verify and choose the correct compiler. You can find the compiler version by opening a terminal and entering the following command.

solc --version

The result will most likely be a non-nightly build.

The correct value for Runs, which is the number of times the optimizer should run, is 0.

Paste the contents of your contracts in the code box. Keep the code as it was when you deployed it, but remove all the imports: all code will live in a single file, so importing is neither necessary nor supported.

If you deployed a contract with constructor arguments, you'll need to add them in the following input box. Constructor arguments are expected to be ABI-encoded. More information on ABI encoding can be found on the page, but remember that arguments are read right to left. An example for one of our projects can be found below.

Tip: if you can't figure out your ABI-encoded constructor arguments, first look at the encoded, unlinked binary of your contract, then look at the input data of the transaction that created your contract. If your contract was deployed with constructor arguments, they will have been appended to the transaction data, so the difference between the two is the ABI-encoded constructor arguments.

Lastly, you'll need to add any libraries you're using. You can find the addresses of your libraries in the output of Truffle when you deployed them, or in the generated JSON files.

That's it: hit verify and publish. Your contract should now be verified. Your code will be visible on your contract page, and methods for reading the state of your contract will be available on the web page.

Why verify your contracts?

The reason we always verify our contracts is simple. First of all, it generates trust: people can see your contracts and verify the code against what you claim it should be doing.

Secondly, it's darn useful to have an easy go-to page to read the state of your contracts.

Considering you're reading this blog post, I'm going to assume you know what IPFS is, or have at least heard of it.
Maybe one of your colleagues dropped the name, or some random chap on the internet told you to put your website or decentralized application on IPFS.

Nevertheless, I won't be going into more detail than is strictly necessary to host some simple static files on IPFS. It's an introduction aimed at beginners who want to tinker with the distributed web.

What is IPFS?

IPFS stands for InterPlanetary File System. It's a protocol designed to create a permanent and fully decentralized method to store and share files.

It aims to be a replacement for the HTTP protocol: faster, more efficient and less expensive. IPFS makes it possible to distribute high volumes of data with high efficiency, and zero duplication means lower storage costs.

It's a peer-to-peer distributed file system, has no single point of failure, and nodes do not need to trust each other.

IPFS is highly resilient and provides historic versioning. Think of it as git for the world wide web.

Why use IPFS to host your DApps?

If you want to deliver a decentralized application, you might just be thinking about a decentralized backend. Leveraging blockchain technology as the backbone of your applications will deliver decentralized, open and highly resilient solutions. But your backend shouldn't be the only highly resilient part.

You might be relying on a single webserver configured with Apache, or simply an S3 bucket behind CloudFront, or a GitHub page with an Akamai cache. These solutions work great and will be the perfect safe haven for your day-to-day development.

But you're creating a DApp. A Decentralized Application. You want more.

5 minute tutorial

First of all, before being able to push your files to IPFS, you'll need to install it.

Now that you have IPFS on your machine, you can push files to it. Pushing files adds them to the network and returns a hash. Navigate to the folder whose contents you wish to upload and enter:

ipfs add -r .

This will add the folder to IPFS recursively, making sure all references are linked. The last hash returned is the hash of the folder, which is the one we'll need later.

If you wish to upload just a single file, type

ipfs add <file>

You can see the metadata of the object linked to any hash by typing

ipfs object get <hash>

To see our file in a browser, we'll need to start up an IPFS daemon, which handles the communication with the network. This command makes you a node on the network and starts up a gateway. Open up a new terminal and enter:

ipfs daemon

Now navigate to http://localhost:8080/ipfs/<hash> to see your folder being served over HTTP. You can also navigate to https://ipfs.io/ipfs/<hash> to see your files being served by another node. It might take a little while to load your files the first time, as they still need to propagate over the network before they're resolved by the gateway.

Extra: using DNS to point to your IPFS

We're going to throw in something extra here, because we're still stuck with hashes, which some find unappealing. Luckily, it is possible to point your DNS to an IPFS record.

First of all, you'll need to point your domain to the IP address of https://ipfs.io. Fetch the IP and change your DNS records to point all traffic to it.

The last thing you'll need to add is a TXT record in your domain settings, which the IPFS gateway checks in order to serve the correct page. Enter the hash we saved earlier in the record:

dnslink=/ipfs/<hash>

That's it, you're all set. If you navigate to your domain now, you'll be presented with the page you uploaded to IPFS.

Comment below on what applications you've been deploying on IPFS and share your experiences with me.

Netflix has always been a proud contributor to the open source world. It's fascinating to see how each of their libraries facilitates a lot of tasks and can speed up your development tremendously.

In this series of blogposts - The Netflix stack, using Spring Boot - I'll be going over some of the libraries which Netflix has created and how to incorporate them in your Spring applications. As always, it'll be more of a hands-on experience, as this blogpost will basically just be an overview of what you can find in the accompanying repository.

Feign

In part 1, we looked at Eureka. We created a microservice that could register itself on the Eureka Server, and an API that would not only register itself on the Eureka server, but also find our other microservices using the registry. Last week we had a hands-on experience with Hystrix, the circuit breaker.

There were a lot of replies from people asking how we could use the registry to find our microservices, but also use the response from Eureka to actually call them. Today I'll be talking about how Feign will aid you in creating rest clients for all of your services, with minimal configuration and code.

Feign is a java to http client binder inspired by Retrofit, JAXRS-2.0, and WebSocket. Feign's first goal was reducing the complexity of binding Denominator uniformly to http apis regardless of restfulness.

If you know Retrofit, you'll see it is very easy to create rest clients using Feign. Feign is also a declarative web service client. The beauty of the entire Spring Boot Feign stack is how it can seamlessly be combined with the other libraries we discussed and will be discussing in future posts.

A small word on Ribbon

In my initial itinerary, I planned on talking about Ribbon and Spring Cloud Ribbon in part 3 of this series. However, the actual use case in our examples would be calling rest endpoints. Ribbon can do a whole lot more than that, so I decided I might do a different blogpost later, using different examples to show more of Ribbon's capabilities.

As a result, I'm going to talk about Feign (well, actually Spring Cloud Netflix Feign), which uses Ribbon under the hood to load balance our requests and is a perfect library for the examples we're creating.

The Configuration

I won't post the entire configuration of our application, as it would just bloat this blogpost with unnecessary code. If you'd like to see what an entire application in Spring Boot looks like, just head over to the repository to check it out.

build.gradle

compile 'org.springframework.cloud:spring-cloud-starter-feign'

Activating Feign Clients

With Feign added on the classpath, only one annotation is needed to make everything work with default configuration properties.

@EnableFeignClients

Creating a Rest Client

Creating a rest client is really easy; most of the time, all we need to do is create an interface and add some annotations. Our environment will create the implementation at runtime, find the endpoint in our Eureka registry and delegate the call to the proper service through the Ribbon load balancer.
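The original snippet isn't included in this version of the post, so here's a minimal sketch of what such a client could look like. The interface name, method signature and Notification DTO are illustrative assumptions; only the notification-service name comes from the post.

// Sketch of a Feign client for the notification-service registered in Eureka.
@FeignClient("http://notification-service")
public interface NotificationServiceClient {

    @RequestMapping(method = RequestMethod.GET, value = "/notifications/{id}")
    Notification findNotification(@PathVariable("id") Long id);
}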

http://notification-service isn't a randomly chosen name, and you can probably guess at this point why. Since our microservice is a Eureka client, Ribbon will look for an entry in the registry and translate it to the proper hostname or IP and port. If you remember the last blogpost, we registered our notification microservice as notification-service.

Feign Client with HystrixObservable wrapper

With Hystrix on the classpath, you can also return a HystrixCommand, which you can then use synchronously or asynchronously as an Observable in your design.
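A sketch of what that could look like, using the same illustrative client as before:

// Sketch: returning HystrixCommand lets the caller choose between
// synchronous (.execute()) and asynchronous/reactive (.observe()) use.
@FeignClient("http://notification-service")
public interface NotificationServiceClient {

    @RequestMapping(method = RequestMethod.GET, value = "/notifications/{id}")
    HystrixCommand<Notification> findNotification(@PathVariable("id") Long id);
}

Calling client.findNotification(1L).execute() blocks for the result, while .observe() or .toObservable() gives you an Observable to compose.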

Feign Client with Hystrix Fallback

Last time we discussed Hystrix and how we could write fallback methods. Feign clients have direct support for fallbacks: simply implement the interface with the fallback code, which will then be used when the actual call to the endpoint returns an error.
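A minimal sketch, with an assumed fallback class name:

// Sketch of a Feign client with a Hystrix fallback; the fallback bean is
// used whenever the actual call to the endpoint fails.
@FeignClient(name = "notification-service", fallback = NotificationClientFallback.class)
public interface NotificationClient {

    @RequestMapping(method = RequestMethod.GET, value = "/notifications/{id}")
    Notification findNotification(@PathVariable("id") Long id);
}

@Component
class NotificationClientFallback implements NotificationClient {

    @Override
    public Notification findNotification(Long id) {
        // Fallback value returned while the notification-service is unreachable.
        return new Notification();
    }
}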

Accessing External APIs

So far, we've used Feign to create clients for our own services, which are registered on our Eureka Server using a service name. It's not unusual that you'd want to consume an external rest endpoint, or simply an endpoint that's not discoverable by Eureka. In that case, you can use the url property on the @FeignClient annotation, which gracefully supports property injection.

Here's an example of how you'd create a rest client for the java subreddit [1].
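The original snippet isn't reproduced here, but a sketch along these lines would work; the property name and the RedditListing type are assumptions:

// Sketch of a Feign client for an endpoint outside of Eureka; the url
// property supports injection, so it can be overridden per environment.
@FeignClient(name = "java-subreddit", url = "${subreddit.java.url:https://www.reddit.com/r/java}")
public interface JavaSubredditClient {

    @RequestMapping(method = RequestMethod.GET, value = "/new.json")
    RedditListing latestPosts();
}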

Optional Configuration

I won't be going over each and every configuration option. I believe that once you're all set and have a working example, the rest can be found in the documentation, which is actually very good.

However, some caveats discussed in the documentation are rather important. In particular, the way you can define a specific configuration per Feign client is something you need to handle with care: if you don't pay close attention to what you're doing, the application can behave in undesired ways.

By default, Spring Cloud Netflix provides:

Decoder feignDecoder: ResponseEntityDecoder

Encoder feignEncoder: SpringEncoder

Logger feignLogger: Slf4jLogger

Contract feignContract: SpringMvcContract

Feign.Builder feignBuilder: HystrixFeign.Builder

However, it does not provide:

Logger.Level

Retryer

ErrorDecoder

Request.Options

Collection<RequestInterceptor>

If you need one of the beans which are not provided yet, or you want to override the default provided beans, you can create a configuration per FeignClient contract, like we did in the following example.
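The example itself isn't included in this version of the post; a minimal sketch of such a configuration class, overriding only the log level (class and bean names are illustrative, Logger here is feign.Logger):

// Sketch of a per-client configuration; note the @Configuration annotation
// discussed below, which is exactly what makes its placement tricky.
@Configuration
public class NotificationFeignConfiguration {

    @Bean
    public Logger.Level feignLoggerLevel() {
        // Log full requests and responses, but only for clients using this configuration.
        return Logger.Level.FULL;
    }
}

It would then be referenced from the client with @FeignClient(name = "notification-service", configuration = NotificationFeignConfiguration.class).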

The big caveat with this configuration is that the configuration class has to be annotated with @Configuration to support injection and context. However, if this class is on the component-scan path, it'll also be picked up as general configuration. This means that such a configuration class, when picked up by the automatic component scan, will override the beans for each and every Feign client, not just the one that declared it as its configuration.

As a result, you should place it inside a package that isn't a candidate for component scanning, like we did in the repository.

The Github Repository

As we said before, this is not just an ordinary blogpost. It's more of a guide on how to set up your environment to quickly start working with the discussed technology. That's why we're always making sure we have an accompanying github repository available, so people can easily see how it works and have a working example at hand.

The repository that accompanies this blogpost is a bit different. Each time I release a new part of this series, the repository will get a new branch containing the new technology being discussed. In the end, I hope to end up with a nice example of how all the technologies can work together.

One important note to make is that I keep the repositories of all my previous blogposts updated. You'll notice that I've updated the Spring-Cloud-Netflix version between blogpost 2 and this one, to keep up with current development.

[1] At the time of writing, the Spring Cloud team released version 1.1.0.RC, which resulted in a few breaking changes in our codebase. If you're interested in what was changed (don't worry, it wasn't much), you can check PR#1. It might be good to know that, at the time of writing this blogpost, the versions of the individual Netflix libraries in Spring Cloud Netflix are up to date with the latest version. If you had any problems trying out some features listed in the official documentation of Netflix OSS, chances are they are resolved now, because we're using the latest version. ↩

Netflix has always been a proud contributor to the open source world. It's fascinating to see how each of their libraries facilitates a lot of tasks and can speed up your development tremendously.

In this series of blogposts - The Netflix stack, using Spring Boot - I'll be going over some of the libraries which Netflix has created and how to incorporate them in your Spring applications. As always, it'll be more of a hands-on experience, as this blogpost will basically just be an overview of what you can find in the accompanying repository.

Hystrix

Last week we showed how we can leverage the capabilities of Eureka to make our microservices discoverable. This week, we'll be looking at a totally different, but extremely useful library called Hystrix.

From the Netflix Hystrix Github repo:

Hystrix is a latency and fault tolerance library designed to isolate points of access to remote systems, services and 3rd party libraries, stop cascading failure and enable resilience in complex distributed systems where failure is inevitable.

In short: Hystrix is a properly written circuit breaker.

Circuit Breaker?

In the field of electronics, a circuit breaker is an automatically operated electrical switch designed to protect an electrical circuit from damage caused by overcurrent/overload or short circuit.

In the field of software development, the purpose of a circuit breaker isn't that much different. In theory, a circuit breaker is designed to automatically detect failures to access remote (or local) services and provide fallback mechanisms where needed.

Hystrix By Example

As I do in most of my blogposts regarding libraries or technologies, I try to incorporate them in a small project, to show you exactly how to set them up, configure them and use them.
I extended my repository to not only contain Eureka, but also Hystrix examples.

As you'll see, we've added a new microservice: the api-service, which will be the main entry point for our application. The api-service will locate the necessary microservices using our Eureka server and will perform calls on them. Hystrix comes into play when a microservice appears to be down: we fall back to other methods and save the state as an open circuit, so future calls know that this microservice is unreachable.

The configuration

I won't post the entire configuration of our application, as it would just bloat this blogpost with unnecessary code. If you'd like to see what an entire application in Spring Boot looks like, just head over to the repository to check it out.

build.gradle

compile 'org.springframework.cloud:spring-cloud-starter-hystrix'

Enabling Hystrix

If Hystrix is the only circuit breaker on the classpath, you can enable it by simply adding the following annotation to your main application class or a configuration class.

@EnableCircuitBreaker

If it's not the only one, however, you can add

@EnableHystrix

And that's it. Hystrix is now enabled, but not really doing an awful lot yet. Let's change the logic of our application to use the behaviour of Hystrix as a circuit breaker. Please note that this enables hystrix-javanica, which is actually a wrapper around native Hystrix. Hystrix Javanica has the benefit of giving us annotation support for Hystrix, using aspects. We won't cover all of the possible configuration, because Hystrix Javanica has decent documentation.

Making methods circuit-aware

The default way to use Hystrix is to annotate a method with @HystrixCommand. In this annotation, you can define a method that will be called when the annotated method fails (read: throws an exception).
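A minimal sketch of such a method; the service class, URL and Notification type are illustrative, not taken from the repository:

// Sketch: when getNotifications() throws, Hystrix calls defaultNotifications()
// instead, and repeated failures will open the circuit.
@Service
public class NotificationService {

    private final RestTemplate restTemplate = new RestTemplate();

    @HystrixCommand(fallbackMethod = "defaultNotifications")
    public Notification[] getNotifications() {
        // Remote call that may fail when the notification-service is down.
        return restTemplate.getForObject("http://notification-service/notifications", Notification[].class);
    }

    public Notification[] defaultNotifications() {
        // Fallback: an empty result keeps callers working while the circuit is open.
        return new Notification[0];
    }
}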

Configuring our commands

You can configure your command keys, thread pools and more properties. We won't go into detail, as this is optional and explained very well in the documentation.

Monitoring Hystrix

Having an entire system in place that monitors our circuits is certainly nice, but monitoring is nothing without a visual representation where you can quickly grasp whether something is wrong.

Hystrix can be monitored in a few ways, which we'll briefly discuss, as they all speak for themselves once you know where to look.

Spring Actuator - Hystrix Health Endpoint

If you enabled Hystrix in your microservice, Spring Actuator will automatically add Hystrix health information to your application's health endpoint. (In our case, when running the api-service, it's configured to be found at http://localhost:9000/health.)

{
  [removed for brevity]
  "hystrix": {
    "status": "UP"
  }
}

The Hystrix Dashboard

When you add hystrix-javanica, the application also provides an extra endpoint: an HTTP stream sending out all of the events concerning Hystrix. You can find this endpoint by navigating to http://localhost:9000/hystrix.stream.

The good folks at Netflix created a dashboard on top of this endpoint. Simply add the dependency to your build path.
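The exact dependency isn't shown in this version of the post; assuming the same Spring Cloud starters used above, it would likely be the dashboard starter, combined with an @EnableHystrixDashboard annotation on a configuration class:

compile 'org.springframework.cloud:spring-cloud-starter-hystrix-dashboard'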

Below you can see two examples of a running application. One has a healthy hystrix circuit, the other one is open, resulting in a bypass of the failing call.

Closed

Open

Multiple Hystrix Endpoints

If you have multiple Hystrix endpoints, it can become a bit difficult to monitor the health of each and every application. In one of my next blogposts, I'll show you how you can use Netflix Turbine to aggregate the server-sent events emitted by the Hystrix streaming endpoints.

The Github Repository

As we said before, this is not just an ordinary blogpost. It's more of a guide on how to set up your environment to quickly start working with the discussed technology. That's why we're always making sure we have an accompanying github repository available, so people can easily see how it works and have a working example at hand.

The repository that accompanies this blogpost is a bit different. Each time I release a new part of this series, the repository will get a new branch containing the new technology being discussed. In the end, I hope to end up with a nice example of how all the technologies can work together.

Netflix has always been a proud contributor to the open source world. It's fascinating to see how each of their libraries facilitates a lot of tasks and can speed up your development tremendously.

In this series of blogposts - The Netflix stack, using Spring Boot - I'll be going over some of the libraries which Netflix has created and how to incorporate them in your Spring applications. As always, it'll be more of a hands-on experience, as this blogpost will basically just be an overview of what you can find in the accompanying repository.

Eureka

Eureka is a REST (Representational State Transfer) based service that is primarily used in the AWS cloud for locating services for the purpose of load balancing and failover of middle-tier servers.

Basically, the Eureka infrastructure is set up as a client-server model. You can have one or multiple Eureka Servers and multiple Eureka Clients. It's a registry that clients (your microservices) register with, making your Eureka server aware of where your microservices are located, how many there are, and whether they're healthy.

As I have always done in my blogposts, I'll accompany this with a Github Repository serving the sole purpose of giving you a working example.
It's the first time I'm doing a series of blogposts on the same subject. Therefore, I'll create branches for each part, which can be checked out individually. I'll try to keep everything up to date with the newest versions, like I do with all of my other examples.

Gradle Setup

My build tool of choice is gradle, so the entire example will be based on a gradle configuration. The configuration will consist of a parent project, which includes all the microservices, each of which can be deployed individually. For this example, we'll only have one microservice (the Eureka Client) and one Eureka Server.

Our parent project will act just like a basic parent pom in maven and will facilitate the build of the entire project and its microservices. I chose this setup as it is easily approachable by someone who just wants to check out the code. Check out the gradle structure in the repository: it's not hard to set up, and it can serve as an example of how to do so.

Eureka Server

By default, a Eureka server will also be a Eureka Client, trying to connect to the Registry.
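Setting up the server itself is mostly a matter of a single annotation. A minimal sketch, assuming the standard Spring Cloud Netflix starter (the class name is illustrative):

// Sketch: @EnableEurekaServer turns this Spring Boot application into a
// Eureka registry that clients can register with.
@SpringBootApplication
@EnableEurekaServer
public class EurekaServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(EurekaServerApplication.class, args);
    }
}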

application.yml - our configuration

In our setup, however, we want one main Eureka Discovery Service that other clients can connect to. This is the minimal configuration we would need.
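The file itself isn't reproduced in this version of the post; a minimal sketch using the standard Spring Cloud Netflix properties (the values are assumptions, though the port matches the UI address mentioned below):

server:
  port: 8761

eureka:
  client:
    # Don't let the server try to register with itself.
    registerWithEureka: false
    fetchRegistry: false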

Running the example

We provided a startup script with which you can build and run all the services. The only prerequisite is that you have gradle installed.

./startup.sh

To stop the services, simply run

./stop.sh

Watching our services

By default, Spring Cloud Netflix provides a UI on top of the Eureka Server. In our example, it's deployed at http://localhost:8761/, so navigate there and watch the health of your Eureka Server and Client.

You'll see something like this.

As you can see, after starting up the server and client, the notification-service has registered itself.

Coming up

This model of microservices that register themselves in a global registry has a lot of advantages when it comes to building one or multiple applications using a microservice architectural approach. Eureka on its own won't be of much use, but as you'll see in future blogposts, Eureka will be the key element in locating all of our microservices.

The Github Repository

As we said before, this is not just an ordinary blogpost. It's more of a guide on how to set up your environment to quickly start working with the discussed technology. That's why we're always making sure we have an accompanying github repository available, so people can easily see how it works and have a working example at hand.

The repository that accompanies this blogpost is a bit different. Each time I release a new part of this series, the repository will get a new branch containing the new technology being discussed. In the end, I hope to end up with a nice example of how all the technologies can work together.

0x00 Prelude

I originally thought of writing this text as an intro for the readme file in the repository of the wowscrappie project, but it became too big to be just an introduction. I wanted to say too much, and the readme file is not a great place to distract people from what the repository is all about.

I decided to dedicate a blogpost to it.

0x01 WowScrappie

I won't be going into much detail about what wowscrappie is. Wowscrappie is quite a niche website/application created for the World of Warcraft community, where players can share their user interface configurations with other users.

Before the release of wowscrappie, a lot of people shared their configurations on pastebin. There's nothing wrong with this approach, but a few problems quickly arose: configurations were scattered all over the place, with little to no context or screenshots. I wanted to solve this problem by creating an application which can do all this.

Wowscrappie has grown, and is still organically growing into something more than that. I kindly take requests from other people, for example converting spreadsheets they use into a usable application. It hasn't been a full-time project, but I don't want this cool project to die a meaningless death. It has quite a lot of new and returning visitors each day.

0x02 Open Source

From the start, when I wrote the first code and laid out the first bricks of wowscrappie, the goal has always been to make it fully open source. Somehow, somewhere along the way, I became too focused on creating features. Features that pushed aside the core of this project.

Instead of open sourcing it, I was stuck with a design that didn't allow me to quickly open up the code to the public, because the builds depended on configuration-specific elements which should never be committed in an open source environment (API keys, secrets...).

Wowscrappie has always been a project where I could try out new technologies and concepts, which may benefit other developers by showing how, why and when such technologies could be used. It has been a playground where I could learn new things that help me in my further career as a software developer.

As I took a minor break from developing new features, letting wowscrappie gradually grow through visibility in search engines, I wanted to refactor some of the existing code so I could finally do what has always been the goal from the start: make wowscrappie open source.

0x03 By and for the community

One of the key concepts of wowscrappie is that it has been developed in a way that lets people share their experiences, so other people can benefit from them.

I want to extend this core concept to the code. The entire ecosystem of wowscrappie is now available to anyone. Anyone can now fork the repository, implement features, fix bugs and enhance the entire experience of the application.

0x04 The future

I'll continue to keep wowscrappie up and running. I've got some nice ideas for entirely new features, and I have some technologies in mind that I would like to try out. Wowscrappie is and will stay the playground that it is today.

In future blogposts, I'll be explaining the core setup, used technologies, things I bumped into and development progress.

0x05 Feedback

I'll always try to listen to the public to see what they would like to see added; not only configurations, but also features in wowscrappie as an application.

Creating a scalable architecture is fun and exciting, but above everything else, there's a wide variety of technologies to choose from. Blatantly using all of these technologies will leave you with a big, heavily dependent application.

In this example, instead of using the more famous message queues (RabbitMQ or any other embedded or full-blown queue), I tried leveraging the pub/sub capabilities of a Redis server to simulate message publishing and subscribing.

How the redis pubsub configuration works

I won't go in depth here, but it's useful to know how your messages get processed using pub/sub in a Redis environment. There are, as the name suggests, two parts to this exchange. First of all, there's a party which can publish messages to a channel. These messages can take on any form (a string, an object...). On the other side, there are zero, one or multiple parties subscribed to the channel. Messages are volatile, much like a chatbox: only parties which were subscribed at the time of publishing will receive the message.

A personal use case

For a personal project, I was dealing with data that had to be imported from a public API. Because of the size of the data, and the continuous fashion in which the information had to be fetched, processed and saved, I had to come up with an architecture that was scalable. I started tinkering with little Spring Boot projects which could be deployed multiple times. Think of them as nodes. A main web application had to send out calls to the nodes in order to fetch and process the information, and I wanted an architecture in which the nodes would not know of each other's existence.

Why not choose a full-blown MQ?

At the time of writing, I already had a Redis database running in production. I used it as the caching implementation for the Spring 3.1 caching abstraction, as well as for storing my HTTP sessions using Spring Session. After reading through the documentation, the pub/sub configuration felt like it could suffice for what I needed.

The Receiving End

As this is more of a hands-on blogpost, it'll be accompanied by a basic code example of how my application was developed. I'll start off with the receiving end, the node part of my application. This can be just one node, or multiple listeners.

The payload doesn't do much. This POJO is just a java class which will be broadcast to a Redis server and picked up by a listener. It can take on any form; in one of the examples Spring provided, it was just a string. I used an object, as this has more value in my opinion.
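A sketch of such a POJO; the fields are illustrative:

// Sketch of the payload class; it only needs getters and setters so it can
// be (de)serialized to and from JSON.
public class ImportTarget {

    private String name;
    private String url;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getUrl() { return url; }
    public void setUrl(String url) { this.url = url; }
}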

Let's provide our listener
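A sketch of the listening component; handleMessage is the method where you define what happens with the object once it is received (the method name is an assumption, it just has to match the adapter below):

// Sketch of the receiving component; the adapter below will invoke
// handleMessage with the deserialized ImportTarget.
@Component
public class ImportTargetListener {

    public void handleMessage(ImportTarget importTarget) {
        // Define what happens with the object once it is received.
        System.out.println("Received import target: " + importTarget.getName());
    }
}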

A MessageListenerAdapter

Next up, we'll need a MessageListenerAdapter. Here, we define that our listener expects the input, which will be JSON, to be deserialized into a java class of type ImportTarget. If no serializer were set, the MessageListenerAdapter would try to parse the input as a string.
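A sketch of that adapter bean, matching the listener sketch above:

// Sketch: the serializer tells the adapter to turn the incoming JSON into
// an ImportTarget before invoking handleMessage on our listener.
@Bean
public MessageListenerAdapter messageListenerAdapter(ImportTargetListener listener) {
    MessageListenerAdapter adapter = new MessageListenerAdapter(listener, "handleMessage");
    adapter.setSerializer(new JacksonJsonRedisSerializer<>(ImportTarget.class));
    return adapter;
}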

Our Listener Container

The last part of our configuration consists of the creation of a JedisConnectionFactory, as well as a RedisMessageListenerContainer. In this example, we set up our RedisMessageListenerContainer so that our MessageListenerAdapter listens on the topic with the pattern import. You can change this to your liking.
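A sketch of the container wiring, under the same assumptions:

// Sketch: the container subscribes our adapter to every channel matching
// the pattern "import".
@Bean
public JedisConnectionFactory jedisConnectionFactory() {
    return new JedisConnectionFactory();
}

@Bean
public RedisMessageListenerContainer redisMessageListenerContainer(JedisConnectionFactory connectionFactory,
                                                                   MessageListenerAdapter adapter) {
    RedisMessageListenerContainer container = new RedisMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.addMessageListener(adapter, new PatternTopic("import"));
    return container;
}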

On the sending side, all we need to do is create a RedisTemplate and set the default serializer. As we want our object (in our case, an ImportTarget) to be sent to our Redis server as JSON, we choose the JacksonJsonRedisSerializer.
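A sketch of that template configuration:

// Sketch: the default serializer turns the ImportTarget into JSON before it
// is published to the Redis server.
@Bean
public RedisTemplate<String, ImportTarget> redisTemplate(JedisConnectionFactory connectionFactory) {
    RedisTemplate<String, ImportTarget> template = new RedisTemplate<>();
    template.setConnectionFactory(connectionFactory);
    template.setDefaultSerializer(new JacksonJsonRedisSerializer<>(ImportTarget.class));
    return template;
}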

Sending our payload to the topic

Sending the payload to our Redis server is really easy. All we need to do now is inject the RedisTemplate into a component and send the object to the correct topic.
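A sketch of the sending component; the channel name matches the import pattern our listener subscribed to:

// Sketch: convertAndSend serializes the payload with the template's default
// serializer and publishes it to the "import" channel.
@Component
public class ImportTargetPublisher {

    @Autowired
    private RedisTemplate<String, ImportTarget> redisTemplate;

    public void publish(ImportTarget importTarget) {
        redisTemplate.convertAndSend("import", importTarget);
    }
}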

Caveats

One thing to remember is that our Redis server, as said before, treats our channel as a topic, not a queue. Therefore, all of our listeners will react to the call, not just one. If no listeners were defined, or no application was actively listening to our channel, nobody would get notified and the message would be gone.

Markdown documentation into your web page

I often feel the need to write documentation, and recently, it got even worse. I like documented APIs. Don't get me wrong, I love looking into the source of different libraries and frameworks, but nonetheless, I'm happy when the documentation brings me enough information to be productive with a library.

With the rise of all these libraries, frameworks and dedicated open source documentation pages, I write more documentation than ever: as GitHub README.md files, as wikis, etc.

To reduce the time spent writing documentation for a dedicated website, I thought: why not use the markdown as the base for my documentation?

And so it started

I immediately started writing a small piece of JavaScript that was able to include my previously created markdown files (README.md, anyone?). Using an existing library as the markdown converter, the result came rather quickly.

Basically, the script checks every tag for an mdjs class and inserts the markdown as HTML inside of that tag.

But I was not satisfied.

JavaScript is not my best asset

I have to admit: most of my JavaScript code is not something to be proud of. I often find myself starting out right, but creating an unmaintainable mess in the end. I wanted this to be different. I wanted to be able to use this small piece of code throughout all of my projects. So this had to be done right, even for a small script like this.

Grunt.js

So I looked a bit into Grunt and came up with the first version of md.js. It's open source, has a small footprint and will be expanded with new functionality as requested. So feel free to fork it, create pull requests and file tickets.

Links

Reactor, as the name suggests, is heavily influenced by the well-known Reactor design pattern. But it is also influenced by other event-driven design practices, as well as several awesome JVM-based solutions that have been developed over the years. Reactor's goal is to condense these ideas and patterns into a simple and reusable foundation for making event-driven programming much easier.

About this blogpost

This blogpost will try to teach you the basics of event-driven programming using Spring Boot and Reactor. It won't cover every aspect of Reactor, nor can it be used as a complete reference. I will, however, try to give as many examples as possible in my accompanying code.

Accompanying code

This small tutorial is accompanied by a Github Repository. Not all of the code in the repository will be discussed here, so don't forget to check it out later!
The code was compiled and tested using JDK 8, so you'll need Java 8 to run this application.

If you find anything in the repository that is unclear, or there's something you'd like to see a separate blogpost about, feel free to file it as an issue in the repository.

Running the example

Simply download the code, either using git or a plain archive download. Make sure you have gradle installed.

gradle bootRun

The Code

In this section, I'll go over the important components which wire up the example application. It's a full-stack application, which means it'll contain a model, repositories, services and controllers, as well as a basic view written in Thymeleaf. The frontend won't be a subject of this article; feel free to check it out on Github!

Gradle Dependencies

All we need on top of our standard Spring Boot starter imports is the following dependency:
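The dependency itself is missing from this version of the post. For the Reactor 1.x-era Spring integration that provides @EnableReactor, the coordinates were likely along these lines (group, artifact and version are an assumption):

compile 'org.projectreactor.spring:reactor-spring-context:1.1.3.RELEASE'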

Described below is a stripped-down version of an entity called LogMessage. A LogMessage will just be an entry in our database containing a basic String and some metadata, such as a logDate and an enumerated category.
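A sketch consistent with that description; the JPA annotations and the LogCategory enum are assumptions:

// Stripped-down sketch of the LogMessage entity described above.
@Entity
public class LogMessage {

    @Id
    @GeneratedValue
    private Long id;

    private String message;
    private Date logDate;

    @Enumerated(EnumType.STRING)
    private LogCategory category;

    // Getters and setters omitted for brevity.
}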

A Restful repository

With Spring Data JPA, we can avoid all the boilerplate code which would normally fill our application: a simple interface is enough to expose the database in a modern fashion. We also added the @RepositoryRestResource annotation, which will later expose the entire repository as a REST API. This is done by Spring Data REST.
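A sketch of that interface:

// Sketch: Spring Data JPA generates the implementation at runtime, and
// @RepositoryRestResource exposes it as a REST API via Spring Data REST.
@RepositoryRestResource
public interface LogMessageRepository extends JpaRepository<LogMessage, Long> {
}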

The Reactor AutoConfiguration

This configuration really speaks for itself. We don't need any special Reactor implementation, and can therefore count on Spring Boot to provide us with an active Environment, as well as a ReactorAutoConfiguration. Simply enable it using the @EnableReactor annotation.

@Configuration
@EnableReactor
public class ReactorConfiguration {
}

Wiring up our components - The receiving part

Event-driven infrastructures always consist of at least two parts: a sender, and a receiver that will somehow listen or register on a given endpoint. Let's start with some example code for the receiving part.

We'll start with registering on 2 events.
First of all, we'll register on an event that's triggered once channel log.(trace|debug) is being notified. As you'll see we use reactor.event.selector.Selectors.R, which is a selector we can use to match a certain regular expression.

The second selector we'll be using is a class selector. reactor.event.selector.Selectors.T will react to the notification of a class, in our case ReactorExampleException.

We could also use reactor.event.selector.Selectors.$, which is just a simple String-based selector. The syntax highly resembles the jQuery selector syntax.
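A sketch of how those two registrations could look with Reactor 1.x's API; the surrounding component and the exception handling are illustrative:

// Sketch: Selectors.R matches channel names by regular expression, while
// Selectors.T matches notifications of a given class.
@Autowired
private Reactor reactor;

@Autowired
private LogMessageRepository logMessageRepository;

@PostConstruct
public void registerConsumers() {
    // Triggered for everything notified on log.trace or log.debug.
    reactor.on(Selectors.R("log\\.(trace|debug)"), (Event<LogMessage> event) ->
            logMessageRepository.save(event.getData()));

    // Triggered when a ReactorExampleException is notified.
    reactor.on(Selectors.T(ReactorExampleException.class), (Event<ReactorExampleException> event) ->
            System.err.println("Handled: " + event.getData().getMessage()));
}

On the sending side, a component would publish with reactor.notify("log.trace", Event.wrap(logMessage)).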

The result

If we start up our application, we'll quickly notice that we start with an empty database. By default, Spring Boot looks for a DataSource implementation on the classpath. We just added an H2 database, so everyone can test this application without any third-party necessities, such as a MySQL database.

Because we're using Spring Data REST on our LogMessageRepository, it is fully exposed. Simply browse to the following url to consume the self-explanatory API.

What more can we find in the Github repo?

In the repository, I also added a Thymeleaf template which connects to the server through websockets. Every time someone accesses the application by visiting the homepage, a small fragment of the homepage is updated using JavaScript.

This is all merely an example of how one would use Reactor in a project. If any bugs or questions arise, feel free to file them as an issue in the repository.

Thymeleaf, a worthy alternative

If you were looking for a decent alternative for those old JSPs, look no further.

Thymeleaf is a Java library. It is an XML / XHTML / HTML5 template engine (extensible to other formats) that can work both in web and non-web environments. It is better suited for serving XHTML/HTML5 at the view layer of web applications, but it can process any XML file even in offline environments.

How does Thymeleaf compare to other web frameworks?

From their web page FAQs

Thymeleaf makes a strong stress on natural templating —allowing templates to be working prototypes and its syntax tries to be cleaner and more in tune with the current trends in web development. Also, from an architectural standpoint, both Velocity and FreeMarker work as sequential text processors —which allows them to process many types of content— whereas Thymeleaf is based on XML parsing techniques —which limits it to XML-based formats. This makes Velocity and FreeMarker much more versatile, but on the other side allows Thymeleaf to take advantage of interesting features specific to XML-based environments, especially the web.

About this blog post

In this tutorial, I'll briefly show how we can serve a web page using Spring Boot and Thymeleaf. This blogpost won't go into any depth on how Thymeleaf works or how you can use it. Instead, it will teach you how to set up your environment with Spring Boot to serve the pages.

Accompanying code

This small tutorial is accompanied by a Github Repository. Not all of the code in the repository will be discussed here, so don't forget to check it out later!

Just a basic Controller

The controller we'll be using is kept simple, as this is a tutorial on how to set up a Thymeleaf application with Spring Boot, not an entire blogpost on Spring MVC.
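A minimal sketch of such a controller; the class name and mapping are assumptions, but returning "main" matches the template discussed below:

// Sketch: returning the String "main" tells Thymeleaf to render
// /resources/templates/main.html.
@Controller
public class MainController {

    @RequestMapping("/")
    public String index() {
        return "main";
    }
}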

While starting up, Spring Boot will automatically detect the presence of the starter-web and starter-thymeleaf modules, and will therefore make sure the correct interceptors and template engines are started.

Your resources

Resources are defined - as the name implies - in your resources folder. Thymeleaf will automatically search for pages in your templates folder, directly under resources.
In our controller, we returned "main" as the index page, so /resources/templates/main.html will be resolved automatically.

Our Desired Result

If you run the example pulled from github, using the following command:

gradle bootRun

and navigate to http://localhost:8080, you should get something along the lines of the following screenshot.

Coming up

In one of my future blogposts, I'll discuss in some more depth how we can change some of the default settings for Thymeleaf.