Hi Liferayers! We'd like to share with you that we have created unofficial, public Docker images from our master branch, and we are releasing them every day!

The rationale for pushing to an unofficial, public repository is simple: anybody in the world can fetch and build the platform from GitHub, so why not make it easy for them by offering a Docker image with that process already done? We have learned a lot by doing this for months, and are now ready to make the images public.

Why unofficial? Well, we would like to publish Liferay DXP images first, as we'll explain later in this post.

Where are the Liferay Portal CE images published?

Each image is versioned by date, in yyyymmdd format, e.g. 20170901, and a new version is created every day.

What is the structure of the image?

Our public images run with OpenJDK, more specifically JDK 8u141, so we are leveraging the Docker image the OpenJDK team created: openjdk:8u141-jdk, which is based on Debian Jessie.

Then we add the result of the build process (hello, "ant all", my old friend): a bundle with the current version of Tomcat. Finally, we expose port 8080.
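As an illustration of how little there is to it (the paths and image layout below are assumptions for this sketch, not our actual build scripts), the Dockerfile is conceptually something like:

    FROM openjdk:8u141-jdk

    # The Liferay + Tomcat bundle produced by "ant all" (path is illustrative)
    COPY bundles/ /opt/liferay/

    # Liferay's Tomcat listens on 8080
    EXPOSE 8080

    # Run the bundled Tomcat in the foreground (the tomcat folder name varies per bundle)
    CMD ["/opt/liferay/tomcat/bin/catalina.sh", "run"]

You could then try it locally with something like docker run -p 8080:8080 <image-name>.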

What are we doing with these Liferay Portal CE images?

We are using this image to deploy Liferay Portal CE projects internally to several servers, so that any team in the company can use them and play around with the current development of the product. But we don't only deploy those images; we have also trained teams to use them locally. For instance, if the UX/Design teams need to check a behaviour in a given version, they can start it locally and compare features between versions.

What happens with Liferay DXP images?

We are working on creating them. Customers have been asking for Docker images for a while, and we want to meet that request. Also, internal clients such as WeDeploy, which is using its own Docker image for Liferay DXP SP4, could potentially consume those images. That's why we published the Liferay Portal CE images first under my personal account instead of the official one. Once we have everything in place for both Liferay DXP and Liferay Portal CE images, we will push the Liferay Portal CE ones back to the official repo. We hope that this serves the needs of both our customers and our community, by eventually providing both Liferay DXP and Liferay Portal CE images. :)

And that's all :)

As always, please send us your feedback; we hope this kind of initiative helps both the company and our community.

I'm very proud and glad to announce that, from now on, we are going to be able to write integration tests in our Liferay plugins!

* Before breaking this down, I want to thank all the people who collaborated as a strong team to achieve this: Carlos Sierra, Cristina González and Miguel Pastor, who worked really hard to push this awesome stuff into the product.

Well, by integration tests I mean those tests that rely on other services, like portal services or even services within the plugin itself. We will have a real (not mocked) instance of that service, with all the wiring it uses (persistence behaviour, caches, indexing, etc.).

This black magic has been desired by many of you for years, but at last we have made a fine integration of Liferay with one of the coolest testing frameworks around nowadays. This framework is Arquillian (http://arquillian.org), an innovative and highly extensible testing platform for the JVM that enables developers to easily create automated integration, functional and acceptance tests for Java middleware.

After tons of beer and two or three minutes discussing this, we think that the best option to start with is a Remote approach. This allows us to run tests at development time, and we can supply managed behaviour using CI scripts if needed.

Just to make things easier, we have added some capabilities to our plugins SDK to configure a Liferay bundle (the one defined in the SDK) with Arquillian support, which means:

JMX enabled and configured.

Tomcat's manager installed and configured.

Arquillian dependencies available at compile/test time.

I'll explain these in more depth later.
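To give a rough idea of what that setup involves (the values below are only an illustration; the SDK target writes the real configuration for you), enabling JMX on the bundled Tomcat typically means adding the standard JMX options to CATALINA_OPTS:

    # bin/setenv.sh -- illustrative JMX settings for the bundled Tomcat
    CATALINA_OPTS="$CATALINA_OPTS \
      -Dcom.sun.management.jmxremote \
      -Dcom.sun.management.jmxremote.port=8099 \
      -Dcom.sun.management.jmxremote.authenticate=false \
      -Dcom.sun.management.jmxremote.ssl=false"

The manager part boils down to adding a user with the appropriate manager role to conf/tomcat-users.xml, and the Arquillian dependencies are simply made available on the SDK's compile/test classpath.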

Secondly, we have created a library that makes it easier to create a WebArchive, the file that Arquillian needs to send to the container. This piece of software builds a WebArchive and executes the portal's auto-deployers, so you can see the WebArchive as an abstraction of a plugin WAR file that has been dropped into the LIFERAY_HOME/deploy folder but not yet deployed to the container.

At the moment, you must define a method with Arquillian's @Deployment annotation and build your WebArchive there. (We are deciding how to improve this, but for now defining this deployment method is mandatory; you can see one in the full example below.)

Once we have created the WebArchive, we can add classes (or resources) to that archive, which is actually a very good thing, because we are making test dependencies explicit: just read the test to see all of them.

Lastly, the test classpath must contain an Arquillian test descriptor, where you define where your remote server is running. This file, named "arquillian.xml", is placed under the PLUGIN-NAME/test/integration folder.
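A minimal arquillian.xml for a remote Tomcat could look like the sketch below (the property names depend on the remote container adapter you use, and the values are just examples):

    <?xml version="1.0" encoding="UTF-8"?>
    <arquillian xmlns="http://jboss.org/schema/arquillian">
        <container qualifier="tomcat" default="true">
            <configuration>
                <!-- Where the remote Liferay/Tomcat bundle is listening -->
                <property name="host">localhost</property>
                <property name="httpPort">8080</property>
                <!-- Credentials of the Tomcat manager user configured above -->
                <property name="user">tomcat</property>
                <property name="pass">tomcat</property>
            </configuration>
        </container>
    </arquillian>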

Mmm... let me think... I believe that's all, so let's summarize!

Tomcat configured with JMX, the manager, and valid credentials to access the manager

A library that builds the plugin so that Arquillian knows how to deal with it

Test classpath configured

We have added some cool tools to the SDK so that you can apply all of the previous configuration by executing only two Ant targets:

In the root folder of the plugins SDK (only the first time you start with the SDK): ant setup-testable-tomcat, which will configure your bundle with the JMX and Tomcat manager settings described above.
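To show what this enables, here is a minimal sketch of such an integration test (the class, archive name and assertion are illustrative; CalendarLocalServiceUtil and its Service Builder-generated counter are the real services from the Calendar plugin):

    import org.jboss.arquillian.container.test.api.Deployment;
    import org.jboss.arquillian.junit.Arquillian;
    import org.jboss.shrinkwrap.api.ShrinkWrap;
    import org.jboss.shrinkwrap.api.spec.WebArchive;
    import org.junit.Assert;
    import org.junit.Test;
    import org.junit.runner.RunWith;

    import com.liferay.calendar.service.CalendarLocalServiceUtil;

    @RunWith(Arquillian.class)
    public class CalendarServiceIntegrationTest {

        @Deployment
        public static WebArchive createDeployment() {
            // The plugin WAR that Arquillian deploys into the running bundle;
            // test classes and resources are added explicitly.
            return ShrinkWrap.create(WebArchive.class, "calendar-integration-test.war")
                .addClass(CalendarServiceIntegrationTest.class);
        }

        @Test
        public void countCalendarsWithRealServices() throws Exception {
            // Real service call: persistence, caching and indexing are all live
            Assert.assertTrue(CalendarLocalServiceUtil.getCalendarsCount() >= 0);
        }
    }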

In this example, CalendarLocalServiceUtil is a real object, so there's no need for mocking anymore!!!

But, why is this CalendarLocalServiceUtil a real object? Where is the magic here?

Arquillian deploys the fully working plugin into the container, which has been started beforehand. Then it executes the tests inside the container and, after the test execution, returns the test results to the runner, undeploying the plugin at the end.

This is really cool, because you can run your tests using Ant commands in your shell, or even from your IDE, which speeds up the development process!!

Ok, but won't deploying/undeploying the plugin be time-consuming?

Not at all. Do not forget that your container is already started, so the deploy -> test -> undeploy cycle should be very fast (10 seconds or less). All the heavy lifting was done during container and Liferay Portal startup; only your plugin's actions run live.

Will I be able to debug?

Yes, the blessed debugger! If you start your container in debug mode, then you can create a remote connection to your Tomcat and debug. Have you noticed I said remote connection? Why did I say that?

As you read a few paragraphs before, the tests will be executed on a remote server (maybe on your local machine, but still remotely), so you need to configure your IDE to point to that debug port.
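For instance, one common way to open a debug port on the bundled Tomcat (8000 is just an example port) is to add the standard JDWP agent options before starting it:

    # bin/setenv.sh -- open a JDWP debug port (8000 is an example)
    CATALINA_OPTS="$CATALINA_OPTS -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000"

Then create a "Remote" debug configuration in your IDE pointing at that host and port.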

Future?

Well, as you can figure out after reading this, we could backport this to the 6.2.x and 6.1.x branches, so plugins on those versions can be tested.

And of course, the next benefit of having Arquillian integration is that we can start writing more tests in our plugins right now!!!

As all of you already know, we are making huge efforts to turn Liferay into a nicer, simpler and more extensible platform. In the upcoming version you will be able to write apps on top of Liferay in a completely different way and, of course, we want you to test these new applications, so we are currently going through the review process of the basic testing infrastructure that will support these new testing mechanisms.

Hello Continuous Readers! I'm here again to share a new CI practice in a very small pill.

Do you remember a time when you had some tests that continuously failed, and nobody had the bandwidth to work on them? What was the easiest solution? Of course, commenting them out or removing them, because they were disturbing your green lights on the server.

Did I say of course? Of course not.

This must always be the last resort, used very rarely and reluctantly, because it hides the real problem, and what you want from your tests is exactly the opposite: to show what is happening, especially what is wrong, so you can solve it as soon as possible.

Instead of sweeping things under the rug, as in the picture above, try to apply these simple rules:

Has a regression been found by the test?

Fix the code!

Is one of the assumptions of the test no longer valid?

Delete it!!

Has the application really changed the functionality under test for a valid reason?

Update the test!!

With these three very simple rules you can solve the majority of the situations related to regressions.

Here I am again after a long time without writing about CI in Liferay. If you remember my last post, it described the importance of reverting commits that break the build as quickly as possible.

Well, in this blog post I will share a small pill that can help you during the revert process.

Before reverting an offending commit, establish this rule: whenever the build breaks on check-in, try to fix it for a specific amount of time, defined by your own interests, for example 10 minutes.

If, after that, you aren't finished, revert to the previous version.

In Liferay, we usually dedicate 20-30 minutes to investigating the problem. If we cannot solve it, because we don't know much about the functionality that is breaking the build, then we roll back.

Another thought: if you always try to solve other developers' failures for them, maybe they will start to see you as the last bullet in the gun, and stop minding about breaking the build "because it will be fixed magically when I come to work tomorrow".

Instead, we prefer to revert the commits and notify the developer with a specific email (not only the automatic email sent by the CI server), explaining why we have rolled back the commits, so he/she is actually aware of the failure.

In this blog post I want to talk about developers' mentality, about how they (we) create software that never fails, shines brighter than the sun, and is faster than a jet plane... or not?

No, seriously: we developers are more protective of our code than people in other careers, and we usually don't like others criticising it. But we must not forget that we are part of a team, with many co-workers (maybe distributed all around the world), and one of our highest wishes must be a software product of the best quality. And in order to achieve that quality, we have to polish the defects we commit.

If you remember my last post, there is a role named "Build Master" that is responsible for polishing those defects, reverting wrong commits and sending the issue back to the developer who caused the problem.

I think that I've written this before, but maybe it's better to reinforce the idea: we all make mistakes, so every one of us will break the build from time to time.

And the important thing is not to blame the developer, no. Indeed, the most important thing is to get everything working again quickly. Of course, if you aren't able to fix the problem quickly, for whatever reason, you should revert to the previous change-set held in version control and remedy the problem locally. After all, you know that previous version was good. Why? Because you don't check in on a broken build!!!

I'll add a brief story to show you how reverting is a good idea :)

Airplane pilots assume that something will go wrong, so they are ready to abort the landing attempt and 'go around' for another try.

Imagine how critical the landing process is compared with a set of commits: pilots prefer to abort it, avoiding deaths, rather than attempt a dangerous maneuver. So why not do the same with conflicting commits, which can be re-sent as soon as possible?

I love that picture! Imagine yourself on any Friday, at the end of your work day. You look at the CI server and, unluckily, the build is broken. You have only three options:

Resign yourself to leaving late, because you'll try to fix it.

Revert your changes and retry next week.

Leave now and leave the build broken.

Of course, the best choices are number 1 or number 2, never number 3. In the above picture, Iron Man decided that he doesn't mind what will happen after that "bomb". But why is leaving the build broken a bomb?

It's a bomb because any co-worker who pulls from your master branch will get dirty code. And what happens then? Please look back at my first post, about not checking in on a broken build.

On the other hand, if you try to solve the problem, or even revert your changes before leaving, you will keep the build green, and other developers will be happy to pull safe code from the SCM repository.

Some good practices to avoid potential problems are:

Check in frequently, and early enough to give yourself time to deal with problems should they occur

Experienced developers often save a late check-in for the next day

Well, at this point you could say: "Ok, I follow similar practices but my team is distributed and we have problems working in different time zones".

In Liferay we actually have this "problem":

As you can see, we work in different time zones: China, Europe and America, following the sun.

In case of problems:

If China breaks the build... then Europe's work day is dramatically affected

If Europe goes home on a broken build... America would be screaming and crying


How have we solved it? Well, at this point the figure of the Build Master appears:

This role not only maintains the build but also polices it, ensuring that whoever broke the build is working to fix it. If not, the build engineer reverts that check-in, so it's mandatory that the build engineer has write access to the master branch in your SCM, or otherwise that his/her commits are prioritized.

The Build Master is a controversial role, because nobody wants to see his/her commits rolled back. But the whole team should accept that this is not a personal offense: it is another effort to improve the quality of the product, never a criticism of the developer.

For that, we should all open our minds and accept that it is not bad to revert someone's commits: they are still present in the history of the project, so we can restore them whenever we need them. And after all, those commits were breaking something, weren't they?

This is my third blog post about Continuous Integration best practices, and today I want to explain the benefits of being patient after sending commits for review.

As developers, we are used to working on a functionality, finishing it, and jumping to another one. We send our work to a reviewer and continue working on other tasks. As you probably know, in these cases our mind focuses completely on the new task, almost forgetting the previous one.

Do you remember my last post, about running the tests (manually on a local machine, or automatically in Jenkins via a pull request)? Well, imagine not knowing the results of that test execution. Are you sure that your work behaves as expected? Are the tests finding potential bugs in it?

If you don't monitor the build that executes the tests for your changes, those questions won't be answered until it's too late: when your code is pushed to your master branch, where other developers can pull it and get unexpected behaviour.

So this blog post asks you to wait for the commit tests to pass, be aware of the test results, and start solving any problem as soon as possible (if needed).

In my last post I also commented that the CI server is a shared resource with a lot of information. Developers should monitor it to verify whether the test results for their commits show failures or not. By doing that, they are in the best position to solve potential problems, because they haven't yet switched context between tasks.

One of the most important things about this best practice is knowing that everyone can commit errors. Furthermore, errors are an expected part of the process.

But our goal, what we will focus on, is to find and eliminate them as soon as possible, without expecting perfection and zero errors.

While the build is running, you can organize your inbox, prepare for the next tasks, have a coffee, or even go to the bathroom! The build should take as little time as possible to finish: depending on your project size, between 10 and 20 minutes is OK.

Continuing this blog post series about good practices in Continuous Integration, I want to talk about the benefits of running tests.

Practice 2: Always run the tests

When a developer commits a new functionality, it's expected that, at that commit, the software works as we believe it should. And if the software works as expected at a single commit, why not release in that state? And what would happen if we could assert that every commit in the history makes the software work as expected? Just iterate the sentence "release at $COMMIT" over each commit in the history... We would have achieved the ability to be more "releasable", as we could release at whatever commit we want.

At this point, we should have realized how important a single commit is, as it could trigger the creation of a release candidate.

Ok, we know what the goal is: to have good commits that work as expected. But how can we achieve that? How can our commits be more releasable?

One of the most important things you can do to verify that your commits work as expected is to write well-written tests for them, and when I say well written I mean that they must exercise the functionality: conditionals, loops, different values..., not only the happy path.

Once you have written good tests, you need to run them and check the results. I will assume that you know how to write and run tests; that is not the main goal of this blog entry, so let me continue without that explanation.

In Liferay, we can run tests in two ways:

Locally: a developer can use some Ant targets to run tests in his/her own workspace, so he/she can test the code before sending it. Please read the wiki page explaining the Testing Infrastructure in the related assets:

ant test-unit: executes all unit tests (dependencies on other systems, e.g. databases, are not real: we mock what we need)

ant test-integration: executes all integration tests (dependencies on other systems are real, not mocked)

ant test-class: executes only one test class

etc.

After sending a pull request: we use Jenkins as the CI server to manage all our CI processes, and we have arranged things so that every pull request sent to a peer reviewer is monitored by the CI server: it checks out the code and executes some tasks (compilation, source formatting, test execution...). The cool thing here is that there is a Jenkins plugin that can monitor the pull request and act on the test results, managing the GitHub pull request (auto-closing it if it breaks tests, writing comments, changing the pull status...). In this scenario, a peer reviewer knows whether the pull request he/she is going to review is good or breaks something, so we greatly reduce the feedback loop with this process, discarding bad pulls as soon as possible.

Mmmm... interesting, two places to run tests: locally and in the CI server. But why both?

You, as a developer, could have the latest version of a library, or a driver, or an application that configures XXX in your OS, or your OS might even be tuned because of YYY.

On the other hand, the CI server is a controlled environment: it always runs the same scenario for each commit sent by each developer, so every test is executed under the same conditions, in every build, for everyone. And that's a very good thing, because then your test results will be repeatable.

Maybe you don't want to execute tests locally; that's ok, we have no problem with that, but always try to run the tests in a controlled environment.

Another good capability of the CI server, precisely because it is a controlled environment, is that it is also a centralized information repository: everyone in the team can look at it to search for build results and see what is happening with the tests at any moment. The CI server produces logs for almost everything, so it's very easy to read them and stay informed about the real state of the commit (and of the project, too).

When looking at the server logs, which ones are the most important for verifying that our commits are good? Well, we have two possible options to know what is happening:

Jenkins logs: you can configure Jenkins to send the committers an email with the test results, telling them that their commits produced a breakage. In our case we have improved the usability of the default Jenkins email to make it easier to read.

GitHub logs: the plugin we use to monitor pull requests can write comments on GitHub, so this really good platform also sends emails when a pull request is commented on with test results. A developer therefore immediately knows whether his/her commits passed the tests or not.

Both of them produce very good, complementary information that a developer will know what to do with. So try to notify developers of every breakage your CI system discovers, so the culprit can be ready to solve it as soon as possible, as we saw in the last practice.

That's all for today; please wait for the next blog entry about CI best practices!

Hi all! I'm writing this blog entry as the first post of a continuous integration blog series, sharing our knowledge and usage of this technique.

In these blog posts, I will talk about some good practices I recommend you follow, based on my experience reading the book "Continuous Delivery" by Jez Humble and David Farley, specifically chapter 3, and of course on my experience dealing with CI in Liferay.

One of the most important things that I've learned reading this book is that Continuous Integration (CI) is a practice, not a tool, and it requires a significant degree of discipline from the team as a whole. So all team members are involved in it, and must collaborate to perfect it.

The objective of a CI system is to ensure that the software is working, in essence, all of the time. So you should keep that in mind as a mantra: the software was working before your changes, and it must also work after them.

We hope this blog series helps you if you are starting with CI, but we also want to hear your experience and your feedback about it. So please comment on whatever you consider relevant.

Ok, now that the topic has been introduced, let's start with the first practice...

Practice 1: Don't check-in on a broken build

You are about to start a new work day and see the build broken. Have you received an email from the CI server? If so, you should know how to verify whether you are the cause of the errors, and if you are, please try to solve them as soon as possible instead of charging straight into coding that stellar functionality.

By doing that, you can identify the cause of the breakage very quickly and then fix it, because you are in the best position to work out what caused it.

But wait: you have already finished your work, and the build is still broken. Why shouldn't you check in further changes on that broken build?

First of all, it will compound the failure with more problems. Imagine that you don't follow these practices: every time you check in, you cannot prove that your changes are not adding more errors, and maybe your changes plus the existing errors cause other, different problems.

A direct consequence is that it will take much longer for the build to be fixed, because you have added more complexity to the problem.

Of course, you can still check in. And you can also get used to seeing the build broken. In that case, the build stays broken all the time :(

And that's the cycle, it's true.

But after many broken builds, the long-broken build is usually fixed by a Herculean effort from somebody on the team (here in Liferay it's usually Miguel), and the process starts again.

Clean the catalina.out log to be sure that your installation is successful.

Restart server.

Check in catalina.out that Liferay reads your portal-ext.properties on startup.

Browse to your portal: ENVIRONMENT_NAME.jelastic.com

The Setup Wizard is the first thing you'll see, but since JNDI is configured, you cannot modify the database settings there. Go to SERVER_ROOT/server/context.xml for database changes.

Set up the portal (name, language, admin credentials), and...

Here it is! Your portal up and running!

Then you can tune your portal with portal-ext.properties, remembering not to modify the properties set in this blog.

Important (and new) things:

As you can see, we take care of telling the Setup Wizard where to read the new properties file (the include-and-override property), and we also configure the database with JNDI, but of course you can do it with JDBC in the usual way:
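For reference, the usual JDBC properties in portal-ext.properties look like this (MySQL is shown only as an example; adapt the driver, URL and credentials to your database):

    # portal-ext.properties -- direct JDBC connection (example values)
    jdbc.default.driverClassName=com.mysql.jdbc.Driver
    jdbc.default.url=jdbc:mysql://localhost/lportal?useUnicode=true&characterEncoding=UTF-8
    jdbc.default.username=liferay
    jdbc.default.password=liferay

    # ...or, when using JNDI as in this post, point the portal at the data source
    # defined in context.xml (the JNDI name here is an example):
    jdbc.default.jndi.name=jdbc/LiferayPool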