What are the drawbacks if I don't share these files? None, from my experience. When doing a fresh checkout of a project, IDEA asks whether I'd like to create a new project. Yes please, and all files are generated correctly.

The only annoyance we have had at the company from time to time is when someone accidentally adds an .iml file to the VCS. There's a feature in IDEA to stop it from asking:

Settings -> Version Control -> Ignored Files

This setting has to be applied in every project separately, and thus it's not very helpful in this case: for every new project I check out, IDEA asks me (file by file) whether I want to add the .iml files to version control, and I just tell it to stop asking:

Conclusion

For the time being we keep IDEA files out of the VCS. Our projects are usually multi-module maven, with lots of iml files. Your mileage may vary.

Friday, November 29, 2013

For many years I've been using the PuTTY network terminal to connect to remote servers when doing sysadmin tasks. And when I do, my monitors quickly fill up with those black windows. Remember web browsing before tabs? Like that. I totally lose track.

The other thing that's annoying is that it does not let me categorize my saved sessions. All are in one alphabetical list. That might work when you just have a handful, but my list grew to dozens. Meh.

With a user interface like that on the site, it would be too much to expect gimmicks in the program itself. Yes, there was an update. Security fixes.

So I googled for what else there is. And apparently there are plenty of wrappers around PuTTY that bring exactly what I want: Tabs and Session Folders. I don't know how they could hide from me for so long - maybe because neither the official site nor the Wikipedia page mention any.

I've decided to go with this open source and actively maintained one: superputty

Well then, what's the difference?

The assert statement

Unless assertions are actively turned on when running the program (-ea), they are not executed. They are still in the bytecode, but they don't run.

Bugs go unnoticed. These bugs should have been detected already in unit tests and the testing phase before deployment when running with assertions on. But experience taught us that some slip through.

No performance loss when they're off. There are only a few rare scenarios where it matters at all.

Side effects are possible. Badly programmed assertions can cause nasty side effects. Because sometimes the assert code is executed and other times it's not, the program may behave slightly differently.

Using the assert statement leaves it to the runtime configuration whether the checks run or not. The choice is left to the person who runs the program, and it can be made from run to run; no recompile is needed.
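To make that concrete, a small sketch (class and invariant are made up):

```java
public class Account {
    private long balanceCents;

    public Account(long initialCents) {
        this.balanceCents = initialCents;
    }

    public void withdraw(long amountCents) {
        // Checked only when the JVM is started with -ea; otherwise skipped entirely.
        assert amountCents > 0 : "non-positive amount: " + amountCents;
        balanceCents -= amountCents;
    }

    public long getBalanceCents() {
        return balanceCents;
    }
}
```

Run with `java -ea ...` to activate the check; without the flag, the very same bytecode runs with the assertion skipped.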

The AssertionError

The asserting code is always executed. Changing your mind about execution means a recompile and re-deployment.

Bugs always show up. Even in production they abort the program.

Performance loss. In most cases it's irrelevant.

Guaranteed to have no side effects. The executed code is always the same.
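For contrast, a sketch of the explicit AssertionError style (class and method are made up) - the check runs on every invocation, with or without -ea:

```java
public class Transfer {
    /** This check is always active; changing your mind means recompiling. */
    public static long subtract(long balanceCents, long amountCents) {
        if (amountCents <= 0) {
            throw new AssertionError("non-positive amount: " + amountCents);
        }
        return balanceCents - amountCents;
    }
}
```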

Vital or nice to have?

Programs were written in Java long before assertions were available. Some came up with their own self-made assertion logic - which was not as clean and short as the real thing. And most probably didn't bother at all.
Assertions are not allowed to alter the program logic, were not always available, and are not mandatory. One can very well run the same program with them turned off. Conclusion: nice to have.

However, it is not guaranteed that a specific bug ever shows up without assertions running. Let's take a flight simulator for example. Assume a computation bug that only occurs in a rarely executed code path. The bug can cause the airplane to fly at slightly reduced speed, and no one ever notices. Or it can be a number overflow, causing the plane to fly backwards. That will surely be seen each time.

Would assertions help in production code?

It depends on the domain. Assertion means abort. Is that what you'd want, to prevent the worst? Or would you rather try to go on, hoping that it's a minor, negligible bug?

End-of-day accounting program: rather abort, alert the technicians, they fix it, and go on. No damage done, and bug fixed.

Real-time program where thousands of employees depend on it, and an abort means a big loss of work time. No abort. Hope that the bug eventually shows up as a side effect, and it can be traced and fixed.

Program where an abort is the worst case scenario, like a flight simulator: go on.

Conclusions

assert, not AssertionError

When writing library code, assert is the right choice. It gives the user the power to decide whether assertions should be on or off - whether he favors detection plus abort, or letting bugs go unnoticed.

When writing an application, use assert as well. If you want to enforce assertions, you can still hack in this piece of code, and if you change your mind later, you don't have to go through all assertions and replace them:
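The snippet referenced above didn't survive the copy; the commonly used idiom for enforcing assertions looks roughly like this (class name and message are my own) - it exploits an intentional side effect inside an assert:

```java
public class AssertionCheck {
    /** Detects whether assertions are enabled, via an intentional assignment side effect. */
    public static boolean assertionsEnabled() {
        boolean enabled = false;
        assert enabled = true; // only evaluated when the JVM runs with -ea
        return enabled;
    }

    public static void main(String[] args) {
        if (!assertionsEnabled()) {
            throw new IllegalStateException("This application must run with -ea");
        }
        // ... start the application
    }
}
```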

However, in some cases, it might be more suited to throw an exception instead of an error, for example an UnsupportedOperationException. In the above case an exception barrier could catch that, then turn all semaphores at the intersection to blinking orange for a minute, and then restart the green interval as it would from a clean program start and continue normal operation. That would have several advantages:

be cheaper than sending a technician

the semaphore is off for a short time only

while it's off it's blinking orange rather than totally dark, which seems safer

The problem would be detected in both cases. With an AssertionError it's in the output for free. And with UnsupportedOperationException it's the task of the one catching the exception to log it.

So again, the choice is between a hard abort, or giving the program the chance to recover and continue if a higher level decides to do so.

What I'm missing: detection and logging!

In the production scenarios 2 and 3 from above I'd want a 3rd way of handling assertions, which is not offered by Java's assertion feature.

Java has 2 strategies:

Don't even check, hence no abort

Check, and conditionally abort or continue

And I'm missing the 3rd one with the behavior:

Run assertion code to detect bugs, and don't abort, but log instead. It's a clear bug, it's detected, so log it. There's no cheaper way to detect it than right here, in the assert code that was written already.
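Java doesn't offer this third mode, but it's easy to sketch what I mean (helper name and use of java.util.logging are my choices):

```java
import java.util.logging.Logger;

public final class SoftAssert {
    private static final Logger LOG = Logger.getLogger(SoftAssert.class.getName());

    private SoftAssert() {}

    /** Checks like an assertion, but logs a failure instead of aborting; returns the condition. */
    public static boolean check(boolean condition, String message) {
        if (!condition) {
            LOG.severe("Assertion failed: " + message);
        }
        return condition;
    }
}
```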

Configure your IDE to run with assertions

It strikes me that the default run configurations in the most common IDEs do not have assertions enabled. It actually happened to me that I spent way too much time chasing a bug when an assertion would have shown it instantly - but assertions weren't on.

In IntelliJ IDEA: You need to modify the Defaults for all run configuration kinds you're using (Application, JUnit, TestNG, ...), and changing the defaults does not change the run configs you've created earlier. And you need to do this in every project separately. (Please add a comment if you know how to set this once and for all!)

Suggestion 2: for the machine, program logic, concatenatable

Both have their advantages and drawbacks. It certainly needs a concatenatable method, but that can be named "getString()". The debug method is nice to have - you see, just by looking at the characters you can't tell whether it's the Latin or the Cyrillic A, or what kind of whitespace it is.

Unfortunately, in this case, expanding the object in the debugger doesn't help, because only the code point is stored as a property of the object. The other information (character and name) is computed:

Analysis of Java's toString()

Returns a string representation of the object. In general, the toString method returns a string that "textually represents" this object. The result should be a concise but informative representation that is easy for a person to read.

Both my suggestions follow the specification. The one with debug info is easier to read for a person. But I'm not sold yet.

Who calls toString()?

JDK methods: String.valueOf(Object obj), and thus every method that uses it, such as PrintStream.print(Object o); Arrays.toString(Object[] a) for every object in the array a; StringBuilder and StringBuffer, where toString() is used as the build method.

Logging: System.out.println(), and your favorite logging framework.

Debugging: your favorite IDE in the debugger.

Hrm. So there are mainly 2 uses:

String representation: toString() returns the object's value "as string" as closely as possible. It is absolutely required to override toString(), and to do it in this way.

How does Java in the JDK define toString() in their classes?

For some simple value classes there's not much choice. Integer for example: returning "-43" makes sense.

Character could return more than just the character as string, but it does not. String could tell the length and cut it if it's too long, but it does not. StringBuilder and StringBuffer could report the appended chunks separately, tell how many there are, and cut, but they don't. If they did, the classes would need a separate method for string concatenation, and concatenation with + would not work anymore.

Now here's an observation: they all implement CharSequence, which was added in JDK 1.4, and it overrides the toString() method signature just to say something about it:

Returns a string containing the characters in this sequence in the same order as this sequence. The length of the string will be the length of this sequence.

So that's why.

Conclusions

My class UnicodeCharacter is a wrapper around a Unicode code point, just like Character is a wrapper around the char primitive. It represents a character, including those that don't fit into a single char. And as such it really should implement the CharSequence interface. Then the decision is made: toString() must be suggestion 2, returning only the character's string value.
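A sketch of how such a class could look (the real class may differ; method names beyond the CharSequence contract are my own, and Character.getName needs Java 7+):

```java
public final class UnicodeCharacter implements CharSequence {
    private final int codePoint;

    public UnicodeCharacter(int codePoint) {
        this.codePoint = codePoint;
    }

    @Override
    public int length() {
        return Character.charCount(codePoint); // 1 or 2 chars (surrogate pair)
    }

    @Override
    public char charAt(int index) {
        return toString().charAt(index);
    }

    @Override
    public CharSequence subSequence(int start, int end) {
        return toString().subSequence(start, end);
    }

    /** Per the CharSequence contract: exactly the character's own string value. */
    @Override
    public String toString() {
        return new String(Character.toChars(codePoint));
    }

    /** The debug representation, kept out of toString(). */
    public String toDebugString() {
        return String.format("U+%04X %s", codePoint, Character.getName(codePoint));
    }
}
```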

In some rare cases it would be nice to have 2 different methods: one for the string value (toString()) and one for the debug info (toDebug() or toDebugString()). The method could be defined in Object, with a default implementation: calling toString().
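Object itself can't be changed, but since Java 8 the same effect can be sketched with an interface default method (interface name assumed):

```java
public interface Debuggable {
    /** Detailed representation for logs and debuggers; falls back to toString(). */
    default String toDebugString() {
        return toString();
    }
}
```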

Wednesday, October 30, 2013

There are numerous extensions to bring authentication and access control to your Play-based website. For example, the Deadbolt 2 authorization system lets you define access rights per controller and method. Or you can roll your own, as this step-by-step guide shows.

What to do if you just want basic http authentication for the whole site? As of today there's no such thing built in. (If this changes, let me know...)

If you're serving your Play pages through Apache as reverse proxy, you're lucky.

Going through a reverse proxy is a good idea anyway:

You get the option for load balancing and failover: run multiple instances.

Run multiple Play sites on the same machine, on whatever port number, and expose them all on port 80 to the outside.
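For the basic-auth setup itself, a sketch of the vhost (assuming mod_proxy and mod_auth_basic are enabled; domain, port and paths are placeholders):

```apache
<VirtualHost *:80>
    ServerName example.com

    # Protect everything with basic auth
    <Location />
        AuthType Basic
        AuthName "Restricted"
        AuthUserFile /etc/apache2/.htpasswd
        Require valid-user
    </Location>

    # Forward all traffic to the Play app on port 9000
    ProxyPass        / http://127.0.0.1:9000/
    ProxyPassReverse / http://127.0.0.1:9000/
</VirtualHost>
```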

Wednesday, October 9, 2013

(This post is only of interest to people having the same situation, probably finding this through Google search.)

Problem

After finishing the last batch of Windows 7 updates with some issues (blue screen), my computer was fine, all programs worked normally... except for one: IntelliJ IDEA just wouldn't start. I was running version 12.1.3, and upgrading to 12.1.6 did not fix it. The earlier, still-installed version 11 would not work either.

The process started normally, as seen here, but the application was missing:

The Windows event viewer had no message about it.

There was a thread dump in the IntelliJ IDEA log folder c:\users\my-username\.IntelliJIdea12\system\log\threadDumps-timestamp\...txt with a time of the program start, but the text within the file was not helpful to me.

Solution

What solves the problem is to right-click the program in the Windows start menu, and select "Run as administrator".

Monday, June 17, 2013

as a person concerned about my privacy I use multiple public profiles, multiple email addresses

sometimes I use a proxy server to watch ip-restricted tv from another country

Simultaneous: I don't want to log in and out of accounts all the time.

Persistent: It's nice when my computer remembers logins, preferences, proxies, bookmarks, cookies and all.

My primary browser is Chrome. It's fast and stable and has all I need for most tasks. But it only lets me use one session at a time (besides the temporary incognito ones). Some Google products such as Gmail allow multi-sign-in, but as the page says at the end, some products don't. These include Google Groups and Blogger. And I need them (this is a Blogger blog). But Google wouldn't be Google if it didn't offer solutions to the problem:

sign out, and in with other account

sign in with another browser

So another browser it is.

I've tried using Opera for one of my accounts. But it's no pleasure. Gmail in Opera sucks. Google Groups in Opera sucks. And Blogger for writing text is horrible (it's bad enough in Chrome already). One week ago I had 5 tabs open in Opera and it consumed 1.8 GB of RAM. No thank you.

Anyway, I might want more sessions than there are acceptable browsers.

Firefox to the rescue

Our relationship has become a bit rusty over the years, but I still remember the good times we've shared. It feels sluggish, like a heavy KV-1 tank compared to a light Luchs, in World of Tanks terms. But one great feature it has: it not only allows multiple profiles (as does Chrome), but also runs them simultaneously. The trick is to make it ask for the profile not just at the first start, but at every start. And here it is:
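The exact steps got lost in this copy, but the usual recipe (assuming Windows; adjust the path to your installation) is a shortcut that starts Firefox with the profile manager and allows parallel instances:

```
"C:\Program Files\Mozilla Firefox\firefox.exe" -no-remote -P
```

Alternatively, setting StartWithLastProfile=0 in profiles.ini makes Firefox show the profile chooser at every start.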

Wednesday, April 10, 2013

This is a follow-up post to last week's "ExtJS: desperately need a debug version". A user comment pointed me to the "-dev" version. Apparently there is such a file now, containing some lines of error checking (not too many):

And, does it help?

A couple of days later I upgraded my app (still in development) from ExtJS 4.1 to 4.2. All seemed fine. There were not too many backward-incompatible changes this time. Plus, I have the dev version now, which would alert me of problems, right?

A screen using 4.1:

The same screen with 4.2:
<link rel="stylesheet" href="http://cdn.sencha.io/ext-4.2.0-gpl/resources/css/ext-all-gray.css"/>
<script src="http://cdn.sencha.io/ext-4.2.0-gpl/ext-all-dev.js"></script>

The search field on the toolbar is missing. Just gone. Nothing in the console. The grid search extension is from here but that is irrelevant.

Another week passed by, and another case cost me 15 minutes of searching: I've created a new form panel with just one field, copy-pasted from an official example.

{ fieldLabel: 'My field', name: 'myField', allowBlank: false }

Loading it in the browser... but the panel remains empty (except for the button bar). No error or warning output, of course. What's wrong? Something with the panel dimensions? The panel has no width or height?
(ExtJS developers will know...)
Trying all kinds of combinations. Turns out the problem is very, very basic. I forgot to specify what kind of field it is (xtype: 'textfield'). The example had it configured in the parent object as 'defaultType', and I did not copy that. So ExtJS was supposed to render an element to the screen, but it was not told what kind of element it is. What does it do? Apparently nothing... How about at least logging a warning?

Conclusion

Nothing has changed for me. Making mistakes or upgrading ExtJS means digging.

Thursday, April 4, 2013

I missed it when using ExtJS 2, then with ExtJS 3. Now I'm using ExtJS 4 for a new web user interface, and the situation is the same: I desperately need a debug version of ExtJS!

On the plus side: The best web gui toolkit

ExtJS is, in my opinion, the most complete JavaScript library for making desktop-like apps in the browser. It has been around for a long time, and is actively developed. It's relatively easy to extend components and develop your own. The community is large, the documentation is OK, and it's possible to create useful, functional, "rich" user interfaces. It's possible to get the job done.

Beware: It's a full time job

The examples look nice, and one could get the idea that it's a quick and easy path to developing your own rich gui. Nah. It's all trial-and-error, F5-style development. It takes a lot of time and effort to get halfway fluent with it. There are many gotchas to learn about.

It sucks the energy out of me

For me, working with ExtJS is no pleasure. It brings a new flow stopper way too often. Why?

The nature of JavaScript. JavaScript is not a type-safe language. There's no safe refactoring. No compile-time confidence. That's a fact, nothing to improve here.

ExtJS performs no checks. No input checking (preconditions, assertions), no checks for common errors.

Let's look at an example:
"Uncaught TypeError: Object #<object> has no method 'read'"

Aha, some script execution error occurred. Let's check the detail:

Not that helpful. Something internal in ExtJS on line 39145 failed. ExtJS does not catch this one (or any other) and tell me what the real problem is. Nor does the stack trace go back to my userland code.

In this case the problem was that I did not include some data store model prior to using it. My fault. But it doesn't have to be. There are plenty of reasons for such time-consuming debugging, just because problems are never detected and are thrown at the last moment, deep down:

Developer error: wrong API use

ExtJS bug

ExtJS annoyance

Old code: upgrading an older app, or copying an older example from the internet

But wait: I'm using "ext-all-debug.js"

JQuery got it right:
The original JS file, with apidoc: http://code.jquery.com/jquery-1.9.1.js
The minified version for production: http://code.jquery.com/jquery-1.9.1.min.js

This has become the standard, and makes perfect sense. The minified file name contains the ".min". And they both contain the version number - this way there's never a web browser having a stale version in the cache.

What I (and probably every other ExtJS developer) need is a version that contains code with input checking to throw as early as possible, with a meaningful message. You know... common programming practices... absolutely mandatory for any library code.

Summing it up

As it is now, you have to test every single functionality to be sure it works. Upgrading an app from ExtJS 3 to ExtJS 4 is no fun. Lots of things don't work anymore, but you have to figure out a) which and b) what to change. And the same will be true for version 5, 6 and 7. And any minor upgrade can break things too.

If you have ui tests then at least you will know about some defects early. But you still need to figure out the hard way how to fix them. And writing and maintaining automated ui tests is another full time job.

Please, Sencha, do yourself and every one of the 2 million developers a big favor: add error checking to your libraries, and strip it from the "min"-ified production version for equal performance and file size.

I never gave it much thought. A db connection pool just manages the connections... handing them out and taking them back. How hard can that be? It's not something I want to spend time on. There's that project from Apache, and I like their HTTP server. So it's probably a safe bet. I thought.

What's the quality of commons-dbcp?

The hottest question about commons-dbcp on stackoverflow is this one: Someone asking whether to use Apache DBCP or C3P0.

The top answer with many agreements says:

DBCP is out of date and not production grade. Some time back we conducted an in-house analysis of the two, creating a test fixture which generated load and concurrency against the two to assess their suitability under real life conditions.

DBCP consistently generated exceptions into our test application and struggled to reach levels of performance which C3P0 was more than capable of handling without any exceptions.

C3P0 also robustly handled DB disconnects and transparent reconnects on resume whereas DBCP never recovered connections if the link was taken out from beneath it. Worse still DBCP was returning Connection objects to the application for which the underlying transport had broken.

Since then we have used C3P0 in 4 major heavy-load consumer web apps and have never looked back.

The post (from 2009) then goes on and says that recently there was some work on commons-dbcp. Now, 4 years later, what happened in the meantime?

What's the status of commons-dbcp?

The website was last updated 2.5 years ago.

The last release 1.4 is 3 years old, 1.4.1 is a snapshot.

The website is broken:

It has dead links (for example to the official Javadoc).

The logo image on the left doesn't load.

And it has character set issues to display the text (those ? signs).

Maybe these errors crept in when the website URL changed from http://commons.apache.org/dbcp/ to the one it redirects to. Anyway, they don't seem important enough to be fixed.

Commons-dbcp lives in a Subversion repo at svn.apache.org (C3P0 is on GitHub).

My issue

The reason why I had to kick commons-dbcp out was that it broke my application. After a couple of hours it stopped handing out connections, and all threads were in WAITING state.

After some reading I tried c3p0, ran the long-running, high-concurrency tests where commons-dbcp choked, and it all worked fine. And after some more reading I was convinced that the problem was solved.
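For reference, this is the sort of c3p0 tuning involved - a sketch of a c3p0.properties on the classpath, with illustrative values rather than recommendations:

```properties
# c3p0.properties (picked up automatically from the classpath)
c3p0.minPoolSize=5
c3p0.maxPoolSize=20
c3p0.acquireRetryAttempts=30
c3p0.idleConnectionTestPeriod=300
c3p0.testConnectionOnCheckin=true
```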

But it works for me...

Maybe it does. Maybe you just don't hit it hard enough yet. And maybe you don't get to see all errors. As reported by others there are issues with commons-dbcp, and it looks like a dead project. Also, Hibernate comes with c3p0, not commons-dbcp.

But dbcp is faster ...

I've seen statistics where one or the other product stands out. Any connection pool is fast enough for me... as long as it keeps doing its main job.

Other alternatives

I did not look into other, newer products because I'm happy with one that has been in use for a long time.

Injection by type or by name

When the @Resource annotation was introduced, I switched to it to not have Spring as a hard dependency. I never used dependency resolving by name, and relied on the fallback by type. This bit me a couple of times with code like:

@Resource
private SomeVeryDescriptiveClassName className;

when another bean with the name className existed, because then that one was injected - or injection was at least attempted.

Now I'll switch from @Resource to the newer @Inject which does exactly what I want.

Optional dependencies

Unfortunately, both of the standard Java DI annotations lack the attribute that @Autowired provided: @Autowired(required=false)

Every couple of months I find myself in a situation where I really need an optional dependency. If there isn't exactly one implementation, injection throws (having multiple works too, if you use the @Qualifier annotation).

An example is an interface provided by a framework, and the userland code may provide an implementation if desired.

Without optional injections one way to solve this is with the "priority" int. I use this concept when multiple implementations of an interface might exist, maybe including a system provided default implementation, and only the most important one should be used:

public interface Foo {
int getPriority();
void doSomething();
}

Each implementation returns a number, and the highest one wins in the injection:

@Inject
private void setFoos(List<Foo> foos) {
    Foo best = foos.get(0);
    for (Foo f : foos) {
        if (f.getPriority() > best.getPriority()) best = f;
    }
    this.foo = best;
}

I prefer to keep it short, and thus kept using the proprietary @Autowired annotation in these rare cases. But @Inject has its own way of handling this too (thanks to rliesenfeld's comment):

Friday, February 8, 2013

Recently I've launched the beta of a new website. The instant response I got from my programmer buddies was (did you click? you're probably a programmer too... does it match your first thought?) a consistent "Bootstrap!" shout.

My first impression was that it means "I know it too, I'm up to date". Maybe that's part of it, but it's certainly not all. The feeling I have now is that using Bootstrap is cheating.

Just like most people think that work must be hard and no fun (otherwise it's not work), do people believe that making a nice, consistent UI must be troublesome and hard? Or is using the standard theme without customization not acceptable?

For other technologies I never got such a reaction in the past. "HTML!" or "ExtJS!" ... never heard.

Bootstrap isn't perfect, but it serves its main purpose well. At a very low cost it brings a consistent, clean and clear user interface. Users want intuitive, standard interfaces, common patterns. Figuring out how each site works with a hand-knitted GUI sucks too much energy.

It wastes too many brain cycles to figure out which button to press. I've certainly pressed the wrong one in the past. And for the users who have to fall back to a secondary language because there is no translation for their primary language yet, it's worse:

Now compare this to bootstrap buttons:

I'm not sure I've chosen the correct Google translate offerings... and it doesn't even matter. Color and size suggest the meaning already.

The aforementioned Surfr platform uses the traditional "..." on button labels to indicate that no harm is done pressing this button, another screen with information or options will appear first. For actions that perform data modifications (such as saving a record) an exclamation mark is appended to the action's name: "Save!". And actions that can't be undone are additionally styled with a warning color:

The critical among you will say "you can do this with ExtJS too". Of course you can. You can do everything from scratch. Fact is, the average site using Bootstrap is easier to read and use compared to the average self-made ui site - at almost no development cost.

I like standards. I like simplicity. I hate to deal with css and browser issues. Conclusion: Call me a cheater... but I like Bootstrap.

Wednesday, February 6, 2013

My posts are usually rants and complaints. If there's nothing wrong, there's no reason for change, so why bother to write. Not so this one, it's full of praise and worship.

Nah, just kidding. It starts with rant.

There are the classical types of dishonorable businesses. Telco is the worst that comes to mind. Contract lock-in, opaque calling and roaming costs, etc.

Another bad one is domain name registration... or at least used to be. It started for me in the good old days when internic (now Network Solutions) was the only registrar for com/net/org. Their address was at internic.net, and a cunning company from Australia registered internic.com, tricking the visitors into believing it's the real thing. The gangster Peter Zmijewski was later charged with fraud, and apparently some 13k victims got money back. Not me.

Today, there are a ton of official registrars who offer perfectly legal domain registration. The price is usually slightly above what they need to pay to the registry - for .com that is VeriSign, at $7.85 per year. Just like Telco it's a tough business with small margins. Some charge a lot and can do it thanks to their market position. Network Solutions, for example, asks for $34.99. Crazy. GoDaddy is very popular, which allows them to charge $13.17, keeping about $5. Others offer low prices in the single-digit range... and to maximize profit, they try to trick the customer in several ways. Not so namesilo.

Renewals cost the same. (That's where other registrars try to cheat. First year cheap, then whoops. Hard to switch.)

Extra features are included for free. (Whois privacy, preventing unauthorized record changes. That's where others try to sneak in costly services, possibly free the first year, and then charged automatically.)

No hidden cost. (I'm repeating myself here, but I just have to say it.)

That's the kind of business model I like. Just domains, no hosting and up-selling and crap. Fair, transparent, clear.

So if you see an offering like GoDaddy is currently advertising on Google Adwords "$2.95 COMs at Go Daddy" then you know something is wrong. Either you'll pay in the long run, or you have to buy their hosting, or so.

For the trendy .co domains I use namecheap. Don't like them, they're exactly the kind I described above. But namesilo doesn't do .co.

Monday, January 28, 2013

This post is about the very legitimate automated emails generated by any application, such as for transactions and signups. Getting these delivered to the inbox (instead of spambox or nirvana) is the goal.

The short answer:

Sooner or later you may hit the sending limit. It's not when you deploy, and maybe not while beta testing. So better know the limits, risks and alternatives.

The longer answer:

If your sender address is hosted on Gmail then using their SMTP server is the obvious choice because:

You already have access to it

High availability

Unlikely that Gmail's smtp servers get added to block lists such as this one

How to connect to Gmail SMTP

The email address you use to log in must be a real account, not an alias. If you prefer to send from a different address, use the "from" field. Not modifying the "from" and "reply-to" increases your chances of getting delivered to the inbox.
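For a JavaMail-based sender, the corresponding session properties would look roughly like this (assuming port 587 with STARTTLS):

```properties
mail.smtp.host=smtp.gmail.com
mail.smtp.port=587
mail.smtp.auth=true
mail.smtp.starttls.enable=true
```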

3 types of Gmail accounts

There are three types of Gmail addresses:

The common Gmail domains: anything @ gmail

Google Apps for free: yourname @ yourdomain

Google Apps pro: yourname @ yourdomain

If you go and sign up for a new Gmail address from your desktop, then try sending mail from your (remote) server location, you'll get the 535 error quickly (it's the typical spammer pattern). You need to verify your account by SMS, and mailing goes on - for a short while. It appears that such accounts can only send to a handful of different email addresses per day. I was not able to find official statements and numbers. The limit is probably this low only for new accounts, so if you have an established one it may work longer.

If you have your own domain set up for Gmail then the limit is higher. It makes sense since you have a public whois record. Google disabled signups for the free apps service a couple weeks ago. That's probably why I cannot find official information about the limits. The number of recipients per day is quoted as 500 on the internet. If you have such an account already then you can continue using it.

For the paid account the official page says 2000 unique, external recipients per day.

Other risks

The official page has another fact:

"The value of these limits may change without notice in order to protect Google’s infrastructure"

Also, I've found unofficial/unverified information about Google lowering the daily send limit on high bounce rates. This makes perfect sense; spammers have high bounce rates. This is an open door to malicious users of your app: sign up with a couple invalid addresses, and your email system may be interrupted for a while.

Using your own SMTP

If you decide now that Gmail SMTP is not for you, there are some things to consider with your own.

If you access the SMTP of your provider, then you may face similar limits there. After all your provider has to make sure their customers don't spam, and not the whole server gets blacklisted. But this can and does happen nevertheless: Either because one of the other users spammed, or because one account was hijacked and abused. As a result your email messages may be accepted by the SMTP, but never make it to their destination. Maybe you get bounces, maybe not.

Conclusions

A combined approach

I still believe that using Gmail's outgoing mailserver has its advantages. They are reliable, and in case of denial they return clear status codes. A solution with Gmail as primary, and your server as fallback, sounds like a good idea to me.

Further reading

Not for marketing

Given the limits and risks, I'd definitely not use Gmail for sending anything that could be marked as spam by the receivers: marketing, newsletters, even if the user at some point actively asked for it. Only send high-priority mail such as transaction confirmations through Gmail SMTP.

Monday, January 21, 2013

The other day I needed to limit the items in a list based on traditional offset and limit criteria. I could not find a ready-made function on the net. And because this bears potential for little bugs, I wrote a library function with unit tests. Here's my take:
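The function itself didn't survive the copy here; a minimal sketch of such a helper (name and clamping semantics are my assumptions) could look like:

```java
import java.util.Collections;
import java.util.List;

public final class ListUtil {

    private ListUtil() {}

    /**
     * Returns the view of the list starting at offset, with at most limit elements.
     * Out-of-range values are clamped instead of throwing.
     */
    public static <T> List<T> offsetLimit(List<T> list, int offset, int limit) {
        if (offset < 0 || limit < 0) {
            throw new IllegalArgumentException("offset and limit must be >= 0");
        }
        if (offset >= list.size() || limit == 0) {
            return Collections.emptyList();
        }
        int to = (int) Math.min((long) offset + limit, list.size());
        return list.subList(offset, to);
    }
}
```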

Step 1: Define the languages.

In conf/application.conf add/edit the variable: application.langs="en,de,ru"

Note:

I had to put the values in double quotes; without quotes, as shown in the documentation, it failed.

Order the languages by priority.

Step 2: Add the texts

As in the documentation create the files conf/messages, conf/messages.de and conf/messages.ru.

Create the files as UTF-8 files so that any text works, not just western. And don't add the BOM because Play 2.0.4 can't handle such files. On Windows you can create the files with Notepad: "save as..." and choose UTF-8.

Note:

If a text is not translated to a certain language then it always falls back to the default language.

It doesn't seem to be supported to separate content by file. All goes into the one messages file. As I said... small/medium sized sites only.

Step 3: Figure out that the default handling isn't suitable

Now you're ready to use the texts in Java code and in templates.

Messages.get("my.message.key")

Messages.get(new Lang("en"), "my.message.key")

The problem: Either you pass the user's language around everywhere (no way), or you settle for the built-in language selection (maybe). That is: using the first language from the user's browser settings for which you have translations.

For me, that was not acceptable. My website lets the user change the language. Unfortunately, Play does not offer a simple way to override the lookup of the default language. I read that it will be supported in version 2.1, and that there are easier ways to override it in the Scala version. So here's what I did.

The guest's language is auto-detected, and he's redirected to his language. This way I'm not serving duplicate content on multiple URLs. Currently I'm doing the same with other content pages (2 routes, with and without language) but it's not really necessary as long as no one links there.

Step 7: Client side i18n

My server-side messages file contains all texts as used in Java code, plus static webpage content. The client side only needs a couple phrases in JavaScript. That's why I've decided against streaming the whole messages file to the client. Instead I've created 3 files (UTF-8 again) messages.en.js etc. and serve only the one to the client: