Wednesday, November 15, 2017

The answer is that it will fail, since the second assertion, assertTrue(set.contains(myEntity)), will not find myEntity in the HashSet.
But it's there, right? It was never removed.

So what we have here is:
1. A business problem, since it's impossible to retrieve an object from the set even though it is there.
2. A memory leak, since the unreachable entry can never be removed through normal set operations.

But how did it happen?

The problem is with Lombok's @Data annotation. It's a very convenient annotation that auto-generates the standard Java utility methods: equals, hashCode, and toString, as well as relevant constructors and getters.
Yes, it's very convenient, but it introduces a hidden problem: equals and hashCode include all fields, so when the value of a field changes, hashCode returns a different value. Therefore, the object cannot be found in a HashSet anymore.

This problem is not unique to Lombok. Exactly the same problem will occur if you write the methods yourself using mutable fields, or if you use any other code-generation or reflection tool.
However, if you write the code yourself, the problem is a bit more visible, while with Lombok it's kind of voodoo.
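For illustration, here is a hand-written equivalent of what happens (a minimal sketch with a hypothetical MyEntity class, no Lombok involved), reproducing the failing lookup described above:

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Hypothetical entity whose equals/hashCode include a mutable field,
// just like the methods Lombok's @Data would generate.
class MyEntity {
    private String name;

    MyEntity(String name) { this.name = name; }

    void setName(String name) { this.name = name; }

    @Override
    public boolean equals(Object o) {
        return o instanceof MyEntity && Objects.equals(name, ((MyEntity) o).name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name); // changes whenever 'name' changes
    }

    public static void main(String[] args) {
        Set<MyEntity> set = new HashSet<>();
        MyEntity myEntity = new MyEntity("a");
        set.add(myEntity);
        myEntity.setName("b"); // mutate AFTER insertion: hashCode now differs
        System.out.println(set.contains(myEntity)); // false - lookup uses the new hash
        System.out.println(set.size());             // 1 - the entry is still there
    }
}
```

The lookup fails because contains computes the bucket from the current hashCode, while the entry still sits in the bucket chosen from the old one.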

The best practices here are not specific to Lombok or any other library, and they are quite simple:
1. As much as possible, try to make your classes immutable.
2. Even if a class is mutable, are all of its fields mutable? Use only immutable fields in hashCode and equals and you will be safe.
3. If a class is completely mutable and you cannot rely on any immutable fields, reconsider whether you need to override hashCode and equals at all. Is the default implementation sufficient?
4. If none of the above works for you, document it. Put a HUGE WARNING in the Javadoc explaining why users of the class must be careful if they decide to store instances in a HashSet or as keys in a HashMap.
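Practice 2 can be sketched like this (a hypothetical SafeEntity class with an immutable id; only that field participates in equals and hashCode):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical entity: equality is based only on the immutable 'id' field,
// so mutating 'name' never changes the hash bucket.
final class SafeEntity {
    private final long id;   // immutable, used in equals/hashCode
    private String name;     // mutable, deliberately excluded

    SafeEntity(long id, String name) {
        this.id = id;
        this.name = name;
    }

    void setName(String name) { this.name = name; }

    @Override
    public boolean equals(Object o) {
        return o instanceof SafeEntity && id == ((SafeEntity) o).id;
    }

    @Override
    public int hashCode() {
        return Long.hashCode(id);
    }

    public static void main(String[] args) {
        Set<SafeEntity> set = new HashSet<>();
        SafeEntity entity = new SafeEntity(42L, "a");
        set.add(entity);
        entity.setName("b"); // safe: 'name' is not part of the hash
        System.out.println(set.contains(entity)); // true
    }
}
```

With Lombok, a similar effect can be achieved by limiting the generated methods to chosen fields, e.g. @EqualsAndHashCode(of = "id"), instead of relying on @Data's all-fields default.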

Friday, October 20, 2017

Have you ever wanted to use the output of a predefined command as auto-completion suggestions in zsh?
For example, you may have a script that accepts only predefined values as input.
Actually, it's possible and quite easy.

First, you will need a file that will be invoked to auto-complete the command.
Here's an example:
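The original snippet is not preserved here, but a minimal sketch of such a completion file could look like this (the file name _mycommand is an assumption; HERE_COMES_SHELL_COMMAND is a placeholder, explained below):

```zsh
#compdef mycommand
# Sketch of a zsh completion file, e.g. saved as _mycommand.
# HERE_COMES_SHELL_COMMAND stands for any command that prints
# one completion candidate per line.
local -a candidates
candidates=($(HERE_COMES_SHELL_COMMAND))
compadd -a candidates
```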

You can replace HERE_COMES_SHELL_COMMAND with any command. For example, it can be "cat ~/myfile" to read the options from a file.
#compdef defines the list of commands this file will auto-complete. In the above example it's mycommand; change it to your actual command.
As I already mentioned, it can also be a list: #compdef mycommand myscript myprogram will auto-complete any of mycommand, myscript, or myprogram.

Now, you need to tell zsh about your file:
1. Place your file in some directory. For example: ~/.myautocomplete
2. In your ~/.zshrc add the following line: fpath=(~/.myautocomplete $fpath)
3. After this line add the following lines:
autoload -U compinit
compinit

On macOS I used the following instead:
autoload -U compaudit compinit

4. You may need to delete files that start with .zcompdump in your home directory.
5. Start a new shell, and it should work.

If you are using oh-my-zsh, it's a bit easier:
1. Create your directory under ~/.oh-my-zsh/plugins
For example ~/.oh-my-zsh/plugins/myautocomplete
2. Place this file in this directory.
3. Edit ~/.zshrc, find "plugins" and add "myautocomplete" to the list.
4. Start a new shell.

Tuesday, October 10, 2017

There is a very important post about avoiding the "Unable to validate the following destination configurations" in AWS.
Too bad it's not mentioned just next to both S3 and SNS/SQS reference documentation.

BUT! This post is missing an important part: you will get this error even if you didn't specify the TopicPolicy (or QueuePolicy) at all!
Furthermore, you will get this error even if you specified the policy, but it's not correct.
For example, if your policy is too restrictive and S3 would not be able to send events to SNS, you will also get this error! Is that clear from the error's description? Not really. Is it clear from the AWS post above? No, not at all.

So just remember, when you see "Unable to validate the following destination configurations": check the policy. It may be missing. It may be too permissive or too restrictive. Either way, the problem is with the policy, not with the bucket.
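For reference, a minimal sketch of a topic policy that lets S3 publish to an SNS topic (the resource names MyTopic and my-bucket-name are hypothetical; adjust them to your template):

```yaml
MyTopicPolicy:
  Type: AWS::SNS::TopicPolicy
  Properties:
    Topics:
      - !Ref MyTopic
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service: s3.amazonaws.com
          Action: sns:Publish
          Resource: !Ref MyTopic
          Condition:
            ArnLike:
              aws:SourceArn: arn:aws:s3:::my-bucket-name
```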

Wednesday, October 4, 2017

1. Don't specify a resource name unless you absolutely must. This way you can avoid name clashes, since CloudFormation will automatically assign unique names to your resources.
2. If you need to specify a name, include the stack name in it. This way you will reduce the potential name clashes. You can also include a partition and region for resources that are available globally (e.g. S3 bucket names). Note that this will NOT prevent the potential naming clash completely, since somebody else can also use the same name.
3. When creating any IAM resources in your stack, make sure to add DependsOn to the resources that use those IAM resources. Apparently CloudFormation is not smart enough to resolve this dependency tree and handle it without additional configuration.
4. Sometimes the names CloudFormation gives to your resources are completely unrelated to the stack name. Include the ARNs of such resources in the Outputs, so you can easily find them later when needed.
5. A very common scenario in AWS is an S3 bucket that fires events to SNS or SQS when a file is uploaded. Apparently it's impossible to create this in a single change. See this post.
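Points 3 and 4 can be sketched in a template fragment like this (all resource names, the Lambda runtime, and the inline handler are hypothetical illustrations):

```yaml
Resources:
  MyRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
  MyFunction:
    Type: AWS::Lambda::Function
    DependsOn: MyRole        # point 3: explicit dependency on the IAM resource
    Properties:
      Role: !GetAtt MyRole.Arn
      Handler: index.handler
      Runtime: python3.9
      Code:
        ZipFile: "def handler(event, context): return None"
Outputs:
  FunctionArn:               # point 4: expose the auto-generated ARN
    Value: !GetAtt MyFunction.Arn
```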

Sunday, April 10, 2016

One way to print a project's Gradle dependencies is 'gradle dependencyReport'.
However, it creates a very large file with many scopes that are sometimes hard to track.
Sometimes it can be useful just to print the list of dependencies of a specific scope.
A very small script can do the job, and here are some examples:
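The original examples are not preserved here, but one sketch of such a one-liner uses Gradle's built-in dependencies task restricted to a single configuration (the configuration name compile is an assumption; newer Gradle versions use names like runtimeClasspath):

```shell
# Print only the dependency tree of one configuration, without the full report
gradle -q dependencies --configuration compile
```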