I’ve finally got an Android Wear device, and it’s the Huawei Watch. At first I wanted the Moto 360 2nd gen (for the larger screen), but I was unable to buy it from motorola.com, and after several attempts I ended up with a paid order but no watch (the money is deposited somewhere and should return one month after the “purchase”). I’m glad I didn’t end up with a “flat tire” and a pixelated ambient display!

So anyway, I went back to my old app Hexscreen, extracted some code into a separate module, then used it in a new Android Wear application and ended up with this nice, interactive test 🙂

A while ago I made a very small app around a simple idea: you select an image, reveal its details by touching the screen and moving your finger, and once you’re happy with the result, you can share the photo to other apps.

Yesterday I published a new version where I changed the design and added the ability for users to sign in/register with a Google account, then share what they create inside the app so other users can view and like it.

In this post I’m going to share some technical details that I found interesting while improving the app and implementing the backend.

This post is the first in a series about the struggles I, as an Android application developer, must endure when working with backend developers, designers, or other people involved in building a mobile application.

It will contain some points you might not be aware of and some things you can do to make the mobile application developer’s life easier, or at the very least not make it harder than it should be.

Disclaimer: these articles may include my personal opinions and preferences, and you’re welcome to correct any inaccuracies in the comments below.

If you are a mobile application developer, you might also find something useful here, especially if you’re just beginning.

So to start, this first article is dedicated to people who write APIs for mobile applications. You, the backend developer, and me, the app developer, are starting from scratch; no previous API exists.

I wanted to make something similar to the ripple effect, but since product-grade libraries for that specific purpose already exist, I just ended up experimenting with OpenGL ES 2.0, and here’s the result:

Why did I go with OpenGL instead of a canvas? Because shaders :D. With a fragment shader you can get various effects; add post-processing and you get even more! Here I tried something really simple, as shown in the video above.

So a few years ago I did a project for the Genetic Algorithms assignment at college. My idea was to have tiny multi-cell organisms with a simple goal: travel as far as possible! Maybe to find new resources, maybe to escape from predators; you pick one 🙂
So the fitness of a creature is measured by how far it can travel during its life span.

The creatures start with randomly moving cells and evolve into more organized creatures with coordinated movements.
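The original assignment code isn’t shown here, but the core loop (fitness as distance travelled, plus selection and mutation) can be sketched in Python. The creature representation below, a list of per-step displacements, along with the mutation rate and population size, are all illustrative assumptions, not the actual simulation:

```python
import random

def fitness(creature, lifespan=100):
    """Fitness = how far the creature travels during its life span.
    Here a 'creature' is just a list of per-step displacements, a
    stand-in for simulating the movement of its cells."""
    position = 0.0
    for step in range(lifespan):
        position += creature[step % len(creature)]
    return abs(position)

def evolve(population, generations=50, mutation_rate=0.2):
    """Keep the farthest travellers, mutate them to refill the population."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: len(population) // 2]
        children = [
            [g + random.gauss(0, 0.5) if random.random() < mutation_rate else g
             for g in parent]
            for parent in survivors
        ]
        population = survivors + children  # elitist: the best are always kept
    return population

random.seed(42)
population = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(20)]
initial_best = max(fitness(c) for c in population)
final_best = max(fitness(c) for c in evolve(population))
print(initial_best, final_best)
```

Because the survivors are carried over unchanged each generation, the best fitness can only stay the same or improve over time.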

Please watch the video before reading. If the video fails to make you interested, I doubt the text will succeed. You can also download the executable and the source code at the end of this post.

I’m not the kind of guy who reviews stuff, but I wanted to share my experience with the X12 zoom lens that’s available for mobile phones.

First, here’s a photo of the lens I used. I got it for about $23; it comes with a tripod and an adapter that you screw the lens into (adapter & phone not in photo).

X12 Lens

In short:

The lens is good.

The tripod sucks (or I just don’t know how to use it?). No matter how hard I try to fix the angle, the phone/lens rotates slightly under the phone’s weight.

It’s hard to take a video of moving objects, since you have to adjust the lens manually to keep the focus, and it’s not always clear on the screen that the image is out of focus.

If you (like me) prefer to see results rather than read about them, the rest of the post is for you. Before you continue, please note that the post contains lots of large images 🙂.
Video samples are provided at the end of the post too!