I’ve been thinking about measuring developer productivity recently. I’m lucky to work in a great team with varying experience, everyone adding something and pushing each other to do better, either by helping out on a tough problem or giving a nudge to lay down a better foundation.

In this post I want to discuss the subject of Value Objects, their purpose and some ways of easily implementing them in Java, specifically, although not exclusively, within the context of Android development.

Messenger-based services, bots, agents, AI. It looks like app fatigue has led us to look to these for the next green field, something new for VCs to plough their money into, something that feels different.

From time to time technology comes full circle, and here we are again using something like IRC, in the UX slam dunk that is Slack, and setting loose upon it an army of bots… again, just as we did with IRC. Of course both of these things are significantly evolved from their forebears: the semi-public messaging platform is back (albeit now less suited to massive audiences), and the bots, once relegated to performing simple tasks like running file shares or hosting quizzes for a handful of geeks, are now powered by significant “AI” resource and connected to millions of people and myriad services from Uber to Dominos.

But what for and why now?

AI, in the sci-fi movie sense, feels like it’s been “a decade away” for as long as I can remember. In reality IA (as the case may actually be) is already in use and has been with us for some time, just in a very limited and low-profile capacity, with the exception perhaps of IBM’s Watson kicking butt on Jeopardy. What we are now seeing is that potential being unleashed in consumer space, and the results are going to change HCI yet again.

Who are the trailblazers? IBM’s Watson we’ve mentioned, Facebook Messenger’s “M”, Amazon’s Alexa, the agent that lives in the Quartz news app, and of course the numerous bots that will be hatched through Slack bot startups, to name a few. Most of these interact through chat, be it text or voice, and when the AI isn’t feeling chatty it’s beating us at 2,500-year-old board games. I remember configuring an AliceBot maybe 10 years ago, and whilst at the time it felt like a scene from Blade Runner, it was positively naive compared with the complex behaviour on display today.

What caught my eye most recently, however, was Microsoft’s entry to the bot scene with Tay. Designed “to be entertainment”, Tay is a chat bot that pretends to be a 19-year-old American girl, complete with acronym-heavy “text speak”, the ability to play games and a strong opinion on some pretty heavy thought experiments. Tay will initially be available through Kik, GroupMe, and Twitter, and over time will learn new skills and presumably perform better at the Turing test.

Celebrity?

On the surface Tay seems like a bit of fun, a tech giant flexing its R&D muscle. But the ramifications could be profound. Tay got me thinking, how will these bots evolve, and how will we as a society perceive them?

Messaging bots + services: the ultimate brand advocate is a celebrity. If brands can develop their own AI celebrities they can exert fine-grained control over their message, and worry less about post-club drunken photos of their current “face” appearing in Heat magazine.

The bots we’ve grown accustomed to in the last few years are agents: Siri, Cortana, Amy, Alexa and erm… “OK Google” (the last lacking the necessary persona to really grow on us). They’re fairly passive in their approach, acting on our requests and very rarely instigating anything. I think this is where a big shift is about to occur: we’ll see more impetus from the agents to create original content, and ultimately they will begin to define their own goals.

It seems likely to me that agencies could in fact craft and tune personas powered by these underlying AI bot engines (AIaaS please?) to become nothing short of celebrities, with millions of followers across the (people) social networks and a genuine human connection, within certain groups at least.

Who might want this?

Well, any media outlet for sure: if you want to disseminate a message you’d better have either a great story or a pretty face. Brands could engage with experts to craft their ultimate brand advocate, an entirely constructed celebrity. Infinitely scalable and international, the Celebribot might engage in real-time media buying without the slightest instruction, based on the agenda and campaign package currently being relayed to it. Hey, if a mute Lara Croft can become a brand advocate for an energy drink, just think what could happen if she could talk, think, and plan for herself.

So this is where I think we are going with the new wave of bots. Can we look forward to manifestations of AI personalities hovering over us as dressed-up drones, perhaps HAL 9000 from 2001: A Space Odyssey, or maybe, if we’re lucky, something or someone more like Holly from Red Dwarf? Maybe I’ve been watching a little too much Black Mirror, but it certainly looks like our engagement with these entities is about to see a pace change.

I’ve recently started a new job, yesterday was my birthday and in a few weeks baby number 2 will arrive. These kind of events or milestones often make people take a step back and think about their present direction. This morning I was out running, and I got thinking about work and life (read: personal time) and I wondered why I approach them so differently.

With work, or any business, I would never start without a plan, without a way to measure results, without distinct goals. With life, I, and many others I would propose, tend to either go through the motions, or deal with things in a more reactionary way. There’s no obvious focus or overarching direction at any one time. Why do we put so much effort into planning work, but perhaps not so much into planning “life”?

I’m a contradictory sort: I hate the phrase “life-hack” and I tend to balk at the concept of the quantified self and all kinds of time-tracking (timesheets at work, though useful from a business perspective, can be a little toxic to a culture of self-starters), yet at the same time I track things like exercise to the second. Nevertheless a seed was planted: I should experiment with this idea of focusing on specific objectives across my entire life, as I do with work.

A Plan

I don’t like hard and fast rules, things that are black and white. So I wanted to do something a little whimsical lest it become a chore. As with any plan you must first research, so I noted down some ideas for things I could focus on improving.

Finish something – I’d like to think of myself as a completionist, but the reality is I absolutely love starting things and too easily find something new to distract me. I currently have 6 Audible books all two-thirds through, and when it comes to games my Steam account or phone will testify to this point. It may also explain the big sack of unused clay in the garage.

Be more present* – In the 1970s Alvin Toffler popularised the term “information overload”. I’m sure today (he’s now 87) things are a lot worse than he ever thought possible. The cognitive inbox fills up faster than you can clear it: constant notifications, meandering through social networks. For me this can lead to never quite being “in the moment”, and it’s something I actively try to combat.

Family – A simple one, spend more time with the wife, kid(s) and other close or even distant family. Make a couple of journeys even if it’s only a “flying visit”.

Health and Wellbeing – Since my early 20s I’ve tried to keep physically fit with running and the gym, sometimes even eating healthily, but now that I’m in my early 30s I’ve started to notice you have to pay it back that little bit more, and if you ignore your own physical and mental health you can quickly find yourself swimming against the tide.

Go with the flow – The free card. Take a break from the process, let things just happen.

Socialise – It’s too easy to stay busy. Make time for someone.

Early to bed* – I am 100% guilty of staying up too late, almost every night. This habit never used to be a problem, before children.

The Focus Dartboard

The idea is simple: I write these on Post-it notes, stick them on a dartboard and throw a dart once a week. The aim is to try something different each week, so there’s an element of skill, but ultimately it really doesn’t matter which I hit. The other thing to note is that the week’s pick is not an exclusive focus; the idea is simply to be especially mindful of this one big thing each day for that week.

With the post-its done I got a cheap dartboard and threw a dart: this week I will “Finish Something”.

Some other ideas for the board may include:

Healthy eating

Workout more

Meditation

Disconnect

Why a Dartboard?

There’s really no reason; you could roll a die or simply pick of your own accord, but turning something into a habit, or better yet a bit of fun, is in my experience a much better way to stick at it. A nice side-effect of this technique is that you’re always going to improve at something: if not the objective you intended, then at least your darts. 🙂

I’ll see how it goes and maybe look more at the macro scale later on. If you have any similar techniques or thoughts about this topic of concerted focus I’d love to hear them.

*You may have noticed the asterisk on a couple of these. These sorts of tasks are more related to mental attitude, and for those I found a bit of a helper: an app called Calm. The app provides free and guided meditation – well, of course meditation is “free”, but here I mean “free form”, with the app simply providing various audio and/or visual backdrops. The real win though is the paid guided meditations, which are plentiful and cover a range of subjects to focus on over a series of days. I’m a total newcomer to meditation; in my mind it was always associated with Buddhism and mystical gurus, so much so that I had no idea of the simplicity or broad range of techniques and applications for the modern world. This is one of the only apps I’ve been happy to drop a $30 IAP on (paying for a year up front), primarily to support the ongoing high-quality development. I highly recommend it.

Recently Usborne books made their beautifully illustrated 1980’s computing books for kids available for download. It turns out several of my friends and Twitter acquaintances picked up their love of coding from these books as youngsters, myself included.

I remember being in a dentist’s waiting room where an old battered copy of “Computer Space Games” lay on the bookshelf. I was so engrossed they actually let me take that book home, and thus began my journey.

Computer Space Games by Usborne Books

As an aside, today I’m a father of one (soon to be two), who absolutely loves Usborne’s latest “That’s not my…[Insert Subject]” touchy-feely book series. The pages of each book contain the phrase “That’s not my…” and the subject, which ranges from “Monkey” to “Snowman”. In some ways Usborne is continuing their logical thinking teachings with each page providing a condition that evaluates as true or false 😉 I highly recommend these for anyone with a young toddler.

Usborne’s That’s Not My Puppy book

Memory Lane

Flicking through these old computing books had me inadvertently taking a trip down memory lane. I didn’t have a computer for some time after I started “coding” (writing down programs in BBC Basic), but that just made it all the more enticing: one day I’d be able to see these programs run – or crash – for real. The problem with my BBC Basic skills was that the BBC Micro was already a relic when I was a young teen. However, I did eventually get an Amiga 600, on which I learned the programming language Amiga E (closely related to C), and later a Gateway PC with Windows 95, a Cyrix 5×86 CPU (Intel was expensive!), a 56k modem, a CD-ROM drive, a VGA graphics card and a bucket load of power.

In those days kids like me hung around IRC, where after dinner I’d spend time chatting with quite a few “leet d00dz”. In these circles I came across a fantastic range of things: from mIRC script and Sub7, to SoftICE and assembly language (ASM). ASM is something I would urge any young coder to at least get some experience with. It may be all but useless these days, with even the most throw-away chips happily running the voluminous instructions output by much higher-level languages. But the main thing you learn from ASM is the fundamentals of how a computer’s brain takes your instructions and uses a much more limited set of constructs and variables (registers) to do anything. Ultimately, as a kid, this was the thing that sold me on computers: they can do anything, and all you needed was your brain and some time to create that anything.

Coding through necessity

When I was 15 or 16 we still used dialup modems to access the net. I think it cost something like 2p (£0.02 GBP) a minute to dial up, and during that time no-one could use the phone. It also made a racket so there was no sneaking online. We didn’t have a lot of money, so my internet time was limited to 30 minutes a day. So like a boy scout, in order to learn you had to be prepared. I ended up writing a Visual Basic app to spider and scrape sites, saving the pages to disk. This way I could dial up, have it scrape a bunch of sites to 3 levels deep and disconnect, reading at my leisure.

In chemistry class we were given homework of balancing symbol equations, hundreds of the things to work through. They aren’t hard; really it’s just grunt work applying some basic rules. As I later found out, it’s a core tenet of a coder to be lazy and never repeat the same task more than once. So I wrote another little VB app which let you press buttons to input the elements and the number of units (e.g. the 2 in O₂), then hit go. I sold this program on floppy disk for £1 a pop to classmates, and the homework problem was solved.

With hindsight the above are early examples of situations where coding solved a real world problem for me personally, and I suspect that might be the case for a few of you reading. I also wonder if the huge amount and instant availability of free content gets in the way of this desire to create, but I like to think that this desire is universal.

Finding the “right” language?

At school we learned Pascal (and Delphi), a little Prolog, and for a final project we had an open choice (I opted for Visual C++ with MFC and Crystal Reports, so practical). We were also taught to finger trace, which I believe helps to minimise common typos in later years. From there I started to do “real work” with ActionScript (for my sins, 10 years as a Flash developer), JavaScript (web and later Node.js), some ColdFusion and ASP.NET, some iOS projects in Objective-C, and for the large part my days in recent years have been spent in Java (Android). If you’re familiar with the 99 Bottles of Beer website you’ll know there are hundreds and hundreds of programming languages. The other day I was wondering whether those 10 years of Flash and Flex, and the vast amounts of time (perhaps some 5,000 hours) learning the ins and outs of a huge enterprise SDK, were time that has been quite simply lost. The thing I have learned, though, is that it doesn’t really matter which languages you’ve touched on over the years; it’s never a step backwards. ActionScript was based on ECMA-262 (as is JavaScript) and eventually evolved into something like Harmony-meets-Java. What I learned from it was how to use a dynamically typed language, how to architect apps with (Pure)MVC, and how to write testable code. It’s almost never time lost. Well, maybe there are some exceptions.

Who knows what comes next? What I know for sure is that this is not the end of my journey; something new will come along and it’ll be time to start again, leaning on previous experience but not being blinkered by it.

That was my story in a nutshell (with the time that’s passed I’m no doubt missing a lot out), but what did your journey look like? What were the key moments that made an impact on you, what did you learn, and why?

The app is aimed at people wishing to regularly check the status of family or friends who may for example live alone and are vulnerable to accidents like a fall in their home, unable to call for help. Something like the reverse of a panic button system; if they don’t press a button every few hours, it sends an SMS message to selected contacts with a call to check in.

When asking “should I use a Fragment or an Activity?” it’s not always immediately obvious how you should architect an app.

My advice is to try to avoid a single “god” Activity (h/t Eric Burke) that manages navigation between tens of Fragments – it may seem to give you good control over transitions, but it gets messy quickly.

My go-to is always a combination of Activities and Fragments. Here are some tips:

If it’s a distinct part of an app (News, Settings, Write Post), use a new Activity. This Activity may be fairly light-weight, simply inflating a Fragment in its layout XML or in code.

For everything else use Fragments.

This gives you flexibility when combining Fragments in Activity layouts for tablet.

Create a BaseActivity class which handles setup/styling of ActionBar and SlidingDrawerLayout if you have that kind of navigation.

Nullify or customise the transitions between Activities, for example if you don’t want an obvious transition when an ActionBar is already in place (you can also make use of the new Android L Activity transitions to transition smoothly).

Fragments don’t need to be visual: an Activity can use the FragmentManager to create a persistent headless Fragment with setRetainInstance(true), whose job may be to perform a background task (update, upload, refresh). This means the user can rotate the device without destroying and recreating the Fragment, and it is sometimes an alternative to binding to a Service in onResume().

There are some good sources for how to architect apps; as always, the Google I/O Schedule app is a fine place to start.

There are two types of test I’ll describe below. First, Apple HLS streams (HTTP Live Streaming over port 80), supported by iOS and Safari, and also by Android (apps and browser). Second, Adobe’s RTMP over port 1935, mostly used by Flash players on desktop; this covers browsers like Internet Explorer and Chrome. These tests apply to Wowza server, but I think they’ll also cover Adobe Media Server.

All links to files and software mentioned are duplicated at the end of this post.

It’s worth noting that you can stick to HLS entirely by using an HLS plugin for Flash video players such as this one, and that is what we’re doing in order to make good use of Amazon’s CloudFront CDN.

For the purpose of testing you may also wish to simulate some live camera streams from static video files, see further down this post for info on how to do that on your computer, server or EC2.

Testing RTMP Live Streaming with Flazr

In this test we want to load test a Wowza origin server itself to see the direct effect of a lot of users on CPU load and RAM usage. This test is performed with Flazr, via RTMP on port 1935.

I’ll assume you’ve set up your Wowza or Adobe Media Server already, for example by using a pre-built Wowza Amazon EC2 AMI. We’re using an m3.xlarge instance for this test as it has high network availability and a tonne of RAM, and we’re streaming 4 unique 720p ~4Mbit streams to it, transcoded to multiple SD and HD outputs (CPU use from this alone is up to 80%).

Installing flazr

First up, for the instance size and test configuration I’m using, I modified flazr’s client.sh to increase the Java heap size to 8GB, otherwise you run out of RAM. Next, upload (via FTP or wget) flazr to a directory on your server/EC2 instance. Then SSH in and:
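The original command didn’t survive in this copy of the post; the following is a plausible reconstruction (the archive name, stream URL and flag names are assumptions from memory, so check flazr’s own usage output before relying on them):

```shell
# unpack flazr, then launch its RTMP client in load-test mode:
# roughly 1000 simulated viewers for 60 seconds (60000 ms).
# Flag names are approximate -- run ./client.sh with no arguments for usage.
tar xzf flazr-*.tar.gz && cd flazr
./client.sh -load 1000 -length 60000 rtmp://localhost/live/myStream
```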

The order of parameters does seem to matter in later versions of flazr, but either way this test runs for 60 seconds, with a load of 1000 viewers. Given all the transcoding our CPU was already feeling the pain, but there was no sign of trouble. We managed 4500 before anything started to stutter in our test player from another m3.xlarge instance.

Wowza CPU Usage

Of course this only matters if you are not using a CDN, but it’s good to know this EC2 instance can handle a lot of HD viewers.

Testing HLS Live Streaming (or a CDN such as Amazon CloudFront) with hlsprobe

On to HLS streaming, the standard for mobile apps and sites. We have used Wowza CloudFront Formations to set up HLS caching for content delivery, so that we can handle a very large number of viewers without impacting the CPU load or network throughput of the origin server, and to give us greater redundancy. Since CloudFront works with HLS streams we are not using RTMP for this test, so we cannot use Flazr again. To test HLS consumption – that being the continuous download of .m3u8 playlists and their linked .ts video chunks – we can use a tool called hlsprobe, which is written in Python.

If you’re on a Mac and don’t have python I recommend you install it via brew to get up and running quickly. If you don’t have brew, get it here.

#on a mac
brew install python
#on ubuntu/amazon
sudo apt-get install python

hlsprobe also relies on there being an SMTP server running (for its alert emails). It doesn’t need to be fully functional, but something must be listening:
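The snippet that followed here wasn’t preserved. One minimal stand-in (an assumption on my part, not necessarily what the original post used) is Python’s built-in debugging SMTP server, which accepts mail and just prints it to stdout:

```shell
# accepts connections on port 25 and dumps messages to stdout --
# good enough for a load test. (Note: the smtpd module was removed
# from the standard library in Python 3.12.)
python -m smtpd -n -c DebuggingServer localhost:25 &
```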

Running hlsprobe is as simple as this (note the -v verbose flag; you can turn that off once you have it working):

python hlsprobe -v -c config.yaml

Now if you fire up the Wowza Engine Manager admin interface, you should see the connection count and network traffic climb. If you’re testing your CDN, such as CloudFront, you should note that the origin’s CPU usage does not increase substantially as you add thousands of clients.

Simulating cameras to Wowza via nodeJS

It’s good to be able to simulate live streams at any time, either from your computer or, in my case, from some EC2 instances. To do this I’ve written a simple Node.js script which loops a video, optionally transcoding as you go. I recommend against transcoding due to high CPU use and the resulting frame loss; in my sample script I pass the video and audio through directly, the video having already been encoded with the correct codecs, frame size and bitrate via Handbrake.

The script runs ffmpeg, so you’ll need to install that first:

#on a mac
brew install ffmpeg
#on ubuntu/Amazon you'll have to install/compile ffmpeg the usual way

Edit the js script to point to your server, port, and video file, then run the script with:

node fakestream.js

If the video completes, the script restarts the stream, but there will be a second of downtime; some video players automatically retry, but to be safe make sure your video is long enough for the test.

These are just a couple of ways of load testing a live-streaming server. There are 3rd-party services out there, but we’ve not had great success with them so far, and this way you have a lot more control over the test environment.

If you use the excellent Postman for testing and developing your APIs (and if you don’t yet, please give it a try!) you may find this little node script helpful when generating documentation.

It simply converts your downloaded Postman collection file to HTML (with tables) for inserting into documentation or sharing with a 3rd party developer. The Postman collection is perfect for sharing with developers as it remains close to “live documentation”, but sometimes you need a more readable form.

I’ve recently finished work on an app that registers itself as a handler for a given file extension, let’s call it “.mytype”, so if the user attempts to open a file named “file1.mytype” our app would launch and receive an Intent containing the information on the file’s location and its data can be imported. Specifically I wanted this to happen when the user opened an email attachment, as data is shared between users via email attachment for this app.

There are many pitfalls to doing this, and the Stack Overflow answers I saw given for the question had various side-effects or problems. The most common was that your app would appear in the chooser dialog whenever the user clicked on an email notification, for any email – not just those with your attachment. After some trial and error, I came up with this method.

Create IntentFilters in AndroidManifest.xml

The first step is to add <intent-filter> nodes to the relevant <activity> within the application node of AndroidManifest.xml. Here’s an example of that:
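The XML itself didn’t survive in this copy of the post; the following is a reconstruction along the lines described below (the activity name and the schemes are my assumptions):

```xml
<activity android:name=".ImportDataActivity">
    <intent-filter>
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <!-- attachments whose mime type survives the trip -->
        <data android:scheme="content" android:mimeType="application/mytype" />
        <data android:scheme="file" android:mimeType="application/mytype" />
        <!-- fallback for senders that downgrade it to a generic binary type -->
        <data android:scheme="content" android:mimeType="application/octet-stream" />
    </intent-filter>
</activity>
```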

Something to note here: I’ve specified a filter for both the “application/mytype” mime type and the more generic “application/octet-stream”. The reason is that we can’t guarantee the attachment’s mime type has been set correctly. We have iOS users and Android users sharing timers via email; with iOS the mime type is set, but with Android (at least in my tests on Android 4.2) the mime type reverts to application/octet-stream for attachments sent from within the app.

Permissions

I initially put these IntentFilters on the “home” Activity of my app, however I soon started encountering security exceptions in LogCat detailing how my Activity didn’t have access to the data from the other process (Gmail). I realised this was because my Activity’s manifest entry had the launch mode set to:

android:launchMode="singleTask"

This prevents multiple instances of the Activity being launched, which is important when users can launch the app from either the launcher icon or, in this case, via an attachment (I didn’t want multiple instances of my home Activity running, as that would confuse the user). So the solution was simply to create a new “ImportDataActivity” to handle the data import from the attachment, and then launch the home Activity with the Intent.FLAG_ACTIVITY_CLEAR_TOP flag added.

Importing Data

So in ImportDataActivity we need to import the data stored in the attachment, in my case this was JSON. The following shows how you might go about doing this:
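The original snippet wasn’t preserved here. Its core is reading the attachment’s InputStream into a String ready for JSON parsing; in the Activity you would obtain the stream with getContentResolver().openInputStream(getIntent().getData()). The reader itself is plain Java (a sketch, with the class name my own):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

public class AttachmentReader {

    // Reads the entire (UTF-8) attachment stream into a String, ready to be
    // handed to a JSON parser. In ImportDataActivity the InputStream would
    // come from getContentResolver().openInputStream(getIntent().getData()).
    // Line separators are dropped, which is harmless for JSON.
    public static String readStream(InputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        BufferedReader reader = new BufferedReader(new InputStreamReader(in, "UTF-8"));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                sb.append(line);
            }
        } finally {
            reader.close();
        }
        return sb.toString();
    }
}
```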

Please note that REQUEST_SHARE_DATA is just a static int constant in the class, used in onActivityResult() when the user returns from sending the email. This code will prompt the user to select an email client if they have multiple apps installed.

As always, please do point out any inaccuracies or improvements in the comments.