Archive for the 'Programming' Category

I had implemented an Angular 4 dynamic DOM prototype using the Angular Dynamic Component Loader and wanted to do the same thing with React JS. After doing some research I found that it was not very obvious how to accomplish this.

By the time I was done there ended up being two functional components worth sharing:

In the demo application, all available components register themselves via the ComponentService, which is just a singleton that maintains a simple hash map. For example:

JavaScript

ComponentService.addComponent('switch', SwitchComponent);
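The ComponentService itself isn't shown in the excerpt above. A minimal sketch of such a singleton registry might look like this (the method name `getComponent` and the `DefaultComponent` fallback are assumptions based on the description later in this post):

```javascript
// Hypothetical sketch of the singleton component registry.
// Only addComponent appears in the post; getComponent and
// DefaultComponent are assumed from the described behavior.
const DefaultComponent = () => null; // stand-in fallback component

const ComponentService = {
  registry: {}, // simple hash map: ui type -> React Component

  addComponent(type, component) {
    this.registry[type] = component;
  },

  getComponent(type) {
    // Fall back to DefaultComponent for unregistered UI types
    return this.registry[type] || DefaultComponent;
  }
};

// Registration, as in the post:
const SwitchComponent = () => null; // stand-in for the real component
ComponentService.addComponent('switch', SwitchComponent);
```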

As highlighted on lines 17-18, the desired React Component is first fetched from the ComponentService and then passed to JSX via <this.component … />.

The JSX preprocessor converted this embedded HTML into JavaScript with the ‘React Element’ type set to the passed Component along with the additional attributes. That is, if the UI type was ‘switch’, the hard-coded HTML would have been <SwitchComponent … /> which is a perfectly acceptable JSX template.
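To see why this works, recall that JSX like `<this.component … />` compiles down to a `React.createElement(this.component, props)` call. A toy stand-in for `createElement` (an illustration only, not React's real implementation) shows that the hard-coded and dynamic forms produce the same element:

```javascript
// Toy stand-in for React.createElement, to illustrate the mechanism.
// Real React returns a richer element object; this is just the shape.
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

const SwitchComponent = () => null; // stand-in component

// Hard-coded JSX <SwitchComponent label="power" /> compiles to:
const fixed = createElement(SwitchComponent, { label: 'power' });

// Dynamic JSX <this.component label="power" /> compiles to the
// same call, with the component resolved at runtime from a variable:
const component = SwitchComponent; // e.g. fetched from the ComponentService
const dynamic = createElement(component, { label: 'power' });
```

Because the component is just a value passed as the element type, swapping it at runtime requires no special machinery.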

Voilà, we have created a dynamic DOM element!

Note that Vue.js applications using JSX can use the same technique except they pass a Vue Component instead.

JSON Driven Dynamic DOM Generation

In order to demonstrate dynamic DOM generation I have defined a simple UI JSON structure. The demo uses Bootstrap panels for the group and table elements and only implements a few components.

The UI JSON is loaded from the server when the application is started and drives the DOM generation. A DynamicComponent is passed a context (i.e. its associated JSON object) along with a path (see below). Each UI element has the following attributes:

name: A unique name within the current control context. It is used to form the namespace-like path that allows this component to be globally identified.

ui: The type of UI element (e.g. “output”, “switch”, etc.). This is mapped by the ComponentService to its corresponding React Component. If the UI type is not registered, the DefaultComponent is used.

label: Label used on the UI.

controls: (optional) For container components (“group”, “table”), this is an array of contained controls.

value: (optional) For value-based components.

range: (optional) Specifies the min/max/step for the range component.

This structure can easily be extended to meet custom needs.
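As a sketch of how the name attributes can form the namespace-like path while walking the controls tree (the function name and the '/' separator are assumptions, not the post's actual code):

```javascript
// Hypothetical walk of the UI JSON: collects a '/'-separated path
// for every element by joining the name attributes of its ancestors.
function collectPaths(control, parentPath = '') {
  const path = parentPath ? `${parentPath}/${control.name}` : control.name;
  const paths = [path];
  // Container components ("group", "table") nest child controls
  for (const child of control.controls || []) {
    paths.push(...collectPaths(child, path));
  }
  return paths;
}

// Example UI JSON in the shape described above
const ui = {
  name: 'root', ui: 'group', label: 'Root',
  controls: [
    { name: 'power', ui: 'switch', label: 'Power', value: false },
    { name: 'volume', ui: 'range', label: 'Volume', value: 5,
      range: { min: 0, max: 10, step: 1 } }
  ]
};
```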

There are a number of implementation details that I’m not covering in this post. I think the demo application is simple enough that just examining and playing with the code should answer any questions. If not, please ask.

The resulting output, including console logging from the switch and range components, looks like this.

This is, of course, a very minimal implementation that was designed to just demonstrate dynamic DOM generation. In particular, there is no UI event handling, data binding, or other interactive functionality that would be required to make a useful application.

It was suggested on the Angular 2 Password Strength Bar post that I publish the component as an NPM package. This sounded like a good sharing idea and it was something I’ve never done before. So, here it is.

You should go to the GitHub repository and inspect the code directly. I’m just going to note some non-obvious details here.

src: This is where the PasswordStrengthBar component (passwordStrengthBar.component.ts) is. The CSS and HTML are embedded directly in the @Component metadata. Also, note that tsconfig.json compiles the TypeScript to ../lib, which is what is distributed to NPM (app and src are excluded in .npmignore).

The index.d.ts and index.js in the root directory reference ./lib to allow importing the component without having to specify a TypeScript file. See the How TypeScript resolves modules section in TypeScript Module Resolution. That is, after the npm installation is complete you just need this:
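The root-level shim might look something like this (a sketch only; the exact export path inside ./lib is an assumption):

```javascript
// index.js in the package root (hypothetical sketch): re-export the
// compiled component from ./lib so consumers never reference lib/ paths.
module.exports = require('./lib');
```

A matching index.d.ts re-exports the type declarations from ./lib so the TypeScript compiler resolves the package the same way.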

Overall (and briefly), I find the TypeScript/JavaScript tooling very frustrating. I’m not alone, e.g.: The Controversial State of JavaScript Tooling. The JSON configuration files (npm’s package.json, TypeScript, Karma, Webpack, etc.) are complex and the documentation is awful.

The worst part (IMO) is how fragile everything is. The tools and libraries change rapidly, with little consideration for backward compatibility or external dependencies. Updating versions invariably breaks the build. Online fixes often take you down a rabbit hole of unrelated issues. If you’re lucky, the solution is just to continue using an older version. Use npm-check-updates at your own risk!

Feedback:

If you have questions or problems, find a bug, or have suggested improvements please open an issue. Even better, fork the project, make the desired changes and submit a pull request.

Getting old may suck, but if problem-solving and building solutions are your passion being an old nerd (yes, I’m way over 35) really can look like this:
There’s a lot of reasonable advice in Being A Developer After 40, but I think this sums it up best:

As long as your heart tells you to keep on coding and building new things, you will be young, forever.

I recently attended a Deep Learning (DL) meetup hosted by Nervana Systems. Deep learning is essentially a technique that allows machines to interpret sensory data. DL attempts to classify unstructured data (e.g. images or speech) by mimicking the way the brain does so with the use of artificial neural networks (ANN).

Deep learning involves training a computer to recognize often complex and abstract patterns by feeding large amounts of data through successive networks of artificial neurons, and refining the way those networks respond to the input.

This article also presents some of the DL challenges and the importance of its integration with other AI technologies.

Properly training an ANN involves processing very large quantities of data. Because of this, most frameworks (see below) utilize GPU hardware acceleration. Most use the NVIDIA CUDA Toolkit.

Each application of DL (e.g. image classification, speech recognition, video parsing, big data, etc.) has its own idiosyncrasies that are the subject of extensive research at many universities. And of course large companies are leveraging machine intelligence for commercial purposes (Siri, Cortana, self-driving cars).

Relative to the size of a standard Ubuntu Docker image I thought the 250MB CoreOS image was “lean”. Earlier this month I went to a Docker talk by Brian DeHamer and learned that there are much smaller Linux base images available on DockerHub. In particular, he mentioned Alpine which is only 5MB and includes a package manager.

Here are the instructions for building the same Apache server image from the previous post with Alpine.

The Dockerfile has significant changes:

Dockerfile

1  # Build myapp server Docker container
2  FROM alpine:latest
3  MAINTAINER MyName
4  RUN apk --update add apache2
5  RUN rm -rf /var/cache/apk/*
6  ENTRYPOINT ["httpd"]
7  CMD ["-D","FOREGROUND"]
8  COPY dist /var/www/localhost/htdocs

Explanation of differences:

line 2: The base image is alpine:latest.

lines 4-5: Unlike the CoreOS image, the base Alpine image does not include Apache. These lines use the apk package manager to install Apache2 and clean up afterwards.

lines 6-7: The exec form of the Dockerfile ENTRYPOINT and CMD instructions. These run httpd in the foreground as the container’s main process when the image is started.

line 8: The static web content is copied to a different directory.

Building and pushing the image to DockerHub is the same as before:

Shell

$ sudo docker build -t dockeruser/myapp .  # This will create a 'latest' version.
$ sudo docker push dockeruser/myapp

Because of the exec additions to the Dockerfile, the command line for starting the Docker image is simpler:

Shell

# Instead of 9001, use 80 or 8080 if you want to provide external access to the application
$ sudo docker run -d -p 9001:80 --name my-app dockeruser/myapp

The resulting Docker image is only 10MB as compared to 290MB for the same content and functionality. Nice!

Introduction

The most common use for JavaScript frameworks is to provide dynamic client-side user interface functionality for a web site. There are situations where a JS application does not require any services from its host server (see unhosted apps, for example). One of the challenges for this type of application is how to distribute it to end users.

This post will walk through creating a static AngularJS application (i.e. no back-end server) and how to create and publish a lean Docker container that serves the application content. I will mention some tooling but discussion of setting up a JS development environment is beyond the scope of this article. There are many resources that cover those topics.

Also note that even though I’m using AngularJS, any static web content can be distributed with this method.

Side Note on AngularJS

One of the major advantages of using AngularJS over the many JavaScript framework alternatives is its overwhelming popularity. Any question or issue you may encounter will typically be answered with a simple search (or two).

Creating an AngularJS application

The easiest way to create a full-featured Angular application is with Yeoman. Yeoman is a Node.js (npm) module that, along with its Angular generator, creates a project that includes all of the build and test tools you’ll need to maintain an application.

Shell

$ npm install -g yo
$ npm install -g generator-angular

Generate the Angular application with yo. Accepting all the defaults will include “Bootstrap and some AngularJS recommended modules.” There’s probably more functionality included than you’ll need, but modules can be removed later.

Shell

$ mkdir myapp
$ cd myapp
$ yo angular myapp

The yo command will take a little while to complete because it has to download Angular and all of the modules and dependencies.

Creating and Publishing the Docker Container

A typical Ubuntu Docker container requires more than a 1GB download. A leaner Linux distribution is CoreOS. The coreos/apache container has a standard Apache server and is only ~250MB.

Add a Dockerfile file to the myapp directory:

Dockerfile

# Build myapp server Docker container
FROM coreos/apache
MAINTAINER MyName
COPY dist /var/www/

The key here is the COPY command which copies the content of the dist directory to the container /var/www directory. This is where the Apache server will find index.html and serve it on port 80 by default. No additional Apache configuration is required.

Create the docker container:

Shell

$ sudo docker build -t dockeruser/myapp .  # This will create a 'latest' version.

Output:

Now push the container to your Docker hub account:

Shell

$ sudo docker push dockeruser/myapp

The dockeruser/myapp Docker container is now available for anyone to pull and run on their local machine or a shared server.

Starting the Application with Docker

The application can be started on a client system by downloading and running the dockeruser/myapp container.

Shell

# Instead of 9001, use 80 or 8080 if you want to provide external access to the application
$ sudo docker run -d -p 9001:80 --name my-app dockeruser/myapp

The run command will download the container and dependencies if needed. The -d option runs the Docker process in the background while apache2ctl runs in the container in the foreground. The application will be running on http://localhost:9001/#/.

To inspect the Apache2 logs on the running Docker instance:

Shell

$ sudo docker exec -it my-app /bin/bash
root@bfba299706ad:/# ls /var/log/apache2/
access.log  error.log  other_vhosts_access.log
root@bfba299706ad:/# exit  # Exit the bash shell and return to host system
$

To stop the server:

Shell

$ sudo docker stop my-app

If you’ve pushed a new version of the application to Docker hub, users can update their local version with:

Shell

$ sudo docker pull dockeruser/myapp

This example shows how Docker containers can provide a consistent distribution medium for delivering applications and components.

My 6 year old Lenovo T400 finally gave up the ghost. It didn’t totally die (it probably never will, thank you IBM), but the screen was starting to flicker and it reliably rebooted itself whenever I was doing something useful. Very annoying.

Decision Process

I’m not going to detail all of my system requirements or decision making process, but here’s a high level outline:

I primarily need a Ubuntu development machine. My T400 is a dual boot 12.04/XP. In recent years I’ve rarely used Windows, but there are some tools that are nice to have around (e.g. Visual Studio).

I looked hard at the MacBook Pro but at the end of the day I just couldn’t bring myself to go that route. Besides the higher hardware cost/performance ratio re: the alternatives, I guess I’m just not a Mac person.

I really wanted to get an Ultrabook form factor. Not only for the portability, but I’m not ashamed to say that the ‘cool factor’ played a part in the decision.

I looked at all of the standard Ultrabook offerings: Lenovo, ASUS, Dell, System76, Acer, etc. No touch, no ‘convertible’ (if you need a tablet, buy a tablet), no Windows 8. The deciding factor for me was reliability. Besides the T400, I have a T60 in the closet that still runs fine.

Buying Experience (not good!)

Products that are discontinued, overstocked, or returned unopened. These items are in their original factory sealed packaging and have never been used or opened.

Boy was I disappointed when the package arrived! First, the only thing in the box was the laptop. No AC power adapter, no docs, no nothing. To my amazement, the machine was in suspend mode. When I opened the lid it came out of hibernation to a Win7 user password prompt! I didn’t even try to guess a password. I couldn’t believe it!

The machine was in pretty good shape physically, a little dirty and missing a foot pad, but no dents or scratches. Certainly opened and used! At least the BIOS confirmed that I got the correct hardware (i7, 8G RAM, 256G SSD).

After many calls to multiple Lenovo service centers I got nowhere. No return, no exchange. Maybe I should write a letter to The Haggler, but even then I probably wouldn’t return the machine anyway. I got a great price (much better than what I could find on eBay) and the Lenovo Outlet no longer has any i7 X1 Carbons listed. Also, I’m a techie, so disk partitioning and re-installing OSes are not a problem.

I’m thinking now that Lenovo might have screwed up a repair shipment and I ended up wiping some poor schmuck’s SSD. Oh well.

Anyway, as unpleasant as this was, I now have a development laptop that should meet my needs for many years to come.

Installation Notes

Dual boot. Here’s the right way: WindowsDualBoot, but because I installed Ubuntu first (mistake) here’s what I did:

Used GParted to partition the disk to my liking. Don’t forget to add a Linux swap partition (8G for me). The Ubuntu installer will complain if it’s not there and find it automatically if it is.

Created a bootable Windows 7 USB stick. The Universal USB Installer above works fine for this. Install Windows 7 on the Windows partition.

After Step #3 the system will only boot Windows. Use Boot-Repair (option #2) to re-install GRUB.

Ubuntu 14.04 seems to work flawlessly on the X1. There were only two hardware compatibility issues that I read about:

Not waking up from suspend mode. This is resolved by updating the BIOS firmware. Upgrading from v1.09 to v1.15 fixed it for me. The Lenovo firmware only comes as a CD image (.iso) or a Windows update application. Because the X1 does not have a CDROM drive the only reasonable way to upgrade is via Windows. People have upgraded the firmware via USB (see BIOS Upgrade/X Series), but it’s really ugly.

Fingerprint reader. Haven’t tried to use it, and probably won’t.

Happy Ending (I hope)

Like most things in life, nothing is ever perfect. This experience was no exception.

I have a JRuby/Rails project with some Rspec tests that take 80 seconds to complete on the T400 and 20 seconds on the X1. I can live with that improvement. 🙂

It’s too bad that this is usually a “fork in the road” decision. I don’t think companies are necessarily trying to pigeonhole developers, but they certainly have specific roles (with associated job descriptions) they are trying to fill. It makes sense though. For large software projects, being a manager (and probably a scrum master) is a full-time job. Put another way, if you try to split your time between being a contributor and a manager, you’ll probably do both jobs poorly.

My advice is to make this type of decision with your eyes open. If you have a management opportunity and it’s something you’re interested in, take it. Treat it like the career change that it is. Get the additional training and improve your skills just like you would if you were learning a new technology. Management isn’t easy. It takes time and work to get good at it.

Also, good companies do not pigeonhole technical managers. You will probably have the ability to switch back to a development or architect role as business needs and priorities change in the future. This could mean moving to a different company, but both you and your current employer know this. Switching from management back to a technical track will require yet another skills learning curve and a mindset change.

On 21-Jun-2013 version 4.2.14 was released with a fix to this problem.

GA 4.2.14 does not cause graphics problems but it did not resolve the re-size issue for me. The Ubuntu VM starts up okay, but the display would still not re-size when the host (Windows 7 x64) window size was changed.

The solution to this problem was to change /etc/X11/xorg.conf back to its original configuration (this is the complete un-commented file content):

Section "Device"
    Identifier "Configured Video Device"
EndSection

Section "Monitor"
    Identifier "Configured Monitor"
EndSection

Section "Screen"
    Identifier "Default Screen"
    Monitor "Configured Monitor"
    Device "Configured Video Device"
EndSection

The xorg.conf this replaced had device/monitor/screen sections with specific driver details, e.g.:

Section "Device"
    Identifier "Configured Video Device"
    Boardname "vesa"
    Busid "PCI:0:2:0"
    Driver "vboxvideo"
    Screen 0
EndSection

I’ve been using this VM for a number of years and I suspect that older versions of the guest additions made this change. Apparently newer GAs no longer require this.

Anyway, if you’re having a Ubuntu guest display re-size problem give this a try.