Simple Thread — https://www.simplethread.com
Better software, together.

Total Recall: Memorize a Tech Book
Tue, 05 Jun 2018
https://www.simplethread.com/total-recall-memorize-a-tech-book/

Bill Gates called the book Moonwalking with Einstein: The Art and Science of Remembering Everything “absolutely phenomenal” and “one of the most interesting books I’ve read this summer.” I followed Bill’s advice, read it, and then read a half-dozen other books on memory techniques. I wrote about how those books changed my learning strategies in...

I left out that little itty bitty bit because I needed to first prove to myself that I could memorize a tech book, since it seemed like such a daunting task. I have now done that with “Effective Ruby: 48 Specific Ways to Write Better Ruby.” I read and memorized that book in a little over two weeks, spending about two to three hours a day. In this article I will describe how I adapted the strategies illustrated in “Moonwalking with Einstein” for the memorization of a technical book. But, as Bill Gates said: “Don’t believe anybody who tells you it’s easy,” and “You have to be very serious about it.”

But Why?

One of the developers at work asked me what you might be thinking: “Why would you want to memorize a technical book when you can search your books or google?” I told him: “Many times I know I’ve read a technique or idiom on a better way to do something but the shadow of that memory is too vague. I may not even remember which book it was in or if it was a blog post. Yeah, I could google but I just need to get the thing done.” But it goes much further than that. I have read more books on Ruby than most Ruby devs I’ve worked with, yet, there are times where they’d suggest a technique that I had forgotten. Sometimes I’ll find out they learned the approach from the same damn book.

My developer idioms have been developed from over 30 years of coding. I’ve been doing Ruby for six years or so and, to be honest, I don’t remember Ruby-specific idioms because my Rubyish variant of a Groovy, Java, or C++ idiom all too often suffices.

I’m currently employing the Method of Loci in the construction of a memory palace for Refactoring: Ruby Edition. Martin Fowler says of his Refactoring book: “The bulk of the book is around seventy refactorings described in detail: the motivation for doing them, mechanics of how to do them safely and a simple example.” That’s not a book you sit down, consume, and remember. I’m putting those seventy-some refactorings in a memory palace.

Method of Loci

The memorization technique Joshua Foer described in “Moonwalking with Einstein” is not, like Angular and Vue, newfangled. It was discovered in 477 BC by Simonides, a Greek poet, after he was asked to identify the crushed and mangled bodies in a palace — the roof of which collapsed immediately after he (and maybe Elvis) left the building. Simonides identified the bodies not by sight — they were unrecognizable — but by the location in the various rooms where he had last seen each person. The technique worked so well that Simonides began to memorize poems by visualizing a palace and placing words in visually memorable spots (memory pegs) of each room. Loci, by the way, is the Latin word for specific points or positions.

Humans have evolved to remember places, people, smells, and sounds. Our minds have not evolved to remember words and numbers and the other stuff we are expected to learn by rote memory in school. When early man painted on cave walls it was not for art but to remember.

Ok, so you’ve seen one of those memory geeks on Johnny Carson (sorry, I’m old, let’s say Jimmy Fallon) who could remember a shuffled deck of cards in less than thirty seconds using the memory palace technique. And you’re thinking: “Really? You actually think this will work for tech?” Sure, it can work for tech; it is already working for other vocations where memorization is even more important.

As an example, my son is in his last year of medical school at UVA. The standardized tests in medical school are brutally complex, and the results determine student futures. Sure, no one gets into medical school without being adept at studying and memorization, but listen to this: for the last 5 years, medical students have paid to use courseware from sketchymedical.com, which is entirely based on the memory palace technique. Take 20 seconds, visit sketchymedical.com, and just watch the animation on its homepage. Their landing page says: “These guided sketches help you create a memory palace by associating medical topics with memorable visual elements.” Passing a pathology exam is a bit more complex than memorizing a deck of cards, yet med students do it with goofy cartoons.

I actually signed up for a sketchymedical trial and took a free lesson. While I had fun watching the cartoon graphics and listening to the audio, I didn’t remember much but the silly graphics. That’s because I did not have the context. The same goes for creating a memory palace for a tech book or seminar. You have to understand the material. There are two basic types of memory strength: storage and retrieval. We geeks do a pretty damn good job of storing information we’ve read (or listened to) and understand. The problem is that we are often unable to recall that information. Putting that information in a memory palace makes it retrievable.

I published a text-based version of my memory palace for Effective Ruby in copious detail in the following blog post: The Memory Palace of Effective Ruby. You need not go to such lengths — I created that readable version so I could share what I visualize in my head.

Building A Memory Palace

Your memory palaces (that’s right, you’ll need a bunch of them) don’t have to be rooms in a palace. You can use any building of which you have strong visual memories. Or a journey with roads, turns, landmarks, lights, and signs — like the U-turn sign my 16-year-old son hit when, he said, he was distracted by a roaming pig. Each room, and each specific spot within it, is considered a peg — because you “peg” visual images of things you wish to remember onto them.

For my Effective Ruby palace I used the offices from a job I had in the early 1990s in San Antonio, Texas. I worked there 5 years so I have plenty of pegs. For my Italian conjugation palace I used the classroom from 7th grade French. Each desk has a peg for a visual of a different tense. I won’t do the whole list but it includes: Tony Romo, Yoko Ono, a purple vampire, the SoHo skyline, and Elmo.

In general, anything in a room that you can find in the dark — lamps, chairs, desks, desk drawers, and closets — works well as a memory peg. In my living room, our TV sits on a desk with 10 drawers, so, with the desk top, the TV, and the drawers, that’s 12 pegs. Sure, some of the drawers are small, but my imagination is not. One of the rooms in my Effective Ruby palace holds the Atlantic Ocean with a sinking Titanic. (The more ridiculous, the better it sticks.) Note that transient items — like magazines, flowers, or a coffee cup — should not be used as pegs.

You will always need to have a half-dozen or so empty memory palaces available. You’ll be consuming them, so be sure to keep the queue stocked. Once you use a memory palace, it will house those images until you no longer need to remember them. And, actually, it’s kind of hard to clean up and reuse a memory palace. That’s right: once built, it’s hard to forget.

I find it fun to scour my past for memory palaces just before I go to sleep. For example, I’m working on reconstructing one that is the elementary school where I attended first and second grade.

A Memory Palace Construction Strategy for Tech

Read (or listen) actively while taking creative notes. Focus, using all parts of your brain: left, right, sensory, musical — all of it. Focus your whole self and don’t think that computer scientists with advanced degrees are more adept at focusing. Use your active brain. Yours — the one that has a degree in literature or biology or art. The brain that played in the school band and spent a summer abroad or went to space camp or immigrated to the US from Sudan. The brain that will form analogies, jokes, and puns. The creative brain that will construct visual images using people and places from your past. The mind that has hopes and dreams for the future. After all, you are reading this post because it could change that future (otherwise, don’t read it, much less create a memory palace for it).

Take notes of the thoughts you have as you read: the aha moments and the analogies and puns and visuals. Sure, highlight the important text, but your notes should translate that verbiage into your own words of comprehension. If you think of a mnemonic as you read, great, but don’t push it. Read to understand and comprehend; your notes should be triggers to those understandings.

Take more breaks from reading than you normally might. Active reading is hard. Your subconscious brain needs time to assimilate and process information during these so-called breaks.

Review your notes and begin to look for a visual, one to which you can attach a word or phrase. Simplify your notes: they are to jog your memory, not to be your memory. Bubble up your technical notes into visual descriptions, adding action. Create an absurd and exaggerated scene with characters (ranging from Wile E. Coyote to Barack Obama). Perhaps make that action sensual or violent. Remember, you will only remember a scene if you can clearly visualize it. I find, when I’m having problems remembering, that my vision is lacking clarity or absurdity and I need to add exaggerated action and improve the visual. Be sure to use tech keywords in your story. Puns and acronyms work as well.

Walk through your memory palace, in your head, without notes, regularly. The first phase of constructing your memory palace is to be able to rattle off the word/phrase for each peg. Initially, don’t try to visualize the full action of the scene until you can recall the peg sequence of the word/phrase by rote. But, with the memory palace strategy, you’ll find you can almost do that without trying. Be sure to start fleshing out your memory palace very early in the process of reading a book.

Continue reviewing and refining your notes until most of the tech notes have bubbled up into absurd visuals. Again, reviewing in your head is best. Here’s a list of scenarios where I have done memory palace walk throughs:

Just before I go to sleep, and in the middle of the night when I can’t sleep

While watching reruns of my wife’s favorite episodes of Too Cute

While driving or standing in checkout lines

During dental procedures

Notes on Notes

I use Apple Notes. I take the option to keep my notes in iCloud so I can edit them on my Mac and study them on my iPhone. I like to highlight or italicize text and change font and color to make keywords pop. Once I have a clear visual, though, the text itself, much less the formatting, is no longer important. For each memory palace, I create a folder. Each pegged visual has its own note, on the top line of which I place a number and the word/phrase, followed, optionally, by a short description. Here’s a view of Notes on my iPhone from my palace of Effective Ruby:

Manson: Atrium full of people…

Neil Armstrong: Room full of…

Crypt Keeper wearing pearls

Duct taped girl: Constance

Yellow flashing light

Gramps: Dad and Gramps sitti…

Superman with/without parents

I read from a variety of Kindle devices: Fire, Paperwhite, and iPhone. I make heavy use of highlighting and notes. The Kindle has a feature that allows you to send all your notes to your Kindle email in HTML format. I open the emailed HTML on my Mac and copy-and-paste to my Apple Notes. It is from my organized Apple Notes that I do my palace construction and walkthroughs.

Sample Note from my iPhone

Below is a sample Apple Note from my Effective Ruby palace. One point about the last two sections of bullets: they are pure tech notes. They are now extraneous, obsoleted because their concepts have been bubbled up into the absurd scene.

16: Mice: Two guys doing scientific tests on mice

- first guy handed a bunch of mice to the second
- who was wearing a marshal’s badge
- the second guy cloned or duplicated the mice
  - actually, most of the time he duplicated them
  - sometimes he cloned them
  - every now and again he used his marshal capabilities and did a deep copy
- he hands them back to the first guy and then runs tests on the copies — tests that mutated the mice in indescribable ways

- args are passed as refs, not values
  - except for Fixnum objects
- dup and clone make shallow copies
- Marshal can be used to create deep copies
  - when needed, but be careful

- dup: best option
  - always allows mutation
  - won’t pull singleton methods
- clone:
  - honors freeze so might not be able to mutate
  - pulls singleton methods

A Tech Idiom to Memory Palace Peg Walk Thru

Let me step through my process of converting a tech note to a visual that I have pegged into my Effective Ruby memory palace.

Item three of Effective Ruby is “Avoid Ruby’s Cryptic Perlisms.” Right off I clue in on the word: Crypt. I mean, come on… Crypt Keeper! And then the word: Perl. Pearl. That was too easy: “Crypt Keeper wearing a string of pearls.” But that’s just the title of the idiom. Item three was covered in several pages of Peter Jones’s book as it detailed three recommended alternatives to the use of the Perl-like cryptic syntax supported in Ruby:

Avoid the use of =~ with a regex, as you then need to use the cryptic $1, $2, and so forth. Peter recommends the use of String#match, which returns a MatchData object on which you can use brackets to retrieve matches. “A man is holding a match in one hand while incessantly pointing at it with the slowly curling index finger of his other hand.” The curling finger looks like the tilde operator.
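As a sketch of the two styles side by side (using a made-up pattern, not one from the book):

```ruby
# Cryptic Perl style: =~ sets the special globals $1, $2, ...
if "Effective Ruby" =~ /(\w+) (\w+)/
  title_word = $1            # first capture group, via a cryptic global
end

# Clearer: String#match returns a MatchData you can index with brackets
md = "Effective Ruby".match(/(\w+) (\w+)/)
first  = md[1]               # "Effective"
second = md[2]               # "Ruby"
```

Both forms match the same thing; the MatchData version just keeps the captures attached to an object instead of leaking them into globals.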

Don’t use Perl’s $:; instead use the more descriptive $LOAD_PATH. You load a path with a wheelbarrow full of some kind of filler: “I stumble into a wheelbarrow. It’s full of chipped rubber tires of the type you use for paths in a park. Oddly, there’s a crisp dollar bill on top of the load destined for some path.”

Enable the use of other meaningful global Perl variables by loading the English library: require 'English'. Then you can use meaningful global variables with names like $OUTPUT_FIELD_SEPARATOR instead of $,. “The last strangeness of the room is that there’s an English dictionary sitting in the middle of the table.”
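A minimal sketch of that third alternative, using $ERROR_INFO and $PROGRAM_NAME as examples of the friendlier names the English library provides:

```ruby
require 'English'   # stdlib: defines readable aliases for the Perl-ish globals

begin
  raise ArgumentError, "oops"
rescue ArgumentError
  # $ERROR_INFO is the English alias for the cryptic $!
  $ERROR_INFO.message          # "oops" -- same object as $!
end

# $PROGRAM_NAME is the English alias for $0
$PROGRAM_NAME == $0            # always true
```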

Here are a few other mnemonic words/phrases I used to build the palace of Effective Ruby:

Superman with/without parents: Best practices for parameters and the use of optional parentheses when invoking parent methods.

Spaceship: The definition of the <=> method, commonly known as the spaceship operator
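For instance, a hypothetical Version class (my example, not the book's) only needs a <=> definition to unlock the whole Comparable toolkit:

```ruby
class Version
  include Comparable            # <, <=, ==, >, between?, sort all come free
  attr_reader :major, :minor

  def initialize(major, minor)
    @major, @minor = major, minor
  end

  # The spaceship: return -1, 0, or 1.
  # Array#<=> compares element by element, so delegate to it.
  def <=>(other)
    [major, minor] <=> [other.major, other.minor]
  end
end

Version.new(1, 2) < Version.new(1, 10)            # true
[Version.new(2, 0), Version.new(1, 9)].min.major  # 1
```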

For many of the items in Effective Ruby it was relatively easy, and fun, to create mnemonics. Others, like “Judge Ruby and the Gallows” (which is on use of the eval and exec methods to create or run dynamic code) took me some time. But, then again, it was a complex subject and, until I had the analogy worked out, I really didn’t have a solid grasp of the Ruby rules and usage patterns for eval and exec.

Don’t Forget

Why aren’t more programmers using the Method of Loci? Perhaps because it’s initially hard to get started and takes practice to do efficiently. But most likely, it’s because coders just aren’t aware of it. You can create a memory palace methodically, and the extra time you spend constructing one will result in better memory retrieval strength, if not understanding. Lewis Smile, in his book “The Memory Palace”, said: “There’s no such thing as a bad memory, only an untrained one.”

I’m not going to construct memory palaces for every book I read or session I listen to. I’m going to create them for books full of idioms on languages and frameworks and design patterns — stuff I want on the tips of my fingers as I develop. I certainly will be constructing a palace to memorize basic syntax for my next programming language (Go, Haskell, or something, not sure yet.)

Elvis may have left the building, but you, Simonides, and I can construct palaces so our memory stays.

RailsConf 2018 – Top 10 Favorite Talks
Fri, 25 May 2018
https://www.simplethread.com/railsconf-2018-top-10-favorite-talks/

Videos of RailsConf 2018 went up last week on Confreaks. When our team got back from RailsConf, we all made notes of our favorite talks, which I’ve summarized below.

Note: These are not necessarily the 10 best talks at RailsConf 2018. They’re just the top 10 talks our team happened to attend and enjoy at RailsConf this year, with those enjoyed by more of our team roughly clustered towards the top.

Sarah Mei
Sarah gave a stand-in keynote when the planned speaker had travel delays. This is a deeply thoughtful talk pulling in Brooks’s No Silver Bullet, changes in software development over recent decades, and flaws in the analogy of construction & architecture, among other topics. She ties it all together into the idea of treating our applications as spaces we live in, rather than a thing we build, finish, and move on from. We need our codebases to be livable and sustainable, and Sarah gives practical advice to help. It’s good, even if you don’t live in Rails. Watch it.

Eileen Uchitelle
Eileen talks about what’s happening in Rails development right now, plans for Rails 6, etc. She presents a perfect mix of conceptual goals and the technical implementation changes being made to accomplish them. It’s always enlightening to listen to a Rails core team member talk through their thoughts and how they’re approaching problems.

Nickolas Means
Several of us attended this talk, and we all thought it was fantastic. In this entertaining, detailed explanation of the event, Nickolas references the work of Sidney Dekker, acclaimed system safety expert, and makes excellent points about how “human error” is never the root cause. Human error is a reflection of flaws or oversights in the system. We need to dig deeper to find the “second story” to understand why people made the decisions they did, how the system allowed or encouraged those decisions, etc. I’ve been meaning to read Professor Dekker’s Field Guide to Understanding Human Error for a while, and this bumped it higher on my list.

David Heinemeier Hansson
Thoughtful, opinionated keynote from DHH, as always. We all especially liked his idea of “conceptual compression” as a way to think about how we, as the Rails community and as an industry, are making progress. We discuss this idea more in our RailsConf 2018 Takeaways post. He also talks about “Just-in-Time Learning” and personal (ethical) responsibility for the systems we build, among other interesting topics.

Jordan Raine
This was one of my personal favorites, surprisingly. I expected the usual tips about how to upgrade an application, but instead this is about changing your team culture and habits to make upgrades less painful over time and reduce the delay in upgrading. Jordan shares practical tools, methodologies, and mindset shifts to help. If your team has the typical pattern of waiting to upgrade Rails and then finally going through a painful upgrade sprint/project, this is worth watching.

Akira Matsuda
Akira, a committer to both Ruby and Rails, shows how even though Rails is a mature platform, there are still a lot of places that have not been highly optimized from a performance perspective. He also talks about improvements that could be made by leveraging asynchrony. We’re hopeful some of the ideas he proposed eventually make it into Rails, e.g., (easier) async queries in ActiveRecord and async view rendering.

Sasha Grodzins
Sasha provides a quick, easy-to-follow introduction to using React and GraphQL in a Rails application. If you’re already using those tools in production, you might skip this one, but if those are on your “look into someday” list, this is a good talk to start with. The associated GitHub repo is worth perusing too.

Chris LoPresto
Chris discusses how they leveraged Rails conventions and specific features, in combination with feature flags, to completely replace their UI at Betterment over the course of 8 weeks – while deploying their work constantly so they could test it in production. I haven’t personally watched this one yet, but from talking to colleagues, they especially liked the discussion around managing complex deployments with feature flags.

Aja Hammerly
Aja presents a myriad of smart techniques one can use to keep a production system running smoothly – and know if it’s not. Developers tend to have exhaustive test suites that run before deploying to production, but then we often just fall back on basic monitoring for production. This talk is inspiring for thinking about what more we could be doing.

Bonus Talks!

These are talks that none of our team attended but we heard good chatter about – and are on our shortlist to watch soon.

Node.js Doesn’t Share Your Values, And That’s Okay.
Tue, 22 May 2018
https://www.simplethread.com/node-js-doesnt-share-your-values-and-that-is-okay/

Breathe. It’s okay.

You and Node.js, or <insert other framework here>, don’t share a lot of the same values.

It’s okay. There are a lot of fish in the sea. You’ll be just fine.

Typical Pedantic Developer Disclaimer

Bear with me, I’m going to start this off by deflecting some early criticisms. I want to say that Node.js, and many of the frameworks built on top of it are great tools. There is a whole world of problems that Node.js is perfectly suited to solving. In fact, I would say that many of the best places for Node.js to plug in are at companies that have strong engineering cultures like Netflix, LinkedIn, Paypal, etc… Places where there are large teams of skilled engineers who want control over every line of their application, need insane performance, and who want to make the tradeoff between using the latest tools… and having to keep up with those tools as they change (or just write their own tools).

And yes, I know that Node.js is a runtime and not a framework. I really do understand that. The problem is that the Node.js ecosystem has a huge number of popular frameworks, so picking one and using it as an example would just get me yelled at by everyone else. By not picking one, I just get yelled at by everyone.

It’s Not You, It’s Me

Okay Justin, so you agree that Node.js is great technology, then what are you trying to say? Node.js is exploding in popularity right now. Why wouldn’t I want to use it for my business (or startup)?

Well first, I’m not saying you don’t. There is a good reason that Node.js is popular. It comes with a lot of legitimate strengths:

Active Community – The JavaScript community is active. Insanely active. About 500 NPM packages a day are published!

Scalable – Node.js uses event-driven I/O, which means that it can handle a ton of concurrent connections. So while it might not match something like Java in pure speed, it does allow a ton of lightweight connections on the same process.

Accessible – Since Node.js is written in JavaScript, it allows any developer familiar with JavaScript (ALL OF THEM!) to jump in. This removes a significant barrier to entry.

Simple – In general, the Node.js ecosystem values small frameworks that do one thing well, which can allow developers to get in and get started quickly.

All I am saying is that before you choose to use Node.js on your next project, consider a bit what your needs and goals are.

For example, what is more important to you, flexibility or productivity? Do you value an integrated toolset that pushes (forces?) you down a particular path, or do you want the flexibility to mix and match all of your tools? Now you might be saying “Of course we value flexibility AND productivity!” And that is great, of course you value both, but what do you value more? There is no wrong answer here. There are a lot of companies that value control and flexibility over productivity, and for very valid reasons. Both are practical, but both are very different applications of practicality.

What is more important to you, flexibility or productivity? Of course you value both, but what do you value more?

The Spectrum Of Control And Productivity

There are companies that need total control, and there are other companies that want to spend less time focusing on their technology and more time focusing on their business problems. I like to think of these companies as falling on two opposite ends of a spectrum. On one end you have companies that just want things built quickly and for them to be somewhat usable and somewhat functional. At the other extreme you have companies that want to write everything themselves, and even go so far as to do things like write their own web server software (I’m looking at you, Google).

For right now, I just want to focus on the “Full Stack” – “Lightweight” portion of the spectrum. To understand the difference between these portions of the spectrum, I’ll use the example of user authentication and registration. Below is a post detailing how to add authentication and user registration to a Node.js app using Passport:

That’s a thorough and great tutorial! The steps that tutorial goes through are:

Pulling in passport

Setting up your app to integrate the passport middleware

Adding a database.js config file

Creating the routes file and wiring it up to our views (we would probably want to move these routes out into another file in a larger app)

Put the code in to login/logout the user

Create views for signup/signin/signout

Create the user model

Add code to the user model for hashing the password

Checking if the password is valid

Configure the passport local strategy to load the user and authenticate it

Set it up to return any validation errors, etc… from login

There is quite a lot of code and configuration there. Many developers love the explicitness of wiring everything up like this; they hate it when frameworks hide things behind magic. It’s cool, I get it! There is a certain simplicity to it. However, if you’re like me, and you love “batteries included” frameworks like Django, Laravel, or Rails, then you probably look at this and think that you’d rather just leverage the community’s work and knowledge and let the framework handle it. I can get the functionality really easily, and I know that my framework’s built-in authentication system gets a lot of use and is probably pretty high quality.

Time For Some Apples and Oranges

As a comparison, both Django and Laravel ship with authentication out of the box. The Rails community largely uses Devise as its authentication method. Just to give an example, here is how you’d set up Devise in a Rails app:

Put Devise in your Gemfile, run bundle

Run ‘rails g devise:install’

Run ‘rails g devise user’

Run ‘rails db:migrate’

Optionally run ‘rails g devise:views’ to spit out the default views for customization
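What those five steps buy you is striking: the generated model is nearly empty, because each listed module pulls in a whole slab of ready-made behavior. A rough sketch of what `rails g devise user` produces (defaults may vary by Devise version):

```ruby
# app/models/user.rb -- generated by `rails g devise user`
class User < ApplicationRecord
  # Each module below is a chunk of prebuilt functionality: password auth,
  # sign-up, password reset, "remember me" cookies, and email/password validation.
  devise :database_authenticatable, :registerable,
         :recoverable, :rememberable, :validatable
end
```

Routes, controllers, views, and mailers for all of those modules are wired up by Devise's conventions; you only generate the views if you want to customize them.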

I’m not saying that one is inherently better than the other; they are just different. But one thing to note is that Devise gives you a *lot* more functionality out of the box than what the above tutorial provides. You’re really only just scratching the surface there. The important thing to note is that a Node package couldn’t ever provide this functionality. Because one of the things the Node.js ecosystem values is ultimate flexibility, you cannot depend on there being defaults. Because Devise knows where views go, how to wire in routes, where controllers go, where to generate a model, how to wire up validation, how to add helper methods to controllers, etc., it can do all of this work for you.

The same holds true for a whole world of Rails gems (and Django or Laravel packages) that do everything from authorization to logging. Need a library to automatically take all of your CSS for your emails and inline it into the HTML at runtime so that it renders properly in most email clients? Yep, there is a gem for that. Need a library to automatically save versions of a specific model every time you save it to the database? Yep, there is a gem for that too. Do you want a simple, no configuration way of pulling webpack into your Rails app? Yep, you guessed it, it is now part of Rails. And yes, the goal was for it to work out of the box without any configuration, and to automatically combine, minify, and hash your assets. It even provides easy integrations for popular front-end libraries and gives you an easy way to spin up a webpack dev server.

It Isn’t Better, Just Better For Me

You see, the values in the Rails ecosystem are just different. DHH tried to elucidate those values a few years ago by publishing The Rails Doctrine. In it he explained the nine pillars that make Rails what it is, and why it continues to flourish. I know that Hacker News these days might make it seem like Rails is on life support, but the reality is that Rails had almost 4700 commits over the last year. That number is absolutely insane, and flies in the face of comments that Rails is dead or dying. I know that commits aren’t the be-all and end-all of activity measurements, but there are very few tools or frameworks that are anywhere near as active as Rails.

Now I know what you’re probably thinking. Justin, if you love Rails so much, then why don’t you just marry it? And to be honest, I really do like Rails. You might say I love Rails. For much of the work I do, I don’t think any other framework has done such a great job of providing me the basic building blocks for quickly and easily building a production web application. But my idea of what makes a production web application probably varies a lot from your definition, and that is the whole point.

But my idea of what makes a production web application probably varies a lot from your definition, and that is the whole point.

I think that far too often the default tech stack for new projects is chosen without ever stopping and considering the real needs of the business. Many companies want applications built on frameworks that are moving and changing quickly, while other companies value more stability. Some companies want to choose every piece of their stack, while other companies want a set of constraints that allows them to focus more time on solving business problems.

Just Consider It

I just ask you to think about it. The next time you’re starting a project, take a bit of extra time to really think about the needs of the business and then carefully consider your choice of stack. Does your business really need ultimate flexibility? Even if it comes at the cost of lower productivity?

Maybe what you’re doing now works great for you! Awesome! But remember, don’t just go with the flow, you don’t have to use the same thing all of the cool kids are using (although that sure sounds like a lot of fun!). Sometimes it is better to use mature and stable technologies.

If, after a month of monastic-level solitude and quiet contemplation of all of your life choices, you decide that you could use something a little more full-stack; then I recommend checking out the following frameworks:

Rails – Ruby – I just spent a couple pages blabbing about this, I doubt I need to tell you much more.

Django – Python – Django appeared at around the same time as Rails and is also a wonderfully powerful stack. It has gained a lot of popularity with the rise of Python over the last few years.

Laravel – PHP – I haven’t used Laravel in anger, but I’ve talked to several people who love it. If you’re in the PHP world, I’d recommend checking it out.

Sails.js – JavaScript – A framework inspired by Rails and built on Node.js. I’ve explored it a good bit, and it does seem to be gaining a bit of a following in the Node.js world. My biggest issue is that because integration just isn’t something the Node.js ecosystem values, Sails.js gives you an integrated stack but nothing else in the ecosystem can really plug into it in the same way. However, if you’re in the JavaScript ecosystem and looking for something more integrated, this might be what you’re looking for.

Phoenix – Elixir – A framework built by a prominent member of the Rails community. Built on BEAM, the Erlang VM, it is designed to be productive and fast.

Do you have anything you’d like to see added to that list? Let me know!

Takeaways From RailsConf
Tue, 24 Apr 2018

At Simple Thread, Rails is our favorite framework for building web applications. Even after all these years, we still think it is the most productive framework out there for building awesome web applications. Because we love Rails so much, more than half of the team traveled to Pittsburgh this week to attend RailsConf 2018. Here’s a bit of what we learned from our time at RailsConf.

Compress All The Things!

Conceptual compression: Look at all the things I’m not doing! #RailsConf

RailsConf started out with DHH delivering a keynote about progress and “Conceptual Compression”. Conceptual compression is the idea that, over time, we make progress by packing concepts down tightly enough that we no longer need to think about their underlying complexity in order to leverage them.

When Rails was introduced, there were many complexities to web development which Rails smoothed over and which appealed to developers of that time. While understanding these details might be beneficial for experienced developers today, how are we to quickly bring new developers into the fold if we require them to understand every underlying detail and the context of web programming from ten years ago? Do you need to know the underlying details of your operating system in order to use it effectively? We all get to stand on the shoulders of giants, and now it is the new Rails developers’ turn.

For those new developers, Rails might be a huge framework, but the beauty is that you don’t need to understand every nuance to be productive. The lesson here is to take heart in building something even if you don’t know how everything works from the outset. Learning the conventions, rules, and constraints is very important, but never underestimate the power of building something as a means of gaining knowledge to apply later on. That knowledge will then be compressed into the next big thing, and on and on it goes.

RailsConf Got Real

Programmers who are guiding business units through the process of digitally exploring new ventures or mapping existing workflows to applications need to embrace the authority, responsibility, and stewardship that come along with it.

The complexity of software forces specialization, but we shouldn’t let it reach the point where folks become experts in only tiny pieces of the whole picture. Systems built in Rails are not magically less complex; managing the complexity simply requires less effort. The history of Rails is largely a story of identifying complexity shared by the community at large and then pushing that complexity upstream into the framework, with well-reasoned abstractions and opinionated conventions for handling it.

In other words, Rails is a force for compressing concepts to the point that small teams of people can manage highly complex systems and own the whole experience. Seeing the whole picture provides a context for responsible stewardship and other higher level concerns.

Engineers should be guides to the business teams — not just transactional ticket-takers. Software developers got into the field to change the world… and as the economic explosion of the last ten years has proven, it’s working.

But once software eats the world, like the ouroboros, will it eat itself? If we don’t assume the ethical and moral responsibility that goes with it … it just might.

Rails Isn’t Dying – Rails Is Maturing

Rails is one of the best ways to quickly build stable and scalable codebases, and while other frameworks have exploded in popularity, Rails is still one of the most actively developed frameworks in the world, with almost 4700 commits made to it over the last year. The takeaway here is that frameworks are tools for a task, and Rails has been optimized for a specific purpose that few other frameworks can match.

Are there ways that Ruby could be better? Of course.

I’m still processing the week at #RailsConf. I think the overriding feeling I have so far from the week is that we’ve grown up. We are a mature technology. So what are we going to do about that? Make things easier, even more robust, and even more enjoyable.

For instance, scalability isn’t the issue it is perceived to be. So battling those sorts of constraints – or once they’re removed, the perceptions of those constraints – seems like a major theme for the community in the last few years and ahead in 2018.

Dev.to’s Ben Halpern wrote a stirring article about how, as he puts it, “the Hacker News mindset on Rails eats at my insecurities” but attending RailsConf made him realize how vibrant and crucial the Rails developer community is to what software developers are building and maintaining.

In other words, Rails might be getting older, but it definitely isn’t slowing down.

Slow Down And Do Your Dishes

What is worse than coming home to a sink full of dishes with dried-up food stuck to them? Not much, but allowing technical debt to build up in your codebase is essentially the same thing.

The solution to a big mess of code isn’t a big rewrite but changing your programming habits, so every small change turns the tide and leaves the codebase better every time. “Everyone has to do the dishes” – @sarahmei#RailsConf

Take a lesson from the White Rabbit which can apply to programming, business and the rest of life … sometimes you have to SLOW DOWN to go further!

We know what makes a codebase most equipped to be its best self: low complexity, high test coverage, upgraded dependencies, minimal duplication, sound data schemas and stable churn rates. Violating those ideals is like leaving a dried up lasagna dish in the sink. There’s a time and a place for it, but don’t let it sit too long and don’t do it everyday for a year.

Final Thoughts

We learned a lot at RailsConf this week, but I think the most important lesson we learned is even after all these years, the Rails community is as powerful as ever. If you want to hear more from us about what we learned and how it could help you and your business, please get in touch. We’d be excited to tell you more!

You’re Not Actually Building Microservices
Mon, 26 Feb 2018

I recently read a post called The False Dichotomy of Monoliths and Microservices by Jimmy Bogard, which I absolutely loved. While reading through it I noticed he touched on a witticism I use, which is to refer to most microservices implementations as “distributed monoliths”. I very briefly thought that maybe I had coined the term, but Google quickly disabused me of that ridiculous notion. I say “ridiculous” because I should have known that the phrase would have been previously used, since it so perfectly describes what most greenfield microservices implementations turn into.

Before we get into the details of that, I have to lay out a confession: I’m a big fan of the monolith. It has received so much bad press in recent years due to the microservices craze. But I think the real root of the problem is that developers work with applications that slowly accrete more and more functionality (often without tests) over the course of many years, and those applications become very brittle and difficult to change. I’m not going to deny it, this is a huge problem! This is one of the core problems that Service Oriented Architecture and now Microservices (are Microservices a subset of SOA?) are designed to solve.

I’m not building microservices?

Before answering that question, let’s define what most folks would consider the opposite of microservices… a monolith. One of the challenges these days is finding a good definition of a monolith, because they are most often defined in the context of implementing SOA or Microservices. I think that the Wikipedia definition is actually pretty good though:

A software system is called “monolithic” if it has a monolithic architecture, in which functionally distinguishable aspects (for example data input and output, data processing, error handling, and the user interface) are all interwoven, rather than containing architecturally separate components.

The basic idea is that instead of being broken out into separate architectural components, things are instead intertwined. To Jimmy’s point from his post above, there is nothing in this definition about the fact that these pieces are, or are not, physically distributed from each other. We like to think of monoliths as a large single codebase, running on a single system, but that really isn’t the case. The core tenet of a monolith is that the system functionality is intertwined.

So, if we follow that course of logic, then what would a distributed monolith be? Well, it would be a set of physically distributed services that are intertwined. Sound familiar? I hope, for your benefit, that it isn’t too familiar.

You see, a distributed monolith is the worst of all worlds. You’ve taken the relative simplicity of a single monolithic codebase, and you’ve traded it in for a deeply intertwined distributed system. So, are you building microservices? Take a look at a few of these symptoms, and decide for yourself:

A change to one microservice often requires changes to other microservices

Deploying one microservice requires other microservices to be deployed at the same time

Your microservices are overly chatty

The same developers work across a large number of microservices

Many of your microservices share a datastore

Your microservices share a lot of the same code or models

Now, I’m not saying that if your microservices implementation checks one of these boxes that you have a problem… there are always exceptions. But for the most part, if you’re nodding your head at a number of the points above, you might not be working with microservices.

How do I get to microservices?

I’m a big fan of baby steps. And when it comes to microservices my thoughts are no different. For many greenfield systems I would almost always recommend starting off with a monolith. Monoliths are easy, and you can logically separate your system where it makes sense within a single codebase. But many systems have a lot of concepts that are inherently tightly bound, which are difficult to separate cleanly until you have a deep knowledge of the problem domain. Sometimes you realize that those concepts can never be separated!

Stefan Tilkov has a thoughtful essay arguing against the premise of starting with a monolith when you suspect you’ll need microservices. But he still acknowledges the difficulty of defining boundaries in a greenfield system: “I agree you should know the domain you’re building a system for very well before trying to partition it, though: In my view, the ideal scenario [for starting with microservices] is one where you’re building a second version of an existing system.”

Designing service boundaries isn’t easy, even with knowledge of a system, and getting them wrong is very costly. Starting out with a monolith allows you to figure out where the natural boundaries are in your system, so you can design your service boundaries in an informed way.

Avoiding the Distributed Monolith

I believe that many of the problems that people run into with small and medium sized monoliths can be alleviated by having a good automated test suite and automated deployments. Having a single, non-distributed codebase can be a huge advantage when starting out with a new system. It allows you to more easily reason about your code, more easily test your code, and move and change quickly without having to worry about orchestration between services, distributed monitoring, keeping your services in sync, eventual consistency, and all of the other things you’ll run into with microservices. Those things might not seem like huge challenges at first glance, but in reality they are huge hurdles to overcome.

If you start off with a monolith, your goal should be to keep it from growing into a monstrosity. The key to doing this is to simply listen to your application! I know, this is easier said than done, but as your monolith grows, keep asking yourself a few questions:

Is there anything that is scaling at a different rate than the rest of the system?

Is there anything that feels “tacked-on” to the outside of the system?

Is there anything changing much faster than the rest of the system?

Is there a part of the system that requires more frequent deploys than the rest of the system?

Is there a part of the system that a single person, or small team, operates independently inside of?

Is there a subset of tables in your datastore that isn’t connected to the rest of the system?

As the system size and developer count get larger, you’ll find that splitting out services becomes commonplace. You hear about companies like Netflix or Uber that have hundreds of microservices in their ecosystems. These companies are at the extreme end of the scale. They have thousands of developers and are often deploying thousands of times per day. They need tons of fine-grained services in order to make any of this sane and manageable. But what they might think of as a fine-grained service is probably a medium-sized application to a much smaller company. It is important to keep in mind the scale of the systems that are successfully deploying large numbers of microservices.

When you move significantly down from that extreme, you will start to find that leaning more and more towards a handful of medium-sized monoliths, surrounded by an ecosystem of services, is what makes more sense. When you get down to the other extreme end of the scale, where you have fewer than five developers working on an entire system, you’ll find that there are huge advantages to sticking with a monolith and potentially pulling out a service or two — if it really makes sense.

If there is anything you take away from this post, I hope it is that microservices are useful, but shouldn’t always be your default architecture of choice. Splitting out services makes some things easier and other things harder, so depending on your system and organization, you’ll want to think things through before jumping in head first.

To sum it up, I’ll leave it to Martin Fowler: “don’t even consider microservices unless you have a system that’s too complex to manage as a monolith.”

Software Complexity Is Killing Us
Mon, 29 Jan 2018

Since the dawn of time (before software, there was only darkness), there has been one constant: businesses want to build software cheaper and faster.

It is certainly an understandable and laudable goal – especially if you’ve spent any time around software developers. It is a goal that every engineer should support wholeheartedly, and we should always strive to create things as efficiently as possible, given the constraints of our situation.

However, the truth is we often don’t. It’s not intentional, but over time, we get waylaid by unforeseen complexities in building software and train ourselves to seek out edge cases, analysis gaps, and all of the hidden repercussions that can result from a single bullet point of requirements.

We get enthralled by the maelstrom of complexity and the mental puzzle of engineering elegant solutions: Another layer of abstraction! DRY it up! Separate the concerns! Composition over inheritance! This too is understandable, but in the process, we often lose sight of the business problems being solved and forget that managing complexity is the second most important responsibility of software developers.

So how did we get here?

Software has become easier…in certain ways.

Over the last few decades, our industry has been very successful at reducing the amount of custom code it takes to write most software.

Much of this reduction has been accomplished by making programming languages more expressive. Languages such as Python, Ruby, or JavaScript can take as little as one third the code of C to implement similar functionality, and C gave us similar advantages over writing in assembler. Looking to the future, it is unlikely that language design will give us the same kinds of improvements we have seen over the last few decades.
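To put a rough number on that expressiveness gap, here is a small illustration: counting word frequencies, which takes pages of manual memory management, string handling, and hash-table code in C, is a few lines of Ruby.

```ruby
# Count word frequencies in a string. In C this would require writing
# (or vendoring) a hash table and managing every allocation by hand.
def word_counts(text)
  text.downcase.scan(/[a-z']+/).tally
end

counts = word_counts("the quick brown fox jumps over the lazy dog the end")
puts counts["the"]  # prints 3
```

The point isn’t this particular snippet; it is that a higher-level standard library collapses dozens of lines of plumbing into a single expression.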

But reducing the amount of code it takes to build software involves many other avenues that don’t require making languages more expressive. By far the biggest gain we have made in this area over the last two decades is open source software (OSS). Without individuals and companies pouring money into software that they give freely to the community, much of what we build today would require an order of magnitude more cost and effort.

These projects have allowed us to tackle problems by standing on the shoulders of giants, leveraging tools to allow us to focus more of our energy on actually solving business problems, rather than spending time building infrastructure.

That said, businesses are complex. Ridiculously complex, and only getting more so. OSS is great for producing frameworks and tools that we can use to build systems on top of, but for the most part, OSS has to tackle problems shared by a large number of people in order to gain traction. Because of that, most open source projects have to either be relatively generic or be in a very popular niche. Therefore, most of these tools are great platforms on which to build out systems, but at the end of the day, we are still left to build all of the business logic and interfaces in our increasingly complex and demanding systems.

So what we are left with is a stack that looks something like this (for a web application)…

That “Our Code” part ends up being enormously complex, since it mirrors the business and its processes. If we have custom business logic, and custom processes, then we are left to build the interfaces, workflow, and logic that make up our applications. Sure, we can try to find different ways of recording that logic (remember business rules engines?), but at the end of the day, no one else is going to write the business logic for your business. There really doesn’t seem to be a way around that… at least not until the robots come and save us all from having to do any work.

Don’t like code, well how about Low-Code?

So if we have to develop the interfaces, workflow, and logic that make up our applications, then it sounds like we are stuck, right? To a certain extent, yes, but we have a few options.

To most developers, software equals code, but that isn’t reality. There are many ways to build software, and one of those ways is through visual tools. Before the web, visual development and RAD tools had a much bigger place in the market. Tools like PowerBuilder, Visual FoxPro, Delphi, VB, and Access all had visual design capabilities that allowed developers to create interfaces without typing out any code.

These tools spanned the spectrum in terms of the amount of code you needed to write, but in general, you designed your app visually and then ended up writing a ton of code to implement the logic of your app. In many cases you still ended up programmatically manipulating the interface, since interfaces built using these tools often ended up being very static. However, for a huge class of applications, these tools allowed enormous productivity gains over the alternatives, mostly at the cost of flexibility.

The prevalence of these tools might have waned since the web took over, but companies’ desire for them has not, especially since the inexorable march of software demand continues. The latest trend that is blowing across the industry is “low code” systems. Low code development tools are a modern term put on the latest generation of drag and drop software development tools. The biggest difference between these tools and their brethren from years past is that they are now mostly web (and mobile) based and are often hosted platforms in the cloud.

And many companies are jumping all over these platforms. Vendors like Salesforce (App Cloud), Outsystems, Mendix, or Kony are promising the ability to create applications many times faster than “traditional” application development. While many of their claims are probably hyperbole, there likely is a bit of truth to them as well. For all of the downsides of depending on platforms like these, they probably do result in certain types of applications being built faster than traditional enterprise projects using .NET or Java.

So, what is the problem?

Well, a few things. First is that experienced developers often hate these tools. Most Serious Developers like to write Real Software with Real Code. I know that might sound like I’m pandering to a bunch of whiney babies (and maybe I am a bit), but if the core value you deliver is technology, it is rarely a good idea to adopt tools that your best developers don’t want to work with.

Second is that folks like me look at these walled platforms and say “nope, not building my application in there.” That is a legitimate concern and the one that bothers me the most.

If you built an application a decade ago with PHP, then that application might be showing its age, but it could still be humming along right now just fine. The language and ecosystem are open source, and maintained by the community. You’ll need to keep your application up to date, but you won’t have to worry about a vendor deciding it isn’t worth their time to support you anymore.

…folks like me look at these walled platforms and say “nope, not building my application in there.” That is a legitimate concern and the one that bothers me the most.

If you picked a vendor 10 years ago who had a locked down platform, then you might be forced into a rewrite if they shut down or change their tooling too much (remember Parse?). Or even worse, your system gets stuck on a platform that freezes and no longer serves your needs.

There are many reasons to be wary of these types of platforms, but for many businesses, the allure of creating software with less effort is just too much to pass up. The complexity of software marches on, and we software engineers unfortunately aren’t doing ourselves any favors here.

What needs to change?

There are productive platforms out there that allow us to build Real Software with Real Code, but unfortunately our industry right now is far too preoccupied with following the lead of the big tech giants to realize that sometimes their tools don’t add a lot of value to our projects.

I can’t tell you the number of times I’ve had a developer tell me that building something as a single page application (SPA) adds no overhead versus just rendering HTML. I’ve heard developers say that every application should be written on top of a NoSQL datastore, and that relational databases are dead. I’ve heard developers question why every application isn’t written using CQRS and Event Sourcing.

It is that kind of thought process and default overhead that is leading companies to conclude that software development is just too expensive. You might say, “But event sourcing is so elegant! Having a SPA on top of microservices is so clean!” Sure, it can be, but not when you’re the person writing all ten microservices. It is that kind of additional complexity that is often so unnecessary.

We, as an industry, need to find ways to simplify the process of building software, without ignoring the legitimate complexities of businesses. We need to admit that not every application out there needs the same level of interface sophistication and operational scalability as Gmail. There is a whole world of apps out there that need well thought-out interfaces, complicated logic, solid architectures, smooth workflows, etc…. but don’t need microservices or AI or chatbots or NoSQL or Redux or Kafka or Containers or whatever the tool du jour is.

A lot of developers right now seem to be so obsessed with the technical wizardry of it all that they can’t step back and ask themselves if any of this is really needed.

It is like the person on MasterChef who comes in and sells themselves as the molecular gastronomist. They separate ingredients into their constituent parts, use scientific methods of pairing flavors, and then apply copious amounts of CO2 and liquid nitrogen to produce the most creative foods you’ve ever seen. And then they get kicked off after an episode or two because they forget the core tenet of most cooking, that food needs to taste good. They seem genuinely surprised that no one liked their fermented fennel and mango-essence pearls served over cod with anchovy foam.

Our obsession with flexibility, composability, and cleverness is causing us a lot of pain and pushing companies away from the platforms and tools that we love. I’m not saying those tools I listed above don’t add value somewhere; they arose in response to real pain points, albeit typically problems encountered by large companies operating systems at enormous scale.

What I’m saying is that we need to head back in the direction of simplicity and start actually creating things in a simpler way, instead of just constantly talking about simplicity. Maybe we can lean on more integrated tech stacks to provide out of the box patterns and tools to allow software developers to create software more efficiently.

…we are going to push more and more businesses into the arms of “low code” platforms and other tools that promise to reduce the cost of software by dumbing it down and removing the parts that brought us to it in the first place.

We need to stop pretending that our 20th line-of-business application is some unique tapestry that needs to be carefully hand-sewn.

Staying Focused on Simplicity

After writing that, I can already hear a million developers sharpening their pitchforks, but I believe that if we keep pushing in the direction of wanting to write everything, configure everything, compose everything, use the same stack for every scale of problem, then we are going to push more and more businesses into the arms of “low code” platforms and other tools that promise to reduce the cost of software by dumbing it down and removing the parts that brought us to it in the first place.

Our answer to the growing complexity of doing business cannot be adding complexity to the development process – no matter how elegant it may seem.

We must find ways to manage complexity by simplifying the development process. Because even though managing complexity is our second most important responsibility, we must always remember the most important responsibility of software developers: delivering value through working software.

What part of your job can you automate?
Thu, 04 Jan 2018

Let me first start off this post by saying, if you’ve never read the book “The Passionate Programmer” by Chad Fowler, then you’re doing yourself a disservice. You should go right now and check it out, I’ll wait.

“The Passionate Programmer” is a book about creating a great career as a software engineer, and is packed with tips such as “Automate Yourself into a Job”. It is this tip that I want to talk with you about today. Unfortunately, many people look at software engineers as fungible. We have some work, we need to get it done, so we can throw X software engineers at it! They don’t understand why, if you can do a task with one programmer in three months, you can’t just throw three programmers at it and get it done in one month. Why do we need all of these expensive software engineers? Can’t we just go hire a bunch of cheap developers and get much more work done?

Unfortunately (or maybe fortunately for you), this isn’t the way it works. Generally speaking, an expert software engineer will be able to produce more in a given time frame: not necessarily in total output (inexperienced software engineers can often produce a *ton* of code), but in terms of designing systems and producing software that will run reliably and won’t cause a constant stream of bugs and downtime.

When it comes to enhancing software throughput, your options are…

1) Get faster people to do the work
2) Get more people to do the work, or
3) Automate the work.

It is hard to measure whether one programmer is faster than another, and adding more developers to a project actually tends to slow it down. Therefore, an experienced engineer will recognize that their time is wasted on routine and repetitive tasks, and will go out of their way to automate them. This is why Larry Wall (the creator of the Perl programming language) very tongue-in-cheek declared laziness to be one of the three virtues of a great programmer:

Laziness – The quality that makes you go to great effort to reduce overall energy expenditure. It makes you write labor-saving programs that other people will find useful, and document what you wrote so you don’t have to answer so many questions about it. Hence, the first great virtue of a programmer.

While there are many tasks in our daily lives as developers that can be automated, many of them fall into the realm of DevOps. One of the tasks that I realized I had been wasting too much time on was deploying server updates.

My Automation – AWS OpsWorks Command Deployments

AWS OpsWorks is a service that allows you to use code to automate server configurations through tools such as Chef and Puppet. We use OpsWorks to bootstrap AWS EC2 instances, allowing us to have an easily repeatable environment for our applications to run in.

As every good engineer knows, regular and prompt patching of your servers and libraries is critical to ensure that any vulnerabilities are quickly remedied. We had long ago created tasks to run updates on our servers via Chef, but we were still logging in every week to run the commands on the servers, waiting for the servers to re-enter the load balancers, then running it against the next batch. The process doesn’t take a ton of time, but it is still time that could be better spent doing other things.

One way to do this is by logging in to AWS and deploying commands to the servers. Kicking off the Chef commands to patch the servers involves:

1) Log into the AWS web console (entering credentials and using multi-factor authentication)
2) Navigate to OpsWorks, navigate to each Stack to update
3) Execute the deployment commands for each batch of servers
4) Repeat the deployment commands for each Stack, and
5) Repeat the entire process for each staging and production environment you have.

It isn’t a ton of work, but there is a lot of waiting in between steps, which means a lot of context switching. That seems like a prime candidate for automation, but we put off automating it for a long time, because the *right* solution felt like a lot of work, and logging in and clicking a few buttons once a week didn’t. This is the trap engineers often fall into: putting off time-saving efforts because solving a problem the way we feel it *should* be solved would take a lot of effort, when in fact a much simpler solution would suffice.

My Simple Solution: Bash Script

After a bit of research we realized that we could kick off the OpsWorks agent directly on the server without going through the AWS API, and so writing a shell script to SSH into each server and kick off the commands was pretty straightforward.
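A minimal sketch of that script looks something like the following. The hostnames, the remote user, and the exact agent invocation here are placeholders, not our actual setup — adapt them to your own stacks:

```shell
#!/usr/bin/env bash
# Sketch of the simple solution: SSH into each server and kick off the
# OpsWorks agent directly, skipping the AWS API. Hostnames, the remote
# user, and the agent command below are placeholders.
set -euo pipefail

# Overridable so the script can be dry-run: SSH=echo ./patch_servers.sh
SSH="${SSH:-ssh}"

patch_batch() {
  for host in "$@"; do
    echo "Patching ${host}..."
    # Ask the agent already on the instance to run the update command.
    "$SSH" "deploy@${host}" "sudo opsworks-agent-cli run_command"
  done
}

# Patch in batches so the load balancer always has healthy instances:
#   patch_batch app1.internal app2.internal
#   # wait for batch one to re-enter the load balancer, then:
#   patch_batch app3.internal app4.internal
```

Running one batch at a time preserves the manual process's safety property (servers re-enter the load balancer before the next batch goes down) while eliminating the clicking and waiting.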

In the future we could clean up this solution further and use the AWS API to auto-discover servers in each AWS account in each OpsWorks layer, detect when the servers re-enter the load balancer, etc… but this solution was fast and served our current needs. Never let perfect be the enemy of good!

There are many other opportunities for quick wins like this, but it makes me wonder, what part of your job can you automate?

]]>https://www.simplethread.com/part-job-can-automate/feed/0RVA JavaScript Conf 2017 – Thank You RVA!https://www.simplethread.com/rva-javascript-conf-2017-thank-you-rva/
https://www.simplethread.com/rva-javascript-conf-2017-thank-you-rva/#respondThu, 09 Nov 2017 16:00:34 +0000https://www.simplethread.com/?p=1361

]]>After many long months of planning, RVA JavaScript Conf 2017 went off last Friday without a hitch! We are truly grateful to all of the attendees and the Richmond/Charlottesville software development communities for making it such a huge success. The conference sold out over a week ahead of time, which is more than we could have hoped for our first year. Just look at this crowd!

We sent out a survey after the conference, and so far we have had almost half of all attendees respond. Out of those responses, over 96% responded positively when asked how likely they were to recommend a friend attend RVA JavaScript Conf next year. We know the conference wasn’t perfect this year (we are taking everyone’s feedback to heart!), and so that level of positive response was truly overwhelming for us.

I would personally like to thank the organizers whose hard work made the conference possible. Hundreds of hours of work went into organizing this conference. Al Tenhundfeld, Trish Mahan, Gaelen Kash, and A’braham Barakhyahu put a ton of time and effort into making this conference a reality. Without their hard work it wouldn’t have happened.

In addition to the organizers, we also had a wonderful group of volunteers that gave up their sleep and arrived at the conference in the wee hours of the morning to help us get everything setup and get everyone checked in. The volunteers made the registration process go incredibly smoothly, and for that we are immensely grateful.

I would also like to thank our sponsors for taking a chance on a first year conference! For those who have been involved in conferences in the past, you probably know that conferences are expensive, especially when hosted at a nice facility like the Westin. Altria was our gold sponsor, which had a huge impact on the conference. Our silver sponsors were Snagajob, CapTech, Singlestone, Carmax, and Simple Thread (that’s us!). Our bronze sponsors were Vaco, maconit, and RTS Labs. I know I’m starting to sound like a broken record, but without these sponsors, the conference would not have happened. The ticket sales alone are nowhere near enough to pay for a conference like this.

And last, but certainly not least, I want to thank all of the presenters who volunteered their own time to come and speak at RVA JavaScript Conf. We had a ton of great presenters, and we hope we can get many of them to come back next year!

]]>https://www.simplethread.com/rva-javascript-conf-2017-thank-you-rva/feed/0Examples of Digital Transformationhttps://www.simplethread.com/examples-of-digital-transformation/
https://www.simplethread.com/examples-of-digital-transformation/#respondThu, 09 Nov 2017 14:02:09 +0000https://www.simplethread.com/?p=1352

]]>In my last post, I submitted my simple definition of digital transformation: “Digital transformation is the change to a digital-first approach that improves the experience of doing business for everyone.” I also promised to give some examples of digital transformation.

In doing so, I feel it’s important for the examples to meet the criteria of my definition of true digital transformation.

Let’s start with an organization in a sector you might least expect: municipal government.

Cary, N.C.

Like almost every other government entity in the U.S., Cary, North Carolina was drowning in a sea of legacy applications with minimal integration and loads of technical debt.

Cary, NC CIO Nicole Raimundo (Photo: LinkedIn)

CIO Nicole Raimundo’s answer was a digital-first, platform strategy. She chose Salesforce to replace dozens of legacy applications and workflows used for things like work orders, permits, and onboarding. “We’re looking at things more strategically, adding a focus on mobile, cloud systems and platforms,” Raimundo said.

With a platform approach, Raimundo was able to get some quick wins, and the town now has a more comprehensive look at residents and their needs.

“Instead of looking at a singular application for one department, it’s about creating a platform that goes across the town that everyone can interact with.” Because of Raimundo’s approach and digital-first thinking, the town now also has an open data portal and public wifi in town-owned facilities.

For her strategy and results, she was chosen the Public Sector CIO of the Year in 2016 by the North Carolina Technology Association.

The advantage for the town, not just its residents, is that with a proven, demonstrated digital-first approach, it becomes a stronger magnet for technology-savvy organizations.

JetBlue

In terms of airlines thinking digital first, JetBlue is leading the way. You have them to thank for satellite-based in-air wifi, among other things.

JetBlue CIO Eash Sundaram (Photo: JetBlue)

CIO Eash Sundaram says that he sees technology as an enabler. He feels like if he can put the right technology into the hands of the right people, especially customers, then everyone can do more with less.

Eliminating friction points like passenger check-ins has become a priority for companies like JetBlue and Delta, and Sundaram’s team is employing technologies like NFC (near field communication, the same radio technology that underpins mobile payment experiences like Apple Pay) to eliminate or streamline the check-in process.

Attendants will soon be armed with devices containing passenger info, like who in their travel party has diet restrictions, so that in-flight service can be more personalized.

JetBlue is so invested in digital innovation that Sundaram is also head of JetBlue’s venture arm, JetBlue Technology Ventures. He is one of a new breed of CIOs not simply responsible for digital, but also for innovation, and this is the very intersection at which digital transformation lives.

Dominion Due Diligence Group

And lastly, if you haven’t read our case study on D3G, it is a stellar example of digital transformation and what an organization in a traditionally non-digital business can do with a digital-first approach.

In short, D3G had been duct-taping their entire project management process together with a legacy application that was broken at best, plus various offline workflows.

Through asking questions and listening, we uncovered several issues. First, the application didn’t provide a holistic view of their project delivery pipeline. It required a multitude of external systems – like email, spreadsheets, and more – to manage projects on a day-to-day basis. There were issues when multiple users tried to edit the same projects in the application. And it was also extremely slow for remote employees and didn’t allow mobile access.

Once we knew the pain points, it was clear that the company needed an answer that:

Could scale easily

Could be accessed remotely and on a mobile device, no matter the data size required

Would be familiar and comfortable to use

Would combine its many systems into a single tool, and

Would integrate with their sales and delivery pipelines.

By changing their thinking to digital first and partnering with us, they transformed their entire project management and bid management processes to digital, reducing wasted time, effort, duplicate work, and errors, and increasing productivity, efficiency, customer service, data collection, and profit margin.

]]>https://www.simplethread.com/examples-of-digital-transformation/feed/0Unblock Rails UI Testing with Cypresshttps://www.simplethread.com/unblock-rails-ui-testing-cypress/
https://www.simplethread.com/unblock-rails-ui-testing-cypress/#commentsThu, 02 Nov 2017 15:31:50 +0000https://www.simplethread.com/?p=1341

Everybody knows that Cypress is the best way to test your UI. Actually, that’s probably not true yet, as Cypress went into public beta in October… but soon everybody will know. Anyway, Cypress is a great way to test your Rails UI (or any web UI, for that matter). Cypress is especially effective when your Rails view has complex JavaScript that makes Capybara tests cumbersome or impossible.

We’ve become so accustomed to dealing with or ignoring Selenium/PhantomJS issues that we’ve just come to accept them. There are blog posts and articles that advocate Selenium/PhantomJS but then say things like “intermittent PhantomJS issues.” I mean, I love Capybara with its page DSL and methods like visit, click_on, fill_in, choose, check, select, page.has_*, expect, find_field|link|button, and within. But as soon as I start to test JS-heavy code, things become problematic.

Does this sound familiar to you: “Hmmm… how do I test this? Let’s try this…” Try, fail, Google, and repeat. Try, fail, Google, and repeat. Then you just give up on Capybara for that feature and set `:js => false`.

The issue (as Kamil Ogorek exposed in Integration Tests Can Be Fun) is that Selenium/PhantomJS are black boxes. Your Ruby test asks Selenium/PhantomJS to invoke a user UI action, and Selenium/PhantomJS in turn asks the browser. If you don’t have timing issues (which all tests seem to have intermittently), the browser responds to Selenium/PhantomJS, which responds back to your test.

Your test code has no knowledge of the browser’s lifecycle. Cypress lives in the browser. It knows about states and event loops and application code. It doesn’t have the timeout issues prevalent with Selenium/PhantomJS based UI testing.

Don’t Trust Me

At this point you may be calling “Bullshit.” Well, click here to see a video of a sample test that works against Cypress’s own web site. Better yet, take a minute, install Cypress, and open its desktop application:

$ npm install cypress --save-dev
$ $(npm bin)/cypress open

In the Cypress UI, add an application by clicking on the Add Project link:

Add Project

Traverse to a working or temporary directory. Then click on the newly added link that has the same name as the folder you selected. You’ll get the following popup:

Help Getting Started

Click “OK, got it!” then click on the example_spec.js link (that you can see behind the popup above). And bam! You are testing:

The Kitchen Sink Test

The sample kitchen sink test has examples for more commands than you might ever need.

Look at the runtime UI. That’s Chrome! Your tests ran in Chrome! (I apologize for the overuse of exclamation points, but… wow! This is cool stuff!) Anyway, open up Chrome Inspector and you can find the application’s source as well as the Cypress infrastructure code.

Now do this for me: with Chrome Inspector open in the Cypress desktop app, edit your copy of example_spec.js.

Before doing anything else, note that the DSL syntax is very RSpec-like. Cypress was built on top of Mocha, a very heavily used JavaScript unit testing framework. Now add `debugger` just before the first `should` statement and rerun the test.

You can go to the console, look at variables, check out network requests and responses, whatever. What I do all the time is: 1) use debugger to stop the test, 2) review the state of page variables, 3) test a solution in the Chrome console, 4) change my test (removing the debugger statement), and it reruns automatically. I’ve found that the development cycle for writing Cypress tests is very fast.

“Yeah, but I’m not that good with JavaScript” you say. Know that the JavaScript required for writing tests is very simple. The hard part is getting comfortable with the myriad of available methods. But, if you are like me, you’ll have a short list of go-to methods that allow you to automate some powerful tests.

Cypress Promise to Tests

Cypress-based tests do not have the timing issues of Selenium/Capybara because, besides the fact that it runs in the browser, Cypress places all of its “command” method calls in a chain of JavaScript ES6 promises. That means a click-this, Ajax-that, should-equal-the-other-thing series of test commands only proceeds after the prior command has finished. In truth, commands will time out after reasonable and configurable defaults, but you can override those global timeout values on a per-command basis. This promise-based strategy removes the need for wait or sleep blocks, since waiting logic is built into Cypress.
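As a rough mental model, the chaining idea can be sketched in a few lines of plain ES6. This is illustrative only — the class and method names below are invented, not Cypress’s actual internals:

```javascript
// Toy model of a Cypress-style command queue: each command chains onto the
// previous command's promise, so "click, then ajax, then assert" can never
// run out of order. (Illustrative only -- not Cypress's real internals.)
class CommandQueue {
  constructor() {
    this.chain = Promise.resolve();
    this.log = [];
  }
  enqueue(name, fn) {
    // The next command starts only after the prior one resolves.
    this.chain = this.chain.then(() => {
      this.log.push(name);
      return fn();
    });
    return this; // allow fluent chaining, like cy.get(...).click()...
  }
}

const q = new CommandQueue();
q.enqueue('click',  () => new Promise(res => setTimeout(res, 10)))
 .enqueue('ajax',   () => new Promise(res => setTimeout(res, 10)))
 .enqueue('assert', () => {});

q.chain.then(() => console.log(q.log.join(' -> ')));
// prints "click -> ajax -> assert"
```

Real Cypress layers retry-ability and those configurable timeouts on top of this basic sequencing idea.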

That said, you may still have timeout issues due to slow responses to Ajax requests. Know that Cypress has a very simple strategy for waiting on Ajax calls to complete. And Cypress makes it crazy easy to stub responses (more on that later) so your tests run blazingly fast. Check out my VIM screenshot with a Cypress JSON fixture on the top left (ignoring NERDTree), a test on the top right, and, on the bottom, setup code that defines browser routes that, when called, will automatically use a fixture and a stub.

Fixtures and Stubs

Expect a subsequent blog post on building Cypress stubs and fixtures for Rails apps.

Selling Cypress

The JavaScript crowd (Angular, Node.js, etc.) is taking to Cypress like bees to a honey pot. The large group of folks on the private beta (which was very easy to get on) understood immediately that Cypress was great for end-to-end testing and feature tests. They were already familiar with other JavaScript testing frameworks. I’m not in that cool crowd; I’m in the server-side group. But I’ve written Cypress tests for C#, Python, and Rails based server-side applications. Some of them had Angular and Backbone front-ends, and some just had a prototypical proliferation of untested JavaScript. It doesn’t matter what language your application is written in. If it has a browser interface — even if there’s no JavaScript — Cypress makes it easy to verify behavior.

I’ve written Cypress tests for Rails applications that had few or no tests. One was a very successful Rails application that made its creator a millionaire. I was brought in to refactor the spaghetti-infected Ruby code — some methods of which had over a thousand lines. So, yeah, they were experiencing some flakiness when deploying production changes. For that application, writing Cypress tests was the quickest way to verify existing behavior and to gain confidence in new code. In a few weeks, the Cypress test suite was quite comprehensive, taking 10 minutes to run in CircleCI (that’s right, Cypress integrates well with your favorite continuous integration tool). I also configured the Ruby Coverband gem to track the lines of Ruby code touched by the test suite. The Coverband report made my client even more comfortable being an early adopter of Cypress.

Because Cypress runs as an external, browser-based JavaScript client to your Rails application, it doesn’t have the control RSpec has to wrap tests in transactions. To set up and tear down model objects, I created a controller with routes callable by my Cypress tests to create and destroy nested objects via FactoryGirl. Expect another blog post with more detail on that strategy.

When would a Rails developer use Cypress? When what they are working on has a complex UI that has proven to be flaky. Or when it has JavaScript. Wait… that’s all Rails applications. I’ve just scratched the surface of Cypress features. Expect more blog posts on Rails-specific Cypress, and be sure to hit us up with questions.

Here are a few useful links. Note that Cypress was built on top of Mocha, Chai, and Sinon: