Category: 30days

I’ve read before about how willpower and attention are akin to finite resources that get depleted and need to be allowed to recover, and I think that model has helped me realize something about my own cycles of productivity.

I have early successes, and this adds to my general level of energy and excitement, and I take on one or more other projects that interest me, thinking that I’ll ride this wave of motivation.

If I have not been careful or realistic about how much stuff I voluntarily take on, it rapidly gets to the point where I can’t possibly make progress on everything. If I have been careful and realistic, it doesn’t matter, because something else will come along that I must take on, and it rapidly gets to the point where I can’t make progress on everything.

Suddenly I feel like I’m failing at half or more of the stuff I’ve taken on, and things get set aside, sometimes indefinitely.

Lather, rinse, repeat.

This can happen on a time scale anywhere from two weeks to three months.

If you’re a software process nerd (or possibly a general productivity nerd), you may have heard of Kanban, a method of process control. One of Kanban’s central tenets is “limit your work-in-progress”. In Kanban, that’s usually expressed at the task level, but I think for some of us (read: me) it might be wise to look at it at a higher level, and limit the number of projects I try to handle.

This is not necessarily a new insight, in general or for me personally, but I clearly need to be reminded.

Note: I’ve had a couple things holding my attention this week, and as a result missed a couple of days of the writing challenge. I’ll catch up.

One more note: I’m having a slightly ranty day. Bear with me.

There are a bunch of things that could be done to make the tech culture more sane and humane. Here are three that rank highly on my list:

1. Working more hours does not necessarily make you more productive. In fact, it may make you far, far less so. We work in one of the few professions where it is possible to do negative work on a daily basis – that is, to leave the code worse than we found it. We are more likely to do this when we work long hours. Unfortunately, both American work culture and the tech subculture seek to twist overwork into a virtue. It’s not. Overwork leads to bad decisions. If your boss doesn’t understand this, give him the slide deck I linked earlier in this paragraph (which contains a ton of researched information on productivity topics beyond just hours). If he willfully ignores the facts and says he doesn’t believe it, go work for someone smarter, and let him fail on someone else’s broken back. Also: if you think you’re somehow the exception to this, you’re not. There’s ample research out there – I urge you to look it up.

2. Trading sleep for work just makes you dumber, not more productive. This goes hand-in-hand with the issue of long hours; as with overwork, our culture makes a badge of honor out of sleep deprivation. (I was guilty of this myself when I was younger.) When we don’t get enough sleep, it degrades the quality of our work, and our ability to notice how much our work has degraded. This may be a reason so many people think they’re exceptional in this regard. Spoiler: They’re not. Again, there’s loads of research; Google is your friend.

3. The software profession is not a meritocracy. At least, it’s not if you’re black or a woman. This is made worse by the fact that white guys in the profession often think they’re too smart to have unconscious biases about race, gender, sexuality, &c. It’s made worse still by the fact that most of us in the profession who are any good at it actually did work hard to get there, and feel there’s merit in the rewards we’ve gathered. But if it’s not a meritocracy for everyone, it’s not a meritocracy for anyone, and those of us on the inside need to check our privilege and start examining our own behavior.

There are a few things I find it almost impossible to get through a workday without:

The Internet: This seems so obvious that it almost feels like cheating to include it. SDK documentation, SDK bugs that are not in the documentation, algorithms, programming language tricks, example code, security alerts, third-party libraries… And, for break time, everything else.

A good pair of headphones: Music, binaural beats, pink noise, phone calls, Google hangouts… Many days, I spend more time with the phones on my head than off.

A zipper-front hoodie: I’m sensitive to temperature when I’m working. There’s a lot of benefit in being able to regulate my insulation.

The Dvorak keyboard layout: 60% less finger travel. QWERTY is so 19th century.

If you are designing or writing software for someone other than yourself, you are ethically bound to give your client or employer your best advice on how to meet the project’s goals. (Being persuasive in this is one of the big reasons you should cultivate your communication skills.) Whoever is paying for your skills will then take your advice, or they will not. It could go either way, for reasons that are largely out of your hands.

Whichever way that goes, you are then ethically bound to do it the way the customer wants. Their project, their money. Sometimes, that means implementing things – often user experiences, but sometimes deeper technical details – in a way you know to be somewhere on the scale from “suboptimal” to “imbecilic”. Sometimes, deliberately delivering less than the best possible product* at someone else’s request can fall on an emotional scale from “mildly annoying” to “soul-eroding”.

If that’s too much for you, you could go and make your own product. If you don’t have the savings cushion to take that leap, you could work on a side project on your own time – that can be a real sanity-saver (and a great way to sharpen your skills). And when all else fails, learn this mantra: “Not my circus, not my monkeys.” However you do it, it can be valuable to learn to distance yourself emotionally from work that does not belong to you, when the need arises. Software and business are both complex endeavors, and their intersection will always involve compromise.

Give your customer your best advice, then build the very best version of the thing they are paying for, and know that at the end they are getting what they asked for and deserve the results, for good or ill.

And, of course, make sure that best advice you gave earlier in the project is documented somewhere, with your name attached. Couldn’t hurt.

* If you’re working on medical devices or air traffic control software, and someone could end up maimed or dead, the advice in this post is less applicable. Buck up and push your case harder, in that event.

If you are designing or writing software for someone other than yourself, you’ll spend some amount of time wanting to roll your eyes at people who think they know how to do your job. Non-technical product managers will suggest specific technical solutions that they’ve heard of but don’t clearly understand. (MongoDB seems to turn up a lot in this context, for no reason I can discern.) Salespeople with no special background in UX design will prescribe inappropriate or outdated UI idioms. (Hamburger menus, a.k.a. pancake menus, still retain a lot of mindshare.)

Your natural and completely appropriate reaction to this sort of thing might be to want to start hitting people with a shovel, but this is bad for repeat business, and not legal in some places.

The thing you have to remember is that these nice people hired or contracted you (or your employer) because they don’t know how to do what you do, even if they sometimes forget this during requirements definition. You’re the expert. When a client falls into buzzword glossolalia, thinking he’s offering a valid technical solution, a better approach is to holster your shovel and say something like: “Let’s not get bogged down in technical details this early. Why don’t we take a step back, and you tell me what you want to accomplish by doing that, and then I can do a proper assessment of whether X is the right tool for the job.”

Gently draw the non-technical client’s attention away from the technical questions best left to you; bring your focus and theirs to the business problem they want to solve, and about which you may reasonably hope they know something. I’ve yet to have a client object to me caring about their problems and wanting to choose the best way to solve them. (Of course, then you’re on the hook to actually solve them, but that’s a topic for another post.)

There is, of course, a flip side to this, but I’ll write about that tomorrow.

Some years ago – let’s call it twenty – I was having an espresso at Sonsie on Newbury St. in Boston. The big front windows were open to the sidewalk. A guy in his twenties went by on a bicycle, smoking as he rode. Not only was he smoking, but there was an ashtray attached to his handlebars by a bit of hose clamp. This was, for obvious reasons of aerodynamics, not going to hold any ash or butts – it was clearly more of a statement. I suspected he thought he was pretty hardcore.

A few days ago, at the Starbucks on Capitol St. in Indianapolis, I saw another guy on a bike. He was in his sixties or later. He had oxygen tanks hanging off his pannier rack, connected to a nasal cannula that he was wearing as he rode. It did not look like a statement.

Warren Ellis, who is one of my favorite (male) authors, wrote a blog post about a woman on Tinder who refused contacts from men who could not name five books by female authors which they had read. (You can surely guess the results if you’re… well, awake.) Since I haven’t patted myself on the back for being a New Age sensitive guy in a couple days, I thought I’d give it a go.

I won’t use Boneshaker by Cherie Priest, because Ellis used it in his list, and I don’t want to appear to be cheating. Sticking with works of fiction that I enjoyed:

The Left Hand of Darkness by Ursula K. Le Guin: Le Guin is an astute writer of stories about the future that are not about the future.

The Handmaid’s Tale by Margaret Atwood: I could say the same of Atwood that I just said of Le Guin. Possibly more so.

Frankenstein by Mary Shelley: This reformulation of the tale of Prometheus is widely credited as the first science fiction novel, and is still among the best.

Like Water for Chocolate by Laura Esquivel: Someday I’ll make it through the whole thing in the original Spanish. I still recommend the translation.

Any discussion of the Dependency Inversion Principle should start by answering the question: What, exactly, is being inverted?

A lot of object-oriented systems start with classes mapping to the higher-level requirements or entities; these get broken down and individual capabilities get drawn out into other classes, and so on. The tendency is to wind up with a pyramidal dependency structure, where any change in the lower reaches of the pyramid tends to bubble up, touching higher and higher-level components.

As an example, let’s think about the Service/Endpoint/Marshaller classes I discussed in my earlier post on the Single Responsibility Principle. It would be very easy to start writing the service class, decide to break out the Endpoint class, and do so in a way that made assumptions that you were calling an HTTP web service – for example, you might assume that all responses from the service would have a valid HTTP response code, or that parameters had to be packaged as a URL-encoded query string.

So what happens if your requirements change such that you must directly call a remote SQL store using a different protocol? You’re going to have to change at least two classes, because of assumptions you made about the nature of your data source.
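To make that coupling concrete, here’s a sketch in Swift of what such an HTTP-bound Endpoint interface might look like. (The protocol and parameter names are mine, for illustration – this isn’t code from the earlier post.)

```swift
// A hypothetical Endpoint whose interface leaks HTTP details: callers
// must URL-encode their parameters, and the callback exposes an HTTP
// status code. Any Service written against this contract can't switch
// to a non-HTTP data source without changing, too.
protocol HTTPBoundEndpoint {
    func fetch(urlEncodedQuery: String,
               completion: (_ httpStatus: Int, _ body: String?) -> Void)
}
```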

With the Dependency Inversion Principle, we are told that first, we should not write high-level code that depends on low-level concretions – we should connect our components via abstractions; and second, that these abstractions should not depend on implementation details, but vice versa. I’ve seen the “inversion” part of DIP explained a few different ways, but what I see being inverted is the naïve design’s primacy of implementation over interface.

When you start thinking about how to break down subcomponents, take a step back and think about the interfaces between components, and do your best to sanitize them – remove anything that might bite you if implementation details change.

In the case of the Endpoint, that might mean writing an interface that takes a dictionary of parameter names and values, with no special encoding, and providing for success and failure callbacks. A success callback could give you some generic string or binary representation of the data you requested (which can be passed to a parser/marshaller next). The arguments to the failure callback would be a generic error representation (most platforms have one), with an appropriate app-specific error code and message – not an HTTP status code, or anything else dependent on your data source.
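Here’s a sketch of that sanitized interface in Swift. (The `Endpoint` protocol and `EndpointError` type are illustrative names of mine, not from any framework.)

```swift
// A generic, transport-agnostic error: an app-specific code and message,
// deliberately not an HTTP status code or anything else tied to one
// particular kind of data source.
struct EndpointError {
    let code: Int
    let message: String
}

// The sanitized Endpoint interface: plain parameter names and values in,
// a generic string payload or a generic error out. Nothing here commits
// the caller to HTTP, URL encoding, or any particular backend.
protocol Endpoint {
    func fetch(parameters: [String: String],
               success: (String) -> Void,
               failure: (EndpointError) -> Void)
}
```

With this in place, an HTTP-backed implementation and a direct-SQL implementation can both conform to the same protocol, and the Service class never knows the difference.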

DIP is a key way of limiting technical risk; in this example, after we have changed the interface to be generic with respect to the data source being called, a change to the Endpoint class requirements necessitates little or no corresponding change to the Service class, and vice versa.

The Obligatory Recap

Over these past five posts, I’ve covered five principles for building resilient object-oriented systems, with resiliency being defined as resistance to common classes of errors, low cost of change, and high comprehensibility (i.e., well-managed complexity).

Here are all five once more, not with their canonical formulations (you could get that from the Wikipedia page on SOLID), but with my own distillation of the core lesson (IMHO) from each:

Single Responsibility Principle: Give each class one thing to do, and no more.

Note: This is the fourth of five posts I’m writing on the SOLID principles of object-oriented programming. Part 1: S, Part 2: O, Part 3: L

The Interface Segregation Principle is probably the easiest of the five SOLID principles for most programmers to grasp, if for no other reason than that anyone who has been working with an object-oriented language has been exposed to it constantly. The ISP says that small, single-purpose interfaces are to be preferred to large, omnibus interfaces.

Finding examples is easy. Here are a few lines pulled from the Cocoa Foundation headers:
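A few representative declarations, lightly abridged from the actual Foundation headers:

```objectivec
@protocol NSCopying
@protocol NSSecureCoding <NSCoding>
@protocol NSFastEnumeration

@interface NSArray : NSObject <NSCopying, NSMutableCopying, NSSecureCoding, NSFastEnumeration>
@interface NSDictionary : NSObject <NSCopying, NSMutableCopying, NSSecureCoding, NSFastEnumeration>
@interface NSString : NSObject <NSCopying, NSMutableCopying, NSSecureCoding>
```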

For those of you who aren’t fluent in Objective-C: in each of those lines, the identifier immediately before the colon is the name of a class or protocol being declared (as distinct from being defined), an identifier immediately after the colon but outside the angle brackets is the parent class of the class being declared, and identifiers inside the angle brackets are protocols to which the class or protocol being declared will conform.

If you’re a native Java speaker, @protocol is very similar to an interface; if C++ is your thing, @protocol is akin to a pure abstract base class. In all three cases, it’s all contract and no implementation.

NSSecureCoding offers all the NSCoding methods for archiving an object, and additionally allows an object to assert that it unarchives securely.

NSFastEnumeration is “implemented by objects wishing to make use of a fast and safe enumeration style.”

…and of course, each class has the methods that make it special: an NSArray has the operations you’d expect for an ordered, randomly-accessible collection of objects; NSString allows you to search for substrings, and so on.

Each interface defines a very specific capability – you could almost call them atoms of functionality (or promised functionality).

So why do we break up our object declarations into these separate interfaces?

First, it offers you a certain amount of protection. NSCoder (non-Cocoa heads: it archives objects complying with the NSCoding protocol) only needs to know about those methods relating to object serialization. Someone writing an NSCoder subclass doesn’t know and doesn’t need to know about copying or enumeration or any of the other things Foundation objects commonly do, and therefore can’t do anything surprising to an object that is passed into that subclass (like mutate it unexpectedly via a method having nothing to do with archiving). It allows you to expose only those methods a particular caller should care about, and in that way avoid surprises.

Second, it allows you more freedom in how you express the capabilities of a class. Imagine modeling a bird in Objective-C:

Objective-C

@interface Bird

- (void)hatch;
- (void)squawk;
- (void)fly;
- (void)poopOnWindshield:(Car *)targetCar;

// ...more properties and methods

@end

This looks straightforward, but what about subclasses that don’t need all of those capabilities? Should Ostrich or Penguin throw an exception when you call -fly? Should it be a no-op? What is it reasonable for calling code to expect? You could make Bird a protocol instead of a base class, and make flying-related operations optional:

Objective-C

@protocol Bird

- (void)hatch;
- (void)squawk;

@optional

- (void)fly;
- (void)poopOnWindshield:(Car *)targetCar;

@end

…but then what do you do when it comes time to model a Bat? Flying is a very similar operation, but all the code you wrote that needs -fly is expecting a Bird. You don’t want to duplicate the same code for a Bat, and you certainly don’t want to start checking types and casting, because you’re eventually going to have to implement FlyingSquirrel, and FlyingFish, and who knows what else, and that code will turn into an error-prone hairball. If the -fly operation is used in the same way on each class, the calling code shouldn’t care about the specific type, only whether -fly is implemented.

With interface segregation, we can declare all of these things very flexibly:

Objective-C

@protocol Flying

- (void)fly;
- (void)poopOnWindshield:(Car *)targetCar;

@end

@interface Bird

- (void)hatch;
- (void)squawk;

@end

@interface Mammal

- (void)beHairyAndWarmBlooded;

@end

@interface Pigeon : Bird <Flying>

- (void)begOldLadiesForBreadCrustsInThePark;

@end

@interface Penguin : Bird

- (void)waddle;

@end

@interface Bat : Mammal <Flying>

- (NSArray *)useSonarToFindBugs;

@end

Behavior that is shared across class hierarchies is broken out into a special-purpose interface. A method to check the altitude of a flying animal doesn’t need to know whether it’s a flying bird or a bat; the method signature - (CGFloat)checkAltitude:(id<Flying>)flyingAnimal; makes it clear that this code cares only about flying animals. You can’t even pass a Penguin to this method. (Another note for the non-ObjC-ers: id<Flying> means any object that conforms to the Flying protocol.)

Going back to the Foundation classes I referenced at the beginning of this post: It might be tempting to say that most of the classes need most of the same functionality, so why not put all the copying, archiving, and enumeration methods on NSObject, or make a subclass or protocol called NSFoundationObject that offers all the relevant methods?

That would work fine for the collection classes, all of which implement all the interfaces. Then we get to NSString… What does it mean to enumerate a string? Our first naïve thought might be to treat the string as a collection of characters, but nothing in NSFastEnumeration says anything about a character encoding, so that’s not going to work. Someday, someone is going to try to enumerate a string, and it will… crash? Throw an exception? Behave like an empty collection? Doing nothing isn’t even an option, because the lone method on NSFastEnumeration has a return value. (And the answer is not to have NSFastEnumeration‘s method take a character encoding enum, and have classes that don’t need it ignore it – that’s making the problem worse, not better.)

It gets even sillier with NSNumber. What does it mean to enumerate over an atomic value type? What does it mean to have a mutable copy of it? It would be senseless for NSNumber to claim that it offers these capabilities.

So, it doesn’t. Every type advertises only those capabilities that are meaningful to it, with interfaces that describe those capabilities minimally and generically.

Note: This is the third of five posts I’m writing on the SOLID principles of object-oriented programming. Part 1: S, Part 2: O

The Liskov Substitution Principle is probably the deepest and most “academic” of the SOLID principles of object oriented design. It states that replacing an object with an instance of a subtype of that object should not alter the correctness of the program.

If you read the Wikipedia page for the Liskov Substitution Principle, you’ll see that there is a whole lot packed into that word “correctness”. It touches on programming by contract, const correctness, and a lot of terms that will have limited meaning to people who don’t have a degree in computer science or mathematics. It can also be difficult to see how some of the more academic-sounding constraints apply to the real-world systems that we write. I’m going to try to back into it with a couple of “ferinstances” that motivate a more practically applicable (if slightly less rigorous) formulation of the LSP.

The “classic” LSP example is the square/rectangle problem. It’s natural for us to think of a square as a “specialization” of a rectangle; if you say, “a square is just like a rectangle except that all its sides are of equal length”, most people won’t object.

When you try to bring this abstraction to an object design, however, things break down. Let’s lay out this object hierarchy in Swift – where I had to jump through a couple of hoops to get the square’s constraint to work as needed:

Square and Rectangle

Swift

class Rectangle {
    var height: UInt = 0
    var width: UInt = 0
}

class Square: Rectangle {
    private var _side: UInt = 0

    override var height: UInt {
        get {
            return _side
        }
        set(newHeight) {
            _side = newHeight
        }
    }

    override var width: UInt {
        get {
            return _side
        }
        set(newWidth) {
            _side = newWidth
        }
    }
}

func doStuffToRect(_ rect: Rectangle) {
    rect.height = 5
    rect.width = 10
    print("rectangle measures \(rect.height) by \(rect.width)")
}

doStuffToRect(Rectangle())
// output: "rectangle measures 5 by 10" 👍

doStuffToRect(Square())
// output: "rectangle measures 10 by 10" 😱

Calling code that expected a rectangle’s height and width to vary independently, or that had expectations about any derived quantity (like area or the position of a vertex), is now at risk of breaking.

What’s the general principle we can draw from this? It might help to restate the square/rectangle relationship: “A square satisfies all of the constraints of a rectangle, and adds the constraint that its sides must be of equal length.” For the operation of setting width, the Rectangle allowed us to expect that its height would be invariant. The Square breaks that expectation – because of its extra constraint, its property setters mutate state that the parent class’s setters don’t touch. This is part of what it means in that Wikipedia article when it says that “Invariants of the supertype must be preserved in a subtype.”

There are other kinds of constraints that break expectations of calling code. You might be writing an object in a payroll system that has a method to compute compensation, and it might have a method signature like Currency computeCompensation(Employee emp, Timesheet latestTimesheet). That’s a very specific contract made with the calling code, and a subclass may not add a constraint by, for example, demanding that emp must be of the subclass OvertimeEligibleEmployee. Calling code has the reasonable expectation that it may pass in any Employee object or any instance of a subclass of Employee, and further constraining the type of emp breaks that expectation – so badly, in fact, that every OO language that I’ve worked in (which isn’t all of them, by any means, but it’s a fair sample of the common ones) disallows changes to overridden method signatures. You could get around it in the child class’s overridden method by downcasting to OvertimeEligibleEmployee. If you’ve ever been warned against downcasting, this is exactly why – you’re basically saying, “the caller says this is an instance of Employee, but I know better”, and sometimes you’ll be right, but at some point you’re going to be wrong about that and introduce a crash or a hard-to-trace logic error.
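A sketch of that trap in Swift (all of the class names here are hypothetical, echoing the payroll example above):

```swift
class Employee {}

class OvertimeEligibleEmployee: Employee {
    var overtimeHours = 12.0
}

class PayrollCalculator {
    // Contract with callers: any Employee is acceptable.
    func computeCompensation(_ emp: Employee) -> Double {
        return 1000.0
    }
}

class OvertimePayrollCalculator: PayrollCalculator {
    // The language won't let this override narrow the parameter type to
    // OvertimeEligibleEmployee, so the tempting workaround is a downcast...
    override func computeCompensation(_ emp: Employee) -> Double {
        // ...and this force-cast traps at runtime the moment a caller
        // passes a plain Employee, which the signature says is fine.
        let otEmp = emp as! OvertimeEligibleEmployee
        return 1000.0 + otEmp.overtimeHours * 25.0
    }
}
```

The subclass compiles without complaint; the added constraint only announces itself in production, when some perfectly contract-abiding caller hands it the “wrong” kind of Employee.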

This, to me, is the core of the Liskov Substitution Principle: it’s all about constraints and expectations. If your child class introduces a constraint that would break any plausible expectation of the code calling an instance of the parent class, you’re breaking the LSP, and you may or may not be breaking your program.

The LSP is the most restrictive of the five SOLID principles and the easiest to break, either unintentionally, or because you decided that a downcast or an extra property mutation in a child class is okay just this one time. And the LSP gets broken all the time in production code and even well-regarded framework code, sometimes productively. For you Cocoa heads: You’ve seen mutable subtypes of immutable types – NSMutableArray is a subtype of NSArray, NSMutableString is a subtype of NSString… How does that stack up against the “history constraint” cited in the Wikipedia article? Bonus question (that might lead you to drink): How would you change this hierarchy of types to “fix” that?
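For a taste of why that question is tricky, here’s a small Swift sketch using the real Foundation types:

```swift
import Foundation

let mutable = NSMutableArray(array: [1, 2, 3])

// An API can hand this object out typed as NSArray, which reads like a
// promise of immutability to the receiver...
let seeminglyImmutable: NSArray = mutable

// ...but nothing stops the original owner from mutating it later, so the
// "immutable" array's history changes out from under the receiver.
mutable.add(4)
// seeminglyImmutable.count is now 4
```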

I encourage you to do some reading on it, and to develop a feel for the innocuous-seeming changes in the lower reaches of your class hierarchies that might break expectations of code written against your parent classes – and likewise for the times when you can profitably but deliberately break the LSP to get things done.