Do we really need a big, fancy framework to enhance our app’s architecture? What if we could drastically improve our app’s architecture with just a simple snippet of code?

The problem

Many of the apps I’ve seen take a very naive approach to screen navigation. Usually the view controller itself decides which view should be pushed/presented next. It looks a bit like this:

class TutorialViewController: UIViewController {

    // …

    func onNext() {
        let vc = LoginViewController()
        self.navigationController?.pushViewController(vc, animated: true)
    }

    // …

    func onBack() {
        self.navigationController?.popViewController(animated: true)
    }

    // …
}

class LoginViewController: UIViewController {

    // …

    func onNext() {
        let vc = MainScreenViewController()
        self.navigationController?.pushViewController(vc, animated: true)
    }

    // …

    func onBack() {
        self.navigationController?.popViewController(animated: true)
    }

    // …
}

// etc…

It’s not a very good design. It doesn’t clearly show what the flow in the application is. If you’re seeing the codebase for the first time and would like to understand what pushes LoginViewController, it would be hard to find out without reading most of the codebase (unless you use “find usages”…). This approach spreads the application flow throughout the app. It would be much better if we had the application flow grouped in one easy-to-read and maintainable place.

The alternative approach I came up with is possible with a short snippet of code called “Flow” (it’s so short that I wouldn’t even call it a framework or a library). Imagine you define the flow in your app in dedicated Flow classes:

class MainFlow {

    // ….

    lazy var loginScreen: Flow = Flow { [unowned self] lets in
        let screen = LoginViewController()
        screen.onLogin = lets.push(self.mainScreen)
        screen.onBack = lets.pop()
        screen.onSayHello = { print("Howdy?") }
        return screen
    }
}

class LoginViewController {
    var onLogin: (() -> Void)?
    var onBack: (() -> Void)?
    var onSayHello: (() -> Void)?
}

In the above snippet LoginViewController doesn’t know what happens after a successful login. It doesn’t care, since it’s not its responsibility. It only signals that login was successful. The interaction and transitions to other view controllers were extracted and moved into the Flow. Within a flow we define how LoginViewController should be created and which transitions are possible to/from it. The Flow decides when the (de)initialization of the view controller happens. It also provides a helper called lets that makes it easy to define push/pop/present/dismiss operations. In this concrete example, a successful login pushes the main screen.

Good parts here are:

LoginViewController doesn’t know what happens after a successful login. It doesn’t even care about that and that’s good, since it’s not its responsibility

All views are loosely coupled, coupling is done only through MainFlow class

Transitions are described using concise and descriptive syntax, without boilerplate code

ViewControllers in the app can be grouped by user scenario into different Flows, e.g. LoginFlow, RegistrationFlow, BlogPostFlow, etc…

Let me show that in a slightly bigger example:

class MainFlow {

    lazy var tutorialScreen: Flow = Flow { [unowned self] lets in
        let screen = TutorialViewController()
        screen.onContinue = lets.push(self.loginScreen)
        return screen
    }

    lazy var mainScreen: Flow = Flow { [unowned self] lets in
        let screen = MainViewController()
        screen.onBack = lets.pop()
        screen.onLogOut = lets.popTo(self.loginScreen)
        screen.onExit = lets.popToRoot()
        return screen
    }

    lazy var loginScreen: Flow = Flow { [unowned self] lets in
        let screen = LoginViewController()
        screen.onLogin = lets.push(self.mainScreen)
        screen.onBack = lets.pop()
        screen.onSayHello = { print("Howdy?") }
        return screen
    }
}

As you can see, all the transitions that happen in our application are stored within one file, making it easy to see how the app works at first glance.

This is the first approach to the solution I call FlowKit. In the upcoming days/weeks I’d like to publish the Flow implementation on GitHub – please drop me a comment if you’re interested. It’s also worth mentioning Krzysztof Zabłocki’s excellent and inspiring article about FlowControllers. The main difference between his approach and mine is that his doesn’t require any new entities (like Flow); however, more boilerplate code needs to be written because of that.

This post begins a series of articles about algorithms, inspired by my recent “lazy evaluation” contribution to Lo-Dash. Stay tuned for more!

I always thought libraries like Lo-Dash can’t really get any faster than they already are. Lo‑Dash almost perfectly mixes various techniques to squeeze the most out of JavaScript. It uses JavaScript’s fastest statements and adaptive algorithms, and it even measures performance to avoid accidental regressions in subsequent releases.

Lazy Evaluation

But it seems I was wrong – it is actually possible to make Lo-Dash significantly faster. All you need to do is stop thinking about micro-optimizations and start figuring out the right algorithm to use. For example, in a typical loop we usually tend to optimize the single‑iteration time:

var len = getLength();
for (var i = 0; i < len; i++) {
    operation(); // <- 10ms - how to make it 9ms?!
}

But that’s often hard and very limited. Instead, in some cases it makes a lot more sense to optimize the getLength() function. The smaller the number it returns, the fewer 10ms cycles we have.

This is roughly the idea behind lazy evaluation in Lo-Dash. It’s about reducing the number of cycles, not reducing the cycle time. Let’s consider the following example:

function priceLt(x) {
    return function(item) { return item.price < x; };
}

var gems = [
    { name: 'Sunstone', price: 4 },  { name: 'Amethyst', price: 15 },
    { name: 'Prehnite', price: 20 }, { name: 'Sugilite', price: 7 },
    { name: 'Diopside', price: 3 },  { name: 'Feldspar', price: 13 },
    { name: 'Dioptase', price: 2 },  { name: 'Sapphire', price: 20 }
];

var chosen = _(gems).filter(priceLt(10)).take(3).value();

We want to take only the first 3 gems with a price lower than $10. The regular Lo-Dash approach (strict evaluation) filters all 8 gems and then returns the first three that passed the filter:

It’s not cool, though. It processes all 8 elements, while in fact we only need to read 5 of them. The lazy evaluation algorithm, by contrast, processes the minimal number of elements in the array needed to get the correct result. Check it out in action:

We’ve easily gained a 37.5% performance boost. But that’s not all we can achieve – actually, it’s quite easy to find an example with a 1000×+ perf boost. Let’s have a look:

var phoneNumbers = [5554445555, 1424445656, 5554443333, /* …×99,999 */];

// get 100 phone numbers containing "55"
function contains55(str) {
    return str.contains("55");
}

var r = _(phoneNumbers).map(String).filter(contains55).take(100);

In this example, map and filter are run on 99,999 elements, while it may be sufficient to run them on only e.g. 1,000 elements. The performance gain is massive here (benchmark):
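To make the idea concrete, here’s a minimal sketch of lazy evaluation – not Lo-Dash’s actual implementation, and lazyChain is my own name. Each element is pulled through every predicate before the next one is read, and iteration stops the moment take is satisfied:

```javascript
// Minimal lazy-evaluation sketch (illustration only, not Lo-Dash's engine):
// elements flow one at a time through the chain, and the loop breaks
// as soon as `take` has collected enough results.
function lazyChain(source) {
    var filters = [];
    var limit = Infinity;
    var api = {
        filter: function (pred) { filters.push(pred); return api; },
        take: function (n) { limit = Math.min(limit, n); return api; },
        value: function () {
            var out = [];
            var processed = 0; // how many source elements were actually touched
            for (var i = 0; i < source.length && out.length < limit; i++) {
                processed++;
                if (filters.every(function (f) { return f(source[i]); })) {
                    out.push(source[i]);
                }
            }
            return { result: out, processed: processed };
        }
    };
    return api;
}

var gems = [
    { name: 'Sunstone', price: 4 },  { name: 'Amethyst', price: 15 },
    { name: 'Prehnite', price: 20 }, { name: 'Sugilite', price: 7 },
    { name: 'Diopside', price: 3 },  { name: 'Feldspar', price: 13 },
    { name: 'Dioptase', price: 2 },  { name: 'Sapphire', price: 20 }
];

var run = lazyChain(gems)
    .filter(function (g) { return g.price < 10; })
    .take(3)
    .value();
// run.result holds Sunstone, Sugilite, Diopside – and run.processed is 5, not 8
```

Running this on the gems example above confirms the claim from earlier: only 5 of the 8 elements are ever touched.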

Pipelining

Lazy evaluation brings another benefit, which I call “pipelining”. The idea behind it is to avoid creating intermediate arrays during chain execution. Instead, all operations are performed on a single element in place. So the following piece of code:

var result = _(source).map(func1).map(func2).map(func3).value();

would translate roughly to this in regular Lo-Dash (strict evaluation):

var result = [], temp1 = [], temp2 = [], temp3 = [];
for (var i = 0; i < source.length; i++) {
    temp1[i] = func1(source[i]);
}
for (i = 0; i < source.length; i++) {
    temp2[i] = func2(temp1[i]);
}
for (i = 0; i < source.length; i++) {
    temp3[i] = func3(temp2[i]);
}
result = temp3;

While with lazy evaluation turned on, it’d perform more like this:

var result = [];
for (var i = 0; i < source.length; i++) {
    result[i] = func3(func2(func1(source[i])));
}

The lack of temporary arrays can give us a significant performance gain, especially when the source arrays are huge and memory access is expensive.
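A quick runnable check (with stand-in functions of my own choosing) that the fused, pipelined loop produces exactly the same result as the three-pass strict version, just without temp1/temp2/temp3:

```javascript
// Stand-in operations for the illustration (func1..func3 are arbitrary here)
var func1 = function (x) { return x + 1; };
var func2 = function (x) { return x * 2; };
var func3 = function (x) { return x - 3; };
var source = [1, 2, 3, 4];

// strict: one pass per operation, two intermediate arrays
var temp1 = source.map(func1);
var temp2 = temp1.map(func2);
var strict = temp2.map(func3);

// pipelined: a single pass, each element flows through the whole chain
var pipelined = [];
for (var i = 0; i < source.length; i++) {
    pipelined[i] = func3(func2(func1(source[i])));
}
// both produce [1, 3, 5, 7]
```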

Deferred execution

Another benefit that comes with lazy evaluation is deferred execution. Whenever you create a chain, it’s not computed until .value() is called, implicitly or explicitly. This approach lets you prepare a query first and execute it later against the most recent data.

var wallet = _(assets).filter(ownedBy('me'))
                      .pluck('value')
                      .reduce(sum);

$json.get("/new/assets").success(function(data) {
    assets.push.apply(assets, data); // update assets
    wallet.value();                  // returns most up-to-date value
});

It may also speed up execution time in some cases. We can create a complex query early and then, when time is critical, execute it.
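The principle boils down to storing the computation rather than its result. A tiny sketch (the deferred helper is my own, not Lo-Dash’s API) shows why the late .value() call sees data that arrived after the chain was built:

```javascript
// Sketch of deferred execution: the query is only *described* up front,
// and runs against whatever data exists when .value() is finally called.
function deferred(compute) {
    return { value: compute };
}

var assets = [10, 20];
var wallet = deferred(function () {
    return assets.reduce(function (acc, v) { return acc + v; }, 0);
});

// nothing has been computed yet; new data arrives later:
assets.push(30);

wallet.value(); // reflects the most recent data: 60, not 30
```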

Wrap up

Lazy evaluation is not a new idea in the industry. It has already been around in excellent libraries like LINQ, Lazy.js and many others. The main difference Lo-Dash makes, I believe, is that you still have the good ol’ Underscore API with a new, powerful engine inside. No new library to learn, no significant code changes to make – just a pending upgrade.

But even if you’re not going to use Lo-Dash, I hope this article inspired you. Next time you find a bottleneck in your application, stop trying to optimize it in jsperf.com try/fail style. Instead, go grab a coffee and start thinking about algorithms. Creativity is important here, but a good math background won’t hurt either (book). Good luck!

TBC… I’d like to write another – more advanced – post explaining in detail how the lazy algorithm is implemented. If you like the idea, vote for it by following me on Twitter.

Proper usage of compile, link and controller can cause a headache in Angular.
If – despite good answers – you’re still struggling to choose the right one, that’s a sign you need to familiarize yourself with the directive lifecycle.

Which functions are invoked in the following directive?

app.directive('a', function() {
    return {};
});

<a></a>

1. directive.compile($element, ...)
2. directive.controller($element, $scope, ...)
3. directive.preLink($element, $scope, ...)
4. directive.link($element, $scope, ...)

Yes – even for such a simple directive the full lifecycle is triggered. However, in this case the only function you need is link; compile, controller and preLink are practically useless. That leads to the second question:

Why do we need to have a controller?

To answer that, let’s have a look at the directive construction lifecycle diagram:

The diagram above shows the order in which directives are created. The DOM is traversed twice: first, all tags are compiled; in the second pass, controllers and link functions are called.

As you can see, <b>’s link function is called before <a>’s link – which means <b> can’t retrieve any information/configuration from <a> at that point. As a solution to this, a controller is created before the directive is fully initialized. Thanks to that, <b> can access <a>’s controller even though <a> hasn’t finished its creation process yet. To sum up: controllers allow directives to talk to each other before they are fully initialized.

Why do we need a compile function?

For optimization purposes. Directives such as ng-repeat compile the repeated element once and then just clone it. After cloning, only the controller and link functions are called. So if you can, put work into the compile function so it isn’t repeated unnecessarily.
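The compile-once, link-per-clone pattern can be illustrated outside Angular with a toy sketch – plain JS, not Angular’s actual machinery, with names of my own choosing:

```javascript
// Toy illustration of the optimization ng-repeat relies on:
// expensive template work runs once in compile; the returned link
// function runs cheaply once per cloned instance.
var compileCalls = 0;
var linkCalls = 0;

function compile(template) {
    compileCalls++;                  // expensive, one-time template work
    return function link(scope) {    // cheap, per-instance work
        linkCalls++;
        return template.replace('{{item}}', scope.item);
    };
}

// "repeat" three items: compile the template once, link each clone
var link = compile('<li>{{item}}</li>');
var rendered = ['a', 'b', 'c'].map(function (item) {
    return link({ item: item });
});
// compileCalls is 1, linkCalls is 3
```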

You can try it yourself in Fingers, where constructions like this are possible:

on(sprite).click += clickHandler1 + clickHandler2;

Before you start, please get familiar with the prototype language concept [1][2] (if you haven’t played with it already).

LET’S START: valueOf

In AS3 every object has a valueOf() method which, in general, returns the object itself. In the experiment below, you can see that valueOf is called whenever AVM2 doesn’t know what to do with an instance of a given type.

public class O {
    public function valueOf(): *
    {
        return "valueOf";
    }
    public function toString(): String
    {
        return "toString";
    }
}

trace(new O() + new O());         // valueOf valueOf
trace(new O() + "str" + new O()); // toString str toString
trace(1 + new O() + new O());     // 1 valueOf toString

As you can see, the results are not obvious. AVM2 sometimes chooses the valueOf method and sometimes the toString method. Here are some rules that I’ve found. Let’s say AVM2 is performing A + B, while the types of A and B are different.

1. When A is a String, then B is cast to String, i.e. its toString() method is called (this also applies when B is a String).

2. When neither A nor B is a String, the valueOf method is called on each of them (which by default returns the object itself).

a) If the results of the valueOf() calls are both Numbers or both Strings, the + operator is applied to them.

b) If the types are different, they’re cast to String and string concatenation is performed.

* In the rules above I’ve treated int and uint as Number too.

Please note that in the trace below:

trace(1 + new O() + new O());

we got: 1 valueOf toString. That’s because when the first 1 + new O() operation is computed, the result is of type String (rule 2.b). Then we have another operation, string + new O(), and rule 1 applies.
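Since this concept ports to JavaScript (as noted at the end of this post), here is the same experiment in JS. One caveat I should flag: for +, JavaScript applies ToPrimitive with the default hint, which tries valueOf before toString even when the other operand is a string – so the second result differs from the AS3 rules above:

```javascript
// The valueOf/toString experiment ported to JavaScript.
// Unlike AS3's rule 1, JS tries valueOf first even next to a string.
class O {
    valueOf()  { return 'valueOf'; }
    toString() { return 'toString'; }
}

var a = new O() + new O();         // "valueOfvalueOf"
var b = new O() + 'str' + new O(); // "valueOfstrvalueOf" (AS3 would pick toString here)
var c = 1 + new O() + new O();     // "1valueOfvalueOf"
```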

Extend the idea with: prototype

What’s cool is that, thanks to the prototype concept, we can override the valueOf method of any class at runtime. So let’s make use of that. Assume we’d like the following behaviour when summing up arrays:

[1, 2, 3, 6, 4] + [8, 4, 6, 1, 3] = 38

So we need to inject the summing functionality into the array’s valueOf method:
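As a sketch of that injection, here it is ported to JavaScript, where the same trick works (summingValueOf is an assumed helper name, matching the AS3 snippet later on):

```javascript
// Temporarily replace Array.prototype.valueOf so that `+` sums arrays.
// (JS port of the AS3 idea; `summingValueOf` is an assumed name.)
var defaultValueOf = Array.prototype.valueOf;

function summingValueOf() {
    return this.reduce(function (acc, n) { return acc + n; }, 0);
}

Array.prototype.valueOf = summingValueOf;
var total = [1, 2, 3, 6, 4] + [8, 4, 6, 1, 3]; // → 38

// revert, so normal array behaviour comes back
Array.prototype.valueOf = defaultValueOf;
```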

We can solve that. The idea is to change the prototype only when needed. If you know when you’re going to use operator overloading, you can inject your concrete valueOf at that concrete time and revert it later on. Your changes stay “local” – and that’s good! (This concept was used in Fingers.)

Let’s continue our example:

budget.money += [1000, 1200, 2000] + [30, 50, 30];

In this particular example, we can override Array.valueOf when the money getter is accessed, and then revert it in the setter phase.

public function get money(): Number
{
    // inject special valueOf method
    Array.prototype.valueOf = summingValueOf;
    return _money;
}

public function set money(value: Number): void
{
    _money = value;
    // revert valueOf to defaults
    Array.prototype.valueOf = defaultValueOf;
}

This approach has one serious drawback. Take a look at the following code:

budget.money = [1, 2, 3] + [3, 4];

Here only the setter is called, so valueOf never gets injected. In Fingers, I solved this by injecting valueOf during the on(…) call.

Squeezing the most out of valueOf + prototype

Let’s say we’d like to concatenate arrays (which should be much more useful):

Whenever Array.valueOf is called on an array instance, the instance is stored in a global register (and gets an identifier for the position at which it is stored).

Array.valueOf should return the position of the array in the global register.

hero.items is of type *. It can take the resulting value, check which bits are set, and recognize which arrays were used.

If you would like to read a more detailed explanation, just drop me a line. I’d be happy to write it up in a future post. If you can’t wait, feel free to explore the Fingers source code.

One more thing: in some rare cases this approach can fail. If other functions are called between the getter and setter access, valueOf is still modified while they run, so some operations in those functions’ bodies can fail because of it.

You can also use this approach for other operators, like -, |, ^, … But you’ll probably need to cheat the compiler a bit. So, instead of:

hero.items -= new O();         // compiler complains that new O() is not a Number (the minus operator is restricted to Numbers)
hero.items -= Number(new O()); // you need to cast to a Number
hero.items -= new O() + 0;     // this is the most convenient way I've found

For me it’s really cool, but still rather an AS3 experiment (apart from Fingers). You should definitely let me know if you’re using it in production. What’s more, the concept can easily be ported to JavaScript as well.