Personal Reference Blog

9/02/2010

A great article! I love it. http://www.alistapart.com/articles/no-one-nos-learning-to-say-no-to-bad-ideas/

Quote: No One Nos: Learning to Say No to Bad Ideas, by Whitney Hess

* Published in: Business, Project Management and Workflow *


No One Nos: Learning to Say No to Bad Ideas

No. One word, a complete sentence. We all learned to say it around our first birthday, so why do we have such a hard time saying it now when it comes to our work?

Guilt. Fear. Pressure. Doubt. As we grow up, we begin to learn that not doing what others expect of us can lead to all sorts of negative consequences. It becomes easier to concede to their demands than to stand up for ourselves and for what is right.

Need to no

As a user experience designer, I have made a career out of having to say No. It is my job to put an end to bad design practices within an organization before I can make any progress on improving the lives of our customers. And it’s rarely easy.

My client says, “I want to build a spaceship!” I say, “No, we need to make a kite.”

My client says, “We need to keep that space blank for my next genius idea!” I say, “No, we’ll find space for your idea once you have one.”

My client says, “I want this done tomorrow!” I say, “No, it will take a month.”

I am a human brake pad.

Each one of us brings an area of specialization to our projects, and it is our responsibility to exhibit that expertise. If you don’t know anything that no one else on your team knows, then it’s probably time to walk away. But if you do, it is your duty to assert that capability and share your knowledge for the betterment of the final product.

Mahatma Gandhi said, “A ‘no’ uttered from deepest conviction is better and greater than a ‘yes’ merely uttered to please, or what is worse, to avoid trouble.” As people who create stuff with the hope that other people will use it, it is outright cowardly for us to protect ourselves before defending the needs of our users.

When to no

When I’m incredibly passionate about something, I tend to be stubborn. And when I recognize a problem, I’m not one to keep it inside. As a result, I have had some situations with teammates and clients in which I have been rather abrasive with my delivery of a no. Fearful that I won’t be heard or understood, I have overemphasized my position to the point that people don’t hear what I said but how I said it.

Having been made aware of this issue and given the opportunity to fix it, I can freely admit now that it was getting in the way of my ultimate goal—helping people. As practitioners in design and development, there are many common difficult situations in which we may find ourselves, and there are tactful ways to handle them. Perhaps you will recognize a few of the following.

Citing best practices

When you’re hired to serve a specific function on your team but are asked to do something you’re not comfortable with, often the best way to say no is to simply educate the other on best practices.

Kelly Andrews, owner of 1618design, recently received a client request to remove a quick e-mail-only mailing list signup from their site in favor of a full-page signup form.

Fearing that this would significantly decrease their number of subscribers, Andrews informed them that it is common practice for websites to include a quick subscribe since most people don’t want to spend the time filling out a form. A simple but powerful business case: The shorter option “would allow for immediate capture of interested people,” he explained. And they were sold. They hadn’t considered that before, but once they had that information, it armed them with the power to make a better choice. “The client was happy with the decision,” Andrews said. “She thanked me for being an expert and educating her instead of just doing what was asked.”

Data reigns

When Samantha LeVan worked as a user experience designer at Corel, she was surrounded by a large team of engineers who were also accustomed to doing design. Most of the time, they had really interesting ideas that LeVan enjoyed riffing off of, but now and then they got stuck in the details and LeVan would have to make her case.

In one particular design, one of the engineers insisted that a drop-down component was necessary for the selection of three options. LeVan urged that three radio buttons would be more appropriate, but the engineer was unconvinced. The disagreement went on for a few days before LeVan realized that she needed data to support her case.

She turned to CogTool—a UI prototyping tool developed at Carnegie Mellon University that automatically evaluates the effectiveness of a design based on a “predictive human performance model.” The results showed that expert use task time was dramatically reduced with radio buttons over drop-downs. Seeing the facts, the engineer relented.

“Your opinion won’t matter,” says LeVan. “It’s important that you prove your point with numbers.”

Pricing yourself out

Sometimes the best way to say no to bad design is not to take on the project in the first place. When Charlene Jaszewski, a freelance content strategist, was recently asked to help a friend’s brother with a website for his concrete company, she knew he had a limited budget but expected that she could help him limit the scope.

“Besides wanting ‘flying’ menus on each and every page, in a different style for each page,” Jaszewski recounts, “he wanted huge orange diamonds for the menus on the front page, and to top it off, he wanted a custom-made animation of a concrete truck on the front page and in the sidebar of every other page—the barrel rolling around with the logo of his company.” Now that just gives me shivers.

Jaszewski advised that his customers would be more interested in some relevant content, such as a portfolio of his previous work, but he was convinced that he needed lots of flashy extras to impress his visitors. And he wouldn’t give up.

Not wanting to overtly turn down the work, Jaszewski contacted animators and Flash designers, and came back with a price that was five times the business owner’s budget. He demanded a lower price, but Jaszewski just apologized and said that that’s what she would have to pay the appropriate people to do the work. Unsurprisingly, he took a pass, and Jaszewski later found out that he’d been trying to get his dream site built for the past eight years. Happily, she wouldn’t be the one to give it to him.

Shifting focus from what to who

In April 2009, Lynne Polischuik, an independent user experience designer, was hired by an early stage startup—a private photo-sharing web app—to act as project manager to get them to launch. The product was intended to be an alternative to Facebook for parents who desired private groups of friends with whom to safely share photos of their young children.

Because the team envisioned the product as appealing to all members of the family, they wanted people of all ages to be able to use the app—including children and elderly grandparents without e-mail addresses. To allow for this, they developed a login system that relied extensively on cookies and technological trickery to provide secure access without requiring the user to enter credentials. Things were constantly breaking, and as a result, no one could log in.

Polischuik felt she had to step in. “I ended up making the argument that they needed to design not for extreme edge cases, but for the more probable, and revenue generating, ones,” she explains. “Would someone who doesn’t have an e-mail address be savvy enough to want to share images and photos on the web? Probably not.”

To sway the team, Polischuik took a step back and did some user research to develop personas to guide their decisions. Once the team was refocused on who they were really designing for, they were able to move forward more strategically. As disagreements in execution came up along the way, she would run a few quick usability tests of the proposed idea, and let the team see with their own eyes how their prospective customers struggled. By reframing the argument away from personal opinions and demonstrating the negative impact on the user, she quickly defeated the opposition.

How to no

Last October while on the phone with Harry Max—a pioneer in the field of Information Architecture, co-founder of Virtual Vineyards/Wine.com (the first secure shopping system on the web), and now an executive coach—I complained about having way too much on my plate and desperately needing someone to give me a break.

He made me realize that it was actually I who was to blame, taking on more than I could handle by not protecting my time, and recommended that I read The Power of a Positive No by William Ury.

The book changed my life.

Ury proposes a methodology for saying no “while getting to Yes.” He argues that our desire to say no is not to be contradictory, but rather to stand up for a deeper yes—what we believe to be true, right, virtuous, and necessary. And that instead of making our defense a negative one, we can frame it in a positive light that is more likely to lead to a favorable outcome.

The following may sound really corny, but bear with me. It has completely transformed how I handle conflict and decision-making.

The structure of a positive no is a “Yes! No. Yes? statement.” In Ury’s words: The first Yes! expresses your interest; the No asserts your power; and the second Yes? furthers your relationship. For example, you might say “I, too, want prospective customers to see our company as current and approachable, but I don’t feel that a dozen social media badges at the top of the page will help us achieve that. What if we came up with a few alternative approaches and chose the most effective one together?”

He advocates not for just delivering your no in that manner, but also preparing for it and following through on it in the same way. Without a plan and without continued action, your assertion is a lot less believable—and a lot less likely to work.

Some of the most powerful takeaways from the book just might help you when it comes time for you to fight the good fight.

* Never say no immediately. Don’t react in the heat of the moment, or you might say something you don’t really mean. Things are rarely as urgent as we believe them to be, so take a step back, go to your quiet place, and really think through the issue at hand. Not only will your argument be clearer once you’ve had a chance to rehearse it, but it’s more likely the other will be ready to hear it.

* Be specific in describing your interests. When saying no, it’s better to describe what you’re for rather than what you’re against. Instead of just maintaining a position, help the other person to understand why you are concerned and what you’re trying to protect. You may just find that you share the same goal, and can work together to find the right solution.

* Have a plan B. There will be times that other people just won’t take your no for an answer. So you’re going to need a plan B as a last resort. Are you going to go over the person’s head? Are you going to prevent the project from moving forward? Are you going to quit? By exploring what you’re truly prepared to do ahead of time, you’ll have considerably more confidence to stand your ground and you won’t be afraid of what might come next.

* Express your need without neediness. Desperation is never attractive and won’t get you anywhere. Present your case with conviction and matter-of-factness. Does your assertion cease to be true if the other person refuses to agree? No. So don’t act like it does. Needing the other to comply makes you look unsure and dependent, diminishing yourself and putting them in a position of power.

* Present the facts and let the other draw their own conclusions. I’d venture to guess that most of the time you’re working with people who are pretty smart, pretty logical, and pretty well-intentioned. Perhaps they just don’t have all of the information that you do. Instead of telling them what to think, it is more useful to provide the necessary facts on which they can base their own judgment. Sometimes allowing the other person to feel like the decision was partially their own will help you get your way.

* The shorter it is, the stronger it is. Pascal famously said, “I wrote you a long letter because I didn’t have time to make it shorter.” The longer the argument, the sloppier and less well-thought-out it appears. You don’t need five reasons why something won’t work; just one good one will do.

* As you close one door, open another. Don’t be a wet blanket. If you strongly believe that something shouldn’t be done, devise an alternative that the team can get behind. You aren’t helping anyone—let alone yourself—if you simply derail the project with your objections. Being a team player instead of a contrarian will help build trust and respect for your ideas.

* Be polite. Ninety-nine times out of 100 we’re talking about issues of mild discomfort and dissatisfaction of our users, not life-or-death issues. There’s no reason to raise your voice, use inappropriate language, or cut anyone down. When you do, you prevent people from hearing the essence of what you’re trying to communicate. So keep your cool, be kind, and give your teammates and clients the respect they deserve. Just because you might understand something that they don’t doesn’t mean you’re a better person than they are.

Good to no

By taking pride in your work and upholding your role on a team, you will help to create a positive environment for all involved. No doubt other people will follow in your footsteps, and each person will become more responsible for themselves and for the greater good of the project. You’ll be seen as more professional, more authoritative, and more reliable.

Also consider the possibility that you may be steamrolling over other people’s ideas, and they’re too afraid to speak up. One of my favorite sayings is: “God gave us two ears and one mouth to use in proportion.” Let this be a reminder not only to say no, but to be willing to hear no, and to encourage others to do the same.

This is the script for my talk at YAPC::EU::2002. It's not a transcript of what I actually said; rather it collects together the notes that were in the pod source to the slides, the notes scribbled on various bits of paper, the notes that were only in my head, and tries to make a coherent text. I've also tried to add in the useful feedback I got - sometimes I can even remember who said it and so give them credit.

The slides are here, and hopefully it will be obvious where the slide changes are.

Introduction

So you have a perl script. And it's too slow. And you want to do something about it. This is a talk about what you can do to speed it up, and also how you try to avoid the problem in the first place.

Obvious things

Find better algorithm
Your code runs in the most efficient way that you can think of. But maybe someone else looked at the problem from a completely different direction and found an algorithm that is 100 times faster. Are you sure you have the best algorithm? Do some research.

Throw more hardware at it
If the program doesn't have to run on many machines, it may be cheaper to throw more hardware at it. After all, hardware is supposed to be cheap and programmers well paid. Perhaps you can gain performance by tuning your hardware better; maybe compiling a custom kernel for your machine will be enough.

mod_perl
For a CGI script that I wrote, I found that even after I'd shaved everything off it that I could, the server could still only serve 2.5 requests per second. The same server running the same script under mod_perl could serve 25 per second. That's a factor of 10 speedup for very little effort. And if your script isn't suitable for running under mod_perl there's also FastCGI (which CGI.pm supports). And if your script isn't a CGI, you could look at the persistent perl daemon, package PPerl, on CPAN.

Rewrite in C, er C++, sorry Java, I mean C#, oops no ...
Of course, one final "obvious" solution is to re-write your perl program in a language that runs as native code, such as C, C++, Java, C# or whatever is currently flavour of the month.

But these may not be practical or politically acceptable solutions.

Compromises

So you can compromise.

XS
You may find that 95% of the time is spent in 5% of the code, doing something that perl is not that efficient at, such as bit shifting. So you could write that bit in C, leave the rest in perl, and glue it together with XS. But you'd have to learn XS and the perl API, and that's a lot of work.

Inline
Or you could use Inline. If you have to manipulate perl's internals then you'll still have to learn perl's API, but if all you need is to call out from perl to your pure C code, or to someone else's C library, then Inline makes it easy.

Here's my perl script making a call to a perl function rot32. And here's a C function rot32 that takes 2 integers, rotates the first by the second, and returns an integer result. That's all you need! And you run it and it works.
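The code from that slide isn't reproduced in this transcript. A minimal sketch of the Inline approach it describes (the rot32 name comes from the talk; the C body and the test value are my reconstruction, assuming 32-bit unsigned ints):

```perl
#!/usr/bin/perl -w
use strict;

# Inline compiles the C below on the first run and caches the
# resulting object code, so later runs start quickly.
use Inline C => <<'END_C';
unsigned rot32(unsigned val, int by) {
    by &= 31;  /* keep the shift amount in range */
    return by ? (val << by) | (val >> (32 - by)) : val;
}
END_C

printf "%08x\n", rot32(0xdeadbeef, 8);
```

Inline needs a C compiler available where the script runs; the first invocation pays the compile cost, subsequent ones don't.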

Compile your own perl?
Are you running your script on the perl supplied by the OS? Compiling your own perl could make your script go faster. For example, when perl is compiled with threading, all its internal variables are made thread safe, which slows them down a bit. If the perl is threaded, but you don't use threads, then you're paying that speed hit for no reason. Likewise, you may have a better compiler than the OS used. For example, I found that with gcc 3.2 some of my C code ran 5% faster than with 2.9.5. [One of my helpful hecklers in the audience said that he'd seen a 14% speedup (if I remember correctly), and if I remember correctly that was from recompiling the perl interpreter itself]

Different perl version?
Try using a different perl version. Different releases of perl are faster at different things. If you're using an old perl, try the latest version. If you're running the latest version but not using the newer features, try an older version.

Banish the demons of stupidity

Are you using the best features of the language?

hashes
There's a Larry Wall quote - Doing linear scans over an associative array is like trying to club someone to death with a loaded Uzi.

I trust you're not doing that. But are you keeping your arrays nicely sorted so that you can do a binary search? That's fast. But using a hash should be faster.

regexps
In languages without regexps you have to write explicit code to parse strings. perl has regexps, and re-writing with them may make things 10 times faster. Even using several with the \G anchor and the /gc flags may still be faster.
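As a sketch of that last point, here is an invented key=value parser that uses \G with /gc to consume a string incrementally (the data format and variable names are mine, not from the talk):

```perl
use strict;
use warnings;

my $input = "host=example;port=8080;";
my %field;

# \G anchors each match where the previous one ended, and /c keeps
# pos() in place on failure, so other patterns could be tried next.
while ($input =~ /\G(\w+)=(\w+);/gc) {
    $field{$1} = $2;
}

print "$field{port}\n";  # prints 8080
```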

pack and unpack
pack and unpack have far too many features to remember. Look at the manpage - you may be able to replace entire subroutines with just one unpack.

undef
undef. What do I mean, undef?

Are you calculating something only to throw it away?

For example, the script in the Encode module that compiles character conversion tables would print out a warning if it saw the same character twice. If you or I build perl, we'll just let those build warnings scroll off the screen - we don't care - we can't do anything about it. And it turned out that keeping track of everything needed to generate those warnings was slowing things down considerably. So I added a flag to disable that code; the perl 5.8 build uses it by default, so it builds more quickly.

Intermission

Various helpful hecklers (most of London.pm who saw the talk (and I'm counting David Adler as part of London.pm as he's subscribed to the list)) wanted me to remind people that you really really don't want to be optimising unless you absolutely have to. You're making your code harder to maintain, harder to extend, and easier to introduce new bugs into. Probably you've done something wrong to get to the point where you need to optimise in the first place.

I agree.

Also, I'm not going to change the running order of the slides. There isn't a good order to try to describe things in, and some of the ideas that follow are actually more "good practice" than optimisation techniques, so possibly ought to come before the slides on finding slowness. I'll mark what I think are good habits to get into, and once you understand the techniques then I'd hope that you'd use them automatically when you first write code. That way (hopefully) your code will never be so slow that you actually want to do some of the brute force optimising I describe here.

Tests

Must not introduce new bugs
The most important thing when you are optimising existing working code is not to introduce new bugs.

Use your full regression tests :-)
For this, you can use your full suite of regression tests. You do have one, don't you?

[At this point the audience is supposed to laugh nervously, because I'm betting that very few people are in this desirable situation of having comprehensive tests written]

Keep a copy of original program
You must keep a copy of your original program. It is your last resort if all else fails. Check it into a version control system. Make an off site backup. Check that your backup is readable. You mustn't lose it. In the end, your ultimate test of whether you've not introduced new bugs while optimising is to check that you get identical output from the optimised version and the original. (With the optimised version taking less time.)

What causes slowness

CPU
It's obvious that if your script hogs the CPU for 10 seconds solid, then to make it go faster you'll need to reduce the CPU demand.

RAM
A lesser cause of slowness is memory.

perl trades RAM for speed
One of the design decisions Larry made for perl was to trade memory for speed, choosing algorithms that use more memory to run faster. So perl tends to use more memory.

getting slower (relative to CPU)
CPUs keep getting faster. Memory is getting faster too. But not as quickly. So in relative terms memory is getting slower. [Larry was correct to choose to use more memory when he wrote perl5 over 10 years ago. However, in the future CPU speed will continue to diverge from RAM speed, so it might be an idea to revisit some of the CPU/RAM design trade offs in parrot]

memory like a pyramid

You can never have enough memory, and it's never fast enough.

Computer memory is like a pyramid. At the point you have the CPU and its registers, which are very small and very fast to access. Then you have 1 or more levels of cache, which is larger, close by and fast to access. Then you have main memory, which is quite large, but further away so slower to access. Then at the base you have disk acting as virtual memory, which is huge, but very slow.

Now, if your program is swapping out to disk, you'll realise, because the OS can tell you that it only took 10 seconds of CPU, but 60 seconds elapsed, so you know it spent 50 seconds waiting for disk and that's your speed problem. But if your data is big enough to fit in main RAM, but doesn't all sit in the cache, then the CPU will keep having to wait for data from main RAM. And the OS timers I described count that in the CPU time, so it may not be obvious that memory use is actually your problem.

This is the original code for the part of the Encode compiler (enc2xs) that generates the warnings on duplicate characters:

It uses the hash %seen to remember all the Unicode characters that it has processed. The first time that it meets a character it won't be in the hash, the exists is false, so the else block executes. It stores an arrayref containing the code page and character number in that page. That's three things per character, and there are a lot of characters in Chinese.

If it ever sees the same Unicode character again, it prints a warning message. The warning message is just a string, and this is the only place that uses the data in %seen. So I changed the code - I pre-formatted that bit of the error message, and stored a single scalar rather than the three:
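The before-and-after listings are on the slides rather than in this text; here is a hedged reconstruction of the change being described (every name except %seen is my guess, and the sample records are invented):

```perl
use strict;
use warnings;

my %seen;

# Hypothetical (code page, character, unicode) records standing in
# for the enc2xs input that maps the same Unicode character twice.
for my $rec (['cp950', '0xA140', 'U+3000'], ['cp936', '0xA1A1', 'U+3000']) {
    my ($page, $ch, $uni) = @$rec;

    if (exists $seen{$uni}) {
        # The warning is the only consumer of %seen...
        warn "$uni is $seen{$uni} and $page:$ch\n";
    } else {
        # ...so store one pre-formatted scalar per character instead
        # of the original arrayref [$page, $ch] - one value, not three.
        $seen{$uni} = "$page:$ch";
    }
}
```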

How do you make things faster? Well, this is something of a black art, down to trial and error. I'll expand on aspects of these 4 points in the next slides.

What might be slow?
You need to find things that are actually slow. It's no good wasting your effort on things that are already fast - put it in where it will get maximum reward.

Think of re-write
But not all slow things can be made faster, however much you swear at them, so you can only actually speed things up if you can figure out another way of doing the same thing that may be faster.

Try it
But it may not be. Check that it's faster and that it gives the same results.

Note results
Either way, note your results - I find a comment in the code is good. It's important if an idea didn't work, because it stops you or anyone else going back and trying the same thing again. And it's important if a change does work, as it stops someone else (such as yourself next month) tidying up an important optimisation and losing you that hard-won speed gain.

By having commented out slower code near the faster code you can look back and get ideas for other places you might optimise in the same way.

Small easy things

These are things that I would consider good practice, so you ought to be doing them as a matter of routine.

AutoSplit and AutoLoader
If you're writing modules, use the AutoSplit and AutoLoader modules to make perl load only the parts of your module that are actually being used by a particular script. You get two gains - you don't waste CPU at start up loading the parts of your module that aren't used, and you don't waste the RAM holding the structures that perl generates when it has compiled code. So your modules load more quickly, and use less RAM.

One potential problem is that the way AutoLoader brings in subroutines makes debugging confusing, which can be a problem. While developing, you can disable AutoLoader by commenting out the __END__ statement marking the start of your AutoLoaded subroutines. That way, they are loaded, compiled and debugged in the normal fashion.

...
1;
# While debugging, disable AutoLoader like this:
# __END__
...

Of course, to do this you'll need another 1; at the end of the AutoLoaded section to keep use happy, and possibly another __END__.

Schwern notes that commenting out __END__ can cause surprises if the main body of your module is running under use strict; because now your AutoLoaded subroutines will suddenly find themselves being run under use strict. This is arguably a bug in the current AutoSplit - when it runs at install time to generate the files for AutoLoader to use it doesn't add lines such as use strict; or use warnings; to ensure that the split out subroutines are in the same environment as was current at the __END__ statement. This may be fixed in 5.10.

Elizabeth Mattijsen notes that there are different memory use versus memory shared issues when running under mod_perl, with different optimal solutions depending on whether your apache is forking or threaded.

=pod @ __END__
If you are documenting your code with one big block of pod, then you probably don't want to put it at the top of the file. The perl parser is very fast at skipping pod, but it's not magic, so it still takes a little time. Moreover, it has to read the pod from disk in order to ignore it.

#!perl -w
use strict;

=head1 You don't want to do that

big block of pod

=cut

...
1;
__END__

=head1 You want to do this

If you put your pod after an __END__ statement then the perl parser will never even see it. This will save a small amount of CPU, but if you have a lot of pod (>4K) then it might also mean that the last disk block(s) of a file are never even read in to RAM. This may gain you some speed. [A helpful heckler observed that modern raid systems may well be reading in 64K chunks, and modern OSes are getting good at read ahead, so not reading a block as a result of =pod @ __END__ may actually be quite rare.]

If you are putting your pod (and tests) next to their functions' code (which is probably a better approach anyway) then this advice is not relevant to you.

Needless importing is slow

Exporter is written in perl. It's fast, but not instant.

Most modules are able to export lots of their functions and other symbols into your namespace to save you typing. If you have only one argument to use, such as

use POSIX; # Exports all the defaults

then POSIX will helpfully export its default list of symbols into your namespace. If you have a list after the module name, then that is taken as a list of symbols to export. If the list is empty, no symbols are exported:

use POSIX (); # Exports nothing.

You can still use all the functions and other symbols - you just have to use their full name, by typing POSIX:: at the front. Some people argue that this actually makes your code clearer, as it is now obvious where each subroutine is defined. Independent of that, it's faster:

  use POSIX;      0.516s
  use POSIX ();   0.355s
  use Socket;     0.270s
  use Socket ();  0.231s

POSIX exports a lot of symbols by default. If you tell it to export none, it starts in 30% less time. Socket starts in 15% less time.

regexps

avoid $&
The $& variable returns the last text successfully matched in any regular expression. It's not lexically scoped, so unlike the match variables $1 etc it isn't reset when you leave a block. This means that to be correct perl has to keep track of it from any match, as perl has no idea when it might be needed. As it involves taking a copy of the matched string, it's expensive for perl to keep track of. If you never mention $&, then perl knows it can cheat and never store it. But if you (or any module) mention $& anywhere then perl has to keep track of it throughout the script, which slows things down. So it's a good idea to capture the whole match explicitly if that's what you need.
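To illustrate (the string and pattern are invented): capture explicitly rather than touching $& at all:

```perl
use strict;
use warnings;

my $log = "WARN: low disk space";

# Don't do this - a single mention of $& anywhere in the program
# (or in any module it loads) taxes every match everywhere:
#   print "$&\n" if $log =~ /^\w+:/;

# Do this - an explicit capture costs only this one match:
if ($log =~ /^(\w+):/) {
    print "$1\n";  # prints WARN
}
```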

avoid use English;
use English gives helpful long names to all the punctuation variables. Unfortunately that includes aliasing $& to $MATCH, which makes perl think that it needs to copy every match into $&, even if your script never actually uses it. In perl 5.8 you can say use English '-no_match_vars'; to avoid mentioning the naughty "word", but this isn't available in earlier versions of perl.

avoid needless captures
Are you using parentheses for capturing, or just for grouping? Capturing involves perl copying the matched string into $1 etc, so if all you need is grouping, use the non-capturing (?:...) instead of the capturing (...).

/.../o;
If you define scalars with building blocks for your regexps, and then make your final regexp by interpolating them, then your final regexp isn't going to change. However, perl doesn't realise this, because it sees that there are interpolated scalars each time it meets your regexp, and has no idea that their contents are the same as before. If your regexp doesn't change, then use the /o flag to tell perl, and it will never waste time checking or recompiling it.

but don't blow it
You can use the qr// operator to pre-compile your regexps. It often is the easiest way to write regexp components to build up more complex regexps. Using it to build your regexps once is a good idea. But don't screw up (like parrot's assemble.pl did) by telling perl to recompile the same regexp every time you enter a subroutine:

sub foo {
    my $reg1 = qr/.../;
    my $reg2 = qr/... $reg1 .../;
    ...
}

You should pull those two regexp definitions out of the subroutine into package variables, or file scoped lexicals.
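A sketch of that fix, with invented patterns: compile the components once at file scope and let the subroutine reuse them:

```perl
use strict;
use warnings;

# Compiled once when the file loads, not on every call to foo().
my $num  = qr/\d+/;
my $pair = qr/$num \s* , \s* $num/x;

sub foo {
    my ($text) = @_;
    return $text =~ $pair;   # reuses the pre-compiled regexp
}

print "matched\n" if foo("10, 20");
```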

Devel::DProf

You find what is slow by using a profiler. People often guess where they think their program is slow, and get it hopelessly wrong. Use a profiler.

Devel::DProf is in the perl core from version 5.6. If you're using an earlier perl you can get it from CPAN.

You run your program with -d:DProf

perl5.8.0 -d:DProf enc2xs.orig -Q -O -o /dev/null ...

which times things and stores the data in a file named tmon.out. Then you run dprofpp to process the tmon.out file, and produce meaningful summary information. This excerpt is the default length and format, but you can use options to change things - see the man page. It also seems to show up a minor bug in dprofpp, because it manages to total things up to get 106%. While that's not right, it doesn't affect the explanation.

At the top of the list, the subroutine enter takes about half the total CPU time, with 200,000 calls, each very fast. That makes it a good candidate to optimise, because all you have to do is make a slight change that gives a small speedup, and that gain will be magnified 200,000 times. [It turned out that enter was tail recursive, and part of the speed gain I got was by making it loop instead]

Third on the list is encode_U, which with 45,000 calls is similar, and worth looking at. [Actually, it was trivial code and in the real enc2xs I inlined it]

utf8::unicode_to_native and utf8::encode are built-ins, so you won't be able to change that.

Don't bother below there, as you've accounted for 90% of total program time, so even if you did a perfect job on everything else, you could only make the program run 10% faster.

compile_ucm is trickier - it's only called 6 times, so it's not obvious where to look for what's slow. Maybe there's a loop with many iterations. But now you're guessing, which isn't good.

One trick is to break it into several subroutines, just for benchmarking, so that DProf gives you times for different bits. That way you can see where the juicy bits to optimise are.

Devel::SmallProf should do line by line profiling, but every time I use it it seems to crash.

Benchmark

Now that you've identified the slow spots, you need to try alternative code to see if you can find something faster. The Benchmark module makes this easy. A particularly good subroutine is cmpthese, which takes code snippets and plots a comparison chart. cmpthese was added to Benchmark with perl 5.6.

So to compare two code snippets, orig and new, by running each 10000 times, you'd do this:

use Benchmark ':all';

sub orig { ... }

sub new { ... }

cmpthese (10000, { orig => \&orig, new => \&new } );

Benchmark runs both, times them, and then prints out a helpful comparison chart:

and it's plain to see that my new code is over 4 times as fast as my original code.

What causes slowness in perl?

Actually, I didn't tell the whole truth earlier about what causes slowness in perl. [And astute hecklers such as Philip Newton had already told me this]

When perl compiles your program it breaks it down into a sequence of operations it must perform, which are usually referred to as ops. So when you ask perl to compute $a = $b + $c it actually breaks it down into these ops:

* Fetch $b onto the stack
* Fetch $c onto the stack
* Add the top two things on the stack together; write the result to the stack
* Fetch the address of $a
* Place the thing on the top of the stack into that address

Computers are fast at simple things like addition. But there is quite a lot of overhead involved in keeping track of "which op am I currently performing" and "where is the next op", and this book-keeping often swamps the time taken to actually run the ops. So often in perl it's the number of ops your program takes to perform its task that matters more than the CPU time they use or the RAM they need. The hit list is:

1. Ops
2. CPU
3. RAM

So what were my example code snippets that I Benchmarked?

It was code to split a line of hex (54726164696e67207374796c652f6d61) into groups of 4 digits (5472 6164 696e ...), and convert each to a number.
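The snippets themselves didn't survive the paste; here is a plausible reconstruction of the two approaches, based on the description that follows (treat the exact code as my guess, not the original slide):

```perl
my $line = "54726164696e67207374796c652f6d61";

# Original: a global match generates the 4-digit groups, but the
# map block is an implicit loop that calls hex once per group.
my @orig = map { hex } $line =~ /(....)/g;

# Replacement: no loops at all -- one pack turns the hex into a
# binary string, one unpack reads it back as big-endian 16-bit
# numbers (the "n" format).
my @new = unpack "n*", pack "H*", $line;

# Both produce the same list of eight numbers.
```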

but the first one is much slower. Why? Following the data path from right to left, it starts well with a global regexp, which is only one op and therefore a fast way to generate a list of the 4-digit groups. But that map block is actually an implicit loop, so for each 4-digit block it iterates round and repeatedly calls hex. That's at least one op for every list item.

Whereas the second one has no loops in it, implicit or explicit. It uses one pack to convert the hex temporarily into a binary string, and then one unpack to convert that string into a list of numbers. n is big-endian 16 bit quantities. I didn't know that - I had to look it up. But when the profiler told me that this part of the original code was a performance bottleneck, the first thing that I did was to look at the pack docs to see if I could use some sort of pack/unpack as a speedier replacement.

Ops are bad, m'kay

You can ask perl to tell you the ops that it generates for particular code with the Terse backend to the compiler. For example, here's a 1 liner to show the ops in the original code:

At the bottom you can see how the match /(....)/ is just one op. But the next diagonal line of ops from mapwhile down to the match are all the ops that make up the map. Lots of them. And they get run each time round map's loop. [Note also that the {}s mean that map enters scope each time round the loop. That's not a trivially cheap op either]

There are fewer ops in total. And no loops, so all the ops you see execute only once. :-)

[My helpful hecklers pointed out that it's hard to work out what an op is. Good call. There's roughly one op per symbol (function, operator, variable name, and any other bit of perl syntax). So if you golf down the number of functions and operators your program runs, then you'll be reducing the number of ops.]

[These were supposed to be the bonus slides. I talked too fast (quelle surprise) and so managed to actually get through the lot with time for questions]

Memoize

Caches function results

MJD's Memoize follows the grand perl tradition by trading memory for speed. You tell Memoize the name(s) of functions you'd like to speed up, and it does symbol table games to transparently intercept calls to them. It looks at the parameters the function was called with, and uses them to decide what to do next. If it hasn't seen a particular set of parameters before, it calls the original function with the parameters. However, before returning the result, it stores it in a hash for that function, keyed by the function's parameters. If it has seen the parameters before, then it just returns the result direct from the hash, without even bothering to call the function.

For functions that only calculate

This is useful for functions that calculate things with no side effects: slow functions that you often call repeatedly with the same parameters. It's not useful for functions that do things external to the program (such as generating output), nor is it good for very small, fast functions.

Can tie cache to a disk file

The hash Memoize uses is a regular perl hash. This means that you can tie the hash to a disk file. This allows Memoize to remember things across runs of your program. That way, you could use Memoize in a CGI to cache static content that you only generate on demand (but remember you'll need file locking). The first person who requests something has to wait for the generation routine, but everyone else gets it straight from the cache. You can also arrange for another program to periodically expire results from the cache.
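A minimal sketch of Memoize in action (the fib example is mine, not from the talk, but it shows the shape: define the slow function, then memoize it by name):

```perl
use Memoize;

# Naive recursion: exponential time without memoization.
sub fib {
    my $n = shift;
    return $n < 2 ? $n : fib($n - 1) + fib($n - 2);
}

# One call rewires the symbol table, so repeat arguments
# come straight from the cache instead of re-running the body.
memoize('fib');

print fib(30), "\n";    # prints 832040, quickly
```

Because the recursive calls also go through the symbol table, even the internal calls hit the cache, so each value of $n is computed only once.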

As of 5.8 the Memoize module has been assimilated into the core. Users of earlier perls can get it from CPAN.

Miscellaneous

These are quite general ideas for optimisation that aren't particularly perl specific.

Pull things out of loops

perl's hash lookups are fast. But they aren't as fast as a lexical variable. enc2xs was calling a function each time round a loop based on a hash lookup using $type as the key. The value of $type didn't change, so I pulled the lookup out above the loop into a lexical variable:

my $type_func = $encode_types{$type};

and doing it only once was faster.

Experiment with number of arguments

Something else I found was that enc2xs was calling a function which took several arguments from a small number of places. The function contained code to set defaults if some of the arguments were not supplied. I found that the way the program ran, most of the calls passed in all the values and didn't need the defaults. Changing the function to not set defaults, and writing those defaults out explicitly where needed, bought me a speed up.

Tail recursion

Tail recursion is where the last thing a function does is call itself again with slightly different arguments. It's a common idiom, and some languages can automatically optimise it away. Perl is not one of those languages. So every time a function tail recurses you have another subroutine call [not cheap - Arthur Bergman notes that it is 10 pages of C source, and will blow the instruction cache on a CPU] and re-entering that subroutine again causes more memory to be allocated to store a new set of lexical variables [also not cheap].

perl can't spot that it could just throw away the old lexicals and re-use their space, but you can, so you can save CPU and RAM by re-writing your tail recursive subroutines with loops. In general, trying to reduce recursion by replacing it with iterative algorithms should speed things up.
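As an illustration (a toy example of my own, not the enc2xs code), the two shapes look like this:

```perl
# Tail recursive: every step pays for a fresh subroutine call
# and a new set of lexicals.
sub sum_rec {
    my ($n, $total) = @_;
    return $total if $n == 0;
    return sum_rec($n - 1, $total + $n);   # tail call, not optimised away
}

# The same algorithm as a loop: no call overhead, lexicals reused.
sub sum_loop {
    my ($n) = @_;
    my $total = 0;
    $total += $_ for 1 .. $n;
    return $total;
}
```

Both return the same answer; the loop version just spends far fewer ops per step getting there.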

yay for y

y, or tr, is the transliteration operator. It's not as powerful as the general purpose regular expression engine, but for the things it can do it is often faster.

tr/!// # fastest way to count chars

tr doesn't delete characters unless you use the /d flag. If you don't even have any replacement characters then it treats its target as read only. In scalar context it returns the number of characters that matched. It's the fastest way to count the number of occurrences of single characters and character ranges. (ie it's faster than counting the elements returned by m/.../g in list context. But if you just want to see whether one or more of a character is present, use m/.../, because it will stop at the first one it finds, whereas tr/// has to go to the end.)

tr/q/Q/ faster than s/q/Q/g

tr is also faster than the regexp engine for doing character-for-character substitutions.

tr/a-z//d faster than s/[a-z]//g

tr is faster than the regexp engine for doing character range deletions. [When writing the slide I assumed that it would also be faster for single character deletions, but I Benchmarked things and found that s///g was faster for them. So never guess timings; always test things. You'll be surprised, but that's better than being wrong]
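The tr/// idioms above can be sketched like this (my own toy strings):

```perl
my $str = "bananas!";

# Counting: with no replacement list and no /d, tr/// is read only
# and just returns the number of matches in scalar context.
my $a_count = ($str =~ tr/a//);          # 3

# Character-for-character substitution: tr beats s///g here.
(my $shouty = $str) =~ tr/a-z/A-Z/;      # "BANANAS!"

# Character range deletion: tr with /d beats s/[a-z]//g.
(my $stripped = $str) =~ tr/a-z//d;      # "!"
```

Note the (my $copy = $str) =~ ... idiom: it copies first, so the original string is left alone.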

Ops are bad, m'kay

Another example lifted straight from enc2xs of something that I managed to accelerate quite a bit by reducing the number of ops run. The code takes a scalar, and prints out each byte as \x followed by 2 digits of hex, as it's generating C source code:

The original makes a temporary list with split [not bad in itself - ops are more important than CPU or RAM] and then loops over it. Each time round the loop it executes several ops, including using ord to convert the byte to its numeric value, and then using sprintf with the format "\\x%02X" to convert that number to the C source.
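Reconstructed from the description (the exact enc2xs lines aren't in the paste, so treat this as a sketch), the before and after look something like:

```perl
my $s = "AB";

# Original: implicit loop, one ord and one sprintf per byte.
my $c = '';
foreach my $byte (split //, $s) {
    $c .= sprintf "\\x%02X", ord $byte;
}

# New: unpack "C*" replaces split+ord in one op, and replicating
# the per-byte format with x lets a single sprintf convert
# every byte at once.
my $c2 = sprintf +("\\x%02X" x length $s), unpack "C*", $s;

# Both give '\x41\x42' for "AB".
```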

The new code effectively merges the split and looped ord into one op, using unpack's C format to generate the list of numeric values directly. The more interesting (arguably sick) part is the format to sprintf, which is inside +(...). You can see from the .= in the original that the code is just concatenating the converted form of each byte together. So instead of making sprintf convert each value in turn, only for perl ops to stick them together, I use x to replicate the per-byte format string once for each byte I'm about to convert. There's now one "\\x%02X" for each of the numbers in the list passed from unpack to sprintf, so sprintf just does what it's told. And sprintf is faster than perl ops.

How to make perl fast enough

use the language's fast features

You have enormous power at your disposal with regexps, pack, unpack and sprintf. So why not use them?

All the pack and unpack code is implemented in pure C, so doesn't have any of the book-keeping overhead of perl ops. sprintf too is pure C, so it's fast. The regexp engine uses its own private bytecode, but it's specially tuned for regexps, so it runs much faster than general perl code. And the implementation of tr has less to do than the regexp engine, so it's faster.

For maximum power, remember that you can generate regexps and the formats for pack, unpack and sprintf at run time, based on your data.

give the interpreter hints

Make it obvious to the interpreter what you're up to. Avoid $&, use (?:...) when you don't need capturing, and put the /o flag on constant regexps.

fewer ops

Try to accomplish your tasks using fewer operations. If you find you have to optimise an existing program then this is where to start - golf is good, but remember it's run time strokes, not source code strokes.

less CPU

Usually you want to find ways of using less CPU.

less RAM

But don't forget to think about how your data structures work to see if you can make them use less RAM.

Job interviews can be nerve-racking. You have one shot to convince a potential employer that they should hire you instead of dozens (and maybe hundreds) of other qualified candidates. In this tough job market, a man has to be on top of his game during interviews if he wants a chance to land the job.

A few months ago, I interviewed for a job I had been hoping to get since I was a student in law school. I got through the first round of interviews fine. It was the kind of straightforward and traditional interview that most of us have probably experienced. I was asked questions about my strengths, my weaknesses, and why I wanted to work for this particular company. Basically, they were the kind of questions you can prepare for and have some go-to answers you can use with confidence.

I got the call-back and scheduled an interview with a company executive. Before I flew out to my interview, a friend of mine who knew this person tipped me off on the executive’s interview style. The executive liked to use behavioral interviewing to weed out candidates for positions. I had never heard of this interview style before, so I set out to research as much as I could about it, aiming to be as prepared as possible.

Here’s what I learned on the way to landing the job.

What Is Behavioral Interviewing?

Behavioral interviewing is a relatively new method of job screening. In the 1970s, industrial psychologists found that traditional job interviewing was a pretty crappy way of predicting whether a candidate would succeed at a job. And when you look at traditional job interview questions, it’s easy to see why.

In a traditional job interview an employer might ask questions like:

* “What are your strengths?” Typical banal answer: “I’m a team player who’s passionate about engaging with people to realize the mission statement of the organization.”
* “What are your weaknesses?” Typical banal answer: “Oh, I guess my biggest weakness is that I’m just so darn hard working. I never know when to quit. Oh, and I’m really hard on myself. I’m a perfectionist.” Basically, the candidate makes a lame effort to turn a “weakness” into a strength.
* “What’s your passion?” Typical banal answer: “I’m passionate about whatever the company I’m interviewing for does for business. I hear you guys make fertilizer. Did I tell you about my dog poop collection in my backyard? It’s amazing!”
* “How would you handle a co-worker who is bothering you?” Typical banal answer: “The truth is I would probably leave passive-aggressive notes on his desk, but you don’t want to hear that, so I’ll just tell you what you want to hear. I would seek to understand and then to be understood. I would kill them with kindness. And if worst comes to worst, I’d take the problem to HR.”
* Or simply: “Tell me about yourself.” Typical banal answer: “Here’s my 2 minute elevator pitch that makes me look really awesome but in no way reveals to you whether I really have the skills to excel at this job.”

These types of questions are pretty easy to answer. You just have to give the interviewer a vague reply filled with the right buzz words. These answers don’t reveal if the candidate really has the skill set needed to succeed in the job because they don’t require a candidate to give specific examples from their past when they demonstrated said skills. What these types of questions usually reveal is that a job candidate is good at telling a boss what they want to hear.

Behavioral interviewing cuts through the banalities of traditional interviewing and requires candidates to give concrete examples of when they demonstrated the skills needed for the job. Instead of asking what your strengths are, an employer using the behavioral interview process will ask a question like this:

“This job requires the ability to make quick decisions in pressure-filled situations. Can you give me an example from your past when you had to make a quick decision under lots of pressure?”

Yikes. It’s a lot harder to B.S. an answer to this question than the “What are your strengths?” question.

But the questioning doesn’t stop there. The employer using the behavioral interview method will often follow up your initial response with probing questions to elicit more details from you. Going back to our example question on decision-making, as you tell a story of when you made a quick decision, the interviewer might stop you and ask, “What were you thinking at this point?” These types of probing questions serve two purposes: 1) they give the employer more insight about your personality and character, and 2) they serve as B.S. filters. If you’re telling a totally fabricated story, the probing questions will usually trip you up.

Behavioral Interview Question Examples

The possible number of unique behavioral interview questions is only limited by the imagination of the interviewer. You’ll face questions that focus on a large variety of skills and behavior. An employer can then multiply the number of questions he or she asks you about those skill sets by inquiring about different projects or situations you’ve experienced in the past where you demonstrated those skills. Below we’ve included a few sample behavioral interview questions to give you an idea of what you’re up against:

* What do you do when priorities change quickly? Give one example of when this happened.
* Describe a project or idea that was implemented primarily because of your efforts. What was your role? What was the outcome?
* What is the riskiest decision you have made? What was the situation? What happened?
* Give an example of an important goal that you set in the past. Tell about your success in reaching it.
* Tell us about a time when you had to analyze information and make a recommendation. What kind of thought process did you go through? What was your reasoning behind your decision?
* Tell us about a time when you built rapport quickly with someone under difficult conditions.
* Tell us about the most difficult or frustrating individual that you’ve ever had to work with, and how you managed to work with them.
* There are many jobs that require creative or innovative thinking. Give an example of when you had such a job and how you handled it.
* On occasion we are confronted by dishonesty in the workplace. Tell about such an occurrence and how you handled it.
* Describe the most challenging negotiation in which you were involved. What did you do? What were the results for you? What were the results for the other party?
* Tell us about the most effective presentation you have made. What was the topic? What made it difficult? How did you handle it?
* What have you done to develop your subordinates? Give an example.
* Describe a situation where you had to use confrontation skills.

That’s just a sampling. I recommend that you print off this mega list of behavioral interview questions [2]. There are over 100 questions on the list. When I was preparing for my job interview, I printed them off and had my wife give me a mock interview. It forced me to think of different examples from my past that I could use when answering the questions. It was tough, but well worth the effort. During the interview, I had a stockpile of examples fresh in my mind, ready to be drawn from.

And don’t forget that your interviewer will ask you follow-up questions! As you come up with examples to use for your answers, put together as many details as you can so you’re ready for the probes of your potential employer.

How to Answer a Behavioral Interview Question

Alright, we know a behavioral interview can be a real son of a gun. What’s the best way to answer a behavioral interview question so you impress the boss and get the job?

Most guides on behavioral interviewing suggest using the three-step STAR process when answering a behavioral interview question:

1. The Situation or Task you were in
2. Action that you took
3. Result of that action

Let’s take a look at the STAR process in action.

Question: Describe a situation where you had a conflict with another individual, and how you dealt with it. What was the outcome?

Answer: During college I worked on a four person team that was researching the effects of plastics on male rats. I got along with everyone quite well, except for one fellow. We disagreed strongly on the method we should use to conduct the experiments. My other teammates and I agreed on one way, but this guy wanted to do it his way. He didn’t budge at all on his position and even took passive-aggressive steps to prevent us from completing the project. (Situation or Task)

I set up an informal meeting at the local coffee shop with the guy. I simply asked him to explain his reasons for wanting to do the experiment his way. I just listened and asked questions to clarify. Some of his assumptions were clearly erroneous, but I knew pointing them out right away would just make him get defensive, so I bit my tongue. After hearing him out, I had a better idea of where he was coming from and realized that he might have some misunderstandings on some basic concepts. I didn’t think he would take too kindly to a peer correcting him, so I suggested that maybe we should set up a meeting with the professor to discuss our different ideas and to see if he had any feedback or advice. (Action that you took)

So we met with the professor. We both presented our different reasons for wanting to do the experiment in a certain way. As predicted, the professor brought up the faulty assumptions our stubborn teammate had and that his method wouldn’t be the best to use. The guy was sort of deflated, but he accepted the feedback and agreed to start the experiment using our method. (Result of the action)

There are no right or wrong answers. An important note to remember when answering behavioral interview questions is that there are no right or wrong answers. It’s often hard to tell what employers are looking for when they ask behavioral interview questions. Take our example about conflict resolution. You might think the interviewer is looking for a certain textbook method of conflict resolution. But maybe the employer’s own managerial philosophy doesn’t line up with the typical conflict resolution technique. I enjoy reading a weekly feature in the New York Times called “The Corner Office [3].” They ask CEOs about leadership and what they’re looking for when interviewing a candidate for a job. Each CEO has a different rubric for what makes a good employee. So just concentrate on coming up with a concrete, truthful example that answers the question and presents you in a good light. And let the chips fall where they may.

Be honest. Don’t try to B.S. your way through a behavioral interview. If you don’t have an example for a question you’re asked, don’t try to make something up. For starters, you’ll probably get called on it with follow-up questions. But more importantly, the questions are designed to see if your skill set and personality fit with the position. If your answers aren’t what the interviewer is looking for, this position may not be the best job for you anyway, and you’d be miserable at work if you did get the job. That’s not good for anyone.

Use all your life experiences as examples for your answers. Behavioral interview questions often require you to give examples from your past work experience to answer a question. This can pose a problem for younger job candidates who haven’t held many, if any, prior jobs. To get around your lack of work experience, call on all your life experiences. Take examples from college or any volunteer organizations that you may have been a part of to answer the question.

unzip the file to your c:\ drive (it can be another drive but this is the easiest)

Put the commview.dll file you just downloaded in the folder you extracted (it's called aircrack, and if you extracted it to your c:\ drive like I said, it should be in c:\aircrack\)

Now go to the folder where you installed CommView (the program itself) and look for a file called "ca2k.dll" (default install dir is c:\program files\commview for wifi\)

Copy this file to the same folder as the commview.dll (c:\aircrack\)

OKAY, that was a whole lot! This was just to get everything ready! If you did all of this correctly you'll be able to move on to the next step!

-------------------------------------------------------------------------------------------

THE CRACKING:

Step 1:
- Open a command prompt (start > run > cmd.exe)

Step 2:
- Type the following in the command prompt:

Quote:

cd c:\aircrack\

- HIT ENTER

Step 3:
- Type the following in the same command prompt:

Quote:

airserv-ng -d commview.dll

- HIT ENTER
- You should see something like this coming up in the command prompt

Step 4:
- Open a new command prompt (LEAVE THE PREVIOUS ONE OPEN AT ALL TIMES!!)
- Type the following in the new command prompt:

Quote:

cd c:\aircrack\

- HIT ENTER

Step 5:
- Now type this in the same command prompt:

Quote:

airodump-ng 127.0.0.1:666

- HIT ENTER

Note: if you know what channel the network you want to monitor is on, you can add it to the command. I recommend this!:

Quote:

airodump-ng --channel YOURCHANNELNUMBER 127.0.0.1:666

Airodump-ng should start capturing data from the networks on the given channel now. You'll notice it isn't going fast (except if it's a big company's network or something). We are going to speed this process up!

Take a note of the following:
1: BSSID of the network you want to crack = MAC address.
2: ESSID of the network you want to crack = name of the network (example: wifi16, mynetwork, ...)
3: The MAC of the card you are using to monitor the packets

Remember the file I made bold in part 8? Well, it's obviously the same as in part 9, meaning you need to put the same filename here. The part I made green here is the filename you use to save the packets; you can choose whatever you want, but you must use this filename in the upcoming steps!

Step 11:
Now that we've got our ARP REQ packet we can start injecting! Here's how to do this.
- Go to the command prompt used in step 9
- Type in the following: