No Tricks

Tuesday, April 30, 2013

Over the years I have developed many mind maps when writing articles, and I have also used them as a way of collecting and organising information. I was a fan of Freemind, and with some effort I uploaded a few maps to the Freemind gallery. Later I switched to Freeplane, but it still did not give me a simple way to share maps. I have now taken out a membership at MindMeister, as the site can read and publish mind maps in a collection of formats. You can find about 20 or so maps here, mainly from past articles, with new ones to follow. There is a large map on Trends & Analysis from 2008, giving you some idea of how much information, and from how many sources, you need to collect to get a handle on IT Security.

Monday, April 22, 2013

I am on a mailing list from Business Insider and against my better judgement (which often comes out when the internet is involved) I followed a link to an article on The Sexiest Scientists Alive. There are 50 scientists listed, and scientist number 43 is Clio Cresswell, a mathematician at the University of Sydney, who is the author of Mathematics and Sex.

The book gained some notoriety for propounding the 12-bonk rule. Bonking is a term in Australia for having sex (usually casual sex as I recall), and Dr. Cresswell has stated that the best strategy for finding a good (sexual) partner is to bonk 12 different partners, note the best one, then keep bonking until you find someone who is better, and settle on that partner. You benchmark on a sample of 12 partners, discard them, then take the next best that comes along. Cresswell reports that this strategy gives you a 75% chance of finding a good mate. So it's not foolproof, but with the confidence of mathematics, it claims to be better than any other trial-and-error approach that leaves behind a trail of discarded lovers. Of course there is more at play in finding a mate than "bonkability", as opined in the Ask Sam column of the Sydney Morning Herald for example.

The result caught my eye as I was recently reviewing some statistical problems, and I surmised that the 12-bonk rule sounded similar to the secretary problem, a classic problem in probability. The secretary problem (which by now should be upgraded to at least the executive assistant problem, or simply the candidate problem) asks for the best strategy to select a secretary for a position where there is a collection of candidates and you get one interview with each, upon which you must either hire the candidate or move on to the next one. It is assumed that the market is competitive and that you will not be able to return to a rejected candidate, as they will have found employment elsewhere.

The optimal strategy here is to interview 37% of the possible candidates, make a note of the best one, and then keep interviewing until you find a candidate that is better than your previous best, and choose that candidate as the one to employ. So if you have 100 candidates, interview the first 37, note the best, then keep interviewing until you find someone better and hire them. The graph below plots the probability of finding the best candidate using this strategy, as the percentage k of candidates interviewed and refused increases.

Here 37 is a double winner, in that the point marked by the dashed lines indicates that the optimal approach is to reject the first 37% of candidates, and doing so yields the best candidate as the next best choice 37% of the time. This magic 37% is derived from 1/e ≈ 0.37, where e is the base of the natural logarithm.
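The 37% figure is easy to check with a quick Monte Carlo simulation (a sketch of my own, not from any of the sources above): reject the first 37 of 100 randomly ordered candidates, then take the next one who beats that benchmark.

```python
import random

def secretary_trial(n=100, k=37):
    """One run of the secretary rule: reject the first k candidates,
    then hire the first candidate better than the best seen so far."""
    ranks = list(range(1, n + 1))   # 1 = worst, n = best
    random.shuffle(ranks)
    best_seen = max(ranks[:k])
    for r in ranks[k:]:
        if r > best_seen:
            return r == n           # did we hire the overall best?
    return False                    # no one better ever showed up

trials = 100_000
wins = sum(secretary_trial() for _ in range(trials))
print(f"P(best candidate) ~ {wins / trials:.3f}")   # close to 1/e
```

Running this gives a success probability hovering around 0.37, matching the dashed lines on the graph.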

I just downloaded the e-book version of Mathematics and Sex and took a quick look at the 12-bonk section, and it seems that Cresswell's discussion is based on the work of Peter Todd in his paper Searching for the Next Best Mate. Todd looks at simpler heuristics to find a mate than applying the 37% rule, which he notes has the following drawbacks in practice. If we assume a sample of 100 people where they can be rated uniquely on a scale from 1 to 100, then when applying the 37% rule

On average, 37 additional people need to be interviewed (or bonked) to find the next best beyond the best found in the initial 37 people, for an average total of 74 people being considered from the 100.

On average, the best person found has rank 82, where 100 is the best on the scale. The 37% rule finds the best person 37% of the time, but averaging the outcome over the remaining 63% of cases lowers the result by about 20%.
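Both of Todd's figures can be checked with a small simulation (my own sketch, assuming 100 uniquely ranked people and the 37% cutoff; if no one beats the benchmark you are stuck with the last person):

```python
import random

def run_37_rule(n=100, k=37):
    """Apply the 37% rule to a random ordering of n uniquely ranked
    people; return (total people considered, rank of final choice)."""
    ranks = list(range(1, n + 1))   # 1 = worst, n = best
    random.shuffle(ranks)
    best_seen = max(ranks[:k])
    for i, r in enumerate(ranks[k:], start=k + 1):
        if r > best_seen:
            return i, r
    return n, ranks[-1]   # reached the end: take the last person

results = [run_37_rule() for _ in range(50_000)]
avg_total = sum(t for t, _ in results) / len(results)
avg_rank = sum(r for _, r in results) / len(results)
print(f"average people considered: {avg_total:.1f}")  # Todd reports ~74
print(f"average rank of choice:    {avg_rank:.1f}")   # Todd reports ~82
```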

Todd decided to explore other decision rules that perform better than the 37% rule on some criteria, and that more closely match our observed mate-finding behaviour. It is unlikely that anyone has the time and (emotional) energy to engage with 37% of all their potential mates, which could easily run into the thousands. Todd's computer models found that if you engage 12 people from a mating population of 1000, then take the next best, you are highly likely to end up with someone in the top 25% of the population. I cannot quite tell from Todd's graph how many people must be engaged in total, but it is seemingly around 30 or so (50 at the outside).
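Here is a sketch of Todd's heuristic as I understand it from the description above: sample 12 from a population of 1000, then take the next person who beats the best of the sample. The threshold and population size come from the post; the rest is my assumption.

```python
import random

def try_a_dozen(n=1000, sample=12):
    """Benchmark on the best of the first `sample` people, then take
    the next person who beats that benchmark (or the last person if
    no one does). Returns the rank of the final choice (n = best)."""
    ranks = list(range(1, n + 1))
    random.shuffle(ranks)
    benchmark = max(ranks[:sample])
    for r in ranks[sample:]:
        if r > benchmark:
            return r
    return ranks[-1]

trials = 20_000
top_quartile = sum(try_a_dozen() > 750 for _ in range(trials)) / trials
print(f"P(choice in top 25%) ~ {top_quartile:.2f}")
```

In my runs the chance of landing in the top quartile comes out well above 90%, consistent with Todd's "highly likely" claim.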

So this was the genesis of the 12-bonk rule, and I will read Cresswell's book a bit more closely to see if she has teased out any further details or conclusions. A very quick search of the internet on the number of sexual partners seemed to indicate that 12 is on the high side for most Western men and women - actually more like half of that, after discarding "outliers". A further potential glitch in the 12-bonk rule is that it assumes that when you have found your post-12-bonk lover, he or she will accept your overtures, and of course you cannot be certain of that. I am sure that someone is working on the mathematics of unrequited love.

Wednesday, February 6, 2013

Just a short note to say that the number of visitors to this blog has just passed 100,000. I had a few posts in 2007, a few more in 2008, and then picked up from there to almost 300 posts by now. I have been mostly absent of late (meaning the last year or so) for personal reasons, but I hope to pick up again here this year. Thank you for all the visits.

Monday, May 28, 2012

Back in 2009 I posted on the risk of the GPS satellite positioning system degrading over the next few years, both in terms of coverage and accuracy, due to a decrease in the number of operational satellites. This risk was the main finding of an audit performed by the Government Accountability Office (GAO), where Monte Carlo simulations predicted that the number of operational satellites would fall below the threshold required to provide positioning at agreed service levels. In short, too many satellites that were approaching, or had passed, their expected operational life were being relied upon to continue functioning in the absence of replacements. Engineers know that satellites have very finite operational lifetimes, and at some point they will simply stop working and start drifting.

And one significant satellite did just that last month, as reported by the Economist for example. The satellite in question was Envisat, one without GPS responsibilities thankfully, launched in 2002 to provide a wide range of environmental data, which it delivered handsomely in the terabyte range. It was a critical primary source of data for scientists, providing continuous observations until contact was lost last month. The European Space Agency has formally announced that the mission of Envisat has been completed, and successfully so, after celebrating its tenth year of operation when only five were expected – both from an engineering and a funding perspective.

So Envisat was living on borrowed time, five years of it or 100% additional mission time, just as the GAO report on the GPS satellite constellation was asserting. The Economist article goes on to name some culprits in the case of Envisat, with governments being allocated the lion’s share of the blame due to lack of funding. Both NASA and ESA are unwilling to shore up their Earth-observation programs without additional government guarantees.

There is another risk beyond the loss of service provided by Envisat or the GPS satellites, and that is the additional space debris created by these satellites once they stop functioning. It is estimated that Envisat will orbit the Earth for the next 150 years before being drawn down into the atmosphere. During this time it will be at risk of colliding with existing space debris, breaking into smaller pieces upon impact and producing even finer debris. This is known as the Kessler Syndrome, proposed by NASA scientist Donald J. Kessler in 1978, who commented on the Envisat demise as follows

It seems ironic that a satellite intended to monitor the Earth’s environment is at risk from the space environment and is likely to become a major contributor to the debris environment.

Orbital debris, and the collisions that may result from its presence, are a significant risk for NASA. There is a 180-page report on this topic which, apart from the specific subject matter, contains many useful risk principles and guidelines.

Tuesday, May 22, 2012

Business Insider recently reported that Chrome is now the number one, or near number one, browser of choice, and that its popularity has come at the expense of IE, as shown in the chart below

The data set is based on statistics collected by StatCounter, and is probably not reliable for specific figures but sufficiently reliable for showing trends – in this case, that Chrome is stealing market share mainly from IE and somewhat from Firefox. In any case, a significant amount of internet traffic is now being funneled through the Chrome security model. The previous browser prediction that I posted on, that Firefox would overtake IE by Christmas 2012, agrees quite well with the data set above.

Friday, October 7, 2011

You can now order the new Block Cipher Companion book from Tesco’s, just published this month. I have seen an earlier draft and the text is very detailed and comprehensive, as you would expect from authors of this caliber.

Tuesday, October 4, 2011

I recently posted about the reads on my Scribd collection, and one of the most frequently read is the master’s thesis by the founder of Xobni (inbox spelt backwards) called How to Organize Email. There is a new version of this software called Smartr for Gmail and you can watch a video on its features.

Sunday, October 2, 2011

I have uploaded about 200 documents to Scribd over the last few years and the number of reads has just passed 150,000. You can see the categories here. The top 5 documents, each with over 3,000 reads, are

Thursday, September 29, 2011

Monday, September 26, 2011

Don’t ask me why but a lot of SPAM has accrued, and keeps accruing, at this May 2009 post on SHA-1. Apart from the common penis enlargement references, some of the other SPAM is quite long and seems to be playing on some quirk of SEO. Fine.

Sunday, September 25, 2011

Thursday, September 22, 2011

This is a nice presentation on enterprise key management issues from Anthony Stieber, given at the 2nd IEEE Key Management Summit (KMS 2010). The main message is that key management is tricky and you shouldn’t roll your own. By the way, if you are looking for examples of PowerPoint that breaks all the rules of good presentations, then you will find them here.

Also there is a very polished and informative presentation from Chris Kostick of E & Y on an enterprise key management maturity model, and below is a comprehensive diagram on the life-cycle management of keys.

I am currently in-between positions, somewhat happily, and am casting my net of interest a bit wider than my traditional roles in IT Security and Risk. One position that caught my eye, from a global reinsurer in town, was the role of Earthquake Expert within their Natural Catastrophe department (or Nat Cat in insurance lingo). I really don’t have any specific background in this area, but I sometimes entertain the idea that I can transfer hard-learnt crypto math skills into a numerate role like this one, which calls for extensive modeling and prediction. You might also think that this would be a nice and cozy niche area to ply your trade as a specialist, holding something of a privileged position.

Well, I was disabused of any such notion this week when I read that six Italian scientists and a former government official are being put on trial for the alleged manslaughter of the 309 people who died in the 2009 L'Aquila earthquake in Italy.

The seven defendants were members of a government panel, called the Serious Risks Commission (seriously), who were asked to give an opinion (or risk statement) on the likelihood that L'Aquila would be struck by a major earthquake, based on an analysis of the smaller tremors that the city was experiencing over the previous few months. The panel verdict delivered in March stated that there was "no reason to believe that a series of low-level tremors was a precursor to a larger event". A week later the city suffered an earthquake of magnitude 6.3 on the Richter Scale, denoting a “strong quake”.

The crux of the case against the scientists is that they did not predict the strong quake coming to L'Aquila in time to allow a proper evacuation of its inhabitants. The defense rebuttal is simply that such a prediction is impossible, and that they cannot be held accountable for this unreasonable expectation; the scientists cannot be expected to function as a reliable advance warning system. The international scientific community has weighed in to support the defendants with a one-page letter from the American Association for the Advancement of Science, which stated that there is no reliable scientific process for earthquake prediction, and that the scientists should not be treated as criminals for adhering to the accepted practices of their field.

Recently people were evacuated from New York City as a precaution against the impact of Hurricane Irene. The hurricane passed by New York causing far less damage than expected, and yet there were still complaints from residents about being asked to leave their homes “unnecessarily”. It seems that the authorities cannot win in these matters unless they can predict the future accurately.

Wednesday, September 14, 2011

Every now and again I run this blog through the free Website Grader tool, which measures your site on a variety of criteria, hoping to lure you into a more thorough paid analysis. The tool used to report a PageRank value, and No Tricks seemed to be stuck at 3 for quite a few years. The site now uses its own page ranking metric, which reported a value higher than 3. I was overjoyed, and eagerly confirmed that the “true” PageRank metric had also increased from 3 to 4, representing some form of “exponential” improvement since the scale is logarithmic. I can now claim that the No Tricks site has gone from being of “low importance” to being of “medium importance”. Fine, I’ll take it.

Incidentally, I wrote a short introduction to the mathematics of PageRank a few years back, with a security spin.
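For readers who haven't seen it, the core of PageRank is just a power iteration on the link graph. Here is a minimal toy sketch of my own (not taken from that introduction), with the standard 0.85 damping factor:

```python
def pagerank(links, d=0.85, iters=50):
    """Power iteration for PageRank on a dict {page: [outlinks]}.
    d is the damping factor; dangling pages spread rank uniformly."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = d * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:   # dangling page: distribute its rank everywhere
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

# toy web: A and C both point at B, B points back at A
print(pagerank({"A": ["B"], "B": ["A"], "C": ["B"]}))
```

In this toy graph B ends up on top (everyone links to it), A second (it gets B's rank), and C last (no inbound links), which is the intuition behind the "importance" scale above.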