Derek Williams


Category Archives: Infosec

Although the Heartbleed data leak vulnerability is as old as OpenSSL 1.0.1's heartbeats (over two years), it has only now risen to instant infamy. First, it took us all a while to upgrade OpenSSL to the 1.0.1 branch and, second, the bug wasn't publicized until this week. So now that we have a perfect storm of ubiquity and fame, the internet will be flooded with hackers scanning sites and running off with all the data they can grab.

Kudos to the Carnegie Mellon alum for the closest-fit solution to xkcd's externally-controlled April Fools comic and Skein hash collision contest (always read the alt text). I was far too hardware-deprived to be nerd-sniped by this one, but there were plenty of others who jumped right on it, all motivated by challenge rather than "money."

That’s encouraging because it seems information theory isn’t taught much anymore (my own alma mater has long since dropped “ICS” for “CS”). Although we now need it most (in our big data and security-starved era), our collective entropy-sophy has atrophied.

In areas like security policy and algorithm design, the cold reality of the pigeonhole principle is too often forgotten. We often regard hashes as magic, forgetting they're just bit-twiddling mapping functions and that when the domain is bigger than the range, there will be collisions. Simple truths like this are ignored in spots ranging from the Xbox to ReiserFS.

The winner of xkcd's contest was still 384 bits shy of a total collision, but every imperfect hash has plenty of clashes. So be careful out there: winning the battle between bit length and processing power only gets harder as GPUs and quantum computers mature.

BTW, it seems Wikipedia got enough donations from the effort to be good sports about the xkcd-hacking.

When it comes to consumer technologies, we in the US often let the rest of the developed world "leap frog" us, frequently with our own innovations. The main culprits are typically our size and social adoption curves. When you have an installed base of familiar and comfortable (but old) technologies numbering in the hundreds of millions, transition takes a while. So we're stuck with broad use of anachronistic things like CDMA cell phone networks, Windows XP, checks, and skimmable mag stripe credit cards. In payments, where adoption is key, it often takes significant financial and regulatory incentives to bring in the new.

As card fraud escalates, US payment networks are stepping up incentives to migrate to chip-embedded credit and debit cards using the Europay-Mastercard-Visa (EMV) standard. For example, Visa's new October 2015 fraud liability shift (from issuer to merchant) for non-EMV transactions provides the looming punitive "stick," while their recently-announced common debit solution and Technology Innovation Program (TIP) provide some "carrots." But that's all "network push" with little "consumer pull." Hopefully, as more EMV cards roll out in the US, consumers will value the extra security, and competitive pressure will motivate issuers to send out those new cards quickly. EMV doesn't solve all card fraud problems, but it's a step worth taking. The costs of fraud affect us all, and it's time we caught up with the rest of the world.

Since security is king in my corp-rat world, standards dictate that my public web services be accessed via mutual authentication SSL. The extra steps this handshake requires can be tedious: exchanging certs, building keystores, configuring connections, updating encryption JARs, etc. So when helping developers of a third party app call in, it’s useful to provide a standard tool as a non-proprietary point of reference.

This week I decided to use soapUI to demonstrate calls into my web services over two-way SSL. The last time I did something like this, I used keytool and openssl to build keystores and convert key formats. But this go ’round I stumbled across this most excellent post which recommends the user-friendly Portecle tool, and steps through the soapUI setup.

Just a few tips to add:

SoapUI’s GUI-accessible logs (soapUI log, http log, SSL Info, etc.) are helpful for diagnosing common problems, but sometimes you have to view content in bin\soapui-errors.log and error.log. Take a peek there if other diags aren’t helpful.

SoapUI doesn't show full details of the server/client key exchange. You can get more detailed traces with a simple curl -v or curl --trace; for example:
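A sketch of both invocations (the endpoint URL and PEM file names are placeholders, and the calls are guarded so the sketch runs safely even without real certs in place):

```shell
URL="https://example.com/services/MyService?wsdl"   # placeholder endpoint

# Only attempt the calls if a client cert is actually present.
if [ -f client.pem ]; then
  # -v prints handshake detail: certificate chain, negotiated cipher, etc.
  curl -v --cert client.pem --key client-key.pem --cacert server-ca.pem "$URL"

  # --trace writes a full wire-level hex dump of the exchange to a file.
  curl --trace ssl-trace.txt --cert client.pem --key client-key.pem \
       --cacert server-ca.pem "$URL"
fi
```

The -v output is usually enough to spot a missing CA cert or a rejected client cert; the --trace file is for the truly stubborn cases.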

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

This week’s challenges ran the gamut, but there’s probably not much broad interest in consolidated posting for store-level no advice chargebacks, image format and compression conversion, SQLs with decode(), 798 NACHA addenda, or many of the other crazy things that came up. So I’ll stick to the web security vein with a CSRF detector I built.

Sea Surf

If other protections (like XSS) are in place, meaningful Cross-Site Request Forgery (CSRF) attacks are hard to pull off. But that usually doesn’t stop the black hats from trying, or the white hats from insisting you specifically address it.

The basic approach to preventing CSRF (“sea surf”) is to insert a synchronizer token on generated pages and compare it to a session-stored value on subsequent incoming requests. There are some pre-packaged CSRF protectors available, but many are incomplete while others are bloated or fragile. I wanted CSRF detection that was:

I also wanted to include double submit protection, without having to add another filter (certainly no PRG filters; POSTs must be POSTs). Here's the gist of it.

First, we need to insert a token. I could leverage the fact that nearly all of our JSPs already included a common JSPF file, so I just added to that. The @include wasn’t always inside a form so I added the hidden input field via JavaScript (setToken). I used a bean to keep the JSPF as slim as possible.

I didn't want to modify all those $.ajax calls to pass the token, so the ajaxSend handler does that. The token arrives from AJAX calls in the request header, and from form submits as a request value (from the hidden input field); that gives the benefit of being able to distinguish them. You could use a separate token for each if you'd like.

The TokenUtil bean is simple, just providing the link to the CSRFDetector.

A servlet filter (doFilter) calls CSRFDetector to validate incoming requests and return a simple error string if invalid. You can limit this to only validating POSTs with parameters, or extend it to other requests as needed. The validation goes like this:
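As a sketch, the core comparison inside such a filter can be as small as this (the class and method names here are illustrative, not the actual CSRFDetector API):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Core of a CSRF token check: the token sent with the request (header for
// AJAX, hidden field for form posts) must match the one held in the session.
public class CsrfCheck {
    // Returns null if valid, or a simple error string if not.
    public static String validate(String sessionToken, String requestToken) {
        if (sessionToken == null || requestToken == null) {
            return "Missing CSRF token";
        }
        boolean match = MessageDigest.isEqual(   // constant-time comparison
                sessionToken.getBytes(StandardCharsets.UTF_8),
                requestToken.getBytes(StandardCharsets.UTF_8));
        return match ? null : "Invalid CSRF token";
    }
}
```

Using MessageDigest.isEqual keeps the comparison constant-time, which avoids leaking token prefixes through timing differences.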

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

You know the old saying, “build a man a fire and he’s warm for a day; set a man on fire, and he’s warm for the rest of his life.” Or something like that. I’ve been asked about tool preferences and development approaches lately, so this week’s post focuses on tools and strategies.

JRebel

If you’re sick of JVM hot-swap error messages and having to redeploy for nearly every change (who isn’t?), run, do not walk, to ZeroTurnaround‘s site and get JRebel. I gave up on an early trial last year, but picked it up again with the latest version a few weeks ago. This thing is so essential, it should be part of the Eclipse base.

My DB2 tool of choice depends on what I’m doing: designing, programming, tuning, administering, or monitoring. There is no “one tool that rules them all,” but my favorites have included TOAD, Eclipse DTP, MyEclipse Database Tools, Spotlight, db2top, db2mon, some custom tools I wrote, and the plain old commandline.

I never liked IBM’s standard GUI tools like Control Center and Command Editor; they’re just too slow and awkward. With the advent of DB2 10, IBM is finally discontinuing Control Center, replacing it with Data Studio 3.1, the grown-up version of the Optim tools and old Eclipse plugins.

I recently switched from a combination of tools to primarily using Data Studio. Having yet another Eclipse workspace open does tax memory a bit, but it's worth it to get Data Studio's feature richness. Not only do I get the basics (navigation, SQL editors, table browsing and editing), I can also do explains, tuning, and administration tasks quickly from the same tool. Capability-wise, it's like "TOAD meets DTP," and it's the closest thing yet to that "one DB2 tool."

Standardized Configuration

For team development, I’m a fan of preloaded images and workspaces. That is, create a standard workspace that other developers can just pick up, update from the VCS, and start developing. It spares everyone from having to repeat setup steps, or debug configuration issues due to a missed setting somewhere. Alongside this, everybody uses the same directory structures and naming conventions. Yes, “convention over configuration.”

But with the flexibility of today’s IDEs, this has become a lost art in many shops. Developers give in to the lure of customization and go their own ways. But is that worth the resulting lost time and fat manual “setup documents?”

Cloud-based IDEs promise quick start-up and common workspaces, but you don’t have to move development environments to the cloud to get that. Simply follow a common directory structure and build a ready-to-use Eclipse workspace for all team members to grab and go.

Josh is taking it to extremes, but he does have a point: developers’ lives are often too hectic and too distracted. This “do more with less” economy means multiple projects and responsibilities and the unending tyranny of the urgent. Yet we need blocks of focused time to be productive, separated by meaningful breaks for recovery, reflection, and “strategerizing.” It’s like fartlek training: those speed sprints are counterproductive without recovery paces in between. Prior generations of programmers had “smoke breaks;” we need equivalent times away from the desk to walk away and reflect, and then come back with new ideas and approaches.

I’ll be following to see if these experiments yield working solutions, and if Josh can stay employed. You may want to follow him as well.

Be > XSS

As far as I know, there's no one whose middle name is <script>transferFunds()</script>. But does your web site know that?

It’s surprising how prevalent cross-site scripting (XSS) attacks are, even after a long history and established preventions. Even large sites like Facebook and Twitter have been victimized, embarrassing them and their users. The general solution approach is simple: validate your inputs and escape your outputs. And open source libraries like ESAPI, StringEscapeUtils, and AntiSamy provide ready assistance.

But misses often aren’t due to systematic neglect, rather they’re caused by small defects and oversights. All it takes is one missed input validation or one missed output-encode to create a hole. 99% secure isn’t good enough.

With that in mind, I coded a servlet filter to reject post parameters with certain “blacklist” characters like < and >. “White list” input validation is better than a blacklist, but a filter is a last line of defense against places where server-side input validation may have been missed. It’s a quick and simple solution if your site doesn’t have to accept these symbols.
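A minimal sketch of that kind of check (the character set and names are mine; a real filter would apply this to every POST parameter and reject the request on a miss):

```java
// Last-line-of-defense blacklist: reject any parameter value containing
// characters commonly used for script injection.
public class XssBlacklist {
    private static final String BLACKLIST = "<>";   // extend as your site allows

    public static boolean isClean(String value) {
        if (value == null) return true;             // nothing to inject
        for (char c : BLACKLIST.toCharArray()) {
            if (value.indexOf(c) >= 0) return false;
        }
        return true;
    }
}
```

Again, this complements (not replaces) white-list validation and output encoding; it just closes the holes the other layers miss.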

I’m hopeful that one day we’ll have a comprehensive open source framework that we can simply drop in to protect against most web site vulnerabilities without all the custom coding and configuration that existing frameworks require. In the mean time, just say no to special characters you don’t really need.

Comments Off

On that note, I’ve turned off comments for this blog. Nearly all real feedback comes via emails anyway, and I’m tired of the flood of spam comments that come during “comments open” intervals. Most spam comments are just cross-links to boost page rank, but I also get some desperate hack attempts. Either way, it’s time-consuming to reject them all, so I’m turning comments off completely. To send feedback, please email me.

Mallet-wielding children in arcades and amusement parks could be holding their weapons over mole holes in vain, waiting for a weasel that just won't pop. If you run across a Whac-A-Mole arcade cabinet that becomes a dud out of nowhere, it might not be the arcade's fault. A game programmer in Florida has been running a scheme since 2008, infecting Whac-A-Moles with a computer virus.

It’s Friday, and time again for the Friday Fragment: our weekly programming-related puzzle.

This Week’s Fragment

After last week’s commemorative fragment, we’ll resume our game bots. Let’s make our bots more interesting by letting us play instead of just watching:

Write code to let man compete against machine (a game bot) in rock-paper-scissors, tic-tac-toe, or Reversi (your choice). Present the game board on a web page, let the human player make the next move, and then call the game bot for its move. Continue this until one side wins.

To play along, provide either the URL or the code for the solution. You can post it as a comment or send it via email. If you’d like, you can build atop the bots and/or source code at http://derekwilliams.us/bots.

I offered a hint (“try some googling”) and a short-cut (“just offer suggestions on how to go about solving it”). And for good reason: this is a somewhat famous unsolved cryptogram. It’s Part 4 of Kryptos, Jim Sanborn’s sculpture at the CIA headquarters. I posted it last week in celebration of its 20th anniversary, and the recent release of a new clue.

Many experts and other sorts have worked on solving Part 4, in search of fame or just a good challenge. I think it utilizes a one-time pad or a long key, perhaps along with the Vigenère Table found on the sculpture. The key or pad may be located on-site; for example, in the underground utility tunnel. Time will tell.

If you think you can crack it, don’t just tell me: send your solution to Sanborn for verification.

You can provide the solution or just offer suggestions on how to go about solving it. For example, I can’t solve it, but based on its background (try some googling), I have ideas about how it might be solved. To “play along,” post your response as a comment or send it via email.

I’ve seen an uptick lately in phishing emails that do a much better job of replicating legitimate ones. For example, I’ve received several that look like Amazon orders or LinkedIn reminders. In all cases, the email content is a dead ringer for kosher ones except that the content (book titles, names, etc.) is unfamiliar, and if I mouse over the embedded links, the target URL is fishy indeed. And therein lies the purpose: to get unsuspecting recipients to click one of those links, visit its site, and receive malware.

Identifying and stopping these emails was easy. Since they arrived at my Gmail account, I created some quick filters to corral them. On closer inspection, I found that they were sent to a couple of my forwarded email addresses, so I simply turned off those forwards at my domain host. And that provided some insight into the source: one was an old address I had given out only to InformationWeek. Have they been selling or otherwise disclosing my email address?

Google’s anti-phishing initiatives have had mixed success. Their phishing filter has been criticized for too many false positives, their DKIM initiatives have had too little uptake, and their “authentication icon for verified senders” is much too passive-aggressive. But this has perhaps a simple solution: flag any email with embedded links where the target URL’s domain differs from the sender’s domain. Perhaps this could be done with some creative filters or a Gmail gadget. If these things come back, I’ll give it a shot.

Hats off to local SecureWorks for detecting and thwarting the massive BigBoss Russian check counterfeiting ring. Their Counter Threat Unit (yes, 24 fans, there really is a CTU) uncovered an operation used to create over $9 million in counterfeit checks over the past year.

It was a sophisticated attack utilizing ZeuS trojans, SQL injection, a couple thousand infected computers, and a VPN to transmit stolen data. The perps stole over 200,000 check images from archive services and used these to create counterfeit checks. They then overnighted these checks to U.S. recipients (drawn from a stolen database of job seekers) who were to deposit the checks and wire some of the funds back to them. These unwitting money mules (who thought they were job candidates) did become suspicious, so the plan was apparently not very successful.

Compared with credit card fraud, widespread check fraud is less common and is typically easier to resolve. However, check authorization systems are incomplete, so prevention is more difficult. But solutions are well within reach, such as a secure national shared database of positive pay, authorization, negative list, and “stop” information that could be accessible to everyone, not just large commercial customers. This could plug one of the last big security holes in our bank accounts.

Reality is firmly rooted: we don’t quite yet have quantum computers, nor have we really proven that P != NP. Yet while cracking most modern encryption and hash algorithms falls into the “not impossible, just highly improbable” category, academic weaknesses do get attention. So much so that the old SHA-1 and MD5 hashing mainstays are no longer considered acceptable. Soon enough, SHA-2 will also be as uncool as a rickroll.

Just in the nick of time, the NIST is narrowing the list of candidates for the new SHA-3 algorithm. The second round just finished, and it's down to 14 candidates, with the winner to be chosen before the Mayan calendar ends in 2012. It should be a good contest, as long as FIFA referees aren't involved.

This is exciting stuff, and I'm sure you'll want to play along. Just use your jailbroken, Kraken-proofed cell phone to text your favorite to 2600. I'm pulling for Skein, mainly because of the cool name and celebrity status.

Today, a friend reported that one of the apps I provide as a community service was down. Its WebCalendar component complained that magic_quotes_gpc was no longer enabled, which I quickly confirmed by dropping in a phpinfo() call. The remedy was also quick and easy: add a local php.ini with:

magic_quotes_gpc = On

This automatically adds slashes to escape quotes and other characters in GET, POST, and Cookie strings (hence “gpc”). The PHP code then removes these via stripslashes and similar techniques.

Magic_quotes_gpc is no longer considered a good way to guard against SQL injection attacks, so the PHP Security Consortium and others now recommend against it. I suspect this is why my hosting service changed their global setting to off, but security-wise, that was a step in the wrong direction.

Many PHP apps that support magic quotes are coded to work even if it’s turned off, and the get_magic_quotes_gpc() ? stripslashes template for doing this is seemingly everywhere. Fortunately, WebCalendar checks for this, but many apps don’t. Disabling magic quotes was probably done to force apps to change, but it’s more likely apps will continue to work and developers and admins won’t realize that they are suddenly far more susceptible to injection attacks. A better approach would have been to just let it die with the 5.3 upgrade.

There was a bit more dialog today about impersonating the DB2 instance owner. It’s a quick way to get around controls that newer versions of DB2 and tighter Windows and network security have brought us. The extra step is annoying, but trying to convince the system you don’t need it is often worse.

Impersonation and elevation have become the “new normal” these days. I’ve grown so accustomed to opening “run as administrator” shells in UAC Windows (7/Vista/2008), typing runas commands in XP, and using sudo in Ubuntu that these have become second nature. And that level of user acceptance usually translates into approval to expand the practice, rather than a mandate to remove the inconvenience. Enhancing security usually includes putting up new barriers.

A former co-worker has often said that what we really need is software that determines whether a user’s intentions are honorable. Perhaps then security would become seamless. But it’s more likely that its implementation would also test our manners and fading patience.

It’s Friday, and time again for a new Fragment: my weekly programming-related puzzle.

This Week’s Fragment

This week’s puzzle is a new type of challenge: SQL coding. And since taxes and health care reform are all the buzz this April 16, that will be our backdrop.

Owen Moore, fearless programmer for the IRS, would like to help small businesses cope with certain health care reform provisions. In particular, he’ll provide free reports to companies showing who to fire to get below the 50-employee limit by 2014. Fortunately, he has an Employee database table populated from recent tax filings. It has, among other fields, EmployeeID (SSN, char(9)), CompanyID (EIN, char(9)), and HireDate (a date). Since last-in-first-out seems reasonable, his “pink slip pick list” will show all but the 49 most senior employees in each company. That is,

select companyid, employeeid, hiredate
from employee e
where …
order by companyid, hiredate

Can you provide the missing fragment (fill in the “…”) to complete Owen’s SQL?

If you want to “play along”, post the SQL as a comment or send it via email. To avoid “spoilers”, simply don’t expand comments for this post. Owen will be owing you a big favor.

Last Week’s Fragment – Solution

Last week's puzzle required helping Mark Dupe recover his forgotten Linux password. We had his /etc/shadow entry and knew that he was a "roller": someone whose password is simply his user ID followed by a number (two digits in this case).

The /etc/shadow password format is well documented; even Wikipedia covers it. The “$6” at the beginning indicates that it uses SHA-512 hashing with the salt following. The unix crypt(3) function makes easy work of this, especially since there are wrappers in just about every language imaginable. I chose to code it in PHP:
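That PHP listing didn't survive here, but the brute force is easy to sketch in shell with openssl passwd (the user name and shadow hash below are stand-ins generated on the spot, since the real entry isn't reproduced; substitute the actual one):

```shell
# Brute-force a "rolled" password (user ID plus two digits) against a $6$
# (SHA-512 crypt) shadow hash. Requires openssl 1.1.1+ for the -6 option.
user="markdupe"
shadow_hash=$(openssl passwd -6 -salt saltsalt "${user}42")   # stand-in entry
salt=$(printf '%s' "$shadow_hash" | cut -d'$' -f3)            # field 3 of $6$salt$hash

for n in $(seq -w 0 99); do
  guess="${user}${n}"
  if [ "$(openssl passwd -6 -salt "$salt" "$guess")" = "$shadow_hash" ]; then
    echo "Password found: $guess"
    break
  fi
done
```

openssl passwd -6 uses the same SHA-512 crypt scheme as glibc, so hashing each guess with the entry's salt and comparing is all it takes.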

My son, Spencer, coded in Python, and posted his solution (see comments in last week’s post) shortly after he read the fragment. Now you see what I’m up against: this guy can dump your LM hashes and dictionary-crack your Windows passwords in no time.

I commend readers for not just posting “sudo apt-get install john”. Yes, there are tons of programs like John the Ripper for quickly cracking passwords, with no coding required. This reinforces the need to choose strong passwords (likely not in any password dictionary) and use different passwords for different sites. Frequently changing passwords is really no help, as we learned this week.

I had a good lunch today with a friend who wanted to quickly set up simple (yet strong) authentication on a Tomcat web server using his own login page. Since forms authentication is built into all J2EE web servers (and ASP .NET servers, for that matter), it’s quite easy.

In summary, the steps for Tomcat are:

Add security-constraints to WEB-INF/web.xml to specify protected resources / folders. Also include auth-constraints and security-roles for access.

Create user and user role tables and configure the JDBC realm in server.xml. Or, simply start with defining users directly in tomcat-users.xml; you can always add the database later.

Create the login and login error JSPs, pointing to them from the login-config section of web.xml. Remember to include the required form element names in the JSP (j_security_check, j_username, j_password, etc.). Also note that these pages can’t use style sheets and other external files, so you have to (redundantly) embed style information directly into the JSP.
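A minimal web.xml sketch covering the security-constraint and login-config pieces from the steps above (the URL patterns, page paths, and role name are illustrative):

```xml
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Protected</web-resource-name>
    <url-pattern>/secure/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>user</role-name>
  </auth-constraint>
</security-constraint>

<security-role>
  <role-name>user</role-name>
</security-role>

<login-config>
  <auth-method>FORM</auth-method>
  <form-login-config>
    <form-login-page>/login.jsp</form-login-page>
    <form-error-page>/login-error.jsp</form-error-page>
  </form-login-config>
</login-config>
```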

By default, none of this traffic (including login passwords) is encrypted, so it should only be used with SSL/TLS encryption in place. That means installing a digital certificate, which is also fairly easy. That is:

Purchase an SSL certificate. For initial testing, you can create a self-signed cert using keytool, included with JSSE.

If the server is local, re-start Tomcat, open your browser, and access your site using the https://localhost:8443 URL. Look for the browser cues for a secure site: padlock icon, green or yellow address bar, etc.

You may eventually switch to more sophisticated methods, like integrating with external security systems for single sign on (e.g., using SAML). But the simple steps above will get you going quickly with basic, solid authentication.

Since it’s spring break (and I’ll be kayaking tomorrow), this week’s Friday Fragment comes to you a day early.

This Week’s Fragment

Mark Dupe has forgotten his Linux password. But he has done two (insecure) things that should make it easy to hack his way back in: 1) his password is always his user name, followed by a two-digit number (he “rolls” it), and 2) he has a readable backup of his /etc/shadow file. Here is the entry for his account:

If you want to “play along”, post the password and/or code as a comment or send it via email. To avoid “spoilers”, simply don’t expand comments for this post. Mark will thank you, at least until someone else hacks into his account.

Last Week’s Fragment – Solution

Last week’s puzzle required correcting a check routing transit number (374088048) and/or credit card number (4754 2700 7476 257x) from a torn slip of paper handwritten by Pierre Lefebvre. I received some excellent responses on this; see the comments for last week’s post.

As Joel Odom pointed out, you can plug in possible digits until you get a valid checksum. Credit card numbers use a mod 10 checksum (the Luhn formula), where you double alternate digits, add the digits of the products, and then add these to the other digits. The total should be evenly divisible by 10.

To make this an even multiple of 10, that last missing digit must be 5 (65 + 5 = 70), so the credit card number is 4754 2700 7476 2575.
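That check is easy to code; here's a sketch in Java (the class and method names are mine):

```java
// Luhn (mod 10) check: working right to left, double every second digit,
// add the digits of each product to the rest; valid if the total ends in 0.
public class Luhn {
    public static boolean isValid(String number) {
        String digits = number.replace(" ", "");
        int sum = 0;
        boolean doubleIt = false;
        for (int i = digits.length() - 1; i >= 0; i--) {
            int d = digits.charAt(i) - '0';
            if (doubleIt) {
                d *= 2;
                if (d > 9) d -= 9;   // same as summing the product's two digits
            }
            sum += d;
            doubleIt = !doubleIt;
        }
        return sum % 10 == 0;
    }
}
```

Plugging in the digits above, the total for 4754 2700 7476 2575 comes out to 70, so the check passes.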

The routing transit number was a bit trickier because there wasn’t a missing digit, and I didn’t specify exactly what was wrong. But routing transit numbers follow strict rules; for example, the first two digits have to be within certain ranges that identify the Fed district, credit unions/thrifts, special types of checks, etc. As Joe Richardson pointed out, Wikipedia is right on this one, and from their article you can determine that a valid RT can start with 3, but cannot start with 37.

To find the correct second digit (3x4088048), we use a routing/transit mod check. It's similar to the credit card algorithm, except that it uses weights: 3, 7, 1, repeating (many check old-timers know 37137137 by heart). Wikipedia and IRS publication 1346 document this, among other places. Here, too, the weighted sum across all the digits must be evenly divisible by 10. So, we have:

So the second digit should be 1, and the correct routing/transit is 314088048.
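The 3-7-1 weighted check is just as easy to sketch (names here are mine):

```java
// ABA routing/transit check: multiply the nine digits by the repeating
// weights 3, 7, 1 and require the weighted sum to be divisible by 10.
public class RoutingCheck {
    public static boolean isValid(String rt) {
        if (rt == null || !rt.matches("\\d{9}")) return false;
        int[] weights = {3, 7, 1, 3, 7, 1, 3, 7, 1};
        int sum = 0;
        for (int i = 0; i < 9; i++) {
            sum += (rt.charAt(i) - '0') * weights[i];
        }
        return sum % 10 == 0;
    }
}
```

It confirms 314088048 (weighted sum 120) and rejects the original 374088048.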

Kudos to Joe Richardson for correcting the routing/transit, and further determining that this belongs to the memorable Alamo Federal Credit Union. His Rescue 5 product can correct these and other account problems automagically. Joe also determined that the credit card number is a rebate card issued by MetaBank. Right again: it’s an old rebate card of mine that I’ve used up.

As Joe pointed out, there are good implementations of both check digit algorithms readily available on the net, in almost every language imaginable. Even Wikipedia has several, but be careful: their routing/transit code is wrong. It seems Wikipedia is continuing their tradition of posting bad code; see another example I pointed out in an encryption research paper.

One big question remains: how did Inspector Lestrade solve this so quickly in his head? Frankly, I think he was bluffing about the Visa number. That calculation is simple, but usually does require pencil and paper to get it right. We would all be very impressed, however, if he memorized all the routing/transit rules and could do weighted mod checks in his head.

Well, Lestrade had an advantage: he recognized from the name (Pierre Lefebvre) and handwriting that the second digit was a 1, not a 7. In handwritten French, ones (1s) look like sevens (7s), which is why the French put an extra line through sevens to distinguish them. From the Sherlock Holmes stories, it’s questionable whether Lestrade even understands the French language, but he certainly would have recognized the handwriting.

I got a question today from a co-worker who was painted into a corner trying to access a database he had restored on his Windows development machine. He stumbled over DB2 9.7's new security twists, such as not having dbadm authority by default. I rattled off my familiar quick fix:

db2 connect to <dbname>
db2 grant dbadm on database to <userid>

However, his default Windows user ID didn’t have secadm or sysadm authority, so that failed with an error. So, I had him impersonate the one that did:

runas /user:<adminuser> db2cmd

Repeating the grant command from this new command shell did the trick. It could have also been done with:

db2 connect to <dbname> user <adminuser> using <adminpassword>

And so it goes. No matter how refined security policies become, they can usually be circumvented with a little impersonation. For example, think of how many times we quickly and mindlessly sudo under Ubuntu. In this case, impersonation was a fast route to giving a developer the access he should have had by default anyway. Today’s technology cannot solve the impersonation problem, but sometimes we consider that more a feature than a bug.

It’s Friday, and time for a new Fragment: my weekly programming-related puzzle. For some background on Friday Fragments, see the earlier post.

This Week’s Fragment

While the code required to solve this week’s puzzle is small, it requires a little setup. I’ll do it in story form.

Little Johnny is riding along in the toddler seat at the back of Mom’s minivan, secretly playing with Dad’s new deck of cards which he is not allowed to touch. Mom gives him the long-awaited news, “we’re almost there”, and he panics: he knows he must put the cards back in the box just the way he found them. That means just like new, sorted in suit and rank order. Little Johnny’s stubby fingers aren’t very dexterous, but he knows he can sort them by laying them out in piles and then re-stacking the piles until they’re ordered. So he grabs his Toy Story 2 lunch box, which gives him room for 4 piles. He has to hurry, but fortunately he knows a way to do it in only four passes of laying out cards into piles and stacking the piles. Johnny’s a smart little toddler, because he knows which pile to put each card in for each pass to make this all work.

Are you smarter than a toddler? Can you write the code to determine the pile number for each card on each pass so that they’re all sorted in the end?

To help out, I’ll post a comment with some scaffolding code. If you want to use it, just fill in the missing line.

This is really a technique that banks have used for years to sort large numbers of checks into various orders (account number, destination, amount, etc.) on machines that have anywhere from 5 to 35 available pockets. It's called a fine sort with compression and base conversion, which sounds really fancy but actually boils down to one or two lines of code.
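For those who don't mind a spoiler, the base conversion view is this: with 4 piles and 4 passes you can order up to 4^4 = 256 items, so each pass just reads one base-4 digit of a card's target position (0 to 51), least significant first. A sketch in Java (names are mine):

```java
import java.util.ArrayList;
import java.util.List;

// LSD radix sort with 4 piles and 4 passes: on each pass, a card's pile is
// one base-4 digit of its target position; restacking the piles in order
// after every pass leaves the deck fully sorted.
public class PileSort {
    public static int pileFor(int key, int pass) {
        return (key >> (2 * pass)) & 3;   // base-4 digit for pass 0..3
    }

    public static List<Integer> sort(List<Integer> cards) {
        List<Integer> deck = new ArrayList<>(cards);
        for (int pass = 0; pass < 4; pass++) {
            List<List<Integer>> piles = new ArrayList<>();
            for (int i = 0; i < 4; i++) piles.add(new ArrayList<>());
            for (int card : deck) piles.get(pileFor(card, pass)).add(card);
            deck.clear();
            for (List<Integer> pile : piles) deck.addAll(pile);  // restack in order
        }
        return deck;
    }
}
```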

If you want to “play along”, post a solution as a comment or send it via email. To avoid “spoilers”, simply don’t expand comments for this post.

Last Week’s Fragment – Solution

Last week’s puzzle was to solve the following cryptogram:

Si spy net work, big fedjaw iog link kyxogy

This cryptogram is the dedication in the excellent book, Network Security, by Kaufman, Perlman, and Speciner. Nothing fancy here: it’s just a simple substitution cipher. So you can solve it by just sitting down with a pencil and paper and plugging at it, building up the substitution table as you go. Classic cryptanalysis starts with trying frequent letters (like the nostril combination – NSTRTL, familiar to Wheel of Fortune viewers), common patterns (ad, in, ing, ou, ur, etc.), and common words (to, the, and for are a great start for this puzzle). If you like such puzzles, try the cryptograms web site.

Like any obedient grad student, I wrote a lot of papers while recently working on my Master's degree. While most were admittedly specialized and pedantic (and probably read like they were written by SCIgen), some may accidentally have some real world relevance. Just last week, I handed out my XTEA paper to a co-worker who was foolish enough to ask.

At the risk that others might be interested, I posted a couple of the less obscure ones where I was the sole author; they are:

The Tiny Encryption Algorithm (TEA) was designed by Wheeler and Needham to be indeed “tiny” (small and simple), yet fast and cryptographically strong. In my research and experiments, I sought to gain firsthand experience to assess its relative simplicity, performance, and effectiveness. This paper reports my findings, affirms the inventors’ claims, identifies problems with incorrect implementations and cryptanalysis, and recommends some solutions.

Security measures are available to protect data communication over wireless networks in general, and IEEE 802.11 (Wi-Fi) in particular. Unfortunately, these measures are not widely used, and many of them are easily circumvented. While Wi-Fi security risks are often reported in the technical media, these are largely ignored in practice. This report explores reasons why.