Monthly Archives: June 2010

After 23 years of leadership in (what was known as) the shareware industry, and now following the current trend, the ASP has dropped “Shareware” and officially changed its name to the Association of Software Professionals.

When the ASP was formed back in 1987, it was out to promote the concept of “shareware”, a marketing method (n.b., not a type of software) where a user was able to try software before making a purchase (or not), and also to help independent developers/publishers learn how to use shareware (and other methods) to become successful.

The former goal was achieved long ago, as almost all mass market software now has a trial version available. (See my Mission Accomplished! posting from a couple of years ago.) The only skirmish remaining was that over the word “shareware” itself, but the ASP has now de facto ceded that control (for the sake of lasting peace, I suppose). One only need look at this thread at The Business of Software to observe the rampant ignorance even among (nominal) developers, so I can certainly understand declaring that a losing cause.

The latter goal is actually an ongoing mission, and the ASP (regardless of the name change) remains the single greatest resource for independent software developers and publishers. At only $100 per year, it is quite possibly the best money we have ever spent for our business.

In business, existing customers are very valuable. These are people or companies that have found your product and like what you have to offer enough to (importantly) actually purchase. They are the best source of additional purchases (and only source of software upgrades) and may provide referrals and positive word of mouth. It is easier to sell to an existing customer than to locate and establish new customers. However, if you want to take it to the extreme, and simply treat your customers as “cash cows” rather than with the proper respect, here is how you can do it.

First, it helps to be a large, unfeeling conglomerate, such as Corel, with a record of collecting brand name products and adding little (if any) real value; it is much harder to disrespect customers as a small company headed by people whose livelihoods depend on them.

Of course, there need to be cash cows, I mean customers, to exploit, so the quickest way to acquire them is to just buy a well-recognized company that already has loads, such as WinZip. Pay no attention to the fact that their single (excellent) product that created and defined a market has since become a commodity that has numerous competitors, many of them free, and whose primary functions are built into all major operating systems. It is even better if the product was widespread due to there being few incentives to purchase the original product (save abiding by the license) and a promise of free upgrades for those of us who did the right thing.

Now you have a product which is unnecessary for the vast majority of computer users, plus a list of customers who paid for the product back when it was necessary, many (if not most) more than a decade earlier, and who would reasonably expect free upgrades (should any be desired). What can you do now? I know: Spam.

Validate all of your contact addresses by sending a whole slew of messages selling products completely unrelated to the original product (except by being owned by the same stockholders). When that fails to do anything but annoy your customers… wait… no, not your customers, but the customers of the previous company… it is then time to do a new build of the product, complete with a new version number and no discernible new features.

With the “new” version, send out emails to all previous customers of the product to indicate, above all else, that the days of free upgrades are over and that they are expected not only to upgrade, but to pay the new masters (for what they already have, and probably no longer need). When that does not work, either, repeat the messages on a regular basis, all with slightly different messages (and increasing version numbers), but never forget the “give us your money” message:

“Your WinZip Software – Upgrade Available”

“Upgrade your single-user WinZip Standard license to WinZip 14 now…”

“Your WinZip software is out of date. You are currently running an older version of WinZip, and now is the time to upgrade.”

“Your WinZip software is out of date. You are currently running WinZip 6.3 Standard, and now is the time to upgrade.” [No, actually I am running WinZip 9.0 SR-1, the last free update; version 6.3 may be the last version I purchased, back in 1997.]

“Exclusively for WinZip customers: Upgrade your single-user WinZip 6.3 Standard license to WinZip 14.5 now, and save 50% or more off the new license list price.”

“Your WinZip software is out of date. You are currently running WinZip 6.3 Standard, and now is the time to upgrade.” [OK, let’s be clear: “running” is not really the case; I cannot remember the last time I actually used WinZip for anything.]

So, now you have properly alienated existing customers. Your product has gone through a number of version numbers (10.0, 11.0, 11.1, 11.2, 12.0, 12.1, 14.0, 14.5), yet the web site lists no significant feature that is not already present in the 9.0 version (from 2004), which still runs just fine, by the way. I hope it was worth it (more so than, say, actually creating value).

This sort of thoughtless approach also works nicely in other areas of business, not just software. For instance, you could be a large chain video rental store, like Blockbuster, and introduce a rent-by-mail service to take on your most significant competitor (Netflix). Offer a similar service, at a comparable price, with an added benefit that your competition cannot match: the ability to exchange a mailed rental for a store rental when you are finished. You will get lots of customers who can get titles unavailable in the retail stores by mail, can keep them as long as they want, exchange them at the retail store for a newer release, keep those as long as they want (while a new title is sent by mail), and have a constant supply of rental movies to watch. Brilliant! [seriously]

Where does one go from there, though? With a nearly unassailable product offering, and happy customers, you cannot just sit there and leave well enough alone. No, first you need to raise prices, and then email every current customer to let them know that they are “grandfathered in” to the original price, but be sure to emphasize that they are now locked in, so if they let it lapse, the new price applies. Next, change the program, so now for the same price, the mailed rentals are not sent until the in-store rentals are returned. Then, inform customers (as you failed to do last time) that now the number of retail exchanges is limited. Never, ever, consider reducing the cost to match the reduced services. (My prediction is that the next move will be to add due dates to these rentals, just to be sure that we switch to the competition.)

Anyway, there are two good examples of how to mistreat your valuable customers.

On the other hand, one could always recognize customer value in simpler ways, like abiding by agreements and promises, and not being so obvious about caring only about their money. We love our customers; they allow us to stay in business and continue to do what we truly enjoy.

On a regular basis, inexperienced Backgammon players voice opinions about how a certain computer program or game server cheats. The stochastic nature of the game lends itself to this kind of perception on the part of human beings. Generally, there are a few primary reasons for this type of belief.

First, novice players often fail to recognize the complexities of the game of Backgammon, so what they perceive as an unnatural number of “lucky rolls” are not (necessarily) due to luck, but rather due to skillful play on the part of the opponent. Expert players tend toward positions where a greater number of rolls would be considered good (i.e., “lucky”). A higher percentage of good moves tends to make the dice appear biased in one’s favor, and it is also key to good checker play.
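To see how position alone changes how many rolls look “lucky”, one can simply enumerate all 36 dice combinations. This little sketch (mine, not from any particular program) counts direct shots only, ignoring combination rolls, which is a simplification:

```python
from itertools import product

def good_roll_fraction(good_numbers):
    """Fraction of the 36 dice rolls in which at least one die
    shows a number the player wants (direct shots only; rolls
    that combine two dice to reach a number are ignored)."""
    hits = sum(1 for d1, d2 in product(range(1, 7), repeat=2)
               if d1 in good_numbers or d2 in good_numbers)
    return hits / 36

# A position where only a 6 helps: 11 of 36 rolls are "lucky".
print(good_roll_fraction({6}))        # 11/36, about 0.31
# A position where any of 4, 5, or 6 helps: 27 of 36 rolls.
print(good_roll_fraction({4, 5, 6}))  # 27/36, i.e. 0.75
```

A player who steers toward the second kind of position will appear to roll “lucky” more than twice as often, with the exact same dice.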

In many cases, players also fail to understand the nature of truly random numbers. It is often stated that, say, a certain number of doubles in a row indicates… excuse me… “proves” that the virtual dice are unfair when, in fact, a truly random number generator would have to produce any arbitrary sequence (whether or not a pattern is perceptible) given enough rolls. Of course, we are talking about pseudo-random number generators (PRNG), so they are, by their very nature, not truly random. However, one would have to do an actual study/count of the dice rolls to make any conclusion about any particular PRNG.

The reason for this need to analyze a PRNG scientifically, rather than anecdotally, seems fairly obvious. Human beings have selective memory, which means that we tend to recall things that are out of the ordinary, so a number of doubles in a row stands out, whereas a statistically identical sequence of rolls with no apparent pattern goes unreported. Likewise, a few very good (or very bad) rolls are more memorable than many run-of-the-mill rolls.

Related to this is the concept of apophenia, which is the human “experience of seeing patterns or connections in random or meaningless data.” [from Wikipedia] Our minds have evolved to recognize patterns, so we can sometimes perceive things that are not there. This is how people see images in clouds, hear music or sounds in white noise, and imagine divine imagery in oil stains or burnt toast.

All of these factors make it very easy for an average person to perceive unfairness in Backgammon software or servers (even in games against other human beings), and even trained experts can be fooled.

How experts demonstrate that Backgammon software is fair

There are a few key points that are usually made by experts when arguing that a particular Backgammon program does not cheat. First, of course, one generally describes some of the aspects of the perception problem, as listed above. In particular, reports are almost always anecdotal, so they can be dismissed quickly as having no scientific validity until somebody does an actual count and statistical analysis.

To dismiss accusations of manipulated dice (by software), the usual suggestion is to input the dice rolls manually, which most (decent) programs allow, using meticulously recorded rolls of physical dice, or else to switch to an alternative PRNG. If the results stay statistically consistent, that argues against the idea that the rolls are manipulated. Another common argument is that programs can “look ahead” to see which rolls are upcoming and make moves based on this prior knowledge; manual input of dice rolls removes this possibility as well.
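For the “actual count and statistical analysis” part, a chi-square goodness-of-fit test on the recorded faces is the standard tool. A minimal sketch (standard library only; the 11.07 critical value is the usual 5% threshold for 5 degrees of freedom):

```python
def chi_square_die(rolls):
    """Chi-square goodness-of-fit statistic for a list of die
    faces (1-6) against the uniform (fair die) expectation."""
    expected = len(rolls) / 6
    counts = [rolls.count(face) for face in range(1, 7)]
    return sum((c - expected) ** 2 / expected for c in counts)

# A perfectly balanced sample scores 0; a fair die should usually
# score below about 11.07 (5% critical value, 5 degrees of freedom).
print(chi_square_die([1, 2, 3, 4, 5, 6] * 100))  # 0.0
# A wildly biased sample scores far above the threshold.
print(chi_square_die([1] * 600))                  # 3000.0
```

Of course, one test on one session proves little either way; the point is that this is the kind of evidence that would be needed, rather than a memorable anecdote.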

Another method to test if dice rolls are being artificially manipulated is to switch sides and look for discrepancies. In other words, start a game (or save one in progress) with a particular random number seed and play the rest of the game, recording the dice rolls for each side. Then, restart (or load) the game and play the opposite side. If the dice rolls remain the same, then no manipulation was done to bias the outcome.
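The side-switching test works because a seeded PRNG is deterministic: the dice stream depends only on the seed, not on which side is being played. A tiny sketch of the principle, using Python’s standard generator as a stand-in for any program’s dice:

```python
import random

def roll_sequence(seed, n):
    """Generate `n` dice pairs from a PRNG seeded with `seed`.
    If the dice are produced independently of the game state,
    replaying from the same seed must reproduce them exactly."""
    rng = random.Random(seed)
    return [(rng.randint(1, 6), rng.randint(1, 6)) for _ in range(n)]

# "Playing the opposite side" of the same seeded game: if the rolls
# are unmanipulated, the two streams are identical.
print(roll_sequence(42, 5) == roll_sequence(42, 5))  # True
```

If a program produced different rolls for the same seed depending on which side the human played, that would be direct evidence of manipulation.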

A final, less scientific, approach is the simple “Why?” method, wherein one looks at the reasons why (and how) a programmer might decide to write a biased program. Speaking as the primary programmer for MVP Backgammon Professional, from MVP Software, I can assure you that cheating would add a whole extra layer of (unwanted and unnecessary) complexity, so I certainly did not and would not include such code. In fact, accusations of unfairness against the first version of the program troubled MVP enough that our version has a replaceable PRNG library, so one can write one’s own (with whatever extra checking is desired).

Possibility for Backgammon software to cheat without malice aforethought

This whole topic was reinvigorated when yet another thread appeared on rec.games.backgammon recently, entitled “Jellyfish. Cheating or just Lucky” [links to Google groups]. Through dozens of messages, some people suggested/argued that the Backgammon program Jellyfish seemed to cheat, while two other popular programs, GNU Backgammon and Snowie, did not.

Interestingly (and, n.b., anecdotally), when testing MVP Backgammon, I had a similar experience. I was simply testing relative strength with a series of 25-point matches between my program and these others. While the strength of my neural network was comparable to the others, it got beaten significantly by Jellyfish whenever Jellyfish rolled the dice. When MVPBG rolled, the results were much closer. As a final test, I played one match with manual rolls, and it was again close. At this point, I figured out the likely problem (leaving alive the possibility that it was just sheer chance).

The whole purpose of a neural network is to discover connections and patterns in provided data, and the conclusions are affected by the design of the inputs (essentially, which raw data is supplied) and, of course, the requested output(s). In our design, we basically supplied the number of checkers on each point (in a special format), the number on the bar, and the number borne off. This specifies a pure position in the game (with no knowledge about moves or rolls), and our outputs were designed to estimate the probability of each potential game outcome (win, loss, or winning/losing either a gammon or backgammon). The neural network was only used for evaluation; the selection of moves was based on the evaluation of the resulting position (and cube decisions were calculated mathematically from the neural network outputs).
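To make the idea of a “pure position” input concrete, here is a sketch of a typical checker encoding. The exact MVP Backgammon format is not public (the post only says “a special format”), so this uses the common truncated-unary scheme popularized by TD-Gammon, purely as an illustration:

```python
def encode_position(points, bar, off):
    """Encode one side's checkers as neural-network inputs.
    `points` is a list of 24 checker counts; `bar` and `off` are
    checkers on the bar and borne off. Pure position only: no
    dice, no move history."""
    inputs = []
    for n in points:
        # Four units per point: 1, 2, 3, and "4 or more" checkers.
        inputs += [1.0 if n >= 1 else 0.0,
                   1.0 if n >= 2 else 0.0,
                   1.0 if n >= 3 else 0.0,
                   (n - 3) / 2.0 if n > 3 else 0.0]
    inputs.append(bar / 2.0)    # checkers on the bar, scaled
    inputs.append(off / 15.0)   # checkers borne off, scaled
    return inputs  # 24 * 4 + 2 = 98 inputs for one side
```

The key property is what is *absent*: nothing about the dice or the previous move reaches the network, so it has no channel through which to learn anything about the roll sequence.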

Theoretically, we could provide irrelevant inputs (e.g., outside temperature) and during training, their influence on the network would tend toward zero. However, providing somewhat related data, such as the last game move, could give the neural network just enough information to begin to anticipate an outcome and bias the outputs. More directly, providing the current dice roll, or perhaps designing the neural network to rate individual moves based on that roll, gives the network additional information that could be used to actually predict the next pseudo-random roll, especially if the particular PRNG is not very good. After all, guessing what the next roll would be based on the position and previous roll is exactly the kind of task that neural networks are designed to solve.
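How much can a roll history leak about a weak PRNG? Here is a deliberately contrived toy: a 16-bit linear congruential dice generator whose entire internal state can be recovered by brute force from a short sequence of observed rolls. (This uses exhaustive search rather than a neural network, and no real program is claimed to use such a generator; the point is only that a small-state PRNG leaks itself through its output.)

```python
class TinyLCG:
    """Deliberately weak 16-bit linear congruential dice generator,
    purely for illustration."""
    def __init__(self, state):
        self.state = state & 0xFFFF

    def roll(self):
        self.state = (self.state * 25173 + 13849) & 0xFFFF
        return self.state % 6 + 1

def candidate_states(observed):
    """Brute-force every 16-bit seed and keep the post-history
    states of those consistent with the observed rolls."""
    survivors = []
    for s in range(0x10000):
        g = TinyLCG(s)
        if all(g.roll() == r for r in observed):
            survivors.append(g.state)
    return survivors

# After ~8 observed rolls, only a handful of states (often one)
# remain consistent, and future rolls become predictable.
gen = TinyLCG(31337)
history = [gen.roll() for _ in range(8)]
print(len(candidate_states(history)))
```

A neural network fed the rolls alongside the position could, in principle, converge on the same regularities without anyone intending it to.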

Based on this observation, I suggest that it is possible that the programmers of Jellyfish may have inadvertently, and with no malicious intent whatsoever, provided their neural network with just a little too much information, and it may have taken that information to (at least partially) figure out the random number sequence and then draw conclusions that were not intended.

This would be a very interesting (and perhaps slightly startling) example of emergent behavior in a computer system. It would, however, explain why a program could pass all of the tests to “prove” it is not cheating, but still have an observable bias when using its own dice. I suppose we could call it “computer intuition”. Of course, without more scientific study, it could still just be called “luck”.