Thursday, August 13, 2009

For those of you interested in submitting photos for the 2010 letterboxing calendar, be sure to upload them by the end of the month! Full details about the project can be read on the Project X page. (I moved the link so it's now under the 'Marketplace' menubar option rather than the 'Toolbox' option where it used to be--in case you're having trouble finding it.) It has some descriptions of what I'm looking for in photos, along with the photos from previous calendars that had been selected as winners.

I included my favorite photo from last year's calendar in this post. Isn't that photo absolutely awesome? Seems like every year I get at least one photo that, when I see it, my eyes pop out and I immediately know that I'll use it. Not that many of the other photos aren't absolutely stunning, but certain photos are just hard to get. A bright red cardinal sitting in a tree during a snowstorm? That's not the kind of picture you can plan for! The bright red against that cold, white background--what an amazing photo! A nice scene of a sunset can be beautiful, but finding a pretty sunset isn't actually all that hard to do. (Probably why I get so many sunset submissions!) But this photo..... wow. It's the one that really grabbed my attention last year. I'd never be able to get a photo like that in a hundred years!

Wednesday, August 12, 2009

The second poll is now officially closed. The official winner, by a whisker, is the original blue diamond algorithm I had been using all along. Before I started this whole voting process, I actually saved a list of exactly which boxes had a blue diamond, and during this vote, I simply put them back. So despite all these other algorithms I tried, the original blue diamond algorithm is actually still the favorite. =)

A close runner-up was the green algorithm, which is close enough that I feel the two colors really were a tie from a statistical standpoint. That doesn't surprise me much--the core algorithm for the two is exactly the same. The difference between the two is that the blue algorithm had additional "tweaks" I added after the core algorithm ran. The core ranked boxes based purely on the votes, adjusting for the voter's average vote and the standard deviation of their votes. The green algorithm is the "pure" results. The blue algorithm included a few additional tweaks after the fact, rearranging the "borderline" results.
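The core normalization can be sketched in a few lines of Python. This is my own reconstruction from the description above, not AQ's actual code: each voter's raw votes are converted to z-scores (adjusting for that voter's average vote and standard deviation), and each box is ranked by its average z-score.

```python
from statistics import mean, stdev

def normalized_scores(votes_by_voter):
    """votes_by_voter: {voter: {box_id: raw_vote}}.
    Returns {box_id: score}, where score is the average z-score
    of the votes that box received."""
    box_scores = {}
    for voter, votes in votes_by_voter.items():
        values = list(votes.values())
        avg = mean(values)
        sd = stdev(values) if len(values) > 1 else 0.0
        for box, vote in votes.items():
            # A voter who rates everything a 5 contributes 0 everywhere,
            # rather than boosting every box they visit.
            z = (vote - avg) / sd if sd else 0.0
            box_scores.setdefault(box, []).append(z)
    return {box: mean(zs) for box, zs in box_scores.items()}
```

Sorting boxes by these scores and taking the top slice would then give the green-style "pure" ranking; the tweaks come afterward.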

Boxes that ranked near the cutoff for a diamond usually ended up there more-or-less by chance. From a statistical standpoint, the boxes immediately above and below the cutoff are actually ties. The difference in ranking for #2223 or #2252 might depend on what a voter had for breakfast that morning. So I added a couple of tweaks to make the rankings more consistent and (I hoped) fair. If a box already had a blue diamond the previous month, it would still keep the diamond even if it technically fell below the cutoff (but was still a borderline case). If two new boxes fell close to the borderline, one on each side of it, I would give a slight edge to the one with a planter's choice listed as an attribute. Basically, in the event of a tie, the planters would cast a tie-breaking vote. (Don't think putting a planter's choice icon next to ALL of your boxes will help either--how discerning you are in applying them to your boxes is also taken into account.) There were about a half-dozen various tweaks I made to those borderline boxes in an attempt to break the statistical ties, and those were applied to the blue algorithm but not the green.
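Here's a rough Python sketch of two of those borderline tweaks--the keep-your-previous-diamond rule and the planter's-choice tie-breaker. The function name, the slack threshold, and the data shapes are all hypothetical, purely to illustrate the idea:

```python
def tweak_borderline(ranked, n_diamonds, had_diamond, planters_choice, slack=0.05):
    """ranked: (box_id, score) pairs sorted best-first; the top n_diamonds win.
    had_diamond / planters_choice: sets of box ids. slack is a made-up
    threshold for what counts as a 'statistical tie' near the cutoff."""
    winners = [box for box, _ in ranked[:n_diamonds]]
    # Tweak 1: a box that held a diamond last month and falls just below
    # the cutoff keeps its diamond, for month-to-month consistency.
    border = ranked[n_diamonds - 1][1]
    for box, score in ranked[n_diamonds:]:
        if box in had_diamond and border - score <= slack:
            winners.append(box)
    # Tweak 2: when two new boxes straddle the line in a statistical tie,
    # a planter's-choice attribute casts the tie-breaking vote.
    last_in, in_score = ranked[n_diamonds - 1]
    first_out, out_score = ranked[n_diamonds]
    if (abs(in_score - out_score) <= slack and first_out not in winners
            and first_out in planters_choice and last_in not in planters_choice):
        winners.remove(last_in)
        winners.append(first_out)
    return winners
```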

The tweaks only affected the results of the borderline boxes, and apparently it didn't make a significant difference in the results.

The purple and white diamonds I didn't expect to do well, since they didn't do especially well in the last vote. The white diamond used the algorithm that removed the best and worst vote for a box, then took the average of the remaining votes. The purple diamond took the ratio of high votes (5s and 4s) to the number of low votes (1s and 2s) and sorted accordingly. It actually did surprisingly well in the last vote, but still nowhere close to the original core algorithm that adjusted votes based on the average and standard deviation of an individual's voting patterns. While the first vote had the high-low ratio score nearly double the rate of the straight average of votes, this time they scored almost identically. I'm a bit puzzled about that, but they both did significantly worse than the other options, so it doesn't make much of a difference.
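The white and purple ideas are simple enough to sketch directly. This assumes votes are integers from 1 to 5; the +1 in the ratio's denominator is my own guard against dividing by zero, not necessarily what AQ does:

```python
def trimmed_average(votes):
    """White-diamond idea: drop the single highest and single lowest
    vote (the pesky outliers), then average what's left."""
    if len(votes) <= 2:
        return sum(votes) / len(votes)  # nothing sensible to trim
    trimmed = sorted(votes)[1:-1]
    return sum(trimmed) / len(trimmed)

def high_low_ratio(votes):
    """Purple-diamond idea: ratio of high votes (4s and 5s)
    to low votes (1s and 2s)."""
    highs = sum(1 for v in votes if v >= 4)
    lows = sum(1 for v in votes if v <= 2)
    return highs / (lows + 1)  # +1 is an assumption to avoid dividing by zero
```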

The red and yellow algorithms were the "combined" algorithms: I ran three different ranking algorithms--the green, purple, and white diamonds--then combined their results to generate the red and yellow diamonds. The red is the "pure" combined algorithm, while the yellow is the "tweaked" version using many of the same tweaks I did for the green/blue variations. Intuitively, I thought these would do very well--perhaps even beating out the original blue diamond algorithm--and was stunned to see them go down in flames like they did. I guess in my head, I thought a combined algorithm would pick up the best of all the algorithms. It seems the actual results were skewed more towards "the weakest link."
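The post doesn't say exactly how the three component rankings were merged, but one plausible scheme--averaging each box's rank position across the component algorithms and sorting by that average--looks like this in Python:

```python
def combined_ranking(rankings):
    """rankings: a list of orderings of the same box ids, best first
    (e.g. the green, purple, and white results). Returns a single
    merged ordering by average rank position (lower is better).
    This is a guess at the combining scheme, not AQ's actual method."""
    boxes = rankings[0]
    avg_rank = {
        box: sum(r.index(box) for r in rankings) / len(rankings)
        for box in boxes
    }
    return sorted(boxes, key=lambda b: avg_rank[b])
```

With a scheme like this, a box ranked highly by the strongest algorithm but poorly by the two weaker ones gets dragged toward the middle--which matches the "average of algorithms gives average results" outcome described below.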

The end result of the combined algorithms, as I see it, is that the most popular core algorithm (the green) was pulled down by the poorer results of the purple and white algorithms. Or you could view it as the green algorithm "pulling up" the results of the purple and white algorithms. The combined algorithms did score better than the two least favorites, but worse than the most popular algorithm. An average of algorithms thus resulted in average results.

And that was the biggest surprise for me. I really expected the combined algorithm to get much better results than that.

The difference between the tweaked and non-tweaked versions of the combined algorithm was 31-29, a statistical tie in my book. Again, there doesn't seem to be much preference one way or another based on the tweaks.

So, the core algorithm using the average and standard deviations of a person's voting patterns is hands down the winner and will continue to be used. The tweaked version shows a *slight* preference, but it may not be outside the range of a statistical tie. I also never broke down the multiple tweaks that could be voted on to see which ones might be preferred--it was an all-or-nothing type of deal.

The two "tweaked" algorithms also didn't have all the same tweaks, so I can't really compare those two very well. I literally applied the blue diamonds to exactly the same boxes that had blue diamonds before the votes were counted, which meant the tweaked version did allow boxes with just two votes to get a diamond, while the yellow diamond was limited to boxes that had a minimum of three votes. The blue diamond included the tweak that gave preference to boxes that already had a blue diamond if they fell just under the cutoff, but the yellow version had no previous diamonds it could be compared to and thus did not use that tweak.

So I'm left trying to decide exactly which tweaks to keep and which ones to throw away, but based on the results of the poll, I'm not sure such decisions will make a big impact anyhow. They're little decisions that ultimately have little impact. I'll definitely continue favoring boxes that already have blue diamonds just for the consistency factor--one of the biggest complaints about blue diamonds was their fleeting nature for borderline boxes. It would appear one month, disappear the next, and return the month after that, and so on. Giving a slight edge to those with the blue diamond already got rid of most of that inconsistency (and the subsequent complaints about "losing" diamonds).

But in a nutshell, after all this voting and discussion, pretty much nothing will change. =) Was it a waste of time? I think not. There were several very good things that came out of these proceedings:

1. You no longer have to take my word that I'm using the best algorithms possible.

2. I also no longer have to wonder whether my own biases were playing a role in the selection of algorithms.

3. I hope that anyone who intuitively felt that a simple average of all votes really is the best ranking algorithm available will finally be able to let it go. Yes, there are some people who actually liked that result the best, but there were also nine people who each voted for the "completely random" results as well. The results were pretty overwhelming, however, that a simple average is NOT the best ranking algorithm available, and it's time to simply agree to disagree.

4. And I hope I gave many of you a sense of empowerment. Not the "cram it down your throat whether you like it or not" feeling that some people seemed to have, but a sense that you're in control of how the boxes are ranked. The end results may not have changed, but this time it was you all who chose the algorithm--not me. =)

On another note, I'm seriously considering giving boxes with different status different colored diamonds. Not because it has any significance, but rather because there continues to be that persistent myth that retired boxes are "taking" diamonds away from active boxes. It's not true, and even after I explain mathematically why that's not happening, it's a myth that continues to persist. And maybe a simple change of colors can finally put the nail in that myth once and for all. It's an intriguing idea to me, and it would be pretty easy to implement given the fact I already have lots of colors available now. =)

Thanks to everyone who participated. I'll be putting everything back to normal shortly. I'll leave the original blue diamonds up this month, but I might make a couple of minor tweaks when it comes to next month's ranking of the boxes. For the most part, however, expect the same algorithm.

Saturday, August 08, 2009

For those following along in the message boards, IrishRef suggested a different algorithm for calculating blue diamonds. His idea was to throw out the highest rated vote and lowest rated vote for each box (those pesky "outliers"), then average the rest and sort accordingly. It's an interesting algorithm, and not one I had considered before. I was intrigued--how would the blue diamonds turn out if I sorted them that way?

And what about those people who think every vote should be counted "as is," with no normalization of the votes allowed? How would that shake out? Sure, there would likely be a lot of overlap, but how much? Would one of these other algorithms provide better results? Ultimately, I'm not attached to any one particular algorithm. I'm more than happy to go with the one I think works best.

So I'm having an algorithm face-off. I have created seven, yes, count 'em SEVEN colors of diamonds: red, yellow, green, blue, purple, white, and brown. Each one uses a different algorithm to determine the 5% of boxes that will have that color.

I'm not going to tell you which algorithm goes to which color, or even what all the algorithms are. I will say, however, that one of them does use IrishRef's suggestion. Additionally, one of them is a "flat average"--it takes the votes as is, averages them, and picks those with the highest averages. I'd like your opinions on which color you feel best represents the real "blue diamond letterboxes." I'd also like to point out that the color blue is NOT using the old algorithm--I've actually tweaked the old algorithm and given it a different color to disguise it a bit. =) None of the colors actually represents what the old algorithm used.

If one of the algorithms is a particularly clear-cut favorite, I might update the code to use the new algorithm instead of the old one. =)

But please, be honest. Don't pick the color that gives your plants the most diamonds. Pick the one that you feel provides the most accurate results. These are supposed to be the best boxes out there--those that a visitor "can't miss" if they're passing through. Be honest with yourself, and select the algorithm you feel accomplishes this goal.

I'm very curious to see how you all think the different algorithms stack up against each other. =) Also keep in mind, anyone who has opted out of the blue diamonds will not have ANY color on their boxes, so don't fault an algorithm for not putting a diamond on a box if you know its owner has opted out. The problem might not be the algorithm.

I also want to point out--all these colors are temporary. Eventually, I will be selecting ONE algorithm, and that's what'll be used for blue diamonds. The rest of the colors will go away.

Wednesday, August 05, 2009

I seem to be in a mood this week of working on features I actually dislike. First the blacklist, and now a "who's online now" list.

Have a pressing need to know if someone is on Atlas Quest right now?! If I've said it once, I've said it a hundred times: There is no such thing. You see lists like that on other websites, I know, which is probably why so many people want to see it on Atlas Quest, but unless someone is in the chat room and their browsers are pinging Atlas Quest every single second (well, every other second--I slowed down the pinging to help alleviate the load on the AQ servers), I can't really know who is actively on the site. Actually, even the chat rooms are imperfect. I know I've been in them in one window while surfing a completely unrelated website in another window. Not to mention that the list in the chat rooms could be two seconds out of date even with the faster Internet connections. (It could be even more out of date with slower connections.)

People who aren't in chat rooms--it's even harder to tell if they're on Atlas Quest or not. I can only track the last time their browser hits the AQ server. If five minutes go by without any additional hits, what's that mean? Maybe they're reading a long post or solving a challenging cryptogram? Or maybe they've moved on and are checking their stock portfolios on another website. Or maybe they shut down their computer and are watching television.

The point is--there's no master list of who's online now that's actually accurate. Never has been, and never will be. So keep that in mind. And given the fact that some people might not want others to know when they're online, they can hide that information if they so choose. I'm notorious for turning off those annoying status icons on my Yahoo account. Mostly because people seem to expect an immediate reply if they think I'm online, and I rarely do that. I reply when I'm good and ready to reply. ;o)

For me, the list serves two very useful purposes. One, I can monitor how much activity Atlas Quest is getting and how close it's getting to capacity. And two, as an admin, there's a link available to me that allows me to force a member to log out. Until now, I didn't actually have an easy way to do that--a feature that would have been useful during the rare attacks by spammers. For the rest of you, it doesn't actually serve much purpose except to give you another thrill by 'spying' on others.

Anyhow, to view a list of members who have recently been on Atlas Quest, check out the Online Members page. It only includes people who have logged into Atlas Quest--unless they've logged in, there's no way for me to know who it was. The "age" column does not represent a person's age--it's how long it's been (in minutes) since the last time the person has shown any activity on Atlas Quest. Someone whose age is 10 minutes hasn't registered a hit or clicked on anything in Atlas Quest for 10 minutes. Maybe they're solving a cryptogram. Maybe they left the website. Maybe they took a bathroom break. We may never know. =)
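The "age" computation is just elapsed time since the last recorded hit, truncated to whole minutes. A minimal sketch--the function and field names here are my own, not AQ's:

```python
from datetime import datetime

def online_age_minutes(last_hit, now=None):
    """Whole minutes since a member's last recorded hit on the server.
    last_hit/now: datetime objects; now defaults to the current time."""
    now = now or datetime.now()
    return int((now - last_hit).total_seconds() // 60)
```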

If you click the "logout" button and actually log OUT of Atlas Quest, you will be dropped from the list immediately. Technically, you could still be surfing the website anonymously--but for the purposes of this list, it only shows logged in members rather than every single person surfing the site.

If you'd rather not have your presence known, you can opt out of the list from your privacy preferences.

Tuesday, August 04, 2009

I updated Atlas Quest this afternoon. Nothing particularly serious or noteworthy. Minor things that most people would likely never even notice unless it's pointed out to them. =)

One addition that probably needs a bit of explanation is what a "whitelist" and a "blacklist" are. You're probably more familiar with the term blacklist, as in, "Fred was blacklisted from the agency." To be banished or excluded from something.

Whitelists are more of a computer-nerd type of terminology, but a whitelist is the opposite of a blacklist. If you send out invitations for a party, anyone you send an invitation to is on your whitelist. Everyone NOT on your whitelist is excluded by default. Sometimes it's easier to maintain a list of 30 people on a whitelist than 5,999,999,970 people on a blacklist.

For quite a while now, AQ has supported a "whitelist" option when you listed boxes. You could restrict your box to anyone on your designated whitelist. I didn't call it a whitelist, but that's what it was. It was actually called a "contact group" on Atlas Quest. AQ lets you create contact groups, or a collection of people you want to contact or communicate with quickly and easily, but not in a public forum. If you had listed any contact groups, then you could restrict boxes, events, and trackers to members of one of your contact groups. Anyone you added as a member could see the box, event, or tracker. Everyone else could not.

Occasionally I'd get requests asking if there was some way they could restrict a specific person from seeing their boxes. In a word, no. Even if that option were available, they could log in under a different name and still see the listing. So it's not a feature I ever took seriously.

But I added it today. I'm not really sure why. I don't think people should use it. If they figure out they are on a blacklist, they might get really ticked off and do something stupid like steal your boxes. But for what it's worth, I added a blacklist option. It works like the whitelist option, but in this case, anyone on your list cannot see the box, event, or tracker.

There is ONE instance where I can see why you might want to make use of the blacklist--and that's a blacklist with nobody on it. =) The way AQ works, it has to know who is logged in to know whether or not to display the box, event, or tracker. So if you have a whitelist or blacklist restriction, anyone who is not logged in will not be able to see it.

So AQ first checks if the person using the site is logged in or not. If not, it displays a "this page is restricted" message. If so, it then checks if you are on the whitelist (in which case you CAN see the page) or on the blacklist (in which case you can NOT see the page) and displays the appropriate message. But the key thing here is that whether you are logged in or not is checked first.

So if you add an empty blacklist as a restriction, it essentially means only a logged in member can view your box, event, or tracker. Kind of equivalent to a P-0 or F-0 restriction, except those don't actually require someone to be logged in. So if you want your clues to be available to all members with the only catch being that only people with accounts can see your boxes, an empty blacklist will do the trick. =)
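The check order can be summarized in a short sketch (the names and data shapes are mine, not AQ's). Note how the logged-in test happens first, which is exactly why an empty blacklist behaves as a members-only gate:

```python
def can_view(viewer, restriction):
    """viewer: the logged-in member's id, or None if anonymous.
    restriction: {"mode": "whitelist" | "blacklist", "members": set of ids}."""
    if viewer is None:
        # Anonymous visitors are rejected before any list is consulted.
        return False
    if restriction["mode"] == "whitelist":
        return viewer in restriction["members"]
    return viewer not in restriction["members"]  # blacklist mode
```

With `{"mode": "blacklist", "members": set()}`, no logged-in member is ever excluded, but anonymous visitors still fail the first check--the members-only trick described above.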

If you are absolutely bound and determined to prevent specific people from finding your boxes online through the use of a blacklist--that's fine. Not my problem. But it's as easy as creating another account and logging in to get around it. Most aliases don't typically have a lot of plants and finds, so if you combine it with an F-count and P-count restriction, it might actually do a pretty good job of keeping the people you don't want visiting your boxes away from them. It still doesn't stop them from finding boxes with friends who might have access to your boxes, and you do risk a lot of hurt feelings if the person you blacklisted ever finds out, but I'm not your babysitter. =)