Posted
by
samzenpus
on Thursday January 26, 2012 @06:33PM
from the not-all-fun-and-games dept.

MrSeb writes "An Italian researcher with a penchant for retro games — or perhaps just looking for an excuse to play games in the name of science! — has used computational complexity theory to decide, once and for all, just how hard video games are. In a truly epic undertaking, Giovanni Viglietta of the University of Pisa has worked out the theoretical difficulty of 13 old games, including Pac-Man, Doom, Lemmings, Prince of Persia, and Boulder Dash. Pac-Man, with its traversal of space, is NP-hard. Doom, on the other hand, is PSPACE-hard."

The NES one, on the other hand, was actually impossible after a certain level... the blocks fell faster than you could get them to the edges of the screen.

That made me think of the old Columns game on the Sega Genesis that I used to play when I was around 13. I recall that the rate at which pieces fell when you pushed down was fixed. After a certain point, the fall speed of pieces became faster than the fixed speed of pushing down, so you could slow pieces down by pushing down. Using this technique, I could play indefinitely without it ever getting too fast.

They use a pretty conveniently screwed-up variant of Pac-Man for their proof, not the actual Pac-Man: one with free choice of, and arbitrarily fast transitions between, the different ghost modes. So the claim is even further from true here than it is for Tetris.

"We assume full configurability of the amount of ghosts and ghost houses, speeds, and the durations of Chase, Scatter, and Frightened modes (see [1] for definitions)." That's all well and good, but in the actual game there is no configurability of the level designs, the number of ghosts, or the ghost houses.

There was, while they designed the game.

The PacMan videogame is one instance of a problem class. Algorithmic complexity is calculated for classes of problems, since for any particular instance you can always design a trivial, constant time algorithm if at least one solution is known...

I think Pac-Man and Ms. Pac-Man are quite easy, and I don't understand anybody who uses the word "hard" in conjunction with them. Sure, when you first start out you need to develop the skills, but it doesn't take that long.

The Atari console variants are particularly easy (read: boring). Why are all the ghosts running around randomly, instead of chasing the player? Poor programming.

That's like me saying the small hill I go up on the way home from work is a harder climb than Everest, then piling it a mile high with junk to prove it and when someone calls me on it responding with "I have to build it up this way, otherwise it's an extremely easy climb". You can't say "Pac-Man is X" then change the nature of Pac-Man to make it X as your proof, because at that point it is definitely X but it's no longer Pac-Man.

I am embarrassed to say, this is the most interesting comment I have seen on this site in years. You know the changes in the Tetris engine, and you have links to a strategy guide for playing Tetris indefinitely.

Most Tetris products prior to 2001, such as Super Tetris 3 for Super Famicom, use the old randomizer, possibly with some slight modification to ensure no immediate repeats.

Tetris for Game Boy has a 2/32 chance of the same piece and a 5/32 chance of each other piece.

Tetris for NES will reroll once if the piece matches the last piece.

Tetris the Grand Master will reroll three or five times if the piece matches one of the last four pieces.

The New Tetris for Nintendo 64 deals from a 63-card deck with nine of each shape.

New games, such as Tetris Worlds, Tetris DS, and Tetris Party, all use the new randomizer that shuffles sets of all 7 shapes. Start a new game of Tetris DS and notice how the falling piece and the next 6 pieces are all different.
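The difference between the old reroll-style randomizers and the new "bag" randomizer is easy to sketch in code. This is a hypothetical illustration of the two schemes described above, not the actual code of any Tetris product:

```python
import random

# Hypothetical sketch of the two randomizer families described above;
# the exact piece order and reroll details of real Tetris products may differ.
PIECES = ["I", "O", "T", "S", "Z", "J", "L"]

def nes_style(rng, last):
    """Roll once; reroll a single time if the piece repeats the last one."""
    piece = rng.choice(PIECES)
    if piece == last:
        piece = rng.choice(PIECES)  # only one reroll, so repeats stay possible
    return piece

def bag_style(rng):
    """Deal shuffled 7-piece bags: any 7 consecutive pieces are all different."""
    while True:
        bag = PIECES[:]
        rng.shuffle(bag)
        yield from bag

rng = random.Random(42)
stream = bag_style(rng)
first_seven = [next(stream) for _ in range(7)]
print(sorted(first_seven) == sorted(PIECES))  # True: one of each shape
```

The bag scheme is what makes the "start a new game of Tetris DS and notice the first 7 pieces are all different" observation reliable rather than a coincidence.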

As far as I'm aware, Super Tetris 3 is truly random, the best kind of random! I am a big fan of TGM3, despite its slightly cheating re-roll randomiser, but for some reason there's a bug in MAME that causes me grief when I try to play it in 2-player mode: the blocks fall straight down for one of the players.

It would actually be somewhat surprising (especially with games where the twitch factor keeps the player from strategizing too deeply) if you could discern the computational complexity of a game just by playing it....

Naively, I'd imagine that a human player most closely resembles a stochastic hill-climbing agent, providing the input at each tick that seems most likely to improve their situation in the relatively short term. That would make them brutally efficient at some problems, miserably hung up on local maxima or discontinuities in others; but not necessarily provide much correlation between difficulty of play and difficulty of problem.

Naively, I'd imagine that a human player most closely resembles a stochastic hill-climbing agent,

That's too naive. Most game players don't just play the game once (start game, yay, play, win a little, die, never play this game again). Instead, they play many times, and use their previous knowledge as leverage to improve their performance.

That puts the hill climbing analogy into more modern optimization territory, like multiple randomized restarts, adaptive strategies, etc. As such, the odds of winning are high when players are willing to put in the hours.
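A toy sketch of that idea (my own construction, with a made-up one-dimensional "game state"): plain hill-climbing stalls on a local maximum, while restarts usually reach the global one. For reproducibility the restarts sweep fixed starting points; a true randomized restart would sample them:

```python
# Toy illustration (my own construction, not from the paper): greedy
# hill-climbing stalls on a local maximum, but restarting from many
# starting points -- like a player replaying a game with accumulated
# knowledge -- finds the global one.
def f(x):
    # two peaks: a local one at x=2 (height 3), the global one at x=8 (height 5)
    return max(3 - abs(x - 2), 5 - abs(x - 8), 0)

def climb(x, step=1):
    while True:
        best = max((x - step, x, x + step), key=f)
        if best == x:
            return x           # no neighbour improves: a (possibly local) peak
        x = best

single_try = climb(0)                                  # starts on the wrong hill
restarts = max((climb(s) for s in range(10)), key=f)   # many fresh attempts
print(f(single_try), f(restarts))  # 3 5
```

The single attempt tops out on the small hill; the restarted search finds the tall one, which is roughly the "put in the hours" effect.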

Most game players don't just play the game once (start game, yay, play, win a little, die, never play this game again).

I'd have to disagree, as both a game player and game developer. Gone are the days when Sonic or MegaMan sat in your console for weeks while you tried endlessly to beat it. Today's game players are EXACTLY like what you describe. That's why we have to baby them & lead them into playing the game -- they have many other options, a virtually endless supply of games to try and fail at until one lets them win.

I sit "average" gamers of all age ranges in front of the games from yesteryears and the majority do exactly what you describe when given a choice to switch between any classic game on the shelf. They play the longest on familiar or easy to play titles.

PacMan is HARD. Rarely will you find a decent arcade with 5 lives instead of 3, and longer power-up periods (selectable via dip switches or .conf files). However, people don't play games to beat them now; they play to be entertained, and an unforgiving game that eats your quarters or $50 at once can't compete with the free casual games of today.

I think some balance can be found -- A short introduction to get you interested in the mechanics and/or story, followed by an increasingly engaging experience, but there's a fine line between too steep a learning curve and too boring of a game.

As for whether or not PacMan is NP-hard, I'd say that since it's 100% fully deterministic it's actually not. It's easy as hell to map out and then play perfectly every single time afterwards, especially if you have the source code "running" through your brain and can predict exactly what the ghosts will do. Also, the same damn level over and over again is quite boring... That's why, when I was required to learn JavaScript, I created my own rendition [memebot.com] that was non-deterministic (pseudorandomly so) as well as having many differing levels.

A pleasure to see you again, my liege. I am still most grateful for your correction of my unconscionable grammar and spelling last time.

I too lament how game players today have been turned into "whiny little pussies". I long for the days past when a kill screen was indicative of godhood.

There was great value in a game that was actually hard to play, and was progressively impossible to beat. The point of that game was the journey of suffering itself, not the end. Those that actually reached the end, the aforementioned kill screens, were granted immediate entrance into Valhalla and spoken of in legends.

I think the last game I played that was actually hard might have been Doom or Doom II. To this day the sound of an Arch-Vile makes me twitch.

In fact, thinking about it, the harder a game is, and the more I have to constantly replay levels to achieve perfection, the more I enjoy it. Games these days seem to be more like interactive stories requiring a modicum of effort and designed with soft rubber corners so you don't trip along the way.

Personally, I think it all went downhill the moment you had a save game state. The unrelenting suffering of being so close to finishing and seeing that start screen, and to keep coming back, was in my opinion, a rite of manhood in my day. It separated the boys from the men.

The biggest factor for me (and I'd guess a lot of other gamers) is having the spare time for games. The average gamer age is now mid thirties. When we were all kids it was easy to spend countless hours replaying the same games to perfect our speed runs etc. These days, even replaying the last ten minutes due to a stupid mistake seems like a massive chore when you can only manage to grab a spare hour here and there when work/real life isn't getting in the way. That, to me, is the obvious root of why people d

There was one level in particular that I struggled for hours to survive the first 5 to 15s. I think that level was about 3/4 the way through. You start in a large chamber with a double-wide set of doors that open onto a large area that is essentially a wide hallway wrapped around three sides of a large pool concealed with some modest trellis work. You're stuck in the middle of the long side with fireballs coming from every direction, several pink chick

Never played Pac-Land, but there were a ton of cheap-shot deaths in Wonderboy. The game consisted mostly of replaying levels and getting a little further each time, memorising which cheap shot killed you last time and what you needed to do to avoid it this time. That's not to say I didn't love it back in the day (and could make it all the way through on one shiny 10 pence piece), but it was deliberately difficult, to extract as many replays as possible from what was ultimately quite a short experience.

I think you actually add weight to the GP's premise, even if you tore down one of his points. We are optimization engines (at least us rational engineer types). But when we optimize, we don't do it like computers. We don't check all possible routes, as we consider many of them sub-optimal. But computational theory mandates that even the seemingly sub-optimal paths be explored. Some people do this kind of experimentation sometimes, but it's too time-consuming, so that's the exception rather than the rule.

Interestingly, there is a conference this summer dealing with humans and their abilities to perform computation. It's titled Engineering and Metaphysics [eandm2012.com], and deals with the relationship between humans, physics, and reality.

It means it isn't computationally solvable in linear time. A computer would only be able to solve very, very simple permutations of an NP-hard problem in a reasonable amount of time. Adding complexity to the problem space makes the time it takes to solve grow into something most people would not wait around for.

E.g. the Traveling Salesman problem (look it up, it's the classic problem) for 4 cities is pretty easy and quickly solved by a computer. However, 100 or 1000 cities takes much, much longer f
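A brute-force sketch makes the growth concrete. This is the naive exact approach, not any clever TSP algorithm: it checks every ordering of the cities, so the work grows factorially:

```python
from itertools import permutations
from math import dist, factorial

# Naive exact TSP by exhaustive search (an illustrative sketch, not a
# practical algorithm): fix a start city and try every ordering of the rest.
def shortest_tour(cities):
    start, *rest = range(len(cities))
    best_len, best_tour = float("inf"), None
    for order in permutations(rest):          # (n-1)! orderings to check
        tour = (start, *order, start)
        length = sum(dist(cities[a], cities[b]) for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

cities = [(0, 0), (0, 1), (1, 1), (1, 0)]     # the corners of a unit square
length, tour = shortest_tour(cities)
print(length)        # 4.0: walk the perimeter
print(factorial(3))  # only 6 orderings for 4 cities; ~9.3e155 for 100 cities
```

Four cities means 6 candidate tours; a hundred cities means an astronomically large number of them, which is the "much, much longer" in practice.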

An oversimplification, though -- it really means it's not solvable in deterministic polynomial time. An algorithm with O(n^12328372) would still fall under P, because it's solvable deterministically in polynomial time.

Informally, especially in computer science, the Big O notation often is permitted to be somewhat abused to describe an asymptotic tight bound where using Big Theta notation might be more factually appropriate in a given context. For example, when considering a function T(n) = 73n^3 + 22n^2 + 58, all of the following are generally acceptable, but tightness of bound (i.e., numbers 2 and 3 below) is usually strongly preferred over laxness of bound (i.e., number 1 below):

1. T(n) = O(n^100), which is identical to T(n) ∈ O(n^100)
2. T(n) = O(n^3), which is identical to T(n) ∈ O(n^3)
3. T(n) = Θ(n^3), which is identical to T(n) ∈ Θ(n^3)

The equivalent English statements are, respectively:

1. T(n) grows asymptotically no faster than n^100
2. T(n) grows asymptotically no faster than n^3
3. T(n) grows asymptotically as fast as n^3

So while all three statements are true, progressively more information is contained in each. In some fields, however, the Big O notation (number 2 above) would be used more commonly than the Big Theta notation (number 3 above) because functions that grow more slowly are more desirable. For example, if T(n) represents the running time of a newly developed algorithm for input size n, the inventors and users of the algorithm might be more inclined to put an upper asymptotic bound on how long it will take to run without making an explicit statement about the lower asymptotic bound.

Yes you are pedantically correct, but CS (where I learned the big-O notation) tends to abuse it, because in algorithm analysis making such a weak argument as "this linear function is O(n^12387...5)" is next to meaningless, and impractical. Yes, it is mathematically true, but for practicality (since computer science is the practical application of math) one typically considers big-O notation to describe the strongest upper-bound statement available.

Assuming NP != P your first sentence is correct. And maybe this is what laymen should know about it. However for completeness...

In general a problem is presented as a string of n bits and the algorithm (Turing Machine) has to decide whether it is acceptable or not (good or bad etc.) For example, take the graph coloring problem. This involves a graph on m vertices and you have to color it using k colors such that neighboring vertices have different colors. The input to the algorithm is a description of the graph and k as a bit-string. And the bit-string is acceptable if there is a proper coloring.

If the Turing machine can decide whether the bit-string with n bits is acceptable in fewer than p(n) steps, where p is a polynomial, then the problem is in P.

NP does *not* stand for Not P.

NP means that there is a witness to the acceptability of a bit-string that can be verified in p(n) steps. For example, the witness for graph coloring is an actual assignment of the colors to the vertices. It is quite straightforward to verify that the coloring is proper (no neighboring vertices have the same color); it takes fewer than n^2 color comparisons. NP stands for Nondeterministic Polynomial; I am not a fan of the name.
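The witness check for graph coloring is short enough to sketch. This is an illustrative verifier (my own construction), assuming the graph is given as an edge list and the witness as a vertex-to-color mapping:

```python
# Illustrative witness verifier for graph k-coloring (my own sketch):
# the graph is an edge list, the witness maps each vertex to a color 0..k-1.
def verify_coloring(edges, k, coloring):
    # polynomial work: a range check plus one pass over the edges
    if any(c not in range(k) for c in coloring.values()):
        return False
    return all(coloring[u] != coloring[v] for u, v in edges)

# A triangle needs 3 colors: a proper witness checks out, a color clash does not.
triangle = [(0, 1), (1, 2), (2, 0)]
print(verify_coloring(triangle, 3, {0: 0, 1: 1, 2: 2}))  # True
print(verify_coloring(triangle, 3, {0: 0, 1: 0, 2: 2}))  # False
```

Verifying the witness is a linear pass over the edges; it is *finding* a valid coloring that is believed to be hard.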

NP-hard means that the problem is such that any NP problem can be reduced to it (with a polynomial correspondence). Therefore, if you had a polynomial algorithm for it, then you would have one for *all* NP problems. This would imply P = NP, which is doubted to be true. In other words, a proof of NP-hardness means: yes, it is harder than P, at least most scientists think so.

I have no idea yet how the Pac-Man problem is represented as a bit-string. I will find out tomorrow at a lecture...

It is worth mentioning the class co-NP. This is the class of problems for which there is a witness that the input is *not* acceptable. Think what witness could easily verify that a graph is not k-colorable... For example, the existence of a complete subgraph on k+1 vertices would suffice, but there are also constructions that prohibit k-coloring while containing no complete (k+1)-vertex subgraph. I do not recall off the top of my head whether k-coloring is in co-NP or not. But I think it is not, and here is why:

There is a conjecture that may have more chance than P = NP. And that is: P = NP intersect co-NP. That is, if both acceptability and non-acceptability can be polynomially verified, then there would be a guaranteed polynomial algorithm. So far this appears to be the case. The last famous problem that was in NP and co-NP at the same time and was found to be in P was primality testing.

Also not quite. NP-hard problems can be as difficult as one likes. The key is that, given an oracle for an NP-hard problem, you can solve any NP problem within a polynomial amount of time (where calls to the oracle take unit time).

Another point to be mindful of is that there are runtimes between polynomial and exponential.

* P is the set of problems that can be solved in polynomial (or better) time.
* NP-Complete is the set of problems that (probably) can't be solved in polynomial time, but whose solutions can be verified in polynomial time.
* NP-Hard is the set of all problems that can't be solved in polynomial time.

Only two of those are actually distinct. We think P and NP-complete are different, which would mean NP-Complete and NP-Hard are the same (IIRC).

More or less correct, but you are slightly confused on the definition of NP-Hard.

NP is the set of problems that can be verified in polynomial time.

NP-Hard is the set of problems which are provably at least as hard as any problem in NP (in the sense that a solution to an NP-Hard problem can be used to solve any problem in NP with only polynomial additional effort).

NP-Complete is the intersection of NP-Hard and NP.

It is believed that P != NP, and if any problem that is NP-Hard (usually one just says NP-Complete) turns out to be solvable in polynomial time, then P = NP.

What's wrong with it? If you're in a very nitpicky mood, the word "provably" should be removed from the second point (our ability to prove something is irrelevant to the definition of a complexity class), but that's a ridiculously minor error. It's nothing compared to the errors in the post they replied to.

We think P and NP-complete are different, which would mean NP-Complete and NP-Hard are the same (IIRC).

There are many problems that are wholly outside polynomial complexity (used to work with some, years ago, and they were brutes even with a Top-500 supercomputer of the day). That means that NP-Hard must not be just NP-Complete; there's got to be higher levels in there.

PSpace hard means the problem is relatively simple, maybe check n things n times, which is only n*n things. For example, "For n cities, find the sum of all the distances between all the cities".

NP hard usually means you have to start at one part, then make a new decision each time you want to move on to the next part. The classic example is: "for n cities, start at a city and find the shortest possible distance to visit each city once". Since you have to ma

Correction: I confused PSpace with P. The above definition is for P and NOT PSpace. Haven't spent much time thinking about complexities ABOVE NP hard. PSpace actually considers the "memory" you need to solve the problem. This is relevant when you talk about Turing machines, where you have to use "tape" that you "write on" while solving the problem. The thing that's been proved is that you only need a polynomial amount of memory (relative to 'n') to solve problems that need NP Hard time.

I've read so many wrong answers replying to your post that I'll attempt a new explanation.

A problem is in P if there is a polynomial-time algorithm to solve it. It means that there is an algorithm which will always find the correct answer AND solves an instance of the problem in at most a*n^b operations, where a and b are constants and n is the size of the smallest file that can contain an instance of the problem. Examples of problems in P include: deciding whether a number is divisible by 2, whether a number is prime,

NP-hard is a class of problems whose solutions are guaranteed to be no easier than those of NP problems. That is, there is a demonstrated way to convert an NP-complete problem (let's call that problem NPC) to the hard problem (NPH), and the conversion can be done in polynomial time.

How does that work? Well, if you were able to solve the NPH problem more efficiently (in polynomial time or better), you'd first use the conversion (costing you o

P -- problems that can be solved in time that grows according to some polynomial of the size of problem (e.g. sorting -- can be solved in n^2 time by bubble sort).

NP -- problems that can be verified in polynomial time; we think that some of these problems cannot be solved in polynomial time. For example, graph three colorability is in NP and is not known to be in P; this means that if I show you a 3-colored graph, you can check that it is indeed 3-colored in polynomial time, but if I give you a graph and

Ron Fagin (a heavyweight in this topic) once told me about this in layman's terms. Can't remember precisely, but this is more or less what he said:

I assume you know Sudoku. Solving a Sudoku game from the clues has a certain difficulty (complexity). Compare it with just verifying that a Sudoku with all its squares filled is a valid solution. You can see that verifying a solution is much faster than finding one! In general, we can accept that verifying is faster than solving... but even though it's intuitive,

Roughly speaking, NP-hard (NP = non-polynomial) means that it scales non-polynomially fast... e.g. if an algorithm is O(n^n) then it is NP-hard, but if it is O(n^3) then it is only P-hard. By this definition, even O(n^lg(n)) is NP-hard and O(n^100) is "only" P-hard.

No, no, no, no. NP does not mean "non-polynomial". In fact, all "P" problems are also "NP". NP means "nondeterministic polynomial", i.e, polynomial in a non-deterministic machine (think a computer with an infinite number of CPUs). It is unlikely that they are "P" ("deterministic polynomial"), but it has not been proven either way. Also, a problem being NP-hard doesn't imply that it is NP. It actually may be harder than NP: in general, to say that a problem is X-hard means that there is no problem of class X that is "harder" than it.

For the precise definition of "harder", the wikipedia article http://en.wikipedia.org/wiki/P_versus_NP_problem is pretty good.

You can't have an "infinite number" of anything. Infinity doesn't work that way. It simply means arbitrarily large, as in "given any finite value N, I have more than N CPU's in my computer". There's no concept of an infinite number.

Actually, you are wrong - infinity does not mean "arbitrarily large" (nor "arbitrarily small"). There is the concept of an infinite number. We call them non-real numbers. Pi, for example, comes to mind as an actual number that is infinite.

Better example (from an assignment in COS101, circa 1995): Prove that infinity comes in different sizes. Bonus points: prove that it comes in an infinite number of sizes.

Pick any two numbers, X and Y such that X < Y. Then choose two fractional numbers less than on

You can't have an "infinite number" of anything. Infinity doesn't work that way. It simply means arbitrarily large, as in "given any finite value N, I have more than N CPU's in my computer". There's no concept of an infinite number.

There is definitely a concept of infinity, and actually there are several different kinds of infinity, with some infinities bigger than others (the size of the set of natural numbers vs. the reals, for example), which usually gives a "wow" moment the first time you encounter the concept.

There's plenty of funny things with infinite sets. For example, what about the size of the set of positive integers N, and the size of the set of pairs of integers NxN? In a way, for each integer you can have a subset of NxN as large as N
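One standard way to make the N vs. NxN comparison precise is the Cantor pairing function, a bijection between NxN and N; a quick sketch:

```python
# The Cantor pairing function: a bijection between N x N and N, which is
# one standard way to see that there are "no more" pairs than naturals.
def pair(x, y):
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    # recover the diagonal w = x + y, then read off y and x
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)
    y = z - w * (w + 1) // 2
    return w - y, y

# distinct pairs get distinct codes, and every code decodes back
codes = {pair(x, y) for x in range(50) for y in range(50)}
print(len(codes))          # 2500: no collisions among 50*50 pairs
print(unpair(pair(3, 4)))  # (3, 4)
```

Since every pair maps to a unique natural number and back, the set of pairs is exactly as "large" as the naturals, counter-intuitive as that feels.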

Mod parent down, please. The definition of NP above is circular -- if NP actually stood for non-polynomial, then P!=NP by definition. That would be begging the question.

Rather, NP means "nondeterministic polynomial time." [wikipedia.org] It is the class of problems whose solutions can be verified in polynomial time. NP-hard are the "hardest" problems in this class. All algorithms known to solve problems in this class are super-polynomial. The question of "P==NP?" really amounts to "is there an undiscovered polynomial solution to every problem that we currently think is NP-hard?" or even more simply, "if a problem's solution can be verified in polynomial time, can the solution be discovered in polynomial time?"

Right, except that NP-hard problems don't have to be in the NP class, they can be harder. They are the problems, not necessarily in NP, that are at least as hard as the hardest problems in NP. You're thinking of NP-complete or equivalent.

Totally wrong. First, as the previous response said, NP-hard is a separate set from NP. The intersection of the two sets is called NP-complete. "NP-hard are the 'hardest' problems in this class" is axiomatically wrong because NP-hard is not a subset of NP.

Second, by definition of NP-hard, given a polynomial-time solution to any NP-hard problem, you can solve *every* NP problem in polynomial time, so what you meant to say is The question of "P==NP?" really amounts to "is there a polynomial-time solution t

Mostly wrong, and even the parts that are right, you make far more confusing than they need to be. I think you're using your maze example to try to represent branching within the algorithm and only confusing yourself. How would the maze for multiplying two numbers together (a P algorithm) look different from the maze for the satisfiability problem (an NP-complete algorithm)? I really can't picture your maze for either of them.

As far as your nondeterminism allowing you to simultaneously try each path, ma

Most /. discussions are just as bad as this. It happens that in computational complexity there really is an answer you can look up to prove someone wrong. That's not the case the vast majority of the time, though, so crap that sounds good floats to the top and fails to sink back down. The same principle characterizes most political discourse (which is a tragedy, but there you have it).

Watching Feynman's videos is the most "productive" use of YouTube I've seen in a while. He has personality so they hold a person's interest. On top of that his explanations are usually very clear. Maybe the nicest thing about watching Feynman is his scrupulous honesty--he doesn't want to mislead his audience in any way, even if he has to pass up simpler ideas that give partial understanding but which break down upon further examination.

Speaking as a game designer/programmer, I like it when games are too hard to solve outright, but not too hard to employ strategies based on your current state. The more possible strategies people can employ in different situations, the greater the fun factor.

This is a good example of how the way you define the problem matters. For example, he declares Starcraft to be at least NP-hard. But if one is allowed to use trigger events and some other aspects of the scenario editor, one can actually fully model a Turing machine in Starcraft. You do this in a straightforward way by giving trigger-based instructions to a unit (say, a probe) and having it move along a line where having some other specified unit in an adjacent spot represents a 1, and the unit's absence represents a 0. This is a much stronger result than the result he has. But it seems that his version of Starcraft as defined doesn't let you use event triggers (or at least he doesn't mention them). So he only gets the weaker result of Starcraft being NP-hard.

In the 1970s and 1980s, showing something was NP-hard used to be a big deal and there are a lot of papers from that time period. As the techniques improved one occasionally got some fun with someone showing that some new game was NP hard or NP complete (Tetris was done a few years ago as was Minesweeper). But these are really not considered to have any real insight. This paper is a bit more impressive because of the sheer number of games, and the systematic way he approaches the games especially his Metatheorem 1 and Metatheorem 2. Those two results are not obvious. Overall this is quite clever and makes for a fun read.

It's pretty obvious he's only talking about unmodified multiplayer Starcraft. Once you get into mods (custom maps), it's hard to really still call them "Starcraft" (the game) despite them running within "Starcraft" (the piece of software).

On the first traversal, the avatar can safely land on top of the enemy and dig a hole on the left. The AI will make the enemy fall in the hole, so the avatar may follow it, land on its top again, and proceed through a ladder, while the enemy remains forever trapped in the hole below. The avatar cannot attempt to traverse the gadget a second time without getting stuck in the hole where the enemy previously was.

If you played Doom without Godmode on, it was fairly hard. You had to spend some time doing it, and learn tactics. Not to mention a fair amount of luck.

I put Doom and Pac-Man into the class of games that are enjoyable because they are hard, and progressively harder. The first few levels of Pac-Man are easy enough, but much like a good strong Habanero sauce, the beginning is fine but you start to figure out that you may have made a mistake about it being easy.

All fun and frolics, as long as you realize that the computational complexity of the "solution" has little to do with how difficult the game is for humans to play...

I'm assuming that by "solution" here we mean an algorithm that will either win (or prove winning impossible) for any case, in a finite number of steps; as opposed to a heuristic or stochastic technique that could win more-often-than-not, but never prove un-winnability.

Last time I looked, the human brain used some sort of complicated neural net

Hey Pac Man is only 13 years old according to TFA! That's great that means i'm only in my 30s again!...oh wait a tick this is Slashdot where editors never edit...:-(

You wanna know what's hard? Finding out your twitch skills have gone further south than the D cups of a 43-year-old who's nursed 3 kids. I used to be death incarnate on games like DOOM, Mechwarrior 3 & 4, SoF 1 & 2; bodies would be piled up like cordwood. Then over the Xmas holiday my boys were like "Hey, you've got your new netbook, right? Why don't we fire up some TF2 and play together!" and boy, that was a rude awakening, let me tell you. The Heavy, the Soldier, it didn't matter what I played, I got the living shit slapped out of me. I got so irritated I decided to whip out my CC and buy 3 copies of HL:DM to show these kiddies a thing or two and... ooow, I ended up with so many rockets up my ass my little Freeman is probably walking funny to this very day. Didn't help that the oldest was announcing a play-by-play for the house while my GF laughed her ass off -- "Oh, he's dead, and he's dead again, oh look, he didn't even see the death coming that time, will he make it around the corner? Nope, he's dead" -- while the youngest made smartass remarks about how he'd "get me a walker with a controller built into the handles for my birthday".

So enjoy it while you can young ones, all of you that kick royal ass on MW or CoD just as we ruled the arcades will blink twice and the next thing you know your kids will be pimp smacking you on every game while making geriatric jokes. man it sucks getting old:-(

Hey Pac Man is only 13 years old according to TFA! That's great that means i'm only in my 30s again!...oh wait a tick this is Slashdot where editors never edit...:-(

Or, more likely, that Slashdotters don't read TFA. The 13 is a reference to the number of games researched, not their age. Pac-Man is noted as a game from 1980. Neither the introduction article (first link) nor the research paper itself (reached via the second link) suggests Pac-Man is only 13 years old.

When you get older, the thing to do is play Civilization-style games. The youngsters with their trendy ADHD won't last more than a few minutes. You can basically bore them into losing, just by playing nonstop for an hour or two and letting them know how far there may be to go before a result.

It's a bit like watching cricket, highly recommended if you're over 40 and need to look after a child or two. After half an hour tops they fall asleep and you can get happily drunk.

People that play these games every day can school people that don't every time because they have every nuance down. I played a game of HL2 where I was dying constantly and never saw it coming. Later I watched the guy that had been sniping me and he was hitting targets that I couldn't even see because they were so faint and far away and getting head shots every time. It was insane (I learned later that he won several half life tourneys and we didn't stand a chance - he was mid-30s at the time, so age is only

But maybe it's just me -- the newer games don't really seem to offer much chance of acquiring a "bag of tricks", because everyone just runs around like a chicken with its head cut off. That is what I miss about MechWarrior 3/4. I used to get accused of cheating until I would tell them my exact loadout, and they were always "Uhhh... what the hell? You only got one shot!" But that was the point: I stripped down my mech and made it into a walking blunderbuss. No secondary weapons, no lasers, just one big-ass shot

People solve Sudokus every day; a computer can do it in a flash - yet Sudoku is NP-Complete. In a 9x9 grid, the scale of the problem remains small enough that you can brute-force it. Make a bigger Sudoku -- or a bigger pacman maze -- and it would become significantly more difficult to solve.
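A minimal backtracking solver (my own sketch, not tied to any particular Sudoku implementation) shows why 9x9 is easy to brute-force even though the general problem is NP-complete:

```python
# Minimal backtracking Sudoku solver (an illustrative sketch): 0 marks an
# empty cell. Brute force is fine at 9x9; the NP-completeness result is
# about how this kind of search blows up as the grid size grows.
def ok(grid, r, c, v):
    box = [grid[r // 3 * 3 + i][c // 3 * 3 + j] for i in range(3) for j in range(3)]
    return v not in grid[r] and all(row[c] != v for row in grid) and v not in box

def solve(grid):
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if ok(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0   # undo and try the next value
                return False             # dead end: backtrack
    return True                          # no empty cells left: solved

empty = [[0] * 9 for _ in range(9)]      # even a blank grid fills in quickly
print(solve(empty))                      # True
print(all(sorted(row) == list(range(1, 10)) for row in empty))  # True
```

At 9x9 the search space is small enough that this finishes in a flash; scale the grid up and the same exhaustive search becomes intractable.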