There is only one move in which White can force a draw - and to find out what it is, you'll have to RTFA.

Nah, I'm just pulling your leg. Here you go...

We now know the exact outcome of this position, assuming perfect play, of course. I know your next question, so I am going to pre-empt it: there is only one move that draws for White, and that is, somewhat surprisingly, 3.Be2. Every other move loses by force.

Anybody really interested in the details will still RTFA anyway and the rest of us won't be left hanging with a teaser.

Whenever Rybka evaluates a position with a score of +/– 5.12 we don't need to search any further, we have our proof that in the continuation there is going to be a win or loss, and there is a forced mate somewhere deep down in the tree. We tested a random sampling of positions of varying levels of difficulty that were evaluated at above 5.12, and we never saw a solution fail. So it is safe to use this assumption generally in the search.
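To make the pruning rule concrete, here's a toy sketch of what it amounts to (my own illustration, not Rybka's actual code; `evaluate` and `children` are hypothetical stand-ins supplied by the caller):

```python
# Treat a large engine evaluation as terminal during game-tree search.
# Assumption for this sketch: evaluate() scores from the side to move.
WIN_THRESHOLD = 5.12  # pawn equivalents; scores past this are trusted as decisive

def search(position, depth, evaluate, children):
    """Negamax search that stops whenever the static evaluation
    already exceeds the 'practically decided' threshold."""
    score = evaluate(position)
    if depth == 0 or abs(score) >= WIN_THRESHOLD:
        return score  # treated as won/lost (or horizon reached): no deeper search
    moves = children(position)
    if not moves:
        return score
    return max(-search(m, depth - 1, evaluate, children) for m in moves)

# Tiny hand-made tree: line 'a' is already scored as lost for the side
# that reaches it, so the searcher takes the evaluation at face value.
tree = {'root': ['a', 'b'], 'a': [], 'b': []}
evals = {'root': 0.0, 'a': -6.0, 'b': 0.5}
print(search('root', 2, evals.get, tree.get))  # 6.0
```

The whole argument in the article is that the sampled positions above the threshold never turned out to be misjudged, so the shortcut is assumed safe in general.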

If it were up to physicists rather than mathematicians, the chess pieces would be approximated by spheres and six sigma would be regarded as a theory having been proven. Mathematical proofs are "somewhat" more rigorous; any sigma rating at all would invalidate the proof. This lies somewhere between those two extremes.

It's hard to know if you mean that statement technically or somewhat more generally.
The Probabilistic Method [wikipedia.org] is rigorous because it is certain. There's no chance involved in the result, only in a highly technical sense during the proof.

It's certainly not like checking a few (or even a lot of) cases that all seem to work.

Yep. For example, all evidence suggests that chess is either a draw or a win for White. However, there's no proof that Black can't win. In theory, any of the 20 opening moves (8x2 pawn moves and 2x2 knight moves) could expose a subtle weakness in White's defense that Black could use to force checkmate with perfect play. That's maybe a 0.00000001% chance, but until it's proven otherwise, it's still possible.

Technically there's fancier wording you should use for that statement. We can't evaluate the chance because we don't have sufficient information. Given our current models of the game, we can estimate the chance of a statement being incorrect. A major problem in the model could render your estimate inaccurate (just like the problems one deals with in experimentation). On the whole, that's a rare occurrence. It's just very different from a proof, which is exactly correct because it is built entirely from axioms and valid logical deductions.

In shogi (Japanese chess), there was a recent development in the kaku-gawari (Bishop Exchange) family of openings in which Black not only gains the advantage, but does so by deliberately ceding a move to White!

By electing not to advance a pawn into a key square, Black can instead deploy a variety of other pieces into that square, giving Black more strategic options in spite of the delay in piece development. This came as a shock to the shogi pros -- kaku-gawari is one of the most popular, most closely studied openings.

Rajlich analyzed a small subset of the ~10^100 possible continuations, to the point where Rybka (Rajlich's chess program) showed a score of +/- 5.12, which he describes as being "99.99999999% certain" of the outcome. Assigning percentages to scores like that is tricky, often impossible, so it's hard to say how accurate the statement is. I'm sure Rajlich didn't intend the statement to be interpreted strictly. But if we take it at face value, with a 1/10^10 chance that a line goes the other way and 10^100 opportunities for that to happen, we don't need a fancy statistics degree to see that it is highly probable that not all of those conclusions are accurate. This analysis of the King's Gambit isn't anything like Appel and Haken's computer proof of the four color problem, which was exhaustive and is grudgingly accepted by the mathematical community.
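Taking the two figures at face value (they are the numbers quoted above, not measurements), the arithmetic is quick to check:

```python
import math

p_wrong = 1e-10   # assumed chance any single pruned conclusion is wrong
n = 1e100         # rough count of pruned continuations

expected_failures = p_wrong * n               # mean number of bad conclusions
log10_p_all_ok = n * math.log10(1 - p_wrong)  # log10 of P(every conclusion holds)

print(expected_failures)  # ~1e+90 expected wrong conclusions
print(log10_p_all_ok)     # ~-4.3e89: P(all correct) is effectively zero
```

In other words, under that naive reading you'd expect on the order of 10^90 bad conclusions, which is exactly why the percentage shouldn't be read literally.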

The thing is, particles aren't out there to trick us (or so we've been thinking for the past few centuries, and there's not much reasonable doubt of this). Their uncertainty is "known" (sort of), and as long as you sample with the right kind of distribution of starting configurations, you're fine.

However, in this case they are basically saying "whenever Rybka thinks a branch is hard enough, there is no way to win at that branch, and we know there's no way to win because Rybka can't win." Well, uh, yeah.

"Whenever Rybka evaluates a position with a score of +/– 5.12 we don't need to search any further, we have our proof that in the continuation there is going to be a win or loss, and there is a forced mate somewhere deep down in the tree. We tested a random sampling of positions of varying levels of difficulty that were evaluated at above 5.12, and we never saw a solution fail. So it is safe to use this assumption generally in the search."

Chess programs usually score a position in "pawn equivalents". Having one pawn more is a +1, unless your opponent has positional compensation. Having one less would be a -1. Other examples:
- a knight or bishop is worth roughly 3 points
- a rook is worth roughly 5 points
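As a bare-bones illustration, a material-only count looks like this (positional compensation, the hard part, is deliberately ignored):

```python
# Pawn-equivalent material count from White's point of view.
# Uppercase letters are White's pieces, lowercase are Black's.
PIECE_VALUES = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9, 'K': 0}

def material_score(pieces):
    score = 0
    for p in pieces:
        value = PIECE_VALUES[p.upper()]
        score += value if p.isupper() else -value
    return score

# White has an extra rook over an otherwise symmetric setup:
print(material_score(['K', 'R', 'P', 'P', 'k', 'p', 'p']))  # 5
```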

In practice, skilled players will win a +5 position reliably. A +3 is usually enough as well. So even if Rybka's evaluation is a bit off, I would not see much chance of winning from the inferior position.

The score is about equivalent to being a rook down without compensation. Even strong club players could beat computers from such positions. Of course, what it really hinges on is Rybka's ability to evaluate the notion of compensation, but I can believe that the percentage of positions Rybka evaluates at -5.12 or worse in which there exists a win for the 'weaker' side is very small. So, yes, not a proof, but a strong practical indicator.

Let's assume that White is down by 5+ points in evaluation. Even in this case, Black may still want to force perpetual check (e.g., because not doing so would lead to a forced line where he loses even more points further down), or White may still be able to force stalemate. You cannot assume that, just because an intermediate node in the game search tree has some arbitrary value (other than specifically a win, loss, or draw), the tree below it can be pruned. You can limit these issues, but not eliminate them entirely.

A perpetual check would be evaluated as 0.00.
Stalemate only happens in the endgame, when there are few pieces on the board. In those cases, going down the tree is much faster than in the middlegame, so there would be no reason to stop the evaluation at that point. Any time a line reaches the endgame stage, it would be more sensible to let the analysis run into the tablebases.
I expect that Rajlich knows this.
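The repetition case is mechanical enough to sketch. This is just an illustration (not engine code) of how a perpetual check ends up scored as 0.00:

```python
from collections import Counter

def adjudicate(history):
    """history: list of hashable position keys, most recent last.
    Returns 0.0 if the current position has occurred three times
    (threefold repetition -> draw), else None (keep searching)."""
    if Counter(history)[history[-1]] >= 3:
        return 0.0
    return None

# A perpetual: the checking position keeps recurring until it's a draw,
# no matter what the material count says.
line = ['start', 'check', 'escape', 'check', 'escape', 'check']
print(adjudicate(line))  # 0.0
```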

This is just telling you that you'd lose against Rybka. But then, unless you're a top grandmaster having a good day, you already knew that. Even then, if you decided to play King's Gambit, Rybka's letting you know in advance that you are not having a good day.

Well... more accurately, Rybka has most probably already beaten you. Unless you allow it to use the entire result tree as an "opening book", it still needs to calculate each move... meaning it needs those 2800 cores plus the support cluster... assuming you are a top grandmaster having a good day, that is.

This is just telling you that you'd lose against Rybka. But then, unless you're a top grandmaster having a good day, you already knew that.

They lose too, and hardly anyone bothers anymore, because it's like asking whether a human or a dragster would win the 100m dash. Even a mobile phone plays at grandmaster level these days; against a regular desktop, humans would occasionally make a draw, and if someone built a dedicated supercomputer again, like Deep Blue, they'd lose every game. The last recorded human win without handicap I could find was back in 2004, when Karjakin beat Deep Junior. Everyone already knows how such a competition would end.

Try playing a chess AI that isn't based on books of known games. They can be smart when they have the knowledge from all the best chess games by the best human players, but are still retarded when playing without a database of known games and openings.

Try playing a chess AI that isn't based on books of known games. They can be smart when they have the knowledge from all the best chess games by the best human players, but are still retarded when playing without a database of known games and openings.

Some of the chess engines will actually let you try this. Do it and let us know how "retarded" they are...

So, back about 10-12 years ago, when I was doing master's work on HPC, one of the class assignments a group of us got was HPC gaming: chess, Connect 4, Othello, etc.

Let me tell you, it was stupid: it knew no opening books or anything. It was also really damn sharp -- it did a far better job of beating everyone on our team, at whichever game they each preferred, than any (consumer-grade) computer opponent of the time.

If the machine is fast enough to search sufficiently deep in the tree from the starting board position, it doesn't need an opening book.

I was keeping up with this a bit at the time, and as far as I could tell, there was a period where Kasparov seemed to have an absolutely unique ability to beat computers. At that point, the computers had already decimated everybody else, and it stopped being "computers vs humanity" and became "computers vs Kasparov".

They finally cracked the Kasparov problem, but it had been over for everybody else, including the grandmasters, for at least a decade.

An advantage of 5 is equivalent to an extra rook, or a minor piece (bishop or knight) and two pawns. No decent chess player would play on with such a deficit; instead he would resign to spare his opponent the tedium (and himself the distress) of playing on when the outcome is a foregone conclusion.

There are exceptions in practice. In 5-minute chess (with each player restricted to 5 minutes for the entire game) it is perfectly feasible to play on a piece down and perhaps even win, because the opponent may not have the time to convert the advantage.

... so long as you still have a chance. The computers haven't reached professional level yet and certainly won't be able to compute the whole of the game in advance, even after a given opening, in the next decades.

In 2005, chess program The Baron played two Chess960 games against Chess960 World Champion Peter Svidler; Svidler won 1½–½. The chess program Shredder, developed by Stefan Meyer-Kahlen of Germany, played two games against Zoltán Almási from Hungary; Shredder won 2–0.

Yeah, computers are better at chess than humans. And cars are better at marathons than humans.

If the development of automobiles did not take away the interest in running, what reason is there to assume that the development of chess programs will eventually take away the interest in playing chess?

You don't play Tic-Tac-Toe anymore once you come up with a good heuristic to either win or at worst tie. If the game is "solved", most don't see the point in playing: there is little fun when no challenge is left and all the outcomes are known beforehand. There is no "fun" in beating a computer chess program; it is much more rewarding to play a human, because the outcome is much more volatile and interesting.
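Tic-Tac-Toe is in fact small enough to solve outright rather than heuristically; a brute-force minimax confirms the perfect-play value is a draw:

```python
from functools import lru_cache

# The eight winning lines on a 3x3 board, as index triples.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
        (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def solve(board, player):
    """Game value for X under perfect play: +1 X wins, 0 draw, -1 O wins."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if '.' not in board:
        return 0
    nxt = 'O' if player == 'X' else 'X'
    values = [solve(board[:i] + player + board[i + 1:], nxt)
              for i, ch in enumerate(board) if ch == '.']
    return max(values) if player == 'X' else min(values)

print(solve('.' * 9, 'X'))  # 0: with best play from both sides, it's a tie
```

The whole tree is tiny (well under a million nodes even without memoization), which is exactly the difference between Tic-Tac-Toe and chess.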

In line with what Skydyr said, joseki are opening sequences, but they aren't fixed; they can be inverted, or played in any corner depending on the rest of the board. So even if you are playing joseki, you can deviate from them, or change them around, without it becoming an auto-loss (unlike, say, chess).

Many lines of chess aren't an auto-loss and merely put you at a disadvantage according to expert analysis. I have played plenty of chess and plenty of Go, and if you want to be good at both you need to learn a lot of static lines, and joseki is very much like the opening book in chess. It's played out, it's book knowledge, and it's kinda boring to learn.

However, there's a very easy fix for the opening book problem in chess: play one of the random opening variants. Don't get me wrong, I actually prefer Go to chess.

In Europe, we mostly get our knowledge about Go from old Japanese books that were translated to English in the 1970s or so. When you compare the whole joseki and fuseki theory of that age, and the professional games that resulted from it, with professional games today, or with games of the Showa era (when Go Seigen was in his prime and invented the shinfuseki), or games of the Meiji Restoration (when Honinbo Shusaku lived and basically created formal fuseki theory), you'll be hard pressed to conclude that the theory has stood still.

The same could be said of chess. The simple fact is that much of the game is based on static lines that you'll have to learn if you want to be good. Even early on, it becomes obvious that you need to learn about the basic lines of a 3-3 invasion.

I agree that book openings are worse in chess, given the lower branching factor and sharper lines, but you're not going to escape memorization by switching to Go. Also, playing one of the random variants of chess is an easy way to avoid the opening book problem.

What I'd love to know is the skill level gap between humans and computers when comparing small Go boards and large ones. Do larger boards make this gap wider? Is the gap growth linear with the board size? If the gap widens, is there a board size so big where the gap doesn't widen anymore?

On smaller boards for beginners (9x9), the computers are much better than at the usual ones (19x19) - though I wouldn't put a number on it (I'm not sure about the very recent program/computer pairings.)

Other board sizes are hard to compare, as a) human players are unfamiliar with them and b) generally just don't play them. There is the intermediate beginners' board at 13x13, which is commonly used, as well as the Tibetan 17x17. On larger boards the games become impractically long; they already last 200-300 moves.

They didn't calculate all possible moves, but skipped every branch where analysis showed an advantage high enough for one party to be "absolutely sure" to win. So while the algorithm is very sophisticated, it technically didn't solve the King's Gambit.

The difference is that "solve" in this context is not a general English word but a specific, well-defined term. I'm pretty sure the technical meaning of "solving" a game, or a position within a game, requires a proof. The meaning of proof is somewhat stronger than overwhelming evidence. We are pretty sure P!=NP, but we don't have a proof. You cannot publish a paper or write a thesis that says "I'm pretty sure P!=NP".

Note: I'm not saying this work is uninteresting, just that those pointing out that solve is being used incorrectly are justified.

For sure! That's why I was sure to make the point about there being value in the work. Again I wanted to say a few words about how "solving" is different from "making a very convincing argument about".

I think that paper would not get accepted. A probabilistic statement would indicate there is a random aspect that determines the outcome. Reviewers would probably ask what is the random event that would determine if P does or does not equal NP.

I'm definitely not making the confusion you think I am. I have studied Computer Science at the PhD level at the University of Alberta, which I believe has the strongest games research group in the world. I will admit to not being an expert in games myself, but I am quite confident that when people in this area say "solved", it means something specific, something stronger than "obviously true to everyone in the world". It requires proof in the rigorous, mathematical/algorithmic sense. I'm pretty sure any researcher in the area would tell you the same.

Sorry. You cannot publish a paper whose conclusion is "I'm pretty sure P!=NP". This is different from assuming it's true in order to make other statements, such as, "assuming P!=NP, my cool new crypto method has features people would care to have."

You mean what's the difference between math and science? Mathematical truths are known for sure, scientific models are just approximations based on experiments and may or may not hold up under a certain set of specific conditions.

Rybka was stripped of its world computer chess championship titles after it was found that the author plagiarized the chess engines Fruit (free software, GPL, the current base of GNU Chess) and Crafty (open source). Even so, ChessBase keeps selling this stolen engine.

I think it's a great thing he stole. Imagine if mathematicians treated their discoveries as property, not allowing others to tread. It may not be nice that he didn't give credit, but should I care, if what he writes is valid?

For the benefit of those reading this who are not familiar with the Rybka situation, I want to point out that the author of Rybka did do a lot of original work on it too -- it wasn't a "complete clone" like some of the more blatant plagiarism cases in the computer-chess world have been. But he still did plagiarize certain parts of Crafty and Fruit, which gave him a significant advantage over the other competitors in the WCCC and other tournaments. These tournaments are really competitions between programmers to see who can make the best-playing engine. And in order for them to be fair, each team entering the tournament must write their engine entirely by themselves, and disclose the origins of any third-party code used in their engine. Rybka versions that contained third-party code from Crafty and Fruit were entered into several of these tournaments without declaring that this code was used, and without getting the permission of the authors of Crafty and Fruit (either explicitly, or via some sort of license grant). In fact, Rybka's use of this code violated both Crafty's license (a non-commercial-use license which also has a clause prohibiting use of Crafty code in a tournament without permission from Crafty's author, Dr. Bob Hyatt) and Fruit's license (GPL v2).

If you read Slashdot, you know that stealing is OK, because
1) It costs more than is reasonable
2) You disagree with their license or copy protection scheme
3) The MPAA/RIAA are a bunch of jerks
4) You promise you will support the artist directly by some kind of donation or going to their show or referring your friends
5) It's a try-before-you-buy situation, and you'll pay later if you like the program
6) Stealing software doesn't deprive others of the product

If you read Slashdot, you know that copyright infringement is OK, because

Fixed that for you.

You got modded funny, but your list reflects pretty well why most reasonable and decent people think copyright infringement is okay, or at least a petty crime rather than on a par with terrorism et al. The good news is that it's harmless only for personal use: nobody really thinks it is okay to copy someone else's software and sell it.

1) It costs more than is reasonable
2) You disagree with their license or copy protection scheme
3) The MPAA/RIAA are a bunch of jerks
4) You promise you will support the artist directly by some kind of donation or going to their show or referring your friends
5) It's a try-before-you-buy situation, and you'll pay later if you like the program
6) Stealing software doesn't deprive others of the product

Just a side note: if your reason is (2), then it is better to send your proposed license changes back to the company's legal department. Even better, don't buy the product at all.

Yes, perhaps, but on the other hand, as these programs *are* open source, there is nothing wrong with their use in the manner described in the article. I hope we can agree that the result is interesting regardless of whether we consider the researcher's ethics to be up to our standards?

The paper proposes that, contrary to popular opinion, Rybka probably did not misappropriate parts of Fruit. It was enough for me to tend toward believing Rybka and not believing 34 panelists on ICGA, but I'll let you judge for yourself. If you know the background of the SCO vs Linux case, especially how the pundits made their pronouncements, you will appreciate this paper more. I can definitely say that I no longer unequivocally conclude that Rybka stole from Fruit.

Right on the surface, the King's Gambit doesn't look like a very good idea for white, throwing away a well-placed pawn on your second move. Apparently this was considered a good idea for a long time, though I (a mediocre-at-best player) don't see how it could work.

As white, the only advice you need from this study is "Don't do it." As black, the advice appears to be "Take the pawn if offered. The best they can do at that point is a draw, and if they deviate from that line at all, they lose."

Assuming you're a great player, of course. I'm sure that I'd still get massacred if a real player were to play the King's Gambit against me.

"After Bobby Fischer lost a 1960 game[4] at Mar del Plata to Boris Spassky, in which Spassky played the Kieseritzky Gambit, Fischer left in tears[citation needed] and promptly went to work at devising a new defense to the King's Gambit. In Fischer's 1961 article, "A Bust to the King's Gambit", he brashly claimed, "In my opinion the King's Gambit is busted. It loses by force."[5] Fischer concluded the article with the famously arrogant line, "Of course White can always play differently, in which case he merely loses differently. (Thank you, Weaver Adams!)"[6] The article became famous.[7][8]

Remarkably, Fischer later played the King's Gambit himself with great success,[9] including winning all three tournament games in which he played it.[10][11][12] However, he played the Bishop's Gambit (1.e4 e5 2.f4 exf4 3.Bc4) rather than the King's Knight Gambit (3.Nf3), the only line that he analyzed in his article."

Apparently this was considered a good idea for a long time, though I (a mediocre-at-best player) don't see how it could work.

For a long time, declining a gambit was considered poor sportsmanship; if the opponent offered you a roller coaster ride, you were supposed to take it. Go through The Immortal Game [wikipedia.org] and see what they considered a masterpiece in 1851. Oh, and as relevant to the story: it's a King's Gambit Accepted, and won by White.

Another point of relevance: in that particular game, White was down by more than 5.5 points of material with no significant positional advantage in return, only a checkmate 7 moves in the future. I don't know if the computer in the article would have chalked that up as a loss for White and moved on, by its criteria.

You're missing the point. The computer can quickly find a mate in 7 from where it's starting its analysis. But it only analyzes move sequences to a certain depth, and checkmates, captures etc beyond that depth won't figure into its evaluation of that sequence of moves.

Winning sacrifices may be evaluated as bad because the computer doesn't explore the line far enough to see the compensation, and time-wasting moves may be evaluated as good because they push problems a move further into the future, beyond where the search stops.
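A toy tree (made up for illustration, nothing chess-specific) shows this horizon effect in action: at the search horizon a line is valued by its static score, so a refutation one ply deeper goes unseen:

```python
# Static scores are from the maximizing player's point of view.
TREE = {
    'root': ['quiet', 'trap'],
    'quiet': [],          # genuinely equal
    'trap': ['mated'],    # looks like +2.0 at the horizon...
    'mated': [],          # ...but is actually lost one ply deeper
}
STATIC = {'root': 0.0, 'quiet': 0.0, 'trap': 2.0, 'mated': -1000.0}

def value(node, depth, maximizing=True):
    children = TREE[node]
    if depth == 0 or not children:
        return STATIC[node]
    vals = [value(c, depth - 1, not maximizing) for c in children]
    return max(vals) if maximizing else min(vals)

print(value('root', 1))  # 2.0 -- the shallow search happily grabs the trap
print(value('root', 2))  # 0.0 -- one ply deeper exposes it, so 'quiet' wins
```

Real engines mitigate this with quiescence search and deeper extensions on forcing lines, but the horizon never fully disappears.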

Another point of relevance: in that particular game, White was down by more than 5.5 points of material with no significant positional advantage in return, only a checkmate 7 moves in the future. I don't know if the computer in the article would have chalked that up as a loss for White and moved on, by its criteria.

No. Rybka's scores for this game stay in the range [-1.17, 3.75] until the last few moves, so it will have analysed it, and presumably decided that the entire branch it sits in is a mistake for Black.

Their methods look OK, and their conclusion on the King's Gambit looks OK, but I hold that chess is a deterministic but non-predictable system that is sensitive to initial conditions, i.e. a chaotic system. All chaotic systems can be represented by relatively simple mathematical equations, even if "relatively simple" means "still very complicated" and/or "not known at this time".

Their reasoning that the system will tend to some ratio of wins:draws:losses very quickly is one I can see being true for many cases, but not necessarily all.

As a tournament player and mathematician (3rd year): you're looking at this in a completely wrong way. :)

Their methods look OK, and their conclusion on the King's Gambit looks OK, but I hold that chess is a deterministic but non-predictable system that is sensitive to initial conditions, i.e. a chaotic system. All chaotic systems can be represented by relatively simple mathematical equations, even if "relatively simple" means "still very complicated" and/or "not known at this time".

Chess isn't really chaotic. In some situations, I'd wager a lot (really a lot) that one side can't do much but lose. These situations are rated with high scores (say... +/- 5).

Let's start easy with a soccer analogy: two good national teams are playing, but from a certain point on, 5 players of one team must have their shoelaces tied together (roughly equivalent to a -5 score, I'd claim). Your bet would be? Yes, there are upsets, but you get the idea.

"Initial conditions" in mathematics refers to when you start analyzing the system, not to when the system starts to exist. Since each board position is a fresh calculation, each board position is an initial condition. So although the first board is common, he's not really looking at only one initial condition. Yes, I know, mathspeak isn't intuitive and is often contrary to "common usage", but there ya go.

First, the King's Gambit has not technically been "solved", for the most rigorous definition of "solved". Unlike, say, checkers, there are still lines (i.e. series of moves) within the King's Gambit that have not formally been examined.

Second, we are strictly speaking about the King's Gambit Accepted. That is, white begins with e4 (King's pawn forward two spaces), black replies classically with e5 (King's pawn up two spaces), white then gambits the f-pawn (King's bishop's pawn up two spaces), and black captures the f-pawn, accepting the gambit. As TFA mentions, the King's Gambit Declined has not been examined nearly as thoroughly.

Third, all of this is only somewhat relevant to actual chess playing, and only at the very highest levels of play; the average FIDE Master (i.e. a well above average tournament player, though nowhere near being among the 1,000 best players in the world) need not remove the King's Gambit from his repertoire because it has been "solved". This has, historically, been one of the most dynamic openings in chess, with tons of opportunities for tactical tomfoolery and psychological pressure. When we talk about "perfect play", or "near perfect play", we're already reaching beyond the level of world champions.

Fourth, while not every line has been thoroughly analysed, the ones that haven't are irrelevant. An advantage, in chess, is calculated as a difference measured in pawns. So, if the black player has all the same pieces as his opponent plus an extra pawn, all other things being equal, we evaluate the position as -1 (i.e. from the perspective of white, the position is minus one pawn). Pieces other than pawns are weighed differently, even when we are solely looking at material differences. Traditionally, knights and bishops are said to be worth three pawns, rooks five pawns, and the queen nine pawns. However, the actual position of the pieces affects their worth; a knight very near the centre of the board is often worth more than a rook (a knight near the centre can have up to eight possible moves, whereas a knight in a corner has only two). Thus, a position that has been evaluated as +/- 5.12 means that one player has more than a rook's worth of advantages over his opponent. Even in low-level tournament play, it is very reasonable to assume that the advantaged player will win the game; at grandmaster level, this is so certain that it is considered impolite, even downright offensive, if the disadvantaged player refuses to resign.
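The mobility point is easy to verify by counting a knight's legal moves on an empty 8x8 board:

```python
# Knight move offsets; a move is legal if it stays on the 8x8 board.
OFFSETS = [(1, 2), (2, 1), (2, -1), (1, -2),
           (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_moves(rank, file):
    """Number of legal knight moves from (rank, file), 0-indexed."""
    return sum(0 <= rank + dr < 8 and 0 <= file + df < 8
               for dr, df in OFFSETS)

print(knight_moves(0, 0))  # 2 -- a cornered knight
print(knight_moves(3, 3))  # 8 -- a centralized knight
```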

Fifth, while different computer chess engines do evaluate positions differently, I have yet to come across a position about which the analyses of different engines diverge by more than 2 pawns. An evaluation of +/- 5.12 by a top-notch engine can safely be assumed to be conclusive, since most of what I said in the above paragraph also applies to an evaluation of +/- 3.0. Whatever else it may be, Rybka is certainly a top-notch engine.

Finally, it is true that Rybka's having reached its current strength relies on what are at best described as questionable appropriations of others' source code and algorithms. Nonetheless, the presented findings have an intrinsic value that is not dependent or reliant on notions of intellectual property or publicity. I am frankly ashamed by posters who have suggested that this article ought not have been publicized by slashdot because of its source. Knowledge is knowledge, period, and while it is both sensible and necessary to place ethical restrictions on scientific methodology, it is simply insane to deprive oneself and others of data that has, for better or worse, already been gathered.

Third, all of this is only somewhat relevant to actual chess playing, and only at the very highest levels of play; the average FIDE Master (i.e. a well above average tournament player, though nowhere near being among the 1,000 best players in the world) need not remove the King's Gambit from his repertoire because it has been "solved". This has, historically, been one of the most dynamic openings in chess, with tons of opportunities for tactical tomfoolery and psychological pressure. When we talk about "perfect play", or "near perfect play", we're already reaching beyond the level of world champions.

If chess is so hard that WORLD CHAMPIONS frequently and regularly make dumb moves -- yes, that's what not playing perfectly is defined as -- then why should it attract any interest as a discipline at all? It's like wheelchair ballet.

As a GAME -- an opportunity for excitement, aggression, a way to humiliate your opponent -- sure, it makes sense to play chess. But so does poker. As a mathematical discipline -- we're outclassed as a species. We have no business studying chess anymore.

"On March 31 the author of the Rybka program, Vasik Rajlich, and his family moved from Warsaw, Poland to a new appartment in Budapest, Hungary. The next day, in spite of the bustle of moving boxes and setting up phone and Internet connections Vas, kindly agreed to the following interview, which had been planned some months ago."

How can computer professionals not spot such an obvious April Fools joke? Chess openings cannot be "solved" by a classical computer, and even if they could be, the result would not be that White has only one move that saves the draw after two fairly normal moves.

This is an interesting technical exercise. However, it won't stop me playing this opening as White. This opening leads to all sorts of exciting games in all sorts of situations.

It can also have a great psychological effect, not greatly diminished by this new study of it. If you need to win a particular game, playing the Kings Gambit with White sends a strong "OK, buddy, this is an all or nothing game!" message to your opponent.

Just because a computer has figured out a way to win doesn't mean that a typical human player could find it over the board.

Extraordinary claims require extraordinary evidence -- no examples were provided on the page itself -- yet many of the comments above uncritically accept that this is true, only disputing the semantics.

On the page itself:"On March 31 the author of the Rybka program, Vasik Rajlich, and his family moved from Warsaw, Poland to a new appartment in Budapest, Hungary. The next day, in spite of the bustle of moving boxes and setting up phone and Internet connections Vas, kindly agreed to the following interview, which had been planned some months ago."

Another example of an April Fools post is here [chessbase.com], which is more obvious due to its premise. The King's Gambit post (a day late) is plausible; but that's all. You wouldn't be taken seriously if you mentioned it to a grandmaster.

While chess will face difficulties as computers and chess software become more advanced, we are a long way from writing chess off as we did checkers, and probably won't for a number of decades -- and even then, not every position will be solved.