penpen wrote:
This behaviour is a hint that the command line interpreter has an internal representation for 32 bit signed integer numbers (sint32) of (at minimum) the following form:
- 1 bit sign
- 32 bit unsigned integer number (uint32) value

The upper one is an endless loop, while the lower one produces no output. So 0x80000000 != -0x80000000 (***), although these numbers should be equal according to set /A and the normal 'behavior' of int32.
Looking at the decimal values, only the following can be derived from (*), (**), and (***):
counter <= 2147483647 < 2147483648, counter in { -2147483648, 2147483647 } (*)

I should have added that to my prior post, but I did not have much time, so I shortened the above post.

And my formulation "This behaviour is a hint, that..." should make clear that there is (at minimum) one additional possibility to explain the behavior: It may be that the internal representation of these numbers is done in long int ((s/u)int64) format. But as WinXP is 32 bit and the processors it is designed for have no native long support, I think there are just 2 values in use.

The only possibility to get these results is that the following inequalities are true:
counter <= 2147483648, counter in { -2147483648, 2147483647 } (*)

I disagree with penpen and agree with jeb. Penpen is assuming FOR /L uses the same parsing rules as SET /A. You must remember that SET /A has its own rules that differ from all the other contexts.

I believe FOR /L is using the same parsing rules that I outlined for the IF command:

dbenham wrote: All three numeric notations employ a similar strategy: First ignore any leading negative sign and convert the number into an unsigned binary representation. Then apply any leading negative sign by taking the 2's complement.

The big difference (from SET /A) is that overflow conditions no longer result in an error. Instead the maximum magnitude value is used. A positive overflow becomes 2147483647, and a negative overflow becomes -2147483648.

I don't know how I missed FOR /L and EXIT [/B] in my original post. I need to edit that original post with the new info.

Both a max end condition with a positive increment and a min end condition with a negative increment result in an infinite loop for the exact reason that jeb outlined. When the max value is incremented, it overflows and is interpreted as the min value, so the value never exceeds the end condition.

I looked at cmderror's post, and discovered something disturbing on XP.

I couldn't figure out why he was getting overflow with SET /A 0XFFFFFFFF on XP, whereas I was getting a value of -1. Then all of a sudden, I was getting overflow as well.

I've done some experiments on XP, and determined that:

1) A brand new CMD.EXE session will always treat SET /A 0XFFFFFFFF as -1 to start.

2) Once the "arithmetic processor" within CMD.EXE (whatever that is) detects an overflow condition under any context, then from then on, SET /A 0XFFFFFFFF will be treated as an overflow error. This condition can be triggered by overflow in SET /A, IF, FOR /L, FOR "SKIP=", FOR "TOKENS=", or variable expansion with substring operation.

dbenham wrote:I disagree with penpen and agree with jeb. Penpen is assuming FOR /L uses the same parsing rules as SET /A. You must remember that SET /A has its own rules that differ from all the other contexts.

No, I don't. I also use the normal int32 behavior and assume that it may be bounded to the maximum.

dbenham wrote:I believe FOR /L is using the same parsing rules that I outlined for the IF command:

dbenham wrote: All three numeric notations employ a similar strategy: First ignore any leading negative sign and convert the number into an unsigned binary representation. Then apply any leading negative sign by taking the 2's complement.

The big difference (from SET /A) is that overflow conditions no longer result in an error. Instead the maximum magnitude value is used. A positive overflow becomes 2147483647, and a negative overflow becomes -2147483648.

In addition, I used the special value 0x80000000 because its two's complement is again 0x80000000, so the sign doesn't have any effect. And if it is computed as you assume, then this should be the result for 0x80000000 and -0x80000000:
First ignore any leading negative sign and convert the number into an unsigned binary representation: 0x80000000, 0x80000000.
Then apply any leading negative sign by taking the 2's complement: 0x80000000, 0x80000000.

In addition, even if it is transformed to a decimal interchange format, the result should be the same, as -0x80000000 == -(-2147483648) = 2147483648. And even applying the bound to the maximum value will not change this number.

So if you were right, then the for loops I've given above should in all these cases do the same:

Edit: I've just seen that 0x80000000 is clamped to 2147483647 on Win 7, so this behavior can be explained with clamping, too.
Edit 2: Added "on Win 7" to the edit, as the XP 64 here (no patches, for whatever reasons, not mine) behaves differently.
Edit 3: With "this behavior" in Edit 1 I meant -0x80000000 != 0x80000000, but to be able to differentiate you need additional information (at least one additional bit).

I didn't doubt your pseudocode. Your pseudocode just doesn't account for the possibility that the internal representation uses an integer format different from sint32. The interpreter must be able to differentiate between 0x80000000 as unsigned int32 and as signed int32, which is not possible with sint32 as the only representation.

This is what I wanted to say in my above posts (maybe it was not clear enough).

penpen wrote:
In addition, I used the special value 0x80000000 because its two's complement is again 0x80000000, so the sign doesn't have any effect. And if it is computed as you assume, then this should be the result for 0x80000000 and -0x80000000:
First ignore any leading negative sign and convert the number into an unsigned binary representation: 0x80000000, 0x80000000.
Then apply any leading negative sign by taking the 2's complement: 0x80000000, 0x80000000.

In addition, even if it is transformed to a decimal interchange format, the result should be the same, as -0x80000000 == -(-2147483648) = 2147483648. And even applying the bound to the maximum value will not change this number.

So if you were right, then the for loops I've given above should in all these cases do the same:

Edit: I've just seen that 0x80000000 is clamped to 2147483647 on Win 7, so this behavior can be explained with clamping, too.
Edit 2: Added "on Win 7" to the edit, as the XP 64 here (no patches, for whatever reasons, not mine) behaves differently.
Edit 3: With "this behavior" in Edit 1 I meant -0x80000000 != 0x80000000, but to be able to differentiate you need additional information (at least one additional bit).

No, you misunderstood my rules. But I can see why as I was not clear about what I meant by "overflow condition".

Here is my original statement:

dbenham wrote: All three numeric notations employ a similar strategy: First ignore any leading negative sign and convert the number into an unsigned binary representation. Then apply any leading negative sign by taking the 2's complement.

The big difference (from SET /A) is that overflow conditions no longer result in an error. Instead the maximum magnitude value is used. A positive overflow becomes 2147483647, and a negative overflow becomes -2147483648.

What was left unsaid is that overflow is detected after the initial unsigned parsing if the 32nd bit (the sign bit of a signed integer) is set. I described this in my original post when talking about SET /A.

However, I think it is more precise to say overflow is detected if the original unsigned parsing requires more than 31 bits.

If no overflow is detected, and the minus sign is missing, then the original parsed value is used.

If no overflow is detected, and the minus sign is present, then the negative value is computed as the two's complement of the original unsigned value.

If overflow is detected, and the minus sign is missing, then the maximum 32 bit signed integer value is used: 2147483647.

If overflow is detected, and the minus sign is present, then the minimum 32 bit signed integer value is used: -2147483648.

Some additional rules:

The FOR /L parser reads a token as a number up until it detects a character that is invalid as a number. For example: 45HELLO is treated as 45. -123GOODBYE is treated as -123.

If the entire token is invalid, then it is treated as 0. For example: HELLO45 is treated as 0. --123 is treated as 0 (only one minus sign allowed).

The above model accounts for all of the behavior.

I think your model may also work, but I find it simpler to restrict the model to 32 bits.

Without looking at the cmd.exe code, we can never be sure exactly what is happening. But we can document the behavior, and a working model just makes it easier to do so.

:: put it here together, but all set /A instructions are executed in their own shell instance as the first command
Z:\>set /A 0x80000001
Invalid number. Numbers are limited to 32 bits of precision.

One more forgotten case: IF [NOT] ERRORLEVEL N. And it works surprisingly correctly. It detects whether the current errorlevel is greater than or equal to the given number. It does not accept non-numerical strings.