I compiled and ran your program with the x* variables being defined as both int and double and got the same result both times. The results actually look good (accurate). The only negative number that I'm seeing is a negative fraction at array[2][6] and array[2][7], but these appear after your calculating loop where you subtract so a negative value is "plausible".

Floating point arithmetic on Intel chips is inherently "iffy". The bottom bits of any calculation are suspect, so you often get an answer slightly different from what you expect. That's the reason you often see floating point value checks that resemble:

However, in your case you don't need floating point arithmetic. The largest value that I see is 1.9E+12. That's too big for 32 bits, but it will definitely fit into 64 bits.

Most C compilers these days support 64-bit math. You should be able to define your variables as "__int64" or "long long", depending on your compiler.

By the way, that was quite clever putting array[][] on the stack and letting the compiler generate the values at the time the function is called. When I first saw the program I didn't realize that you'd taken that approach and was sure that the program wouldn't compile. :)

You're going to have overflow problems if you try using integers to compute such large numbers.

I would declare the variables to be of type float or double.

It would also help to move the initialization into separate statements, then you can step thru the statements with a debugger and see what's happening.

It would also help to #define sx(e) (x##e*x##e) and #define sy(e) (y##e*y##e) so you can replace all those x11*x11 with sx(11);
that will cut down the visual clutter and the chance of typos by a considerable factor.

Even better, if there's some pattern to the array, write some for loops to do the initialization; you may be able to reduce all that complex babble to
a line or two of code.

Now the compiler will generate the initial values for you. A byproduct is that your executable will be smaller and faster, but the real benefit is that any value that is too large to fit into a long will generate a compilation warning. You can then experiment with data types. float should work fine, as should 64-bit integers.

It's generally poor practice to enumerate (or define) variables with these names as they are names commonly used in programming, but in this case we should be fine.

One complication will be type casting. The values generated by enum{} will be treated as ints so the mathematics being done to generate the initial values is integer math. The compiler is very nicely telling us that we've overflowed the int, so we need to force the compiler to do "long" arithmetic.

This can be accomplished by changing the enum{} to #define, or by incorporating a macro that forces the recast. grg99 suggested a macro to compute the square of the value. It's a good idea and also simplifies the code.
