I would like to discuss some aspects of these notes. First question: on page 4, what is meant by "The stuff in parenthesis doesn't amount to much (though it is fun to think about how much there is)."?

Also, the number of steps given for multiplication is 2n^2. I think it should be more. Could anyone please explain this to me?

If I can understand this, I am hoping I can move further in my study of cryptography.

Hi, I've been looking at the pages in your PDF on addition and multiplication, and I disagree somewhat with what they say. That said, I'm not sure it really matters how many steps it takes to do n-digit multiplication or n-digit addition, especially when it pertains to encryption. Understanding the approximate number of operations required to evaluate a mathematical expression can be useful, but only if you actually need that information.

To understand this better, it helps to know what the processor or virtual environment does. For example, if you are writing assembly, you need to know how the processor and memory handle wide numbers. A 32-bit processor handles 32-bit unsigned integer values well, or 31-bit signed integer values (assuming the processor uses two's complement to represent positive and negative numbers), but decimal values are only 16-bit, with 15 bits of decimal space. Processors also use registers to store flags and temporary values. All of this is covered in computer architecture, but much of it is unimportant, or at least less important, unless you are relying on small-scale hardware (e.g. 64 KB of program memory, 0-16 KB of processor cache) or low-level programming such as assembly, C, or C++. If you are using a virtual environment such as those provided by .NET/Mono or Java, or any of a number of scripting languages such as Ruby, Perl, or Python, then 32 bits of precision is exactly what you expect it to be, because these environments "fix" processor precision issues by using their own variables to store and check data.

Back to the math: if I add two one-digit numbers together, like 1 + 1, then I get 2. IMO 1 + 1 is a single operation. In computer terms, three operations would take place to perform the addition:

Place 1 in the left operand register

Place 1 in the right operand register

Add the left operand register and the right operand register, placing the result in the result register (placing the result is part of the adder ALU's logic by design, so it is not a separate operation)

This is pretty much the lowest level of operation you can get to. In a 32-bit system, you can expect values up to 32 bits in size to operate similarly, as in: 4294967295 + 4294967295 = 8589934590. Since 4294967295 is the maximum unsigned value that a 32-bit integer can hold, if that were the bit-width limit for numbers in your system, you would get an integer overflow, resulting in 4294967294 (one less than either of the numbers being added, because the top bit carried out by the add is lost):

   11111111111111111111111111111111
 + 11111111111111111111111111111111
 ----------------------------------
  111111111111111111111111111111110
  ^
  |--- This bit is carried by the add operation because all of the binary fields are full.

[A 32-bit register can only hold 11111111111111111111111111111110, which omits the leading 1, resulting in 4294967294.]
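This wraparound is easy to reproduce. Here is a small Python sketch that emulates a 32-bit unsigned add by masking the result to 32 bits (Python's own integers are arbitrary-precision, so the mask stands in for the register width):

```python
# Emulate a 32-bit unsigned add by masking to the register width.
MASK_32 = 0xFFFFFFFF  # 2**32 - 1 = 4294967295

a = 4294967295  # maximum 32-bit unsigned value
b = 4294967295

full = a + b              # 8589934590: needs 33 bits
wrapped = full & MASK_32  # the carried-out 33rd bit is discarded

print(full)     # 8589934590
print(wrapped)  # 4294967294
```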

Now, typically, we think of simple multiplication as a single operation, for example 3 * 1 = 3 or 12 * 2 = 24, but when you break it down we are still adding. One method of performing multiplication is to add the multiplicand (in the case of the first expression, 1) to itself repeatedly, the number of times given by the multiplier (in the case of the first expression, 3) minus one. So 3 * 1 becomes 1 + 1 + 1. Likewise the second expression becomes 2 + 2 + 2 + 2 + 2 + 2 + 2 + 2 + 2 + 2 + 2 + 2.
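The repeated-addition method can be sketched in Python like this (`multiply_by_addition` is my own name for it; note that starting the running total at 0 costs one more addition than counting the `+` signs written out by hand):

```python
def multiply_by_addition(multiplier, multiplicand):
    """Multiply two non-negative integers using only addition:
    add the multiplicand into a running total, once per unit of
    the multiplier (so 3 * 1 becomes 1 + 1 + 1)."""
    total = 0
    for _ in range(multiplier):
        total += multiplicand  # one addition per iteration
    return total

print(multiply_by_addition(3, 1))   # 3
print(multiply_by_addition(12, 2))  # 24
```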

One property that is important to know about both multiplication and addition is that the order of the operands does not matter. For example, 3 * 1 is exactly the same as 1 * 3; likewise 12 * 2 is the same as 2 * 12. In both cases the answer is the same no matter which order the operands occur in. For addition, 1 + 3 is the same as 3 + 1, and 12 + 2 is the same as 2 + 12. However, when performing symbolic arithmetic it is often simpler to place values containing more digits above those with fewer digits.

When we do multiplication symbolically, if the second operand has more than one digit, then we have to keep track of new operands for the addition step. If there are three digits in the second operand, there will be three new operands (partial products) to keep track of and add after the multiplication is done. For example:

So in the first example we find that we must first multiply each digit by each digit. That is 3 * 3 = 9 operations. Then we must add the resulting numbers. The largest number contains six digits, so we will have six addition operations to perform. There are also two carries that occur during the addition, which you could count as extra operations. So for the first multiplication problem, evaluated symbolically, we get 9 + 6 + 2 = 17 operations total, compared to 512 operations (since 512 is the smaller operand) if we use a loop, or a single operation if the CPU supports the width of the numbers.

The second example takes 3 + 4 = 7 operations, compared to 4 using a loop, and again a single operation, since the CPU supports the width of these values.
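This counting can be automated. The Python sketch below is my own code, using the scheme above: one operation per digit-by-digit multiplication, one addition per column of the widest partial product, and one per carry while summing the columns. It reproduces the 7 operations of the 4 * 512 example; the count for the 3-digit example depends on its exact digits.

```python
def count_symbolic_ops(a, b):
    """Count operations in a pencil-and-paper multiplication of a * b:
    one per digit-by-digit multiplication, one addition per column of
    the widest partial product, and one per carry while summing."""
    digits_b = [int(d) for d in str(b)]
    mults = len(str(a)) * len(digits_b)
    # partial products, shifted as in long multiplication
    partials = [d * a * 10 ** shift
                for shift, d in enumerate(reversed(digits_b))]
    columns = len(str(max(partials)))
    carries, carry = 0, 0
    for i in range(columns + 1):
        total = carry + sum((p // 10 ** i) % 10 for p in partials)
        carry = total // 10
        if carry:
            carries += 1
    return mults + columns + carries

print(count_symbolic_ops(512, 4))  # 7, matching the second example
```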

Looking back now, the PDF document that you referenced states, in reference to naive multiplication, that it takes 2n^2 steps, where n is the number of digits (assuming both operands have the same number of digits), to evaluate a symmetrical multiplication expression. For two 3-digit operands that would be 2(3^2) = 2(9) = 18. We found that the number of operations to multiply those two specific numbers was 17, so 18 was close. The document doesn't say anything about when the operands are asymmetrical, as in my second example, 4 * 512, but that resulted in only 7 operations, which 2n^2 does not predict. Let's take two symmetrical 4-digit numbers and multiply them symbolically.

Given the hypothesis of Hal Sadofsky, we should require 2(4^2) = 2(16) = 32 operations to complete this multiplication symbolically. On the multiplication side we use 4 * 4 = 16 operations; on the addition side, the widest value is seven digits wide, so we know there will be at minimum seven operations required to add the four operands together, and that ends up being all we need, resulting in 16 + 7 = 23 operations. 23 is pretty distant from 32.

So, to answer your questions: I still don't know what he means by "The stuff in parenthesis doesn't amount to much (though it is fun to think about how much there is)." And the number of steps, IMO, is actually less than 2n^2. Hope that helps explain it a little. And maybe I'm wrong.

-- Thu Dec 23, 2010 2:31 pm --

As a follow-up, I realized that page three of the PDF document states: "Consider addition of two digits, or multiplication of two digits to be a single operation." I had assumed the addition of the symbolic results of the multiplication to be a single operation for each column. But based on this statement, given the number of operands, there are (n - 1) * w addition operations, where n is the number of operands and w is the number of digits in the widest operand.

Given this information, take the four-digit multiplication problem again:

In the addition portion of the multiplication problem there are (4 - 1) * 7 = 21 operations of adding a single digit to a single digit (x + y, or x + y + 1 with a carry). That makes the total 16 + 21 = 37. So if this is the intended counting, then you are correct that 2(n^2) is low for the number of operations.
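Under this per-digit counting, the total for two n-digit operands can be written as a small Python function (a sketch assuming, as in the example, that the widest partial product is 2n - 1 digits wide, i.e. no carry out of the last digit multiplication):

```python
def refined_count(n):
    """Operations to multiply two n-digit numbers when every
    single-digit addition counts separately: n*n digit
    multiplications plus (n - 1) * (2n - 1) single-digit additions."""
    return n * n + (n - 1) * (2 * n - 1)

for n in (3, 4, 5):
    print(n, refined_count(n), 2 * n * n)
# n = 4 gives 37 versus 2(4^2) = 32, as computed above
```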

You can always expect the number of operands in the addition portion to equal the total number of digits of the widest multiplicand. And the width of the widest addition operand will be either 2w - 1 or 2w, where w is the width of the widest multiplicand, depending on whether there is a carry out of the last symbolic multiplication.

So, at most, the number of operations for a multiplication problem would be (m^2) + (2m * (m - 1)), where m is the width of the widest multiplicand. If you take carries into account, then you can have at most (((m - 1) * m) + (2m - 1)) - 1 carries as well.

The minimum number of operations would be:
Multiplication and addition: (m^2) + (2m * (m - 1))
Carries: 0
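Those bounds are straightforward to compute; here is a quick Python sketch of the two formulas above, taking m as the width of the widest multiplicand:

```python
def min_ops(m):
    """Minimum: all the multiplications and additions, no carries."""
    return m * m + 2 * m * (m - 1)

def max_ops(m):
    """Maximum: the same, plus the stated worst-case carry count."""
    carries = ((m - 1) * m + (2 * m - 1)) - 1
    return min_ops(m) + carries

print(min_ops(4), max_ops(4))  # 40 58
```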

For some reason I did not get an email notification of your reply to my post, so I couldn't read it before now. I am still reading what you have written, and I see that at one point you agree that the number of steps needed should have been greater than what the author said. My opinion was based on counting the possible carries that may arise while adding all the numbers in the corresponding column after the multiplication of individual digits, but according to my advisor it's nothing to worry about, as the writer may have based his result on some assumption he didn't state. The main point is that multiplications take more time than additions, and the amount of time needed for multiplication increases quadratically [O(n^2)] as the length of the input increases, so minor additions may be ignored. Moreover, adding a carry to the sum of two bits may be considered a single operation (as it is in a full adder).

However, I am still reading up on the topic, so if I have any new developments I will let you know. Thank you for your reply.