This problem is actually very straightforward, and does not require much cunning to solve. By definition, raising a number to a power is the same as multiplying the number by itself repeatedly (e.g. a^3 = a*a*a). However, polynomial expansion, a 7th-grade math principle, yields a trick that can be used to compute the partial result of each multiplication (az % x) without ever computing the full result of the exponentiation (a^b). A rudimentary proof of the trick:

az % x

Decompose a and z into portions which are multiples of x and portions which aren't: a = xm + n, z = xo + p

= (xm + n)(xo + p) % x

Polynomial expansion

= (x^2*mo + xmp + xno + np) % x

(y + z) % x = (y % x + z % x) % x. We need the extra % x at the end because the sum of the two remainders may itself exceed x.

= ((x^2*mo + xmp + xno) % x + np % x) % x

Extract common multiple

= (x(xmo + mp + no) % x + np % x) % x

xy % x = 0 by definition

= (np % x) % x

Two modulo operations in series are redundant

= np % x

As the result of each multiplication/modulus will always be less than x (which must fit in a word), there is no need for big numbers or for many multiplications per step (for comparison, the most straightforward method of multiplying big numbers requires n^2 primitive multiplications, where n is the number of digits in the big numbers). Thus this algorithm performs O(power) word-sized multiplications in the worst case.
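The process above can be sketched in Python (my own sketch, not code from the original problem), reducing mod x after every multiplication so each partial result stays below x:

```python
def pow_mod_naive(a, b, x):
    """Raise a to the b'th power mod x, reducing after every
    multiplication so each partial result stays below x."""
    result = 1
    for _ in range(b):
        result = (result * a) % x  # the np % x step from the proof above
    return result
```

This performs b multiplications and b modulus operations, matching the O(power) worst case.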

This process can be further optimized by noting that a^b % x = ((a^(b/c) % x)^c) % x when b is a multiple of c. Thus divide and conquer can be used with recursion to eliminate duplicate work in the case where the exponent is a multiple of c. With this optimization, the algorithm needs only Θ(log power) multiplications in the best case; the example I gave in the original post was a best-case scenario where the exponent was a power of two, and required only 62 multiplications and 62 divisions to solve.
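The divide-and-conquer version can be sketched like this (again my own sketch; it uses c = 2 and handles odd exponents with one extra multiplication, so it isn't limited to exponents with special divisibility properties):

```python
def pow_mod(a, b, x):
    """Compute a^b % x by halving the exponent: solve the half-sized
    problem, square it, and reduce mod x at every step."""
    if b == 0:
        return 1 % x
    half = pow_mod(a, b // 2, x)
    result = (half * half) % x   # ((a^(b/2) % x)^2) % x
    if b % 2:                    # odd exponent: one extra factor of a
        result = (result * a) % x
    return result
```

For an exponent that is a power of two this performs one squaring per halving of the exponent, in line with the 62-multiplication example above.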

Another problem in the same homework is proving much more difficult, however: given an unsorted array of values, write a divide-and-conquer algorithm to determine whether a single value appears more than half as many times as there are elements in the array. The algorithm must be generalizable in that elements can only be compared for equality; you can't tell whether one element is greater or less than another, which rules out sorting or building any kind of data structure to count them. Those constraints, taken together, are really brutal at resisting solutions. I've figured out a fact about the problem that I think will be important in solving it (a condition that, if false, guarantees that such a value doesn't exist; if the condition is true, however, it's still possible a majority value doesn't exist), but I haven't quite figured out how to solve it yet.


## 3 comments:

In retrospect, it probably would have been simpler to just prove it from the distributive property of division (for real numbers) - (a * b) / c = (a / c)(b / c).

Umm, no, (a/c)(b/c) = (a*b)/c^2 -- not the same thing at all. I'm not sure what you mean in this comment.

Anyway, just wanted to point out (and I kind of hope that this will be covered in your class, too) that you don't require the exponent to have any particular divisibility properties to get the sort of improved performance you describe. In general, it should be Theta(log_2(exponent)).

Oh, and for the other problem, you didn't state what the required running time was. There's a trivial O(n^2) solution, for instance. I'm guessing you are meant to do it in O(n*log(n))?

"Umm, no, (a/c)(b/c) = (a*b)/c^2 -- not the same thing at all. I'm not sure what you mean in this comment."

I wasn't trying to prove that (a/c)(b/c) = (a*b)/c^2; rather ab % c = (a % c)(b % c) % c.

"Anyway, just wanted to point out (and I kind of hope that this will be covered in your class, too) that you don't require the exponent to have any particular divisibility properties to get the sort of improved performance you describe. In general, it should be Theta(log_2(exponent))."

Binary exponentiation (I just looked up algorithms for that on Wikipedia after you said that - I usually try to avoid looking up answers except when I get really stumped)?

"Oh, and for the other problem, you didn't state what the required running time was. There's a trivial O(n^2) solution, for instance. I'm guessing you are meant to do it in O(n*log(n))?"

There was no performance constraint in the problem. At the time I was looking for any solution that fit the constraints (and I refused to look it up on Google). Oddly, both I and several friends (all of whom are in comp sci, and at least one of whom specializes in high-level math) managed to miss the relatively simple solution; maybe we were overthinking the problem. Friends did come up with some humorous solutions (though not fitting the constraints), such as some involving matrix math.

I think the O(n^2) solution you're thinking of is the brute-force one. That's definitely trivial, but we had to use a divide and conquer algorithm. Yes, the solution did turn out to be O(n log n).
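For reference, the standard divide-and-conquer majority algorithm (my own sketch, not necessarily the exact homework solution) uses only equality comparisons: find a majority candidate in each half, then count both candidates over the whole range, which gives O(n log n) comparisons overall:

```python
def majority(arr):
    """Return the value appearing more than len(arr)/2 times, using only
    equality comparisons, or None if no such value exists."""
    def count(value, lo, hi):
        return sum(1 for i in range(lo, hi) if arr[i] == value)

    def helper(lo, hi):
        if hi - lo == 1:
            return arr[lo]
        mid = (lo + hi) // 2
        left, right = helper(lo, mid), helper(mid, hi)
        if left == right:
            return left
        # A majority of the whole range must be a majority of at least
        # one half, so checking both candidates over the full range
        # suffices.
        for cand in (left, right):
            if cand is not None and 2 * count(cand, lo, hi) > hi - lo:
                return cand
        return None

    return helper(0, len(arr)) if arr else None
```

The key invariant is that each recursive call returns the majority of its subrange if one exists (or None otherwise), so by pigeonhole an overall majority always survives as one of the two candidates at every merge step.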
