 

Bit cost of bit shift

I updated a question I asked before with this, but since the original question was answered I'm guessing I should ask it separately in a new question.

Take, for example, the simple multiplication algorithm. I see in numerous places the claim that this is a Log^2(N) operation. The given explanation is that this is due to it consisting of Log(N) additions of a Log(N)-bit number.
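
For concreteness, here is the kind of loop I mean, as a rough Python sketch of the schoolbook shift-and-add algorithm (the function name and layout are mine, just for illustration):

    def shift_add_multiply(a, b):
        """Schoolbook binary multiplication: one addition of a shifted
        copy of a for each set bit of b, i.e. up to Log(N) additions."""
        result = 0
        shift = 0
        while b > 0:
            if b & 1:                         # this bit of b contributes a << shift
                result = result + (a << shift)
            b >>= 1                           # move on to the next bit of b
            shift += 1
        return result

    print(shift_add_multiply(13, 11))         # 143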

The problem I have with this is that, although that is true, it ignores the fact that each of those Log(N) numbers will be the result of a bit shift, and by the end we will have bit-shifted at least Log(N) times. Since a bit shift by 1 is a Log(N) operation, the bit shifts considered alone give us Log^2(N) operations.

It therefore makes no sense to me when I see it further claimed that, in practice, multiplication doesn't actually need Log^2(N) operations, since various methods can reduce the number of required additions. Since the bit shifting alone costs Log^2(N), I'm left confused as to how that claim can be true.

In fact, any shift-and-add method would seem to have this bit cost irrespective of how many additions there are.

Even if we use a perfect minimal bit encoding, any M-bit by N-bit multiplication will produce an approximately (M+N)-bit number (for example, 15 × 15 = 225 takes two 4-bit inputs to an 8-bit result), so M+N bits will have to have been shifted at least N times just to output/store/combine terms, implying a minimum of N^2 bit operations.

This seems to contradict the claimed operation counts for Toom-Cook etc., so can someone please point out where my reasoning is flawed?

asked Oct 18 '25 by Rhuaidhri Tynan

1 Answer

I think the way around this issue is the fact that you can compute

 a + (b << k)

without having to perform any shifts at all. If you imagine what the addition would look like, it would look something like this:

      b(n) b(n-1) ...  b(1)  b(0)    0    ...   0     0
   +       a(n)  ...  a(k+1) a(k)  a(k-1) ...  a(1)  a(0)

In other words, the last k digits of the result are just the last k digits of a, the middle digits consist of the sum of a subset of b's digits and a subset of a's digits, and the leading digits can be formed by rippling any carries up through the remaining digits of b. The total runtime is therefore proportional to the number of digits in a and b, plus the number of places to shift.
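
Here is a rough Python sketch of that idea over little-endian bit lists (the names add_shifted and to_int are mine, purely illustrative). It computes a + (b << k) in a single pass, writing each output position once, so the work is proportional to the number of digits plus k rather than k separate shift-by-one passes:

    def add_shifted(a_bits, b_bits, k):
        """Compute a + (b << k) over little-endian bit lists in one pass.
        Each output position is written once, so the cost is
        O(len(a) + len(b) + k) bit operations, not k shift-by-one passes."""
        n = max(len(a_bits), len(b_bits) + k) + 1   # +1 leaves room for a final carry
        out = [0] * n
        carry = 0
        for i in range(n):
            ai = a_bits[i] if i < len(a_bits) else 0                # digit of a
            bi = b_bits[i - k] if k <= i < k + len(b_bits) else 0   # digit of b, landed k places up
            total = ai + bi + carry
            out[i] = total & 1
            carry = total >> 1
        return out

    def to_int(bits):
        """Little-endian bit list -> integer, for checking the result."""
        return sum(bit << i for i, bit in enumerate(bits))

    a = [1, 0, 1, 1]   # 13 in little-endian bits
    b = [1, 1, 0, 1]   # 11
    assert to_int(add_shifted(a, b, 3)) == 13 + (11 << 3)   # 101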

The real trick here is realizing that you can shift by k places without doing k individual shifts by one place over. Rather than shuffling everything down k times, you can just figure out where the bits are going to end up and write them there directly. In other words, the cost of a shift by k bits is not k times the cost of a shift by 1 bit. It's O(N + k), where N is the number of bits in the number.

Consequently, if you can implement multiplication in terms of some number of "add two numbers with a shift" operations, you will not necessarily have to do O(log^2 n) bit operations. Each addition does O(log n + k) total bit operations, so if k is small (say, O(log n)) and you only do a small number of additions, then you can do better than O(log^2 n) bit operations.
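
And to tie it back to multiplication, here is a sketch (reusing add_shifted and to_int from above, same caveats) of multiplication expressed purely as these add-with-shift operations:

    def multiply(a_bits, b_bits):
        """Multiplication as a sequence of add-with-shift steps:
        for each set bit k of b, fold in a << k with one linear pass."""
        result = [0]
        for k, bit in enumerate(b_bits):
            if bit:
                result = add_shifted(result, a_bits, k)   # O(digits + k), no repeated shifting
        return result

    assert to_int(multiply(a, b)) == 13 * 11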

Hope this helps!

answered Oct 20 '25 by templatetypedef