Correct answer is (a) O(log² n)
The best explanation: Let n be a k-bit integer in binary. The conversion algorithm is as follows. Divide 10 = (1010) into n. The remainder – which will be one of the integers 0, 1, 10, 11, 100, 101, 110, 111, 1000, or 1001 – will be the ones digit d0. Now replace n by the quotient and repeat the process, dividing that quotient by (1010), using the remainder as d1 and the quotient as the next number into which to divide (1010). This process must be repeated a number of times equal to the number of decimal digits in n, which is [log n/log 10] + 1 = O(k).
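To make the repeated-division procedure concrete, here is a minimal Python sketch; the function name binary_to_decimal_digits is my own, and Python's built-in divmod stands in for the bit-level long division whose operations the argument counts.

```python
def binary_to_decimal_digits(n: int) -> list[int]:
    """Return the decimal digits of n, least significant first (d0, d1, ...).

    Sketch of the algorithm described above: repeatedly divide by
    10 = (1010) in binary, collecting remainders.  divmod stands in
    for the bit-level long division being counted.
    """
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        n, r = divmod(n, 10)  # quotient replaces n; remainder is the next digit
        digits.append(r)
    return digits

print(binary_to_decimal_digits(2025))  # [5, 2, 0, 2], i.e. the digits of "2025"
```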
We have O(k) divisions, each requiring O(4k) bit operations (dividing a number with at most k bits by the 4-bit number (1010)). But O(4k) is the same as O(k), since constant factors don't matter in big-O notation. So we conclude that the total number of bit operations is O(k) · O(k) = O(k²). If we want to express this in terms of n rather than k, then since k = O(log n), we can write
Time(convert n to decimal) = O(log² n).
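As a quick sanity check on the O(k) division count, the loop below (reusing the sketch above) prints the number of division steps for the largest k-bit integer at a few sizes; the step count matches [log n/log 10] + 1 and grows linearly in k, so with O(k) work per division the total is quadratic in k.

```python
from math import log10

# Assumes binary_to_decimal_digits from the sketch above.
# The number of divisions equals the number of decimal digits of n.
for k in (8, 16, 32, 64):
    n = (1 << k) - 1                           # largest k-bit integer
    steps = len(binary_to_decimal_digits(n))
    predicted = int(k * log10(2)) + 1          # [log n / log 10] + 1 for n = 2^k - 1
    print(k, steps, predicted)                 # steps grows linearly in k
```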