
In C++, if I do a logical OR (or AND) on two bitsets, for example:

bitset<1000000> b1, b2;
//some stuff
b1 |= b2;

Does this happen in O(n) or O(1) time? Why?

Also, can this be accomplished using an array of bools in O(1) time?

Thanks.

citizenCode

3 Answers


It has to happen in O(N) time, since a given processor can only process a finite number of bits in any given chunk of time. In other words, the larger the bitset, the longer each operation will take, and the increase will be linear with respect to the number of bits in the bitset.

You also end up with the same problem using an array of bool: while each individual operation itself takes O(1) time, the total amount of time for N elements will be O(N).
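
For intuition, here is a hedged sketch of how an implementation typically carries out the OR one machine word at a time; the `SimpleBitset` type and its member names are made up for illustration, not the actual standard-library source:

#include <cstddef>
#include <cstdint>

// Illustrative only: a fixed-size bitset that packs N bits into
// 64-bit words and ORs one word per loop iteration.
template <std::size_t N>
struct SimpleBitset {
    static const std::size_t kWordCount = (N + 63) / 64;
    std::uint64_t words[kWordCount] = {};  // hypothetical storage

    SimpleBitset& operator|=(const SimpleBitset& other) {
        // One pass over N/64 words: still O(N) in the number of bits,
        // just with a roughly 64x smaller constant factor.
        for (std::size_t i = 0; i < kWordCount; ++i)
            words[i] |= other.words[i];
        return *this;
    }
};

An array of bool, by contrast, typically stores one flag per byte, so the same OR touches N elements rather than N/64 words; either way the growth is linear.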

Jason
  • The finite amount could be the maximum possible size of a bitset, thus giving O(1). Of course, that will never happen. – Pubby Mar 05 '12 at 04:24
  • Right, that would take a theoretical machine with infinite resources, at which point you start playing with NP-type problems. – Jason Mar 05 '12 at 04:29
  • There's always dreaming of million core processors. – SinisterRainbow Mar 05 '12 at 04:44
  • Actually you would need at least a single-processor machine with registers that could each store an infinite amount of memory ... so pretty much impossible. – Jason Mar 05 '12 at 04:48
  • @Jason No, infinite memory cannot exist in C++. Rather pointless discussion I suppose as it's irrelevant in practice. – Pubby Mar 05 '12 at 07:17
  • So is it always one bit at a time? Or can you OR 32 bits (or some other number) at a time? When comparing ints, like `if(int == int)`, does that also happen one bit at a time? – citizenCode Mar 05 '12 at 08:19
  • @citizenCode : You can compare up to the size of an internal processor register at a single time ... so if you have a 64-bit native processor, then typically 64 bits can be compared at once atomically. On other platforms, the number of bits that can be compared at once will differ depending on the architecture ... for instance, some embedded architectures may have 16-bit `int` types, but will still do comparisons atomically 8 bits at a time rather than all 16 bits at once (see the sketch after this thread). – Jason Mar 05 '12 at 14:59
  • If my bitset is 64 bits or less, do operations become atomic (on x64 CPU)? – xyz Sep 25 '16 at 19:44
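
Expanding on the register-width point in the comments above, here is a hedged sketch of how a comparison over a large bit array proceeds word by word rather than bit by bit; the function name and layout are illustrative, not what any particular standard library does:

#include <cstddef>
#include <cstdint>

// Compare two bit arrays packed into 64-bit words. Each iteration
// compares 64 bits in one machine operation, but a million-bit array
// still needs ~15625 iterations, so the comparison remains O(n).
bool equal_bits(const std::uint64_t* a, const std::uint64_t* b,
                std::size_t word_count) {
    for (std::size_t i = 0; i < word_count; ++i)
        if (a[i] != b[i])
            return false;  // early exit on a mismatch; worst case is still linear
    return true;
}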

It's impossible to perform a logical operation (e.g. OR or AND) on arbitrary arrays of flags in unit time. True Big-Oh analysis deals with runtime as the size of the data tends to infinity, and a Core i7 is never going to OR together a billion bits in the same time it takes to OR together two bits.
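
One way to convince yourself empirically is to time the same OR at two sizes and watch the runtime grow roughly in proportion. A rough sketch, assuming an unoptimized build (an aggressive optimizer may elide the loop since the result is unused) and an arbitrary repetition count:

#include <bitset>
#include <chrono>
#include <cstddef>
#include <iostream>

template <std::size_t N>
double time_or() {
    static std::bitset<N> a, b;  // static: megabit bitsets are too big for the stack
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < 1000; ++i)
        a |= b;
    std::chrono::duration<double> elapsed =
        std::chrono::steady_clock::now() - start;
    return elapsed.count();
}

int main() {
    // If OR were O(1), both timings would be about the same; instead,
    // expect the second to be roughly ten times the first.
    std::cout << time_or<1000000>() << " s\n";
    std::cout << time_or<10000000>() << " s\n";
}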

Adam Mihalcin

I think it needs to be made clear that Big O is a boundary, an asymptotic boundary (the minimum time required cannot be less than the function's Big O), and it states the order of magnitude of the speed of a computation. So think about how an array works: if you can do the operation in one computation or so, or in some known number of steps that is small and independent of N, then it is constant time. If you need to iterate in some manner, it isn't; in this case all the bits need to be checked, and there is no shortcut for a bitwise OR, so N bits need to be computed and therefore it's O(n). [It's actually a tighter bound than that, but we're dealing with just Big O.] The array itself stores N bits.

In fact, few things are really O(1). An index lookup at a known address through a pointer can be O(1) (if you already know what you are looking up). But if you have M things that each need to be looked up in constant time, the total is O(M) * O(1) = O(M).

This is a consequence of how modern computers work, since most things are processed sequentially (multi-core helps, but doesn't come close to affecting Big O notation yet). There is, of course, the computer's ability to process a whole word of bits in parallel, but even that is just a constant factor: O(n) / O(64) is still O(n).
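
To see that word-level parallelism only changes the constant, compare ORing an array of bool element by element with ORing the same bits packed into 64-bit words. Both sketches below are linear in the number of bits; the packed version just does about 1/64 as many iterations (the function names are illustrative):

#include <cstddef>
#include <cstdint>

// One iteration per flag: O(n) in the number of bits.
void or_bools(bool* dst, const bool* src, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        dst[i] = dst[i] || src[i];
}

// One iteration per 64-bit word: n/64 iterations, but still O(n)
// in the number of bits, since the 64x factor is a constant.
void or_words(std::uint64_t* dst, const std::uint64_t* src,
              std::size_t word_count) {
    for (std::size_t i = 0; i < word_count; ++i)
        dst[i] |= src[i];
}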

  • Big O is an _asymptotic_ upper bound. For instance, a quicksort will do `N log(N)` comparisons at _minimum_. Also, the constant factors _don't_ have to be less than `N` to be constant time. – Mooing Duck Mar 05 '12 at 06:38