Why is this program showing the following output?

#include <bitset>
#include <cstdio>
#include <iostream>

int main()
{
    std::bitset<8> b1(01100100); std::cout<<b1<<std::endl;
    std::bitset<8> b2(11111111); std::cout<<b2<<std::endl; // see, this variable
                                                           // has been assigned
                                                           // the value 11111111,
                                                           // yet during execution
                                                           // it takes the value
                                                           // 11000111. The same
                                                           // is true of b1.
    std::cout << "b1 & b2: " << (b1 & b2) << '\n';
    std::cout << "b1 | b2: " << (b1 | b2) << '\n';
    std::cout << "b1 ^ b2: " << (b1 ^ b2) << '\n';
    getchar(); // wait for a keypress so the console window stays open
    return 0;
}

This is the OUTPUT:

01000000
11000111
b1 & b2: 01000000
b1 | b2: 11000111
b1 ^ b2: 10000111

First, I thought there was something wrong with the header file (I was using MinGW), so I checked with MSVC, but it showed the same thing. Please help!

Shravan

1 Answer

You initialize the std::bitset objects with the numbers 01100100 and 11111111. Simply look at the output:

01000000
11000111

Both are wrong. You are initializing std::bitset with the decimal number 11111111, which is 101010011000101011000111 in binary. And the first number, 01100100, begins with a 0, so the compiler treats it as an octal literal: 1001000000001000000 in binary. In both cases std::bitset<8> keeps only the least significant eight bits of that value.
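
To see the truncation directly, here is a minimal sketch (the bitset<24> width is my own choice, just wide enough to show both values in full) that prints each literal at full width and at eight bits:

#include <bitset>
#include <iostream>

int main()
{
    // 11111111 is a decimal literal; bitset<8> keeps only the low 8 bits.
    std::cout << std::bitset<24>(11111111) << '\n'; // 101010011000101011000111
    std::cout << std::bitset<8>(11111111)  << '\n'; // 11000111

    // 01100100 starts with 0, so it is an octal literal (294976 in decimal).
    std::cout << std::bitset<24>(01100100) << '\n'; // 000001001000000001000000
    std::cout << std::bitset<8>(01100100)  << '\n'; // 01000000
    return 0;
}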

Here is the correct code:

#include <bitset>
#include <iostream>
#include <string>

int main()
{
    // Constructing from a string of '0'/'1' characters sets the bits literally.
    std::bitset<8> b1(std::string("01100100")); std::cout<<b1<<std::endl;
    std::bitset<8> b2(std::string("11111111")); std::cout<<b2<<std::endl;
    std::cout << "b1 & b2: " << (b1 & b2) << '\n';
    std::cout << "b1 | b2: " << (b1 | b2) << '\n';
    std::cout << "b1 ^ b2: " << (b1 ^ b2) << '\n';
    return 0;
}

And the correct output:

01100100
11111111
b1 & b2: 01100100
b1 | b2: 11111111
b1 ^ b2: 10011011
m0nhawk
    Alternatively, if he's familiar with hexadecimal at all: `std::bitset<8> b1(0x64)` and `std::bitset<8> b2(0xff)` ought to work. And the translation from any 4 binary bits to hex values is pretty trivial to do by hand. – KChaloux Apr 08 '13 at 13:03
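
For reference, a minimal sketch of the hexadecimal variant KChaloux describes; each hex digit maps to exactly four binary digits, so 0x64 is 01100100 and 0xff is 11111111:

#include <bitset>
#include <iostream>

int main()
{
    std::bitset<8> b1(0x64); // 0x64 == binary 01100100 == decimal 100
    std::bitset<8> b2(0xff); // 0xff == binary 11111111 == decimal 255
    std::cout << b1 << '\n'; // prints 01100100
    std::cout << b2 << '\n'; // prints 11111111
    return 0;
}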