
I am confused by the output of the following code:

uint8_t x = 0, y = 0x4a;
std::stringstream ss;
std::string a = "4a";


ss << std::hex << a;
ss >> x;

std::cout << (int)x << " "<< (int)y << std::endl;
std::cout << x << " "<< y <<std::endl;
std::cout << std::hex << (int)x <<  " " << (int)y << std::endl;

uint8_t z(x);
std::cout << z;

The output for the above is:

52 74
4 J
34 4a
4

When we replace the first line with:

uint16_t x = 0, y = 0x4a;

the output turns into:

74 74
74 74
4a 4a
J

I think I understand what happens, but I don't understand why it happens or how I can prevent it or work around it. From my understanding, the std::hex modifier is somehow undermined because of the type of x; that may not be exactly true at a technical level, but the extraction simply reads the first character it sees.

Background: The input is supposed to be a string of hexadecimal digits, each pair representing a byte (just like a bitmap, except as a string). I want to be able to read each byte and store it in a uint8_t, so I was experimenting with that when I came across this problem. I still can't determine the best method for this, so if you think what I'm doing is inefficient or unnecessary, I would appreciate knowing why. Thank you for reading,

user3402183

1 Answer

ss >> x

is treating `uint8_t x` as an unsigned char. The ASCII value of '4' is 52 in decimal: the extraction reads the first character of the string "4a" into x as if x were a character. When you switch it to uint16_t, the stream treats it as an unsigned short integer type and parses the hex digits instead. Same with y.

Dmitry Rubanovich
  • Thank you for the answer. This is the part I understand; what I don't understand is why. After all, am I not specifying that the literal value is in hex? Even if it treats it as a char, shouldn't it read the value 'J'? – user3402183 Oct 10 '15 at 05:14
  • If you look at the declaration of uint8_t, you'll probably see that it is a typedef of a typedef of a typedef..., but when you finally see the actual standard C++ type that is being used, it'll be unsigned char. There is a std::uint8_t type in C++ starting with C++11. But you have not indicated that you are using "std" namespace. Even if you are, implementing this type as standard is optional in the language. So it is probably free to treat it as a stand-alone type or as a typedef of unsigned char: http://en.cppreference.com/w/cpp/types/integer – Dmitry Rubanovich Oct 10 '15 at 05:19
  • 'J' is the output of the value of y as an unsigned char. At that point y has the value 0x4a, which is the ASCII value of 'J'. – Dmitry Rubanovich Oct 10 '15 at 05:26
  • I see, thank you for explaining. But when I try your method for the solution, I get error C2679 (https://msdn.microsoft.com/en-us/library/h1925w4w.aspx), which I don't understand, because the reference shows an overload for `unsigned int`. – user3402183 Oct 10 '15 at 06:48
  • Aah, yeah, it's probably dangerous. In fact, I am going to remove it. It could easily corrupt the stack if the operator were implemented: it would write 4 bytes to the stack even though there is only space for 1. It's still odd that it gives this particular error, since the operator is implemented. – Dmitry Rubanovich Oct 10 '15 at 08:08
  • I see. I just wrote the hex value to a `short`, then constructed an `unsigned char` (just like how `z` in the example is constructed) to get around it for now. It might not be the best way, but since I made sure no more than 2 digits (1 byte) are written to the `short` at a time, the conversion won't cause any significant data loss. Thank you for your help! – user3402183 Oct 10 '15 at 09:06