
I have a vector of 8-bit unsigned chars and a vector of 16-bit unsigned shorts:

std::vector<unsigned char> eight_bit_array;
std::vector<unsigned short> sixteen_bit_array;

sixteen_bit_array.resize(x_number_of_samples);
eight_bit_array.resize((x_number_of_samples)*2);

I have populated some data into the sixteen_bit_array; that part works fine. I want to know whether it is possible to typecast and store the sixteen_bit_array into the eight_bit_array, and how.

I have a method which returns the eight_bit_array by returning a pointer to unsigned char like so:

// A legacy method which returns the char array
unsigned char *GetDataSample(std::size_t sample_number) {
    return &eight_bit_array[sample_number];
}

So I want to typecast and store the sixteen_bit_array into the eight_bit_array, so that I can return 16-bit unsigned values without having to change the return type of my legacy method from unsigned char * to unsigned short *.

Please suggest how to do this.

TheWaterProgrammer

2 Answers


You could do some memcpy magic, but you need to make sure your types are actually 8 and 16 bits wide, respectively:

#include <cstdint>
#include <vector>
#include <cstring>

int main() {
    std::vector<uint16_t> uint16vals{11, 1, 0, 3};
    std::vector<uint8_t> uint8vals(uint16vals.size() * 2);
    std::memcpy(&uint8vals[0], &uint16vals[0], sizeof(uint16_t) * uint16vals.size());
}
Hatted Rooster

You can use bitwise operations:

std::pair<unsigned char, unsigned char> split(unsigned short n) {
    // mask off everything outside the low 8 bits
    unsigned char low = n & 0xFF;
    // shift the bits at 2^8 and above down into the low byte
    unsigned char high = n >> 8;
    return {high, low};
}

BTW, use fixed-size types (uint8_t, uint16_t) rather than implementation-dependent ones when you make assumptions about the size of a type.

EDIT

To merge two 8-bit integers back into a 16-bit one, you can do something like this:

unsigned short merge(unsigned char h, unsigned char l) {
    return (h << 8) + l;
}
nefas
  • Is it possible to reinterpret_cast the whole vector<unsigned short> to vector<unsigned char>, and thus avoid this complex bit magic? – TheWaterProgrammer May 17 '17 at 12:16
  • You can use reinterpret_cast to convert an unsigned short to an unsigned char, but you will lose the "high" bits. (A reinterpret_cast to convert unsigned char to unsigned short will give you garbage.) – nefas May 17 '17 at 12:20
  • So that is a reason I should choose the bit magic over a reinterpret_cast? – TheWaterProgrammer May 17 '17 at 12:25
  • If you don't want to use bitwise operations, you can use @GillBates's solution, but I do not see a way to use reinterpret_cast without losing (useful) information. – nefas May 17 '17 at 12:29
  • Thanks @nefas. Both solutions are good to know for anyone facing this problem. – TheWaterProgrammer May 17 '17 at 12:33