I was wondering which of the following scenarios achieves the highest compression ratio when a lossless algorithm is applied to binary data containing repeated patterns.
Am I correct to assume the compression ratio depends on two properties of the pattern?
- Pattern size
- Number of times the pattern repeats
For example, consider the binary data:
- 10 10 10 10 10 10 10 10: pattern (10), size 2, repeated 8 times
- 1001 1001 1001 1001: pattern (1001), size 4, repeated 4 times
- 00000000 11111111: pattern (0), size 1, repeated 8 times, followed by pattern (1), size 1, repeated 8 times. Or, viewed another way: pattern (00000000), size 8, repeated once, followed by pattern (11111111), size 8, repeated once.
Which of the above achieves the highest and lowest compression ratios?
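For reference, here is a quick sketch of how one might compare such cases empirically using Python's zlib (a DEFLATE-based lossless compressor). The bits are stored one character per byte purely for illustration, which is not how packed binary data would be stored, and on inputs this tiny the compressor's header overhead dominates, so the absolute ratios are not meaningful; the point is only the method of comparison.

```python
import zlib

# The three example bit sequences, one character per bit
# (an assumption for illustration; real binary data packs 8 bits per byte).
samples = {
    "pattern 10, repeated 8": "10" * 8,
    "pattern 1001, repeated 4": "1001" * 4,
    "eight 0s then eight 1s": "0" * 8 + "1" * 8,
}

for name, bits in samples.items():
    compressed = zlib.compress(bits.encode())
    # Ratio of original characters to compressed bytes; > 1 means shrinkage.
    ratio = len(bits) / len(compressed)
    print(f"{name}: {len(bits)} chars -> {len(compressed)} bytes "
          f"(ratio {ratio:.2f})")
```

In practice one would repeat the pattern many more times before the differences between the three cases show up, since a general-purpose compressor needs enough input to amortize its fixed overhead.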
Thanks in advance.