I need some help with AT&T assembly. I've loaded some data into memory like below (hex and dec):
(gdb) x/8xb &buffer_in
0x8049096: 0x03 0x02 0x10 0x27 0xe8 0x03 0x64 0x00
(gdb) x/8db &buffer_in
0x8049096: 3 2 16 39 -24 3 100 0
Let's say the first byte = number count, the second = each number's length in bytes, and then we get (first * second) bytes of numbers. For this example: 3 numbers, 2 bytes each; the first number is the bytes 16 39 (little-endian, so 0x2710 = 10000), and so on. I have no problem implementing it, I can grab each byte and add.
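For what it's worth, here is roughly how I grab and add them, just a sketch for this example's 2-byte little-endian numbers (movzbl zero-extends each byte, so nothing gets sign-extended; next_num is just a label I picked):
movzbl buffer_in, %ecx #%ecx = number count (3 here)
leal buffer_in+2, %esi #%esi -> bytes of the first number
xorl %eax, %eax #running sum
next_num:
movzbl (%esi), %ebx #low byte, zero-extended
movzbl 1(%esi), %edx #high byte, zero-extended
shll $8, %edx
orl %edx, %ebx #%ebx = 0x2710 = 10000 on the first pass
addl %ebx, %eax #add to the sum
addl $2, %esi #advance to the next number
loop next_num #dec %ecx, loop while nonzero
# afterwards %eax = 10000 + 1000 + 100 = 11100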
The question is: why the hell does the hex byte 0xE8 show as -24 in decimal right after loading the data into memory (code below)?? It should be 232 in decimal.
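For reference, the arithmetic does line up with a signed two's-complement byte: 0xE8 = 232 unsigned, and 232 - 256 = -24. So I assume the byte in memory is untouched and it's just gdb's d format printing bytes as signed; asking for unsigned bytes,
(gdb) x/8ub &buffer_in
should show 232 in that slot. I just want to be sure nothing mangles the data on load.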
The code for loading the data is very simple:
.align 32
SYSEXIT = 1
SYSREAD = 3
SYSWRITE = 4
STDOUT = 1
STDIN = 0
.bss
buffer_in: .space 10000 #input buffer (10000 bytes)
buffer_in_len = . - buffer_in #= 10000
.text
.global _start
_start:
#STDIN READ
movl $SYSREAD, %eax #syscall number: read(2)
movl $STDIN, %ebx #fd 0 (stdin)
movl $buffer_in, %ecx #destination buffer
movl $buffer_in_len, %edx #max bytes to read
int $0x80
debug: #label to break on in gdb
movl $0, %edi #do nothing
movl $SYSEXIT, %eax #exit; %ebx is still 0, so status 0
int $0x80
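In case it matters, this is how I assemble, link, and run it (32-bit, hence the flags; prog.s and input.bin are just the names I use here):
as --32 -g -o prog.o prog.s
ld -m elf_i386 -o prog prog.o
./prog < input.bin
Then I open it with gdb ./prog, break on debug, run with the same redirect, and examine buffer_in as above.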