
Code:

/* Emit an 8-byte GDT segment descriptor: 32-bit base, 20-bit limit in
   4 KB units (0xC0 sets the granularity and 32-bit flags), with 0x90
   marking the segment present. */
#define SEG(type,base,lim)                  \
.word (((lim) >> 12) & 0xffff), ((base) & 0xffff);  \
.byte (((base) >> 16) & 0xff), (0x90 | (type)),     \
    (0xC0 | (((lim) >> 28) & 0xf)), (((base) >> 24) & 0xff)

I know it's a segment descriptor structure.

But I don't understand this part: (((lim) >> 12) & 0xffff)

Why does it need a right shift by 12 bits?

I need help.

Alex Liu
  • The >> 12 effectively divides the limit by 4096 (2^12). When the granularity bit is set in the descriptor, the limit is expressed in units of 4096 bytes (4 KB) rather than in bytes. So the >> 12 converts `lim` to a number of 4 KB units, and the & 0xffff then keeps only the lower 16 bits. The remaining upper 4 bits of the limit are computed with (((lim) >> 28) & 0xf); the limit is stored as a 20-bit value in the descriptor. – Michael Petch Oct 12 '18 at 05:26
  • See: https://pdos.csail.mit.edu/6.828/2007/readings/i386/s05_01.htm – Michael Petch Oct 12 '18 at 05:36
  • Oh! I see. Much appreciated! – Alex Liu Oct 12 '18 at 05:46

1 Answer


The reason is archaic. Descriptors were 8 bytes on the 286, where the limit was encoded in 16 bits in 16-bit protected mode. When the 386 arrived, the descriptors were not widened; each entry is still 8 bytes. With the base now taking 32 bits, there was not enough space left to encode the limit in 32 bits as well, so the limit is encoded in 20 bits.

There are two options for how the 20-bit limit is interpreted: either as a multiple of 4 KB or as bytes. This choice is called granularity. The 4 KB mode is a good compromise and works well together with the 4 KB page size of the 386: when you need limits larger than 1 MB, you are very probably using virtual memory as well, and then you would lose at most a page at the edge anyway.
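To put numbers on the two granularities, here is a minimal C sketch of the arithmetic (the names byte_gran and page_gran are mine, just for illustration): a 20-bit limit tops out at 1 MB when counting bytes, but covers the full 4 GB address space when counting 4 KB pages.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t limit_field = 0xfffff;                /* maximum 20-bit limit */
    /* G = 0: the limit counts bytes, so the largest segment is 1 MB */
    uint64_t byte_gran = (uint64_t)limit_field + 1;
    /* G = 1: the limit counts 4 KB pages, so the largest segment is 4 GB */
    uint64_t page_gran = ((uint64_t)limit_field + 1) << 12;
    printf("byte granularity: %llu bytes (1 MB)\n",
           (unsigned long long)byte_gran);
    printf("4K granularity:   %llu bytes (4 GB)\n",
           (unsigned long long)page_gran);
    return 0;
}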

The limit given to the macro is expressed in bytes, and it is divided by 4096 (the >> 12) to obtain the page-granular limit that is actually stored in the descriptor.
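As a concrete illustration, the small C sketch below mirrors the macro's limit-field extraction for a hypothetical lim of 0xffffffff (a full 4 GB segment); the variable names are mine, not part of the macro:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t lim = 0xffffffff;                     /* byte-granular limit: 4 GB - 1 */
    uint32_t pages = lim >> 12;                    /* limit in 4 KB units: 0xfffff  */
    uint16_t limit_low  = pages & 0xffff;          /* bits 15:0  -> the first .word */
    uint8_t  limit_high = (lim >> 28) & 0xf;       /* bits 19:16 -> the flags byte  */
    printf("limit 19:0   = 0x%x\n", pages);        /* prints 0xfffff */
    printf("low 16 bits  = 0x%04x\n", limit_low);  /* prints 0xffff  */
    printf("high 4 bits  = 0x%x\n", limit_high);   /* prints 0xf     */
    return 0;
}

Note that (lim >> 28) & 0xf is the same value as (pages >> 16) & 0xf: the macro simply shifts the original byte limit all the way down instead of shifting the page count a second time.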