
For my PIC Backend, I want 'int' to be 16 bits. How can I / my target tell clang what the size of 'int' should be? Defining only 16-bit registers does not seem to be sufficient.

Currently "clang -O2 -emit-llvm -target pic" converts

int foo(int a, int b) { return a + b; }

to this IR code, using 32-bit integers:

; ModuleID = '../test/sum.c'
source_filename = "../test/sum.c"
target datalayout = "e-m:e-p:16:16-i16:16-a:0:16-n16-S16"
target triple = "pic"

; Function Attrs: norecurse nounwind readnone
define i32 @foo(i32 %a, i32 %b) local_unnamed_addr #0 {
entry:
  %add = add nsw i32 %b, %a
  ret i32 %add
}

attributes #0 = { norecurse nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "no-frame-pointer-elim"="true" "no-frame-pointer-elim-non-leaf" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" }

!llvm.ident = !{!0}

!0 = !{!"clang version 4.0.0 (http://llvm.org/git/clang.git 92920e1616528c259756dd8190d4a47058fae127) (http://llvm.org/git/llvm.git 7ca31361200d6bc8a75fa06f112083a8be544287)"}

This may or may not be the cause of the "Return operand #1 has unhandled type i16" message I described in PIC Backend: 16-bit registers / return type. However, I should probably get the type in clang's output right before turning to other problems.


2 Answers


SOLVED: clang takes the size of int (and other types) from its own Target, defined in clang/lib/Basic/Targets.cpp. The native-size setting "-n16" in the datalayout is not sufficient to override the (default?) i32 setting. Instead:

IntWidth = 16;
IntAlign = 16;

in my target's constructor does the trick.

This also solves the strange 'unhandled return type i16' issue, though I don't know why.


I strongly suggest making a habit of using the ISO/IEC 9899 include files <stdint.h> and <stdbool.h>. In your case, you can then declare

int16_t foo(int16_t a, int16_t b) { return a + b; }

and you are guaranteed that the variables and return values are 16-bit signed integers, irrespective of the target processor. Look at the file contents, or the ISO document appendix B.17 (search for ISO/IEC 9899 and you will find the pdf document easily), to see all the type options.

These include files (together with prefixes on variable names like u16foo and i32bar to clarify to me the size and signedness of identifiers) have saved my skin too many times to count.

EBlake
  • Sure, it's a good idea to explicitly state the size _if the programmer intends to use a specific size_. However, this doesn't solve my problem, for two reasons: a) int was intended to mean 'an integer of reasonable, usually native, word size', which in my case is 16 bits. b) When I use short (int16_t is actually nothing but a typedef), I do get 16-bit arguments and return types - however, as soon as I remove the '-O2', LLVM casts them to i32, tries to add and cast back again - and all of this fails because I do not have 32-bit arithmetic or registers. – Maximilian Rixius Sep 17 '16 at 19:32
  • Sounds like clang thinks your target is 32-bit. I don't use it, so I cannot comment on configuration or optimization. Regarding using the stdint typedefs, it's a matter of preference/style. Personally, I have never observed a compiler overriding explicit sizes, regardless of optimization level. – EBlake Sep 17 '16 at 19:37
  • I believe that my answer solved your problem perfectly. Your opening statement was "For my PIC Backend, I want 'int' to be 16 bits." This is the purpose of 'stdint.h' and its cousins. It's irrelevant that `int16_t` is a typedef - what's important is that for a given compiler and target, `int16_t` is guaranteed to give you 16 bits. You proved my point by saying your build worked when using `short`, which is the type equivalent of `int16_t` on a 16-bit target. My answer does not work if you want a general solution (i.e. "an integer of reasonable word size"), but that's not what you asked for. – EBlake Jun 04 '17 at 06:22
  • This is not the point. C by definition converts almost everything to int when doing arithmetic. Adding two int16_t values creates an int value which is converted back to int16_t on assignment - unless: 1. the compiler knows that your uP uses 16-bit words and 2. it can prove that the result is the same. Therefore the backend needs to know the native word size, which I, the writer of the backend, have to tell it. – Maximilian Rixius Jun 05 '17 at 17:24