
Zig insists on strict checking when integer signedness is mixed, which is great: it forces programmers to really see what's happening.

But then what in C looks like

uint64_t a = 100;
int32_t b = 42;
uint64_t c = a + b;

has to be like this in Zig:

var a: u64 = 100;
var b: i32 = 42;
var c = if (b >= 0) a + @bitCast(u32, b) else a - @bitCast(u32, -b);

So is this the best we can have? Or are there more elegant ways of putting it?


3 Answers


The best general solution to this problem is to avoid mixing signed and unsigned integer types in calculations. When this problem appears in a program, look first for design improvements that avoid the situation.

These kinds of conversions always depend on context. Since in this case the value of a will fit in an i32, and since OP seems interested in cases where b may be negative, a more "elegant" solution might be to cast a to i32. This allows correct addition with any i32 value for b (so long as a does in fact fit in an i32). In general, when you must do mixed signed/unsigned integer arithmetic, it is probably better to cast the unsigned values to signed types when possible.

For more safety, cast to a wider signed type when possible. Since a u64 will fit in an i128, you can safely cast both values to i128 and then perform the arithmetic. In Zig, addition invokes peer type resolution, which finds a common type for the operands, so there is no need to explicitly cast the i32 to the wider i128 type.

It would also be better to use @intCast, which is specifically for casting between integer types. The @intCast builtin provides runtime safety checks (which may be disabled). When @intCast attempts to convert a value that is out of range of the destination type, the program panics with a stack trace. This is better than @bitCast, which provides no such safety checks.

const std = @import("std");

pub fn main() void {
    var a: u64 = 100;
    var b: i32 = 42;
    var c: i32 = -42;

    var aPlus_b = @intCast(i32, a) + b;  // 100 will fit in `i32`
    var aPlus_c = @intCast(i128, a) + c; // any `u64` will fit in `i128`

    std.debug.print("{} + {} = {}: {}\n", .{ a, b, aPlus_b, @TypeOf(aPlus_b) });
    std.debug.print("{} + {} = {}: {}\n", .{ a, c, aPlus_c, @TypeOf(aPlus_c) });
}

Program output:

100 + 42 = 142: i32
100 + -42 = 58: i128
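
To see the safety check fire, here is a minimal sketch (hypothetical values, using the same Zig 0.10-era builtin syntax as the program above); in a safe build mode the out-of-range conversion panics instead of silently truncating:

const std = @import("std");

pub fn main() void {
    var big: u64 = 5_000_000_000;    // out of range for i32
    var bad = @intCast(i32, big);    // panic: integer cast truncated bits
    std.debug.print("{}\n", .{bad}); // never reached
}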
ad absurdum

I see one, though b must be sign-extended to 64 bits first so that the wrapping addition matches C:

var c = @addWithOverflow(a, @bitCast(u64, @as(i64, b)))[0];
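
Wrapped in a runnable program to check the result (a sketch, assuming a compiler from the same era as the builtin syntax above):

const std = @import("std");

pub fn main() void {
    var a: u64 = 100;
    var b: i32 = -42;
    // Sign-extend b to i64, reinterpret the bits as u64, and let the
    // overflowing add wrap; [0] is the result, [1] would be the overflow bit.
    var c = @addWithOverflow(a, @bitCast(u64, @as(i64, b)))[0];
    std.debug.print("{}\n", .{c}); // prints 58, matching the C semantics
}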
Ralph Zhang

Zig's translate-c command turns the C version into:

var a: u64 = 100;
var b: i32 = 42;
var c: u64 = a +% @bitCast(u64, @as(c_longlong, b));
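
Here +% is Zig's wrapping (modular) addition, and the cast through c_longlong sign-extends b to 64 bits before the bits are reinterpreted as unsigned, exactly mirroring C's implicit conversion. On targets where c_longlong is 64 bits, the same thing can be written with plain Zig types (a sketch in the same era's syntax):

var c: u64 = a +% @bitCast(u64, @as(i64, b));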