
I wanted to implement a function computing the number of digits of a value of any generic integer type. Here is the code I came up with:

extern crate num;
use num::Integer;

fn int_length<T: Integer>(mut x: T) -> u8 {
    if x == 0 {
        return 1;
    }

    let mut length = 0u8;
    if x < 0 {
        length += 1;
        x = -x;
    }

    while x > 0 {
        x /= 10;
        length += 1;
    }

    length
}

fn main() {
    println!("{}", int_length(45));
    println!("{}", int_length(-45));
}

And here is the compiler output:

error[E0308]: mismatched types
 --> src/main.rs:5:13
  |
5 |     if x == 0 {
  |             ^ expected type parameter, found integral variable
  |
  = note: expected type `T`
             found type `{integer}`

error[E0308]: mismatched types
  --> src/main.rs:10:12
   |
10 |     if x < 0 {
   |            ^ expected type parameter, found integral variable
   |
   = note: expected type `T`
              found type `{integer}`

error: cannot apply unary operator `-` to type `T`
  --> src/main.rs:12:13
   |
12 |         x = -x;
   |             ^^

error[E0308]: mismatched types
  --> src/main.rs:15:15
   |
15 |     while x > 0 {
   |               ^ expected type parameter, found integral variable
   |
   = note: expected type `T`
              found type `{integer}`

error[E0368]: binary assignment operation `/=` cannot be applied to type `T`
  --> src/main.rs:16:9
   |
16 |         x /= 10;
   |         ^ cannot use `/=` on type `T`

I understand that the problem comes from my use of constants within the function, but I don't understand why the trait specification as Integer doesn't solve this.

The documentation for Integer says it implements the PartialOrd, etc. traits with Self (which I assume refers to Integer). By using integer constants which also implement the Integer trait, aren't the operations defined, and shouldn't the code compile without errors?

I tried suffixing my constants with i32, but the error message is the same, with `{integer}` replaced by `i32`.

– Léo Ercolanelli

2 Answers


Many things are going wrong here:

  1. As Shepmaster says, 0 and 1 cannot be converted to every type implementing Integer. Use Zero::zero and One::one instead.
  2. 10 can definitely not be converted to anything implementing Integer; you need NumCast for that.
  3. a /= b is not sugar for a = a / b; it goes through a separate trait (DivAssign) that Integer does not require.
  4. -x is a unary operation which is not part of Integer but requires the Neg trait (since it only makes sense for signed types).

Here's an implementation. Note that you need the Neg<Output = T> bound to make sure that negating a value yields the same type T:

extern crate num;

use num::{Integer, NumCast};
use std::ops::Neg;

fn int_length<T>(mut x: T) -> u8
where
    T: Integer + Neg<Output = T> + NumCast,
{
    if x == T::zero() {
        return 1;
    }

    let mut length = 0;
    if x < T::zero() {
        length += 1;
        x = -x;
    }

    while x > T::zero() {
        x = x / NumCast::from(10).unwrap();
        length += 1;
    }

    length
}

fn main() {
    println!("{}", int_length(45));
    println!("{}", int_length(-45));
}
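
As a small variation (my own sketch, not shown in the thread), the NumCast::from(10) call can be hoisted out of the loop if an extra Copy bound is acceptable; all of the primitive integer types satisfy it. A minimal sketch under that assumption:

extern crate num;

use num::{Integer, NumCast};
use std::ops::Neg;

fn int_length<T>(mut x: T) -> u8
where
    T: Integer + Neg<Output = T> + NumCast + Copy,
{
    if x == T::zero() {
        return 1;
    }

    // Cast the constant once, outside the loop.
    let ten: T = NumCast::from(10).unwrap();

    let mut length = 0;
    if x < T::zero() {
        length += 1;
        x = -x;
    }

    while x > T::zero() {
        x = x / ten;
        length += 1;
    }

    length
}

fn main() {
    println!("{}", int_length(45));
    println!("{}", int_length(-45));
}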
– oli_obk
  • Thanks a lot for pointing all that out! I guess I'm better off replacing `Int + Neg <...>` with `SignedInt`! – Léo Ercolanelli Feb 17 '15 at 16:22
  • And thus any number, if you change your function a bit \o/ Thanks a lot! – Léo Ercolanelli Feb 17 '15 at 18:11
  • SignedInt has one problem: you won't be able to use unsigned values ;) see playpen: http://is.gd/YPhjla – oli_obk Feb 17 '15 at 18:19
  • @ker I thought that unsigned ints didn't implement the `Neg` trait... But I obviously didn't take a sufficient look at it! Thanks! I definitely have some reading waiting for me... – Léo Ercolanelli Feb 17 '15 at 21:40
  • @Shepmaster's example [updated for 1.1 here](http://is.gd/m86OnY). Sadly, I'm not savvy enough to detect the "wow" optimization ker mentions. – Leif Arne Storset Jul 19 '15 at 13:26
  • Just add `#[inline(never)]` to your `ten` function, turn on release mode and generate the LLVM code. Then you can see that the monomorphised generic function for `i32` is optimized to a single `ret i32 10` statement – oli_obk Jul 19 '15 at 17:27
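
The linked playpen is not reproduced here, but a minimal sketch of my own of the kind of generic ten helper that comment describes, building 10 from repeated One::one additions, could look like this; compiling in release mode with --emit=llvm-ir lets you inspect the monomorphised output:

extern crate num;

use num::{One, Zero};

// Kept out of line so its monomorphised LLVM IR is easy to spot.
#[inline(never)]
fn ten<T: Zero + One>() -> T {
    let mut n = T::zero();
    for _ in 0..10 {
        n = n + T::one();
    }
    n
}

fn main() {
    // e.g. `cargo rustc --release -- --emit=llvm-ir`, then look for the `ten`
    // symbol in the generated .ll file; for i32 it should collapse to `ret i32 10`.
    println!("{}", ten::<i32>());
}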

The problem is that the Integer trait can be implemented by anything. For example, you could choose to implement it on your own struct! There wouldn't be a way to convert the literal 0 or 1 to your struct. I'm too lazy to show an example of implementing it, because there are 10 or so methods. ^_^

num::Zero and num::One

This is why Zero::zero and One::one exist. You can (very annoyingly) create all the other constants from repeated calls to those.

use num::{One, Zero}; // 0.4.0

fn three<T>() -> T
where
    T: Zero + One,
{
    let mut three = Zero::zero();
    for _ in 0..3 {
        three = three + One::one();
    }
    three
}

From and Into

You can also use the From and Into traits to convert to your generic type:

use num::Integer; // 0.4.0
use std::ops::{DivAssign, Neg};

fn int_length<T>(mut x: T) -> u8
where
    T: Integer + Neg<Output = T> + DivAssign,
    u8: Into<T>,
{
    let zero = 0.into();
    if x == zero {
        return 1;
    }

    let mut length = 0u8;
    if x < zero {
        length += 1;
        x = -x;
    }

    while x > zero {
        x /= 10.into();
        length += 1;
    }

    length
}

fn main() {
    println!("{}", int_length(45));
    println!("{}", int_length(-45));
}
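
As a quick check (my addition, not part of the answer): the Neg bound still rules out unsigned types, and i8 is excluded because u8 has no lossless conversion into it, but the other signed primitives work, e.g. with these lines added to the main above:

    println!("{}", int_length(1_234_567_i64)); // 7
    println!("{}", int_length(-8_i16)); // 2: the leading sign is counted, as in the question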


– Shepmaster