
I am trying to prevent Java from rounding the result of user-input values. Here is the code:

import java.util.Scanner; 

public class questions {

    public static void main(String[] args) {
        // TODO Auto-generated method stub

        Scanner in = new Scanner(System.in);

        System.out.print("enter two integers");
        int firstNumber = in.nextInt(); 
        int secondNumber = in.nextInt(); 
    
        System.out.print("the average is = "); 
        System.out.println((firstNumber + secondNumber)/2);

        // end      
    }
}

For example, when I put 5 and 6 as inputs, it outputs 5 instead of 5.50.


2 Answers


You can read the values with in.nextDouble() so the division is carried out in floating point, and format the result afterwards if needed.

Like so:

import java.util.Scanner; 

public class questions {

    public static void main(String[] args) {
        // TODO Auto-generated method stub

        Scanner in = new Scanner(System.in);

        System.out.print("enter two integers");
        double firstNumber = in.nextDouble(); 
        double secondNumber = in.nextDouble(); 
        double divideresult = ((firstNumber+secondNumber)/2);
        System.out.print("the average is = "); 
        System.out.println(divideresult);

        // end      
    }
}

Result:

Input

5 6

Output

5.5

You can force a trailing zero by using DecimalFormat like so:

import java.text.DecimalFormat;  // needed at the top of the file

DecimalFormat formatter = new DecimalFormat("#0.00");
System.out.println(formatter.format(divideresult));

Which outputs:

5.50
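
As an aside, the same two-decimal output can be produced with printf-style formatting; a minimal sketch, assuming the divideresult variable from the snippet above:

System.out.printf("the average is = %.2f%n", divideresult);  // prints 5.50 for inputs 5 and 6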
Alternatively, cast the sum to double before dividing, so the division itself is done in floating point instead of truncating integer arithmetic:
import java.util.Scanner;

public class questions {

    public static void main(String[] args) {
        // TODO Auto-generated method stub

        Scanner in = new Scanner(System.in);

        System.out.print("enter two integers");
        int firstNumber = in.nextInt();
        int secondNumber = in.nextInt();

        System.out.print("the average is = ");
        System.out.println((double) (firstNumber + secondNumber) / 2);

        // end
    }
}
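
For comparison, here is a minimal sketch of why the cast matters, using the question's inputs 5 and 6 (the a and b names are just for illustration):

int a = 5, b = 6;
System.out.println((a + b) / 2);           // prints 5   -- integer division truncates
System.out.println((double) (a + b) / 2);  // prints 5.5 -- the cast promotes the division to double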
  • Don't use `float`, always use `double`, unless you have a very specific reason to use the very limited `float`. – Andreas Sep 20 '20 at 18:30
  • Thank you. Updated the code. double takes more space but is more precise during computation; float takes less space but is less precise. – Kousik Mandal Sep 20 '20 at 18:35
  • Space is not at all a consideration, unless you have a huge array of values. You should default to always use `double`, to prevent unexpected issues with precision, unless you specifically need to save space, and have fully considered whether `float` is accurate enough for all the values it needs to store. Since space is very rarely an issue, don't waste time considering whether `float` will do, just use `double`. – Andreas Sep 20 '20 at 18:41
  • That's true. I considered float in this specific case because the inputs are integers, so the result has at most one decimal place. But I agree, best practice is to use double instead of float. – Kousik Mandal Sep 20 '20 at 18:45
  • *"inputs are integer hence result will be 1 precision"* Huh? **Integer** inputs `123456789` and `222222222` should result in `172839505.5`, not the `172839504` that you get with `float`. --- Proving that you should ***always*** use `double` to prevent/limit precision issues, unless you absolutely needed to use `float`. – Andreas Sep 20 '20 at 18:53