In the first one, you're doing integer division and then casting the result to double. Since integer division truncates, the fractional part of the answer is thrown away before the cast, and casting to double afterwards doesn't bring that information back.
In the second, you're casting the numerator to double before dividing. Because one operand is now a double, the other operand is promoted to double as well, so the division is done in floating-point arithmetic and the fractional part is kept.
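For example, a minimal sketch (the values 7 and 2, and the class name, are just for illustration):

```java
public class CastDemo {
    public static void main(String[] args) {
        int int1 = 7;
        int int2 = 2;

        // Integer division first (7 / 2 == 3), then the cast: the .5 is already gone.
        double first = (double) (int1 / int2);
        System.out.println(first);  // prints 3.0

        // Cast first: int1 becomes 7.0, so the division happens in double arithmetic.
        double second = (double) int1 / int2;
        System.out.println(second); // prints 3.5
    }
}
```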
The second is equivalent to `((double) int1) / int2`, since a cast has higher precedence than `/`.
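Continuing the sketch above (same illustrative values), the explicit parentheses change nothing:

```java
int int1 = 7, int2 = 2;
double a = (double) int1 / int2;    // cast binds tighter than /, so int1 is cast first: 3.5
double b = ((double) int1) / int2;  // explicit parentheses, identical result: 3.5
```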