
Has anyone come across an authoritative specification of how arithmetic on int and uint works in Actionscript 3? (By "authoritative" I mean either "comes from Adobe" or "has been declared authoritative by Adobe"). In particular I'm looking for a supported way to do integer multiplication modulo 2^32. This is not covered in any Adobe documentation I have been able to find.

Actionscript claims to be based on ECMAScript, but ECMAScript does not do integer arithmetic at all. It does everything on IEEE-754 doubles, and reduces the result modulo 2^32 before bitwise operations, which in most cases simulates integer arithmetic. However, this does not work for multiplication: the true result of multiplying, say, 0x10000001 * 0x0FFFFFFF will be too long for the mantissa of a double, so the low-order bits will be lost if the specification is followed to the letter.
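To make the precision loss concrete, here is a quick sketch in Python rather than ActionScript (Python's arbitrary-precision ints let us compare the exact product against what strict IEEE-754 double semantics would produce):

```python
a = 0x10000001
b = 0x0FFFFFFF

# Exact product: (2^28 + 1)(2^28 - 1) = 2^56 - 1, a 56-bit value
exact = a * b

# Strict ECMAScript semantics: multiply as 64-bit doubles. A double has
# a 53-bit mantissa, so 2^56 - 1 rounds up to 2^56, losing the low bits.
as_double = float(a) * float(b)

print(hex(exact & 0xFFFFFFFF))            # 0xffffffff
print(hex(int(as_double) & 0xFFFFFFFF))   # 0x0
```

So a spec-literal implementation would give 0 for the product modulo 2^32, while exact integer multiplication gives 0xFFFFFFFF.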

Now enter Actionscript. I have found experimentally that multiplying two int or uint variables and immediately casting the product to int or uint always seems to give me the exact result. However, the generated AVM2 bytecode just contains a plain "mul" instruction with no direct indication that it is supposed to produce an integer result rather than a floating-point one; the virtual machine would have to look ahead to find this out. I'm worried that I've just been lucky in my experiments and gotten extra precision as a bonus rather than something I can rely on.

(For one thing, my experiments were all performed using an x86 Flash player. Perhaps it represents intermediate results as Intel 80-bit doubles, or stores a 64-bit int on the evaluation stack until it's known what it will be used for. Neither would be easily possible on a non-x86 tablet with no native 32×32→64 multiplication instruction, so might the VM just decide to reduce the precision to what the ECMAScript standard specifies?)
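(As an aside: even without a native widening multiply, the low 32 bits of a 32×32 product can be computed from 16-bit halves. A Python sketch of the standard schoolbook decomposition — a hypothetical helper for illustration, not anything the AVM2 is documented to do:)

```python
def mul32_lo(a, b):
    """Low 32 bits of a 32x32 multiply, using only 16x16->32 products."""
    a_lo, a_hi = a & 0xFFFF, (a >> 16) & 0xFFFF
    b_lo, b_hi = b & 0xFFFF, (b >> 16) & 0xFFFF
    # Only the low 16 bits of the cross terms can reach the low 32 bits
    cross = (a_lo * b_hi + a_hi * b_lo) & 0xFFFF
    return (a_lo * b_lo + (cross << 16)) & 0xFFFFFFFF

print(hex(mul32_lo(0x7FFFFFFF, 0x7FFFFFFF)))  # 0x1
```

So the hardware question is about efficiency, not feasibility; the real question remains what the VM is specified to do.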

24-hour status: Mike Welsh has done some able investigation and provided very useful links, but unfortunately not enough to close the question. Anyone else?

(tl;dr debate in comments: whitequark refutes, to some degree, one of my hypothetical reasons why the answer might be "no". His points have merit, but of course don't constitute a showing that the answer is "yes").

hmakholm left over Monica
  • Most, if not all, non-x86 tablets are based on ARM, and there is such an [instruction](http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0068b/CIHIHGGJ.html). – Catherine Aug 07 '11 at 16:17
  • I see. More power to them, then. Unfortunately it just increases the risk that my code will appear to work _now_ but start to fail at some indeterminate point in the future when a different architecture becomes popular. – hmakholm left over Monica Aug 07 '11 at 16:29
  • Actually not. Flash is heavy enough to run only on 32-bit or better architectures, and every sensible 32-bit arch now and in the future will have this multiplication. Even the cheapest [Cortex-M](http://www.arm.com/products/processors/cortex-m/index.php) do! Moreover, if _somehow_ there'll be a CPU which does not support this feature, and Flash will run on it, a compiler will provide a supporting function as it does currently for division, and you won't notice anything. – Catherine Aug 07 '11 at 16:32
  • You're assuming that the answer to my question is, "yes, Flash guarantees that this will work". That guarantee is what I'm looking for a reference to. – hmakholm left over Monica Aug 07 '11 at 16:50
  • Well, the first part of my comment still applies: you'll never manage to find such a platform. Regarding the second: yes, I haven't used Flash in my entire life; that's why I am commenting and not answering. – Catherine Aug 07 '11 at 16:53

1 Answer


ActionScript 3 was based on ECMAScript 4, which includes true 32-bit int and uint operations. For example, the multiply_i instruction performs integer multiplication (source: AVM2 Overview).

Unfortunately, the Adobe AS compiler only seems to emit the float versions of these opcodes, e.g. multiply, which supposedly casts the operands to 64-bit floats. This is perhaps in accordance with the ECMAScript specs, which state that ints will be promoted to doubles during math operations in order to handle overflow. If it does indeed perform a 64-bit float multiplication and then converts back to an int, there should be a loss of precision.

Despite this, the Flash Player seems to not lose precision when casting back to int immediately. For example:

var n:int = 0x7FFFFFFF;   // 2^31 - 1
var n2:int = n*n;         // exact product is 2^62 - 2^32 + 1
trace(n2);                // traces 1 only if no precision was lost

Even though this code emits a multiply instruction, it traces out a 1 in the Flash Player, which is the result you'd get with no loss of precision. It's unclear whether this behavior is consistent and cross-platform. I tested it in the Flash Player on several platforms, including a few mobile phones, and the result was consistently 1. However, running this code through a Tamarin shell in interpreted mode output a 0! (JIT mode still output a 1, so this behavior must be a side effect of the JIT.) So it may be risky to rely on this.
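The arithmetic behind the 1-versus-0 discrepancy can be checked outside Flash entirely; a quick Python sketch, with floats standing in for the VM's doubles:

```python
n = 0x7FFFFFFF  # 2^31 - 1

# Exact integer math: (2^31 - 1)^2 = 2^62 - 2^32 + 1, which is 1 mod 2^32
print((n * n) & 0xFFFFFFFF)                    # 1

# Double math: the 62-bit product rounds to 2^62 - 2^32 (the trailing +1
# doesn't fit in a 53-bit mantissa), so the result mod 2^32 is 0
print(int(float(n) * float(n)) & 0xFFFFFFFF)   # 0
```

The Flash Player's 1 matches the exact-integer path, and Tamarin's interpreted 0 matches the spec-literal double path.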

Using a multiply_i opcode instead should behave appropriately. Haxe will use this opcode when working with ints. Apparat could also be used to apply this opcode.

Mike Welsh
  • Where can I read the ECMAScript 4 specification? Google turns up some overview articles, but no actual specification text. Also, why would an AVM2 disassembler show a multiply_i instruction as simply multiply? – hmakholm left over Monica Aug 07 '11 at 17:31
  • I found a cached copy at http://ecmascript.zwetan.com/2007/05/ecmascript-4-specification.html However, that just specifies multiplication (in section 4.3.10, cf 14.16.4) by reference to ECMA-262, which defines multiplication as IEEE double exclusively (ECMA-262 v3 sec 11.5.1). Without a more concrete reference I find it difficult to believe your "ECMAScript 4, which includes true int and uint operations". – hmakholm left over Monica Aug 07 '11 at 17:54
  • Also, the ECMAScript 4 overview at http://www.ecmascript.org/es4/spec/overview.pdf claims (on page 29): "If an operation on byte, int, or uint overflows then the result will be in a "better" representation; . . . int and uint operations overflow to double". – hmakholm left over Monica Aug 07 '11 at 17:57
  • A proposed draft of ECMAScript 4 is here: [link](http://www.ecma-international.org/activities/Languages/Language%20overview.pdf) On page 29, you can see the description of int and uint. Interestingly, operations on ints will overflow into a double. You're right, the AS compiler will use `multiply` for this reason, and that's why trace(0xFFFFFFFF*0xFFFFFFFF); produces a result >2^32. The cast to uint that follows (`callproperty uint 1`) is what truncates the result into a 32-bit integer. I haven't found any documentation to _guarantee_ that this is the functionality yet. – Mike Welsh Aug 07 '11 at 18:06
  • If you look at the various arithmetic and conversion operations in the AVM2 spec, such as `convert_i` (p.49), it says that the conversion is done via the ToInt32 algorithm, as specified in the [ECMAScript specs here](http://www.ecma-international.org/publications/files/ECMA-ST-ARCH/ECMA-262,%203rd%20edition,%20December%201999.pdf). (Although this is v3, it likely still applies). This produces the modulo 2^32 behavior. Although not explicitly mentioned, I'd imagine that the `callproperty uint` operation does the same. – Mike Welsh Aug 07 '11 at 18:18
  • 1
    The ToInt32 _cannot_ produce modulo 2³² behavior if the lower bits of its input have disappeared due to the intermediate result being represented as a double (with only 53 significant bits). The behavior I see in practice gives be _better_ precision than a strict implementation of ECMAScript would, which is what makes me suspicious. – hmakholm left over Monica Aug 07 '11 at 18:24
  • You are absolutely right, and that is very suspicious. I spent some time digging through the Tamarin sources, and, as far as I can tell, it seems to be doing double multiplication, which should result in the lost precision. [Interpreter.cpp, line 1537](http://hg.mozilla.org/tamarin-central/file/fbecf6c8a86f/core/Interpreter.cpp). And I ran `var x:uint = 0x7FFFFFFF; print(uint(x*x));` through the Tamarin shell, and it gave me 0, as opposed to Flash Player's 1. So something fishy is definitely going on in the Flash Player, and you're right to be wary about it. – Mike Welsh Aug 07 '11 at 23:13
  • That said, I would think that the `multiply_i` instruction would behave properly -- the problem is that the Adobe compiler doesn't seem to emit this instruction. I guess it wants to stick to the spec in that respect and only use `multiply`. I did verify that [HaXe](http://haxe.org/) will use `multiply_i` when working with `int` types. You might also consider using [Apparat](http://code.google.com/p/apparat/) to use this opcode. – Mike Welsh Aug 07 '11 at 23:19
  • Yeah, so if all else fails, I can use Apparat or Alchemy's hacked AS3 compiler with inlines. I didn't know that there were VM sources to find on the net (nor that `multiply_i` does the right thing; the ABC specification just sounds like it did a pair of implicit ToInt32 operations _before_ the same kind of multiplication `multiply` provides). Thanks for those references! – hmakholm left over Monica Aug 07 '11 at 23:40
  • Cheers Henning, this was very educational for me as well. Maybe someone else will have more info. I updated my answer so that people don't have to wade through this comment thread. – Mike Welsh Aug 08 '11 at 00:23