
This page says "Prefix operators are usually right-associative, and postfix operators left-associative" (emphasis mine).

Are there real examples of left-associative prefix operators, or right-associative postfix operators? If not, what would a hypothetical one look like, and how would it be parsed?

Rei Miyasaka
  • Related: http://stackoverflow.com/questions/12961351 – Steve Jessop Dec 29 '12 at 20:00
  • Maybe the simplest answer would have been best. "left associative" is not the only alternative to "right associative". Another possibility is "non-associative". See the example of "new" in my answer. I think that's how you should interpret the original quote: "prefix operators are usually right-associative, but sometimes they are not associative...." – rici Dec 30 '12 at 02:41
  • Does this answer your question? [Does it make sense for unary operators to be associative?](https://stackoverflow.com/questions/12961351/does-it-make-sense-for-unary-operators-to-be-associative) – anonymous38653 May 20 '20 at 18:31

6 Answers


It's not particularly easy to make the concepts of "left-associative" and "right-associative" precise, since they don't directly correspond to any clear grammatical feature. Still, I'll try.

Despite the lack of math layout, I tried to insert an explanation of precedence relations here, and it's the best I can do, so I won't repeat it. The basic idea is that given an operator grammar (i.e., a grammar in which no production has two non-terminals without an intervening terminal), it is possible to define precedence relations ⋖, ≐ and ⋗ between grammar symbols, and then this relation can be extended to terminals.

Put simply, if a and b are two terminals, a ⋖ b holds if there is some production in which a is followed by a non-terminal which has a derivation (possibly not immediate) in which the first terminal is b. a ⋗ b holds if there is some production in which b follows a non-terminal which has a derivation in which the last terminal is a. And a ≐ b holds if there is some production in which a and b are either consecutive or are separated by a single non-terminal. The use of symbols which look like arithmetic comparisons is unfortunate, because none of the usual arithmetic laws apply. It is not necessary (in fact, it is rare) for a ≐ a to be true; a ≐ b does not imply b ≐ a and it may be the case that both (or neither) of a ⋖ b and a ⋗ b are true.

An operator grammar is an operator precedence grammar iff, given any two terminals a and b, at most one of a ⋖ b, a ≐ b and a ⋗ b holds.
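These relations can be computed mechanically from the productions. Below is a minimal Python sketch of that computation; the E/T/F grammar is the standard textbook expression grammar, chosen here for illustration rather than taken from the text:

```python
# Compute the Floyd precedence relations <., =., .> for a toy operator grammar.
# Grammar (uppercase names are non-terminals):
#   E -> E + T | T
#   T -> T * F | F
#   F -> ( E ) | id
GRAMMAR = {
    "E": [["E", "+", "T"], ["T"]],
    "T": [["T", "*", "F"], ["F"]],
    "F": [["(", "E", ")"], ["id"]],
}
NONTERMS = set(GRAMMAR)

def edge_ops(grammar, first=True):
    """FIRSTOP (first=True) or LASTOP (first=False): the terminals that can
    appear first (last) in a derivation of N, possibly after (before) one
    non-terminal. Computed as a fixed point over the productions."""
    ops = {n: set() for n in grammar}
    changed = True
    while changed:
        changed = False
        for n, rhss in grammar.items():
            for rhs in rhss:
                seq = rhs if first else rhs[::-1]
                new = set()
                if seq[0] in NONTERMS:
                    new |= ops[seq[0]]
                    if len(seq) > 1 and seq[1] not in NONTERMS:
                        new.add(seq[1])
                else:
                    new.add(seq[0])
                if not new <= ops[n]:
                    ops[n] |= new
                    changed = True
    return ops

def relations(grammar):
    firstop = edge_ops(grammar, first=True)
    lastop = edge_ops(grammar, first=False)
    rel = set()  # triples (a, "lt"|"eq"|"gt", b) standing for a <. b, a =. b, a .> b
    for rhss in grammar.values():
        for rhs in rhss:
            for i in range(len(rhs) - 1):
                x, y = rhs[i], rhs[i + 1]
                if x not in NONTERMS and y not in NONTERMS:
                    rel.add((x, "eq", y))          # consecutive terminals
                elif x not in NONTERMS:            # terminal then non-terminal
                    rel.update((x, "lt", b) for b in firstop[y])
                    if i + 2 < len(rhs) and rhs[i + 2] not in NONTERMS:
                        rel.add((x, "eq", rhs[i + 2]))  # separated by one non-terminal
                elif y not in NONTERMS:            # non-terminal then terminal
                    rel.update((a, "gt", y) for a in lastop[x])
    return rel

rel = relations(GRAMMAR)
```

Running this yields, among others, `("+", "gt", "+")` (so + is left-associative in this grammar), `("+", "lt", "*")` and `("*", "gt", "+")` (so * binds tighter than +), and `("(", "eq", ")")` for the bracket pair.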

If a grammar is an operator-precedence grammar, it may be possible to find an assignment of integers to terminals which make the precedence relationships more or less correspond to integer comparisons. Precise correspondence is rarely possible, because of the rarity of a ≐ a. However, it is often possible to find two functions, f(t) and g(t) such that a ⋖ b is true if f(a) < g(b) and a ⋗ b is true if f(a) > g(b). (We don't worry about only if, because it may be the case that no relation holds between a and b, and often a ≐ b is handled with a different mechanism: indeed, it means something radically different.)

%left and %right (the yacc/bison/lemon/... declarations) construct functions f and g. The way they do it is pretty simple. If OP (an operator) is "left-associative", that means that expr1 OP expr2 OP expr3 must be parsed as <expr1 OP expr2> OP expr3, in which case OP ⋗ OP (which you can see from the derivation). Similarly, if ROP were "right-associative", then expr1 ROP expr2 ROP expr3 must be parsed as expr1 ROP <expr2 ROP expr3>, in which case ROP ⋖ ROP.

Since f and g are separate functions, this is fine: a left-associative operator will have f(OP) > g(OP) while a right-associative operator will have f(ROP) < g(ROP). This can easily be implemented by using two consecutive integers for each precedence level and assigning them to f and g in turn if the operator is right-associative, and to g and f in turn if it's left-associative. (This procedure will guarantee that f(T) is never equal to g(T). In the usual expression grammar, the only ≐ relationships are between open and close bracket-type-symbols, and these are not usually ambiguous, so in a yacc-derivative grammar it's not necessary to assign them precedence values at all. In a Floyd parser, they would be marked as ≐.)
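That integer-assignment procedure fits in a few lines. This is only a sketch; the particular precedence levels and operators below are made up for illustration:

```python
# Build the two precedence functions f and g from %left/%right-style
# declarations: two consecutive integers per precedence level, assigned
# (g, f) for a left-associative level and (f, g) for a right-associative one.
def make_fg(levels):
    """levels: list of (assoc, [operators]), from lowest to highest precedence."""
    f, g = {}, {}
    n = 0
    for assoc, ops in levels:
        lo, hi = n, n + 1
        n += 2
        for op in ops:
            if assoc == "left":      # f(op) > g(op), i.e. op .> op
                g[op], f[op] = lo, hi
            else:                    # f(op) < g(op), i.e. op <. op
                f[op], g[op] = lo, hi
    return f, g

f, g = make_fg([("left", ["+", "-"]), ("left", ["*", "/"]), ("right", ["^"])])
```

With these tables, f("+") > g("+") (left-associative), f("^") < g("^") (right-associative), and f("+") < g("*") while f("*") > g("+"), so * binds tighter than + in both directions, as desired.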

Now, what about prefix and postfix operators? Prefix operators are always found in a production of the form [1]:

non-terminal-1: PREFIX non-terminal-2;

There is no non-terminal preceding PREFIX so it is not possible for anything to be ⋗ PREFIX (because the definition of a ⋗ b requires that there be a non-terminal preceding b). So if PREFIX is associative at all, it must be right-associative. Similarly, postfix operators correspond to:

non-terminal-3: non-terminal-4 POSTFIX;

and thus POSTFIX, if it is associative at all, must be left-associative.

Operators may be either semantically or syntactically non-associative (in the sense that applying the operator to the result of an application of the same operator is undefined or ill-formed). For example, in C++, ++ ++ a is semantically incorrect (unless operator++() has been redefined for a in some way), but it is accepted by the grammar (in case operator++() has been redefined). On the other hand, new new T is not syntactically correct. So new is syntactically non-associative.


[1] In Floyd grammars, all non-terminals are coalesced into a single non-terminal type, usually expression. However, the definition of precedence-relations doesn't require this, so I've used different place-holders for the different non-terminal types.

rici

There could be in principle. Consider for example the prefix unary plus and minus operators: suppose + is the identity operation and - negates a numeric value.

They are "usually" right-associative, meaning that +-1 is equivalent to +(-1); the result is minus one.

Suppose they were left-associative, then the expression +-1 would be equivalent to (+-)1.

The language would therefore have to give a meaning to the sub-expression +-. Languages "usually" don't need this to have a meaning and don't give it one, but you can probably imagine a functional language in which the result of applying the identity operator to the negation operator is an operator/function that has exactly the same effect as the negation operator. Then the result of the full expression would again be -1 for this example.

Indeed, if the result of juxtaposing functions/operators is defined to be a function/operator with the same effect as applying both in right-to-left order, then it never makes any difference to the result of the expression which way you associate them. Those are just two different ways of defining that (f g)(x) == f(g(x)). If your language defines +- to mean something other than -, though, then the direction of associativity would matter (and I suspect the language would be very difficult to read for someone used to the "usual" languages...)
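As a quick sketch of that equivalence, with the two prefix operators written as plain Python functions:

```python
# Prefix + and - as functions; juxtaposition read as right-to-left composition.
ident = lambda x: +x                      # prefix +
neg = lambda x: -x                        # prefix -
compose = lambda f, g: (lambda x: f(g(x)))

right_assoc = ident(neg(1))               # +(-1): the usual reading
left_assoc = compose(ident, neg)(1)       # (+-)(1), with +- defined as composition

# Both readings give the same value, -1.
assert right_assoc == left_assoc == -1
```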

On the other hand, if the language doesn't allow juxtaposing operators/functions then prefix operators must be right-associative to allow the expression +-1. Disallowing juxtaposition is another way of saying that (+-) has no meaning.

Steve Jessop
  • You are not right: if those operators are left-associative, then +-1 would be equivalent to -(+1). You are mixing the concepts of applying operators and parsing code into operators. – SergeyS Dec 29 '12 at 20:06
  • @SergeyS: Associativity is *only* about parsing, specifically it is *only* about where the language dictates where you should "insert the parentheses" in order to disambiguate a potentially-ambiguous expression. Personally I would not refer to the scenario you describe in your example merely as "left-associativity". It's left-associativity *plus* a rule that the result of the sub-expression `~-` is a function that performs first `~` and then `-`. But the rule for what `~-` means could be anything, and the associativity would still be leftwards. – Steve Jessop Dec 29 '12 at 20:08
  • So in short, the meaning of "left-associativity" is that `+-1` is equivalent to `(+-)1`. It doesn't mean anything else. If it happens (as in the hypothetical language you describe in your answer) that `(+-)1` is equivalent to `-(+1)` then that's fine. But it's not a logical consequence of left-associativity. – Steve Jessop Dec 29 '12 at 20:12
  • Following your logic, --1 would equal 1, but it equals 0 in C#/C++, because the parser understands that -- is a separate operator. The parser does NOT work greedily right-associatively, but the expression tree DOES work greedily right-associatively for this example. – SergeyS Dec 29 '12 at 20:12
  • @SergeyS: No such thing follows from my logic. As you say, `--` is a distinct operator in C# and C++, it is not parsed as two instances of unary `-`. So nothing that either of us might say about the meaning of "associativity" has anything to do with how `--1` is lexed or parsed. In fact it is lexed as two tokens, `--` and `1`. Since only one token is an operator, associativity is wholly irrelevant. – Steve Jessop Dec 29 '12 at 20:13
  • Btw, `--1` is ill-formed in C++, it is not equal to 0. You can't decrement an integer literal, so after lexing and parsing it will be rejected. I don't know C#, so I don't know whether the same can be said there. But anyway I don't think that makes any difference to the argument about the meaning of left-associativity. – Steve Jessop Dec 29 '12 at 20:19
  • Please see my updated answer to help you better understand associativity concept – SergeyS Dec 29 '12 at 20:29
  • Please see my [answer](http://stackoverflow.com/a/14086443/388626) and let me know what you think. – Rei Miyasaka Dec 29 '12 at 22:26

I'm not aware of such a thing in a real language (e.g., one that's been used by at least a dozen people). I suspect the "usually" was merely because proving a negative is next to impossible, so it's easier to avoid arguments over trivia by not making an absolute statement.

As to how you'd theoretically do such a thing, there seem to be two possibilities. Given two prefix operators @ and # that you were going to treat as left associative, you could parse @#a as equivalent to #(@(a)). At least to me, this seems like a truly dreadful idea--theoretically possible, but a language nobody should wish on even their worst enemy.

The other possibility is that @#a would be parsed as (@#)a. In this case, we'd basically compose @ and # into a single operator, which would then be applied to a.
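The two hypothetical parses are easy to contrast as functions; the meanings chosen for @ ("double") and # ("increment") below are made up purely for illustration:

```python
# Stand-ins for two hypothetical prefix operators.
at = lambda x: 2 * x      # @ : double
hsh = lambda x: x + 1     # # : increment

# Right-associative (the usual reading): @#a means @(#(a)).
usual = at(hsh(4))        # (4 + 1) * 2 == 10

# Left-associative, first reading: @#a means #(@(a)).
dreadful = hsh(at(4))     # (4 * 2) + 1 == 9

# Left-associative, second reading: (@#)a, with @ and # fused into a single
# operator; the language is free to define the fused operator either way.
fused = lambda x: hsh(at(x))
assert dreadful == fused(4) == 9
assert usual == 10
```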

In most typical languages, this probably wouldn't be terribly interesting (would have essentially the same meaning as if they were right associative). On the other hand, I can imagine a language oriented to multi-threaded programming that decreed that application of a single operator is always atomic--and when you compose two operators into a single one with the left-associative parse, the resulting fused operator is still a single, atomic operation, whereas just applying them successively wouldn't (necessarily) be.

Honestly, even that's kind of a stretch, but I can at least imagine it as a possibility.

Jerry Coffin
  • Not quite the same operator, I'd say. The usual definition allows us to say that `*`, `/` and `%` are left-associative as a group of operators with the same precedence. It's not meaningful to talk of their directional associativity with respect to `+` and `-`. So I think the syntactic term is intimately connected with precedence relations. (This isn't the same as associativity in the mathematical sense; `%` is not associative in that sense at all). – rici Dec 30 '12 at 02:53
  • See my answer to understand assosciativity for prefix and postfix. – SergeyS Dec 30 '12 at 08:20
  • Note: I've rewritten this answer. See the edit history if you want to see what the previous comments were about. – Jerry Coffin Mar 01 '22 at 18:11

I hate to shoot down a question that I myself asked, but having looked at the two other answers, would it be wrong to suggest that I've inadvertently asked a subjective question, and that in fact the interpretation of left-associative prefixes and right-associative postfixes is simply undefined?

Remembering that even notation as pervasive as expressions is built upon a handful of conventions, if there's an edge case that the conventions never took into account, then maybe, until some standards committee decides on a definition, it's better to simply pretend it doesn't exist.

Rei Miyasaka
  • I have provided a clear example of hypothetical language with left-associative prefix operators. What's wrong with it? – SergeyS Dec 30 '12 at 01:02
  • @SergeyS It's not wrong, it's just that it doesn't seem to be the *only* right one -- i.e., that there isn't a definitive answer. – Rei Miyasaka Dec 30 '12 at 02:12

I do not remember any left-associative prefix operators or right-associative postfix ones, but I can imagine that both could easily exist. They are not common because of the natural way people read operators: the one closer to the operand applies first.

An easy example from the C#/C++ languages:

~-3 equals 2, but -~3 equals 4

This is because these prefix operators are right-associative: for ~-3 it means the - operator is applied first, and then the ~ operator is applied to its result, so the whole expression evaluates to 2.

Hypothetically, if these operators were left-associative, then for ~-3 the leftmost operator ~ would be applied first, and then - to its result, so the whole expression would evaluate to 4.
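Python's prefix ~ and - behave the same way as the C#/C++ ones described above, so both readings can be checked directly; the hypothetical left-associative reading is simulated by applying the operators left to right:

```python
import operator

# The real (right-associative) readings:
assert ~-3 == 2     # ~(-(3))
assert -~3 == 4     # -(~3)

# Simulating the hypothetical left-associative reading, in which the
# leftmost prefix operator is applied to the operand first.
OPS = {"~": operator.invert, "-": operator.neg}

def apply_prefixes_left(prefixes, x):
    for op in prefixes:          # leftmost operator first
        x = OPS[op](x)
    return x

assert apply_prefixes_left("~-", 3) == 4   # -(~3), as described above
```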

[EDIT] Answering Steve Jessop:

Steve said: the meaning of "left-associativity" is that +-1 is equivalent to (+-)1.

I do not agree with this, and think it is totally wrong. To better understand left-associativity, consider the following example.

Suppose I have a hypothetical programming language with left-associative prefix operators:

@ - multiplies its operand by 3

# - adds 7 to its operand

Then the construction @#5 in my language will be equal to (5*3)+7 == 22. If my language were right-associative (like most usual languages), I would get (5+7)*3 == 36.
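The hypothetical @ and # operators, written as plain Python functions to show the two evaluation orders:

```python
# Stand-ins for the hypothetical prefix operators.
triple = lambda x: x * 3    # @ : multiplies its operand by 3
add7 = lambda x: x + 7      # # : adds 7 to its operand

x = 5
left_assoc = add7(triple(x))     # @ applies first: (5*3)+7 == 22
right_assoc = triple(add7(x))    # # applies first: (5+7)*3 == 36
assert left_assoc == 22
assert right_assoc == 36
```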

Please let me know if you have any questions.

SergeyS
  • I don't think your edit explains anything. It restates what you've already stated to be the meaning of "left-associative". I disagree with that meaning, it is not the meaning of associativity that I have encountered anywhere else. – Steve Jessop Dec 29 '12 at 20:35
  • So, you think in my example my operators are right-associative indeed? – SergeyS Dec 29 '12 at 20:47
  • No, in your example they are left-associative. But you have then gone on to make a choice for the meaning of the subexpression `@#`. That meaning is not a necessary consequence of left-associativity, you could have made a different choice. – Steve Jessop Dec 29 '12 at 20:47
  • Please see my [answer](http://stackoverflow.com/a/14086443/388626) and let me know what you think. – Rei Miyasaka Dec 29 '12 at 22:25
  • No, no, no. Associativity is about where implicit parentheses get inserted, not about permuting pieces of source code to assemble formulas from pieces that are non-adjacent in the source code, as you do (`@5` in your final example). Maybe preprocessor manipulations can do that sort of stuff, but not associativity rules for formulas. If you think they can, think about what to make of `@#5` in case `@` is right-associative but `#` is left-associative. – Marc van Leeuwen Sep 14 '14 at 11:16
  • @MarcvanLeeuwen, in case of a tie we should always use the left-to-right principle. Anyway, I cannot pretend to be as proficient in terminology as a maths professor, so I trust your opinion here. BTW, if a language with such a grammar as in my hypothetical example really existed, what would this operator property be called? – SergeyS Sep 15 '14 at 09:37

Hypothetical example. A language has prefix operator @ and postfix operator # with the same precedence. An expression @x# would be equal to (@x)# if both operators are left-associative and to @(x#) if both operators are right-associative.
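With made-up meanings for the two operators (increment and double, both hypothetical), the difference between the readings is easy to see:

```python
at = lambda x: x + 1     # prefix @ : increment (hypothetical meaning)
post = lambda x: x * 2   # postfix # : double (hypothetical meaning)

x = 3
both_left = post(at(x))    # (@x)#  ->  (3+1)*2 == 8
both_right = at(post(x))   # @(x#)  ->  (3*2)+1 == 7
assert both_left != both_right
```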

PowerGamer
  • And what if the operators have opposite associativity? Since that case cannot be allowed, and in the (what you call) left-associative case all prefix operators at this level would bind before all postfix or infix operators (and the opposite for the right-associative case), it is much easier to describe this by _splitting_ the precedence level, giving those prefix operators higher precedence. This is indeed how it's done in C/C++, and I imagine in most languages with prefix and postfix operators. – Marc van Leeuwen Sep 14 '14 at 11:29