8

Besides not closing a comment /*..., what constitutes a lexical error in C?

Chilledrat
DrBeco
  • Unterminated strings. Non-printable characters anywhere except in trigraphs. Unknown characters that aren't operators. – user207421 May 16 '23 at 01:54

6 Answers

10

Here are some:

 "abc<EOF>

where EOF is the end of the file. In fact, EOF in the middle of many lexemes should produce errors:

 0x<EOF>

I assume that using bad escapes in strings is illegal:

  "ab\qcd"

Probably trouble with floating-point exponents:

 1e+%

Arguably, you shouldn't have stuff at the end of a preprocessor directive:

#if x   %
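
A minimal sketch (not part of the original answer) that collects these cases in one file; each offending line is kept inside a comment so the file still compiles, and uncommenting any one of them should draw a diagnostic during lexical analysis:

int main(void) {
    /* const char *s = "abc         <- string literal never closed          */
    /* int n = 0x;                  <- hex prefix with no digits            */
    /* const char *e = "ab\qcd";    <- \q is not a defined escape sequence  */
    /* double d = 1e+;              <- exponent indicator with no digits    */
    return 0;
}
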
Ira Baxter
  • Hum... not-closing-string. I should have thought of that when I saw the similar not-closing-comment. But thanks, valid one! – DrBeco Apr 04 '11 at 06:45
  • Would you consider `"abc` a lexical error? (end-of-line instead of end-of-file) – DrBeco Apr 04 '11 at 17:02
  • @Dr Beco: tain't about me ... The standard version of C I think disallows string literals containing newlines. IIRC, some versions of GCC (not a Standard) did allow it; whether they still do I don't know. – Ira Baxter Apr 04 '11 at 19:36
  • @Ira whether the standard allows it or not is not in question, but how a compiler would comply with the standard. I can think of a `yacc` rule to check this syntactically, `QUOTE LETTERS QUOTE`, or a `lex` regexp to do the job, `\"[a-z]*\"` (simplified version, of course). Now, would whether this is a lex or a syntax error depend on the implementation? Or is there some default we could all agree on? – DrBeco Apr 04 '11 at 19:41
  • @Dr Beco: **abc** won't make it past the lexer, so you can't check this syntactically with a parser rule. The only thing that can/will object is the lexer. You'll find the lexical rules for describing strings lots more complex than the one you wrote, when you include escapes, double-wide characters, and all the other weirdness that goes into a real compiler, but yes, mostly the regex will insist on quotes on each end, which are not there, and so the character sequence doesn't get recognized --> lexical error. – Ira Baxter Apr 04 '11 at 19:46
  • I do agree. The simplified version I used was just to state that lex can match the quotes. The real one is really complex, for example, it can recognize `"abc\def"` (line break with a backslash). Well, thanks for the discussion. – DrBeco Apr 04 '11 at 19:52
3

Basically, anything that does not conform to ISO C 9899/1999, Annex A.1 "Lexical Grammar" is a lexical fault, provided the compiler does its lexical analysis according to this grammar. Here are some examples:

"abc<EOF> // invalid string literal (from Ira Baxter's answer) (ISO C 9899/1999 6.4.4.5)

'a<EOF> // invalid char literal (6.4.4.4)

where EOF is the end of the file.

double a = 1e*3; // malformed floating point literal (6.4.4.2)

int a = 0x0g; // invalid integer hex literal (6.4.4.1)

int a = 09; // invalid octal literal (6.4.4.1)

char a = 'aa'; // too long char literal (from Joel's answer, 6.4.4.4)

double a = 0x1p1q; // invalid hexadecimal floating point constant (6.4.4.2)
// instead of q, only a float suffix, that is 'f', 'l', 'F' or 'L' is allowed.

// invalid header name (6.4.7)
#include <<a.h>
#include ""a.h"
Peter G.
  • I don't think 0x0g is a lexical fault. I think it is two tokens. It probably always produces a *syntax* error with *g* as a variable name. – Ira Baxter Apr 04 '11 at 06:54
  • When you recognise the start of an octal literal by a leading 0 and expect it to match the regex `0[0-7]*`, I think it is. – Peter G. Apr 04 '11 at 07:00
  • GCC 3.4.5 outputs: invalid digit "9" in octal constant – Peter G. Apr 04 '11 at 07:05
  • The problem here is that GCC doesn't say what kind of error it is. I understand that positional errors are detected by the parser, not the lexical analyzer. But reading the token definitions of hex and octal constants, I may agree that those two are really lex errors. I'm still uneasy with this. – DrBeco Apr 04 '11 at 19:36
  • @Ira 0x0g is a single preprocessor token according to the standard. – Jim Balter Apr 06 '11 at 10:55
  • @Jim: Really? Wow. That seems dumb. Of what possible use could that be? Does it also say 0xgzn is a single lexeme? I would have expected that 0x would require trailing hex digits only, in the same way that 1. requires trailing decimal digits only. (Surely it doesn't say that 1.g is a single lexeme?) – Ira Baxter Apr 06 '11 at 10:59
  • @Ira Yes, that's a single pp-number as defined by the standard. The reason is to allow preprocessors to be able to parse tokens simply without knowing the full syntax of C numbers. Interestingly, the syntax for pp-numbers includes what used to be valid expressions such as 1e-x ... the committee didn't realize this until the last minute, and tried to come up with a fix but wasn't able to do so. I voted against including this glitch in the language but most of the committee felt that it just wasn't important. – Jim Balter Apr 06 '11 at 11:11
  • @Jim: I'm sorry for your loss :-{ Frankly, of all things to worry about in the preprocessor, number syntax would have been last on my list. Of all the things we know how to lex... – Ira Baxter Apr 06 '11 at 11:13
  • @Ira Oops, I got the example of an invalid expression wrong. The corner case is something like `0xe-2` which is a syntax error rather than an expression with the value of 12. – Jim Balter Apr 06 '11 at 11:21
  • @Ira You have to understand that X3J11 was dominated by benchmark-driven compiler vendors ... – Jim Balter Apr 06 '11 at 11:23
  • @Ira `M(0x,2)` is an error because `0x` is not a valid hex constant, but `M(0xe,2)` is indeed legal while `0xe-2` is not. – Jim Balter Apr 06 '11 at 11:30
2

Aren't [@$`] and other symbols like that (maybe from Unicode) lexical errors in C if they appear anywhere outside of a string or comment?

They do not constitute any valid lexical sequence of the language. They cannot pass the lexer, because the lexer cannot recognize them as any kind of valid token. Lexers are usually FSMs or regex-based, so these symbols are simply unrecognized input.

For example, in the following code there are several lexical errors:

int main(void){
` int a = 3;
@ —
return 0;
}

We can confirm this by feeding the code to GCC, which gives:

../a.c: In function ‘main’:
../a.c:2: error: stray ‘`’ in program
../a.c:3: error: stray ‘@’ in program
../a.c:3: error: stray ‘\342’ in program
../a.c:3: error: stray ‘\200’ in program
../a.c:3: error: stray ‘\224’ in program

GCC is smart and does error recovery, so it still parsed a function definition (it knows we are in 'main'), but these errors definitely look like lexical errors, and rightly so: they are not syntax errors. GCC's lexer simply has no token type that can be built from these characters. Note that it even treats the three-byte UTF-8 character as three unrecognized bytes.
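
As a rough illustration of that point, here is a hypothetical toy scanner (not GCC's actual implementation): every byte it cannot place in its small recognized character set is reported as stray, and the em dash from the example comes out as the three separate octal bytes 342, 200 and 224:

#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Toy scanner: anything outside a small "known" character set is reported
   as a stray byte, roughly mimicking the shape of GCC's diagnostic above. */
static void scan(const char *src) {
    for (const unsigned char *p = (const unsigned char *)src; *p; p++) {
        if (isspace(*p) || isalnum(*p) || *p == '_' ||
            strchr("+-*/%=;,.&|!^(){}[]<>\"'#?:~\\", *p))
            continue;                               /* could belong to a real C token */
        printf("stray '\\%03o' in program\n", *p);  /* unrecognized input byte */
    }
}

int main(void) {
    scan("` int a = 3;\n@ \xE2\x80\x94\n");  /* the two offending lines from above */
    return 0;
}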

Peter Mortensen
jartur
  • The sequence 342 200 224 (octal) is 0xE2 0x80 0x94 (hexadecimal) → the UTF-8 sequence for Unicode code point U+2014 ([EM DASH](https://www.utf8-chartable.de/unicode-utf8-table.pl?start=8192&number=128)). – Peter Mortensen May 01 '23 at 02:13
0

Badly formed float constant (e.g. 123.34e, or 123.45.33).
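
A small sketch (not part of the answer): each commented-out initializer below contains such a malformed floating constant and should be rejected by the lexer, while the last line shows a well-formed counterpart for contrast.

/* double a = 123.34e;    exponent indicator with no digits                   */
/* double b = 123.45.33;  a second '.' cannot appear in one floating constant */
double ok = 123.34e0;     /* a well-formed counterpart */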

Joel Lee
0

Illegal id

int 3d = 1;

Illegal preprocessor directive

#defune x 1

Unexpected token

if [0] {}

Unresolvable id

whlie (0) {}
random
Ekkehard.Horner
  • OP asked for *lexical* errors. "int 3d = 1" has the legal lexemes "int", "3", "d", "=", "1". "#defune" is treated as two lexemes, "#" and "defune"; the latter might be illegal. Unexpected tokens and misspelled keywords are syntax errors, not lexical errors. – Ira Baxter Apr 04 '11 at 07:08
0

Lexical errors:

  1. An unterminated comment
  2. Any sequence of non-comment and non-whitespace characters that is not a valid preprocessor token
  3. Any preprocessor token that is not a valid C token; an example is 0xe-2, which looks like an expression but is in fact a syntax error according to the standard, an odd corner case resulting from the rules for pp-numbers (see the sketch below).
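
A minimal sketch of that corner case, assuming a C99-conforming compiler; the macro M is hypothetical, echoing the comments under Peter G.'s answer:

#include <stdio.h>

#define M(a, b) ((a) - (b))

int main(void) {
    int ok1 = 0xe - 2;     /* two tokens "0xe" and "2": value 12                           */
    int ok2 = M(0xe, 2);   /* the macro arguments are separate tokens, so this is also 12  */
    /* int bad = 0xe-2; */ /* "0xe-2" lexes as ONE pp-number, which is not a valid C token */
    printf("%d %d\n", ok1, ok2);   /* prints: 12 12 */
    return 0;
}
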
Peter Mortensen
Jim Balter