5

Why are there no compile-time errors or warnings when I call a function in another module that doesn't exist or has the wrong arity?

The compiler has all of the export information for a module to make this possible. Is it just not implemented yet, or is there a technical reason it is not possible that I am not seeing?

Andy Till

6 Answers

6

I don't know why it's missing (probably because modules are completely separate and compilation of one doesn't really depend on the other - but that's just speculation). But I believe you can find problems like this with Dialyzer's static analysis. Have a look at http://www.erlang.org/doc/man/dialyzer.html

It's part of the system itself, so try including it in your workflow.
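
A minimal sketch of what that might look like from the command line, assuming a module my_module.erl (the name is just a placeholder) and that no PLT has been built yet:

$ dialyzer --build_plt --apps erts kernel stdlib   # one-off: build the PLT
$ erlc +debug_info my_module.erl                   # compile with debug_info
$ dialyzer my_module.beam                          # reports calls to missing or unexported functions, among other problems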

viraptor
  • I was aware of dialyzer but haven't fully gotten to grips with it, or its output. I'll have to check out if it is fast enough to run and give feedback when saving a file. – Andy Till Mar 13 '13 at 23:17
  • About using `Dialyzer` for this type of error detection, even though I highly suggest doing so (as Dialyzer is a much more powerful tool), you really just need `xref`. – aronisstav Mar 14 '13 at 08:34
  • xref is great and I (also) recommend running it as part of your build process. It works great for finding any problems before starting your Erlang application. If you do this you will get a build workflow very similar to the normal Java workflow. – David Wickstrom Mar 14 '13 at 09:52
  • Excellent comment. I implemented a plugin for our text editor today so that xref is run when an erl file is saved and undefined function calls are shown as errors so I can pretend that this feature exists! – Andy Till Mar 14 '13 at 22:34
4

It is as others have said. Modules are compiled separately and there is absolutely no guarantee that the environment which exists at compile-time is the same as the one that will exist at run-time. This implies that doing checks at compile-time about the existence of a module, or of a function in it, is basically meaningless. At run-time that module may or may not be loaded, the function you call may or may not be defined in the module, or it may do something completely different from what you expected.

All this is due to the very dynamic nature of Erlang systems. There is no real way as such to define what is in the system at run-time. Hot code-loading is a part of this and works properly because of the dynamic nature of the system. It means you can redefine the system at run-time, you can load in new versions of existing modules with a different interface and you can load in completely new modules and remove existing modules.

For this to work all checks about the existence of a module or function must be done at run-time.
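
As a minimal illustration of that run-time check: the call below names a module and function that do not exist (both names are made up), yet it is only rejected when it is actually executed, raising an undef error (shell output abbreviated):

1> catch no_such_module:no_such_function(42).
{'EXIT',{undef,[{no_such_module,no_such_function,[42],[]}|...]}}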

Tools like dialyzer can help with this, but they do assume that you don't do anything "funny" at run-time and that the system you check is the same as the system you run. Which is of course all good, but very static, and against Erlang's nature, which is to be dynamic in everything.

Unfortunately, in this case, you can't both have your cake and eat it.

rvirding
  • Thanks for confirming that hot loading is the reason the compiler writers did not include this feature. What you describe, though, is true for most VM-run languages. Maybe they were too worried about giving guarantees about what **could** happen when the system is run in the future, at the expense of productivity when we're writing code today. – Andy Till Mar 14 '13 at 22:33
  • @AndyTill I think you can be reasonably certain that if something *can* happen it *will*. Also, hot loading was/is a fundamental requirement of Erlang, so it is something we **must** be able to handle properly. Amongst other things, you must strictly define *what* happens when running code is reloaded or removed. – rvirding Mar 15 '13 at 23:38
2

You may use the xref application to check the usage of deprecated, undefined and unused functions (and more!).
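
For reference, the test module checked below could look something like this (erlang:foo/0 is just a placeholder for a call that does not exist):

-module(test).
-export([start/0]).

%% erlang:foo/0 does not exist; the compiler accepts the remote call anyway.
start() ->
    erlang:foo().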

Compile the module with debug_info:

Eshell V6.2  (abort with ^G)
1> c(test, debug_info).
{ok,test}

Check the module with xref:m/1:

2> xref:m(test).
[{deprecated,[]},
 {undefined,[{{test,start,0},{erlang,foo,0}}]},
 {unused,[]}]

You may want to check out more about xref here:

Erlang -- Xref - The Cross Reference Tool (Tools User's Guide)

Erlang -- xref (Tools Reference Manual)

1

It is due to hot code loading. Each module can be loaded at any particular time, so when module A contains code which calls the function B:F, you can't tell at compile time that it is wrong just because the current source of module B has no function B:F. Imagine this: you compile module A with a call to B:F. You load module B into memory without the function B:F. Then you load module A, which contains the call to B:F, but you don't call it. Then you compile a new version of module B with B:F, load this new module, and then you can call B:F and everything is perfectly right. Or imagine your module A builds module B on the fly and loads it. You can't tell at any particular time that it is wrong for module A to contain a call to a nonexistent function B:F.
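
A minimal shell session that plays out a simplified version of this scenario, assuming two throwaway modules a.erl and b.erl where a:go/0 simply calls b:f/0 and b:f/0 returns ok (output abbreviated):

1> c(a).      % compiles cleanly even though b:f/0 does not exist yet
{ok,a}
2> a:go().    % the missing function is only detected here, at run time
** exception error: undefined function b:f/0
3> c(b).      % compile and load a version of b that does export f/0
{ok,b}
4> a:go().    % the very same a module now works, without recompilation
ok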

Hynek -Pichi- Vychodil
  • I suspect this is the correct answer but I do have a problem. You are talking about code loading, but I am talking about compilation. Why would hot loading affect compilation errors? Java works fine with this, accepting that Java hot loading isn't nearly as robust. – Andy Till Mar 13 '13 at 23:50
  • After many reads I understand your example (v. tired). You are talking about code loading, but I am talking about compilation. At code-loading time it is acceptable not to know; that is how many VMs work, e.g. Java. However, the Java compiler is much stricter than the VM code loader and requires all method calls to exist at compile time, so all libs must be on the path etc. So in the end I think both VMs have the same issue, except that erlc is not so strict. – Andy Till Mar 13 '13 at 23:59
1

In my opinion most, if not all, compilers do not verify that a function exists at compile time. What is generally required is a prototype declaration of the function: the type of the return value and the list and types of all arguments. This is done in C/C++ by including some_file.h in each module definition (not the .c or .cpp).

In Erlang this type verification is done dynamically, while the program is running, so it is not necessary to include these definitions. It would even be totally useless, because Erlang allows you to upgrade the application while it is running, so a function's type may change, or the function may disappear, on purpose or by mistake, during the application's lifetime; that is why the Erlang designers chose to make this verification at run time and not at build time.

The error you speak about generally occurs during the link phase of code generation, when the "compiler" tries to gather individual pieces of object code together to build an executable file or a library; during this phase the linker resolves all the external addresses (for shared variables, static calls...). This phase does not exist in Erlang: a module is totally self-contained; it shares nothing with the rest of the application, neither variables nor function addresses.

Of course, it is mandatory to use some tools and run some tests before updating a running production program, but I consider that these verifications have exactly the same level of importance as the correctness of the algorithm itself.

Pascal
1

When you compile e.g. module alpha, which has a call to beta:some_function(...), the compiler cannot assume that some specific version of beta will be in use at runtime. Maybe you will compile a newer version of beta after you compiled alpha, and this will have the correct some_function exported. Maybe you will upload alpha to be used on a different host, which has all the other modules.

The compiler therefore just compiles the remote call and any errors (non-existent module or function) are resolved at run time, when some version of beta will be loaded.
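
If a caller needs to cope with this explicitly, one option (a sketch, not something the answer prescribes; module and function names are placeholders) is to ask at run time whether the remote function is actually loaded and exported before calling it:

-module(safe_call).
-export([maybe_call/1]).

%% Call beta:some_function/1 only if it is present at run time;
%% otherwise return an error instead of crashing with undef.
maybe_call(Arg) ->
    _ = code:ensure_loaded(beta),
    case erlang:function_exported(beta, some_function, 1) of
        true  -> {ok, beta:some_function(Arg)};
        false -> {error, undef}
    end.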

aronisstav