5

Almost all conventional languages today represent the programmer's intentions as textual source code, which is then (let's say, for simplicity's sake) translated to some bytecode/machine code and interpreted/executed by a VM/CPU.
There is another technique which, for some reason, isn't that popular these days: "freeze" the run-time of your VM and dump/serialize the environment (symbol bindings, state, code (whatever that is)) into an image, which you can then transfer, load, and execute. Consequently, you do not "write" your code in the usual way; instead, you modify the environment with new symbols while it is running.
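To make the idea concrete, here is a minimal sketch in Python using `pickle`. It only snapshots data bindings; a real image (as in Smalltalk or Lisp systems) also captures code and the full live object graph, which plain `pickle` does not handle for arbitrary functions:

```python
import pickle

# A toy "environment": symbol bindings built up interactively.
env = {"counter": 41, "greeting": "hello"}
env["counter"] += 1                      # mutate state at "run-time"

# "Freeze" the run-time: serialize the bindings into an image blob
# (which could be written to disk and shipped to another machine).
image = pickle.dumps(env)

# Later, elsewhere: load the image and resume with the same state.
restored = pickle.loads(image)
print(restored["counter"])               # 42
```

The point is that the restored environment picks up exactly where the original left off, rather than being rebuilt from source.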
I see great advantages to this technique:

  • Power-boosted REPL: you can introspect your code as you write it, partially evaluate it, test it directly, and see the effects of your changes. You can then roll back if you've made a mistake and try again, or finally commit it to the environment. No need for a long compile-run-debug cycle;
  • Some of the usual problems with dynamic languages (that they cannot be compiled, because the compiler cannot reason about environments statically) are obviated: the interpreter knows where everything is located and can substitute symbol references with static offsets and perform other optimizations;
  • It's easier on the programmer's brain: you "offload" contextual information about the code from your head, i.e. you don't need to keep track of what your code has already done to some variable/data structure, or of which variable holds what: you see it directly in front of your eyes! In the usual way (writing source), programmers add new abstractions or comments to the code to clarify intent, but this can (and will) get messy.
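The second point can be sketched in a few lines of Python. Once the set of live bindings is frozen, a reference can be resolved to a slot index once, up front, instead of doing a name lookup on every access (a toy illustration; the names and structure here are invented, and real systems do far more):

```python
# Toy illustration: with a frozen environment, name references can be
# compiled down to fixed slot offsets instead of per-access lookups.
names = ["x", "y", "z"]            # symbols known at "freeze" time
offsets = {n: i for i, n in enumerate(names)}
slots = [10, 20, 30]               # the live values, one per symbol

def compile_ref(name):
    i = offsets[name]              # resolved once, at "compile" time
    return lambda: slots[i]        # run-time access is a plain index

get_y = compile_ref("y")
print(get_y())                     # 20
slots[offsets["y"]] = 99           # the environment is still live
print(get_y())                     # 99 — the compiled ref sees the new value
```

The compiled accessor keeps working even as the environment is mutated, because it points at the slot rather than re-resolving the name.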

The question is: what are the disadvantages of this approach? Is there some serious, critical disadvantage that I am not seeing? I know there are some problems with it, e.g.:

  • try building a module system with it that will not result in dependency hell or serious linkage problems
  • security issues
  • try to version-control such images and enable concurrent development

But these are, IMHO, solvable with good design.

EDIT1: concerning the status "closed, primarily opinion-based". I've described two existing approaches, and it is clear and obvious that one is preferred over the other. Whether the reasons for that are purely "opinion-based" or there is research to back them up is unknown to me; but even if they are opinion-based, if someone would list the reasons for such an opinion to develop, that should actually answer my question.

artemonster
  • What makes you think dynamic languages cannot be compiled? All currently existing production-ready implementations of ECMAScript/JavaScript, Python, Ruby, PHP, Perl, Clojure, Erlang, Smalltalk, many Schemes, and many Common Lisps have compilers. Both the original Ruby-based and the current self-hosting implementation of CoffeeScript are pure static AOT compilers. The original version of V8 was a pure compiler; the current version has multiple co-operating compilers, but still no interpreters — at no point in the history of V8 did it ever interpret anything. And of course, Smalltalk is usually compiled. – Jörg W Mittag Apr 20 '16 at 16:12
  • Look up kernel languages, first-class macros, and first-class environments: there is a very deep discussion on lambda-the-ultimate about this topic. – artemonster Apr 21 '16 at 11:30

1 Answer

2

As a daily user of Smalltalk, I have to say I haven't found any fundamental disadvantages, and I have to agree that there are lots of advantages. It makes metaprogramming and reasoning about your program easy, and it supports refactoring and code rewriting much better.

It requires/develops a different way of looking at your code, though. Smalltalk has little to offer to developers who are not interested in abstraction.

Stephan Eggermont