These are some implementation details that are still subject to change:
I’m leaning towards making the initial Spark implementation use Parrot as its virtual machine. LLVM was suggested, and while it seems nice and powerful, it requires the compiler front end to be written in C/C++, which would considerably lengthen the time to market. I don’t rule out a later implementation of Spark on LLVM, but during the initial implementation we would like to change things rapidly, and C or C++ would slow us down considerably.
I’m not keen on using the Java Virtual Machine, due to Java’s long historical reputation of being “enterprisey” and non-hacker friendly (see http://www.paulgraham.com/javacover.html ), because of its slow startup time, and because it feels very “sluggish” and non-responsive. Again, I don’t rule out a future port of Spark to the JVM.
The Parrot VM seems very suitable for dynamic languages, and it is progressing nicely. The Parrot languages page already lists several implementations of Scheme which can serve as the basis for a Spark implementation, so I’d like to start there.
Since we’re building on Parrot, the licence of the non-original code will be the Artistic License 2.0, which is free, open-source, GPL-compatible and somewhere between a weak copyleft licence (e.g.: the LGPL) and a permissive licence (e.g.: the 2-clause or 3-clause BSD licences). The original code will be written under the MIT/X11 licence, which is a very permissive BSD-style licence that specifically allows sub-licensing. To avoid legal confusion, every file should contain an explicit “Licensing” notice indicating which licence it is under.
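For illustration, such a per-file “Licensing” notice for an original (MIT/X11) file might look like the sketch below. The wording is an assumption rather than a settled template, and the Lisp-style “;;” comments are a placeholder for whatever comment syntax Spark ends up with:

```
;; Licensing
;; ---------
;; This file is part of Spark and is original code
;; (not derived from Parrot).
;;
;; It is distributed under the MIT/X11 licence; see the
;; COPYING file at the top of the distribution for the full text.
```

Files derived from Parrot code would carry an analogous notice naming the Artistic License 2.0 instead.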
Unlike Arc, which shipped with no automated tests, Spark will be developed using Test-driven development. Namely, it will have a comprehensive test suite that will need to fully pass upon any commit to the trunk (or “master”, or whatever the main branch is called).
The tests are not expected to be authoritative for how the final version of the language will behave. Rather, some future design decisions will require changing the code of a lot of the tests accordingly.
I still don’t have a clear idea of how to settle a lot of “big picture” Spark design decisions. While I believe design is valuable, I also think that Spark should be designed incrementally, and that we can expect many design decisions to change. Test-driven development will allow us to do that, provided we accept that a lot of test code will often need to be modified along the way.
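To make the test-first workflow concrete, here is a minimal sketch in Python. The `spark_eval` function is a hypothetical stand-in (the real evaluator would target Parrot); only integer S-expression arithmetic is stubbed out, just enough to show the kind of test that would have to pass before a commit to the trunk:

```python
import operator
import unittest
from functools import reduce

def spark_eval(expr):
    """Evaluate a tiny S-expression string, e.g. "(+ 1 2)" -> 3.

    A deliberately naive illustration, not the real Spark evaluator.
    """
    # Tokenize by padding parentheses with spaces and splitting.
    tokens = expr.replace("(", " ( ").replace(")", " ) ").split()

    def parse(pos):
        # Recursively read one expression starting at tokens[pos].
        if tokens[pos] == "(":
            items, pos = [], pos + 1
            while tokens[pos] != ")":
                node, pos = parse(pos)
                items.append(node)
            return items, pos + 1
        tok = tokens[pos]
        # Bare integers become ints; anything else stays a symbol.
        return (int(tok) if tok.lstrip("+-").isdigit() else tok), pos + 1

    tree, _ = parse(0)
    ops = {"+": operator.add, "-": operator.sub, "*": operator.mul}

    def ev(node):
        if isinstance(node, int):
            return node
        # Apply the operator left-to-right over the evaluated arguments.
        return reduce(ops[node[0]], (ev(arg) for arg in node[1:]))

    return ev(tree)

class TestSparkArithmetic(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(spark_eval("(+ 1 2)"), 3)

    def test_nested_expression(self):
        self.assertEqual(spark_eval("(* 2 (+ 3 4))"), 14)
```

Running the suite (e.g. with `python -m unittest`) before and after each change is the discipline: a design change that alters behaviour shows up immediately as a batch of failing tests to update.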
We plan to keep the documentation for the Spark language in POD, PseudoPod, AsciiDoc or a similar format, so people will be able to learn the language without needing to delve into many tests or the core code itself. The documentation will be kept mostly up-to-date, but we can expect it to grow somewhat out-of-sync with the code.
We’re not planning to make the documentation exhaustive - for example, the internals of the front end will not be very well documented, as such documentation tends to get out-of-sync with the code. In general, the code should be structured to be self-documenting and kept easy to understand through refactoring.
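As a sketch of what such user-facing documentation might look like in POD (the function shown and its behaviour are hypothetical, chosen only to illustrate the format):

```pod
=head1 NAME

Spark builtins - the C<map> function

=head1 SYNOPSIS

    (map square (list 1 2 3))

=head1 DESCRIPTION

Applies a function to every element of a list, returning a new list
of the results. The original list is left unmodified.

=cut
```

POD and its relatives render cleanly to HTML and man pages, which fits the goal of letting people learn the language without reading the test suite.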