At the upcoming ECOOP conference in Nantes, France, the 3rd European Lisp Workshop will be held. Part of that workshop is a breakout group on hardware for Lisp systems:
The Breakout group "Reclaim the Hardware Design Space!" on the 3rd European Lisp Workshop is about creating hardware that is suited to dynamic programming languages in general, and LISP in particular. We will be investigating current hardware technologies to find out what they have to offer for systems that are dynamic from the ground up.
The breakout group is meant to provide a meeting point for researchers and practitioners working on hardware implementation techniques for dynamic languages. To participate, you should know something about the implementation of dynamic languages, about hardware, or both. You are not required to actually have created a LISP machine yourself, but you should be aware of some of the associated issues in order to take part in the discussion.
Some of the topics of interest are:
* Implementation of LISP-friendly CPUs in hardware
* Hardware-assisted garbage collection
* Self-synthesis and hardware/software co-design
* LISP-based ASLs for hardware description and synthesis
* Techniques for CPU implementation on FPGAs
* Reengineering ancient CPUs in software or hardware
In the breakout group, time will be provided for researchers to demonstrate their achievements and positions. If you have something to present, please let us know and we will provide you with presentation time. You may also publish information on this web site in order to let participants get some idea of what you are working on in advance.
We hope that the breakout group will help advance dynamic hardware and provide a place for researchers and practitioners to work together.
> The Breakout group "Reclaim the Hardware Design Space!" on the 3rd European Lisp Workshop is about creating hardware that is suited to dynamic programming languages in general, and LISP in particular. We will be investigating current hardware technologies to find out what they have to offer for systems that are dynamic from the ground up.
> The breakout group is meant to provide a meeting point for researchers and practitioners working on hardware implementation techniques for dynamic languages. [..]
> [..]
> Some of the topics of interest are:
> Implementation of LISP-friendly CPUs in hardware [..]
Which features are thought of as being Lisp-friendly?
Beware of the warning in Chapter 2 on page 109 of Hennessy, John L. and Patterson, David A., "Computer Architecture: A Quantitative Approach", second edition:
"Pitfall: Designing a "high-level" instruction set feature specifically oriented to supporting a high-level language structure.
[..]
[..] often the instructions are simply overkill - they are too general for the most frequent case, resulting in unneeded work and a slower instruction. [..]
[..]"
Hennessy and Patterson do not consider reconfigurable logic, and their prime concern is the raw performance of general-purpose processors. Even though the techniques they describe make sense given that goal, reconfigurable logic makes it possible to reconsider their initial assumptions, perhaps coming to different conclusions.
I don't know about Lisp, but existing implementations on stock hardware of functional languages such as ML and Haskell tend to be penalized by latency on loads and unpredictable [dynamic] branches.
The same kind of support that helps speculative execution (non-trapping loads, predication, etc.) is likely to help Lisp and friends. Loads that mask off part of the address (tag bits) can help. There is a lot that can be done for hardware-assisted garbage collection. Finer control over the caches can help, too.
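To make the tag-masking point concrete, here is a minimal C sketch of what a runtime on stock hardware has to do today: every dereference of a tagged pointer pays an explicit AND before the load, which is exactly the work a masking load instruction would absorb. The 3-bit low-tag scheme and all names here are illustrative assumptions, not taken from any particular Lisp implementation.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical low-tag scheme: the low 3 bits of every word carry a type
   tag, which is possible because heap objects are 8-byte aligned. */
#define TAG_MASK 0x7u
#define TAG_CONS 0x1u

typedef uintptr_t lispval;

/* On stock hardware every dereference needs an explicit AND to strip the
   tag; a "masking load" would fold this step into the load itself. */
static void *untag(lispval v) {
    return (void *)(v & ~(uintptr_t)TAG_MASK);
}

static lispval tag_cons(void *p) {
    return (lispval)p | TAG_CONS;
}

/* Fetch the car of a tagged cons: type check, mask, then load. */
static uintptr_t tagged_car(lispval v) {
    assert((v & TAG_MASK) == TAG_CONS); /* software type check: one AND */
    return ((uintptr_t *)untag(v))[0];
}
```

With hardware support, `tagged_car` would be a single load (optionally trapping on a bad tag) instead of an AND, a compare, and a load.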
All of these are pretty mundane ideas however. I'd be curious for more radical ones.
I think you're probably right. It is difficult to introduce instructions useful to Lisp in such a way that they:
- Don't make the machine harder to design, or make it slower
- Don't fix the idea of how certain Lisp idioms should be implemented
But still, I'd be interested to see what people come up with. The pitfall H & P describe is that of designing an instruction set in a high-level manner, which is not the subject of this discussion. It may be possible to make a microprocessor more Lisp-friendly by doing a lot less than that.
I'm not sure it's really necessary, though: modern Lisp compilers like CMUCL and SBCL produce fast code. Such performance faults as exist are mainly due to weaknesses in the compilers/VMs themselves, rather than any fundamental problem in converting Lisp into machine language.
I think instead of implementing the SECD machine, as described at
formatting link
(BTW: in Internet Explorer there are no scrollbars), with bytecodes and a compile step, it would be more interesting to implement an interpreter within an FPGA, as I've started to describe at
formatting link
But using some ideas from LispKit is a good idea. First I'll implement it in software and then translate it to VHDL. Below is a first implementation of an interpreter for a modified version of LispKit Lisp. Currently it is non-lazy (I think this is easier for the first version of the FPGA implementation, too). I've started with the description of the interpreter in the book "Functional Programming" by Peter Henderson, pages 120/121, but modified the syntax a bit to make it easier for people who already know Common Lisp:
- let and letrec use a syntax more like Common Lisp
- numbers need not be quoted
Also, expressions are evaluated as if entered at the REPL; the expression need not return a callable function.
The next step will be to implement a more hardware-oriented simulation of the interpreter (with my own memory management and GC, e.g. as implemented in the original Pascal implementation of the SECD machine at ftp://ftp.comlab.ox.ac.uk/pub/Packages/LispKit/ ).
#+:Lispworks (editor:setup-indent "letrec" 2 0 2)
(defun test ()
  (loop for (result expression) in
        '(;; some simple tests
          (nil nil)
          (1 1)
          (3 (+ 1 2))
          (9 (let ((square (lambda (x) (* x x))))
               (square 3)))
          ;; creating a list
          ((0 1 2 3 4 5)
           (letrec ((i (lambda (n)
                         (cons n (if (< n 5) (i (+ n 1)))))))
             (i 0))))
        ;; the loop body was cut off in the original post; presumably each
        ;; expression is fed through the interpreter and compared:
        do (assert (equal result (secd-eval expression nil nil)))))
(defun secd-assoc (x n v)
  (if n
      (if (member x (car n))
          (secd-locate x (car n) (car v))
          (secd-assoc x (cdr n) (cdr v)))
      (error "variable not found: ~a" x)))

(defun secd-locate (x l m)
  (if (eql x (car l))
      (car m)
      (secd-locate x (cdr l) (cdr m))))

(defun secd-eval-list (l n v)
  (when l
    (cons (secd-eval (car l) n v)
          (secd-eval-list (cdr l) n v))))
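For the hardware-oriented memory management step, one common shape is a fixed pool of cons cells with a free list and a simple mark/sweep collector, which maps naturally onto a flat cell memory in an FPGA. The C sketch below is only an illustration of that shape; the names, the pool size, and the recursion in `mark` are my assumptions, not details of the Pascal original.

```c
#include <assert.h>
#include <stddef.h>

#define HEAP_CELLS 1024

/* One cons cell: two fields plus a mark bit, kept in a flat cell memory
   rather than a general-purpose heap. */
struct cell {
    struct cell *car, *cdr;
    int marked;
};

static struct cell heap[HEAP_CELLS];
static struct cell *free_list;

static void heap_init(void) {
    free_list = NULL;
    for (int i = HEAP_CELLS - 1; i >= 0; i--) {
        heap[i].cdr = free_list; /* chain free cells through cdr */
        free_list = &heap[i];
    }
}

static struct cell *cons(struct cell *car, struct cell *cdr) {
    assert(free_list != NULL); /* a real version would trigger GC here */
    struct cell *c = free_list;
    free_list = c->cdr;
    c->car = car;
    c->cdr = cdr;
    c->marked = 0;
    return c;
}

/* Mark phase: everything reachable from a root survives the sweep.
   Recurses on car, iterates down cdr. */
static void mark(struct cell *c) {
    while (c != NULL && !c->marked) {
        c->marked = 1;
        mark(c->car);
        c = c->cdr;
    }
}

/* Sweep phase: unmarked cells go back on the free list; marks are reset. */
static void sweep(void) {
    free_list = NULL;
    for (int i = 0; i < HEAP_CELLS; i++) {
        if (heap[i].marked) {
            heap[i].marked = 0;
        } else {
            heap[i].cdr = free_list;
            free_list = &heap[i];
        }
    }
}
```

The per-cell mark bit and the linear sweep are precisely the kind of operations that "hardware-assisted garbage collection" could run in parallel with the mutator.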
--
Frank Buss, fb@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
I think it depends a lot on what model you have in mind for the implementation - are you trying to make it fast for a bytecode interpreter, a JIT, or compiled Lisp? And once you pick a flavour (or several), it makes sense to see that other languages that get processed in a similar way (Scheme, Java, Python, Perl, ...) get a speed-up too.
For example, on a modern machine you can easily spend way too much time doing slow string hashes and string compares for symbol lookup (OK, not just symbol lookup). You can also speed up dynamic table lookups.
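The string-hashing cost is exactly what interning addresses in software: hash a symbol's name once when it is first seen, and every later comparison is a single pointer compare (Lisp's EQ). A minimal sketch, where the hash function and table size are arbitrary choices of mine:

```c
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 64

struct symbol {
    char *name;
    struct symbol *next; /* chain within a hash bucket */
};

static struct symbol *buckets[NBUCKETS];

/* djb2-style string hash; only paid once per distinct name. */
static unsigned hash(const char *s) {
    unsigned h = 5381;
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h % NBUCKETS;
}

/* intern: the same name always yields the same symbol object, so symbol
   equality afterwards is a pointer compare, never a string compare. */
static struct symbol *intern(const char *name) {
    unsigned h = hash(name);
    for (struct symbol *s = buckets[h]; s; s = s->next)
        if (strcmp(s->name, name) == 0)
            return s;
    struct symbol *s = malloc(sizeof *s);
    s->name = malloc(strlen(name) + 1);
    strcpy(s->name, name);
    s->next = buckets[h];
    buckets[h] = s;
    return s;
}
```

Hardware could help with the residual costs - the hash itself and the bucket walk - but interning already removes string work from the hot path.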
And of course, then there are tagged pointers... which I think can be made beneficial, OoO-friendly, C-neutral, and not detrimental to performance in any way. Which of course means you have to treat them in a totally RISC-y, "the code knows what it's doing" manner, and not do things like hardware tag checks on load/store/jump.
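A concrete instance of "the code knows what it's doing": if fixnums are tagged with a low zero bit, addition works directly on the tagged words with no untagging at all, so a hardware tag check on every ALU operation would mostly be wasted work. The tagging scheme below is a common convention, used here as an illustrative assumption:

```c
#include <stdint.h>

/* Hypothetical scheme: fixnums carry tag bit 0 = 0, i.e. a tagged fixnum
   is the integer shifted left by one. */
typedef intptr_t lispval;

static lispval make_fixnum(intptr_t n) { return n << 1; }
static intptr_t fixnum_value(lispval v) { return v >> 1; }
static int is_fixnum(lispval v) { return (v & 1) == 0; }

/* Tagged addition: because both tags are 0, a plain ADD of the tagged
   words yields the correctly tagged sum - no per-operand tag stripping,
   and no need for the hardware to check tags either. */
static lispval fixnum_add(lispval a, lispval b) { return a + b; }
```

When the compiler has already proved both operands are fixnums, any hardware tag check on that ADD is pure overhead, which is the RISC-y argument above.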