Hacker News: wingo's comments


hah! I was just coming here to comment and mention Igalia, but you beat me to it. :)

Igalia is a great company, really wonderful people and they do fantastic work. I really love the model they've created for the company.


GC needs to know about all references in the program. If the mutator (the program) is running concurrently with the collector, it's difficult (though not impossible) for the collector to construct this consistent view.
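To make the difficulty concrete, here is a minimal sketch (the names and structure are mine, not any particular collector's) of a Dijkstra-style write barrier, one way a concurrent collector keeps a consistent view: whenever the mutator stores a pointer into an object, the stored-to target is shaded grey so that marking cannot miss it.

```c
#include <assert.h>
#include <stddef.h>

enum color { WHITE, GREY, BLACK };

struct obj { enum color color; struct obj *ref; };

static struct obj *grey_stack[16];
static size_t grey_top;

/* Shade an object grey and queue it for marking, if not already seen. */
static void shade(struct obj *o) {
    if (o && o->color == WHITE) {
        o->color = GREY;
        grey_stack[grey_top++] = o;
    }
}

/* The write barrier: every mutator pointer store also shades the target.
   Without this, the mutator could hide a reference inside an
   already-scanned (black) object and the collector would free it. */
static void write_field(struct obj *src, struct obj *val) {
    src->ref = val;
    shade(val);
}

/* The collector's marking loop: blacken grey objects, shading children. */
static void drain(void) {
    while (grey_top > 0) {
        struct obj *o = grey_stack[--grey_top];
        o->color = BLACK;
        shade(o->ref);
    }
}
```

Anything left WHITE after `drain` returns is unreachable and may be reclaimed; the barrier is what makes that claim hold even while the mutator runs.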

Here is a good introductory talk: http://www.infoq.com/presentations/Understanding-Java-Garbag...


I recommend Andrew Kennedy's "Compiling with Continuations, Continued" article if you are actually interested in using continuations in a production compiler. Might's formulation of CPS is good for analysis but not actually that great for code generation in my opinion. My take on the topic is here: http://wingolog.org/archives/2014/01/12/a-continuation-passi...


Additionally, for JavaScript specifically I found this paper to be very clever and enlightening. Good code generation depends on the platform you're targeting, and I think this is the only way to get continuations performant on JS. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.89.9...


That's certainly the case for full CPS, but using a CPS intermediate language does not imply call/cc. Indeed, Kennedy targeted the .NET CIL, a direct-style VM. The important thing to note is that continuations and functions can be statically distinguished. The article conflates the two, which is why I don't find it very useful.
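A tiny sketch of that distinction (illustrative C, with invented names): in CPS the continuation is just an explicit extra argument, and calls to it are always tail calls, which a compiler that can tell continuations apart from ordinary functions can lower to plain jumps.

```c
/* A continuation: receives the result and never returns to the caller. */
typedef void (*cont)(int);

/* Direct style would be: int add(int a, int b) { return a + b; }
   In CPS, "returning" means tail-calling the continuation k. */
static void add_cps(int a, int b, cont k)    { k(a + b); }
static void square_cps(int a, cont k)        { k(a * a); }

static int result;
static void finish(int r) { result = r; }            /* the "halt" continuation */
static void then_square(int sum) { square_cps(sum, finish); }

/* (2 + 3)^2 with every intermediate step named by a continuation. */
static void example(void) { add_cps(2, 3, then_square); }
```

Because `finish` and `then_square` are statically known to be continuations (they are only ever tail-called, never captured or returned), a compiler can compile these calls as gotos; it is conflating them with first-class functions that forces the expensive general case.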


> In this paper we present our adaption of exception-based continuations to JavaScript.

Goto via exceptions? An intriguing idea, but I guess exceptions are glorified goto in any case. The generated source must be interesting, though.


They play this video while you wait for the train to take you up to that glacier (the mer de glace). I stared at it gapemouthed for some time!


I'm pretty impressed, and especially by the apachebench test -- to serve a spike of 30K reqs/s while having 600K idle connections is quite good.


You already have a programming language in your makefiles.

http://okmij.org/ftp/Computation/#Makefile-functional


AFAIUI libc is in the initrd, but it's a statically linked libc. Guile's dynamic FFI uses dlopen/dlsym to get its function pointers, so it can't do that in a statically linked libc. (Or could it? I suppose there's no reason why a statically linked library wouldn't expose symbol tables, whether via dlsym or some other means.)

Your point about size is well-taken though. Besides libc in this case, Guile's build products are much larger (and slower) than LuaJIT's images because we've been trying to write the compiler in Scheme from the very beginning. We can't even think about doing a nice JIT until we have a good AOT compiler, because otherwise the JIT written in Scheme would be too slow.


I believe that LuaJIT's FFI can use symbols that are statically linked in, but I haven't tested it myself so I could be wrong. You can't use dlsym in a static binary, though, so it might need some work. My plan for porting ljsyscall to straight Lua without the FFI is to generate C bindings rather than writing them by hand. But I have done all the ioctl stuff in Lua, not C, so all I need is the syscalls and structs defined, and then all the FFI code for ioctls, netlink etc. will just work; all the constants and so on are defined on the Lua side (that has other issues, like dealing with the ABI directly in Lua).

I might be tempted in your case just to use a huge initrd for now; it will be freed after boot anyway.


LuaJIT's "running" page http://luajit.org/running.html suggests linking with -E (gcc -Wl,-E) to make all your global symbols visible to dlsym() even when you statically link.

That's what we're doing in Snabb Switch http://snabb.co/snabbswitch/ to create a statically linked executable that heavily uses FFI. (Currently don't statically link libc though so I don't know if there is a particular gotcha there.)


I think it won't work if you statically link LuaJIT too. I think it still uses dlopen, which with glibc still needs the dynamic libc available at run time, and might not work with other libcs. But you could fake it all with a statically linked table of function pointers quite easily I guess.
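The "statically linked table of function pointers" idea is easy to sketch (illustrative code, not LuaJIT's actual mechanism): a hand-rolled symbol table that a static binary consults instead of dlsym().

```c
#include <string.h>

struct sym { const char *name; void *addr; };

/* Hand-maintained table of the symbols the FFI is allowed to resolve.
   Referencing the functions here also forces the linker to keep them
   in the static binary. */
static const struct sym symtab[] = {
    { "strlen", (void *)strlen },
    { "strcmp", (void *)strcmp },
};

/* A stand-in for dlsym() that works without any dynamic loader. */
static void *static_dlsym(const char *name) {
    for (size_t i = 0; i < sizeof symtab / sizeof symtab[0]; i++)
        if (strcmp(symtab[i].name, name) == 0)
            return symtab[i].addr;
    return NULL;
}
```

The obvious cost is that every symbol must be listed by hand (or generated), but it sidesteps both the dlopen dependency and the linker dropping "unused" libc symbols.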


Modulo the bug at http://sourceware.org/bugzilla/show_bug.cgi?id=15022 it's possible to dlopen symbols from the static binary.

However, all libc symbols not used by Guile are omitted from the resulting static binary, which is why the FFI is not an option. (This is independent of Guile or the libc brand.)

As for the size, the initrd is less than 5 MiB (the 'guile' binary is less than 4 MiB).


If you dynamically linked it would be quite a bit bigger, libc alone (none of the other libs like libm libdl etc) is 1.8MB on my system. But it should be manageable. Or you can force the symbols to be linked in by adding a structure with the symbols in.


You can redirect the control-flow of the program by overwriting a return address or a vtable. Once you've done that, it's easiest if you can redirect control to code you've written, in executable heap; but if that's not possible, you can still use return-to-libc, or potentially "return-oriented programming" strategies.


http://git.savannah.gnu.org/gitweb/?p=guile.git;a=commit;h=2...

Fixed by Mark H Weaver, on Fri, 7 Dec 2012.

Avoid signed integer overflow in scm_product

* libguile/numbers.c (scm_product): Avoid signed integer overflow, which modern C compilers are allowed to assume will never happen, thus allowing them to optimize out our overflow checks.

* test-suite/tests/numbers.test (*): Add tests.


A nice example for Regehr's post: C and C++ Aren’t Future Proof

http://blog.regehr.org/archives/880


Awesome, I've added a link to the bottom of the post.


Checking for signed overflow by first performing signed overflow is a great C antipattern.


Exactly. I've seen plenty of overflow detection libraries that do precisely this. The undefined overflow plus the lack of access to CPU flags makes a pretty good argument against doing this stuff at the C/C++ level.
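The safe alternative is to test before multiplying, so the undefined operation never executes. A sketch of the general pattern (not Guile's exact fix):

```c
#include <limits.h>

/* Returns nonzero if a * b would overflow a long. All checks are done
   with division *before* the multiplication, so no signed overflow
   (undefined behaviour) ever occurs. */
int mul_would_overflow(long a, long b) {
    if (a == 0 || b == 0) return 0;
    if (a > 0) {
        if (b > 0) return a > LONG_MAX / b;   /* both positive */
        else       return b < LONG_MIN / a;   /* product negative */
    } else {
        if (b > 0) return a < LONG_MIN / b;   /* product negative */
        else       return b < LONG_MAX / a;   /* both negative */
    }
}
```

The case analysis is tedious precisely because C gives no access to the CPU's overflow flag, which is the commenter's point.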


Interesting, but given that the code in question computes (expt 2 488) as the square of (expt 2 244), is there actually any way for that bug to cause the observed symptoms?

(2^244 is being shown as zero. 2^488 is being shown as non-zero; presumably the right value though I haven't actually checked. If 2^244 is actually being computed as zero, then unless what's happening somehow depends on the depth of the stack -- surely not possible if this bug is the cause -- 2^488 should be computed as zero squared, which is zero.)
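For reference, the squaring strategy under discussion looks like this (a machine-word sketch; Guile's scm_integer_expt operates on bignums):

```c
/* Exponentiation by squaring: (expt b 488) is computed via the square
   of (expt b 244), and so on down, halving the exponent each step. */
unsigned long ipow(unsigned long base, unsigned int exp) {
    unsigned long result = 1;
    while (exp > 0) {
        if (exp & 1)
            result *= base;   /* odd exponent: fold in one factor */
        base *= base;         /* square */
        exp >>= 1;
    }
    return result;
}
```

With this scheme, 2^488 really is just 2^244 squared, which is what makes the reported symptoms (2^244 zero, 2^488 non-zero) so puzzling.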


Sure, it's possible. The sequence of multiplications chosen to reach the exponent is different, so these expressions take different paths through scm_integer_expt.


But the sequence of multiplications chosen to reach the exponent isn't different. What the user's expt function does, when asked to compute (expt 2 488), is to compute (expt 2 244) and then square it. When it computes (expt 2 244) it does the exact same sequence of multiplications as if you just asked for that directly from the REPL.

What am I missing here?


I currently think the C compiler vendors have gone too far with this.

Although the specification says this behaviour is undefined, in practice I used to map the undefined portion to whatever my hardware did, not whatever the compiler felt like this week.

This ruins the "C as a portable assembler" use case, and it means that if I were doing this sort of work I'd probably write these sorts of routines in assembler rather than trying to fight with versions of the C compiler.


...how was C "portable" assembler if its semantics depended on the underlying hardware? That's by definition unportable.


Sounds like your problem is more that Scheme is too small for the programs you'd like to write. Implementations that are adequate to your needs being incompatible with each other is a natural side effect.

The new standard will help marginally, but there will be no grand unification of Schemes. R6RS was big enough for your needs but did not achieve wide adoption. There is no fundamental reason why this would be different for the larger R7RS report (on which work has not yet begun). Indeed, some implementations have already forked with an explicit intention of going on their own paths (Racket).

In short, for serious work, you might be able to share modules between implementations if that matters to you, but your overall application will be implementation-specific.

FWIW, IMHO, etc...


> Indeed, some implementations have already forked with an explicit intention of going on their own paths (Racket).

Racket takes an interesting approach with its scoped dialects. This allows the semblance (and some of the semantics) of separate implementations, while still preserving interoperability[1].

[1] I have one project which uses libraries written in the base, typed, and lazy dialects all together, without any issue.


It's true and really noteworthy - Racket is in large part a toolbox for language creation and a framework for those languages' interoperability[1]. It's one of the reasons I chose to stay with Racket after fulfilling my initial goal.

Which, because I never learned anything about Lisps in school, was just to become familiar with one. I looked hard at CL and, when it looked straight back at me and I felt its ancient, powerful gaze upon me, I quickly ran off to schemeland. Where, of course, I hit the multi-implementations wall immediately. I didn't want to learn a "toy" language (as in "here, have a language - now go and implement all the libraries you need from scratch or by wrapping C calls"), and I wanted to solve real-world problems with my first Lisp, so I naturally looked for "the best" implementation: the most library-rich, best documented, and most actively developed and used.

I chose Racket and I'm very happy I did. Not only did I learn about Lisp's beauty and power while building small but useful and fun things, I also ended up in an environment that makes me improve my skills every time I go back to the language, and it will probably continue to do so in the future: even when I finally learn all of the base (not racket/base, but racket) language, I will just transition smoothly to learning other languages and then to creating my own.

As a result I don't know Scheme at all, which makes me feel like I missed something. I know and use several SRFIs, but only when Racket does not provide alternatives, which happens rarely. I have no idea what is written in R6RS and I don't follow R7RS. Heck, I probably don't even know what Scheme is all about! On the other hand, though, I now know (somewhat) a Lisp, and a powerful, batteries-included, practical language and an experimental academic beast in one.

Anyway, don't use Racket if what you want is just a Scheme. [EDIT: or for embedding. Or for producing native binaries (if I understand correctly, 'raco exe' mentioned in the article works like py2exe rather than like "real" native compilation... I could be wrong, I've never used it). Or for anything that Racket is not suitable for ;), but] for everything else I can only recommend it.

(Don't mention Clojure in response to this comment, please. I'm somewhat allergic to it, it's not Clojure fault, it's mine, Clojure is good, really brilliant, very interesting. I'd like to like it, but I don't, sorry... So no, don't ask me 'have I tried Clojure' :) )

[1] http://www.ccs.neu.edu/home/matthias/Thoughts/Racket_is____....


But aren't far fewer real-world apps developed in Racket compared with CL?


It's true... I didn't search very hard, but I didn't find any "real world and/or big" apps written in Racket. [Edit: of course, if we exclude the Hacker News website, which is written in Arc, which is in turn written in Racket.] I think it's mainly because it is (or was) a Scheme - a language almost universally thought to be very beautiful, very useful in education, and completely useless in the real world.

If I remember correctly, breaking with this image was one of the reasons for the name switch, but that was done rather recently (in 2010) and it will take some more time before Racket gets its chance in the real world... Just look at how many years it took Haskell to convince people that it was something more than an obscure research project.

Certainly, the language would benefit greatly from a much larger number of libraries, for example, and their lack can be a serious setback for larger projects. On the other hand, the language itself includes many sophisticated, impressive features that help with programming such projects - one tiny example is a very nice module system, closer in essence (I think) to that of OCaml than to that of Python, and another is an object system, which is well thought out, easy to use, and really powerful.

So, while Racket is nowhere near Common Lisp (I really hope you used 'CL' as an abbreviation for Common Lisp and not Clojure :)) in industrial usage, I believe that it is (or will shortly be) ready for its chance in the real world. It would take one or two moderately successful startups building their products in Racket and open-sourcing all the libraries they wrote to make Racket a really viable alternative to other languages.

