I've always written Lisp with an editor, into a file, and used the REPL only for exploratory and debugging tasks.
Some of my early Lisp programs used a Makefile.
Lisp is great even without the scatter-brained approaches that some Lisp programmers advocate, approaches that only make people roll their eyes and turn away from Lisp.
Smalltalk's method-editor approach has always seemed nicer to me (when combined with a Smalltalk system that knows how to sync its code to disk; I'm not a huge fan of image-based systems ... just everything else about Smalltalk).
I like Objective-C a lot. I'm pretty open to Smalltalk. What do you use it for?
I've skimmed through a summary of Pharo but I don't want a whole GUI system, I want to write scripts or maybe web apps. Do I actually want the whole GUI and all but just don't know it yet?
The Pharo MOOC and documentation show web apps, as does the Pharo by Example book linked from the Pharo site. Note that Seaside is the framework used to build web apps.
Captured objects in closures are another problem in JavaScript.
A long time ago I was using a JavaScript framework where the memory usage went up steadily as you navigated the single page app. One cause was captured variables (referencing large object trees) in event handlers. The only viable way to fix the issue was to write my own custom framework taking extreme care with closures and sometimes nulling variables (the code for the existing framework was just too complex to fix - and memory references are hard to debug in browsers). I saw the same issue with memory usage growth in a different framework. Some of the issue was that Internet Explorer had problems with circular references between JavaScript and the DOM, but the captured variables in closures were their own problem.
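For what it's worth, that failure mode is easy to reproduce outside any framework. A minimal sketch (names invented, not from the framework in question): the leaky handler keeps the whole model tree reachable for as long as it is registered, while the tight one copies out the one field it needs.

```typescript
// A model with a large payload alongside the small bit the UI actually needs.
type Model = { huge: number[]; title: string };

const handlers: Array<() => string> = [];

function registerLeaky(model: Model): void {
  // The closure captures `model` itself, so `model.huge` can never be
  // collected while this handler stays registered.
  handlers.push(() => model.title);
}

function registerTight(model: Model): void {
  // Copy out only what the handler needs; the large tree becomes
  // collectable as soon as the caller drops its own reference.
  const title = model.title;
  handlers.push(() => title);
}

registerTight({ huge: new Array(1_000_000).fill(0), title: "page" });
console.log(handlers[0]()); // "page"
```

Nulling variables, as described above, is the manual version of what `registerTight` does structurally: making sure the closure's environment no longer reaches the big object.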
I can't see how that would work. The arguments variable is already hard to use: accessing it from within the function hardly helps you, and giving access to it by passing it as a parameter to another function, or capturing it with another closure, doesn't help you either. And your idea would severely interfere with optimisations (e.g. causing JIT problems and interfering with variable liveness analysis), just like some other dynamic features do.
You'd need a dependency graph of optimizations to understand where a new dependency would force other optimizations to be disabled.
This doesn't seem like a hard-and-fast can't, though. We keep getting arguments against doing stuff because it might not be performant. I don't disregard those concerns entirely, but I think we also forbid too eagerly; we prematurely optimize by pruning possibility, all too often.
"This would make functions second-class. [...] Is that really much of a loss?" Yes! A huge loss. The cons of live programming don't even come close to outweighing the pros. You can always fall back to edit-compile-restart if you need to.
Could a two-level semantics provide a spot in the middle? Closures and continuations would capture an abstract control-flow and set of data-demands; this skeleton could not be updated, but specific implementations of it could be swapped out freely.
Do you often need non-trivial lambdas that you completely construct at runtime?
Second-class functions can still be passed around. They just can't be constructed at runtime; you would have to declare them.
Maybe some "first-and-a-half order" approach to functions could work. You could still declare lambdas in place, but they wouldn't have direct access to anything local to your function; instead of a closure, everything needed would have to be passed as a parameter. It could even allow partial application using local variables; values would just be copied, not closed over.
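A minimal sketch of that copy-not-close-over idea, with a hypothetical `bindByValue` helper standing in for what would really be a language feature:

```typescript
// Hypothetical helper illustrating "capture by copy": the lambda receives its
// environment as an explicit parameter, and bindByValue snapshots the current
// values instead of closing over the variables themselves.
function bindByValue<E extends object, A, R>(
  fn: (env: E, arg: A) => R,
  env: E,
): (arg: A) => R {
  // Shallow snapshot; a real language feature would also copy (or forbid)
  // reference-typed environments.
  const copy: E = { ...env };
  return (arg: A) => fn(copy, arg);
}

const counter = { n: 1 };
const addN = bindByValue((env: { n: number }, x: number) => env.n + x, counter);

counter.n = 100; // mutating the local afterwards has no effect on addN
console.log(addN(5)); // 6, not 105
```

The point of the design is visible in the last two lines: the bound function is a plain value, insulated from later mutation of the locals it was built from.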
This is more interesting than it looks, probably because the best part (IMHO) is the type system, which is what enables the other ideas.
> In Julia, types are first-class and every value has a type
This is what I did from the start in https://tablam.org, and only later found out that it is not common! It is so much more intuitive this way, and simpler to check, by a lot. In fact, I wasted so much time adapting type inference algorithms that are hard to translate because, for some reason, graphs are imposed on trees, types are second-class and live at a distance (and are erased), and everything is a mess that way.
The relational model already makes this so simple: `project / rename / extend` relational operators cover you.
From this, other facilities become possible. Note how in `SQL` you don't have first-class functions per se, but now try to imagine that a function is a table and suddenly it is much better!
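To make the function-as-table idea concrete, here is a small sketch (illustrative names, not SQL or TablaM): a finite function stored as rows, queried with `project` / `rename` / `extend` written as ordinary list operations.

```typescript
// Illustrative only: treat a finite function as a table of rows, then apply
// project / rename / extend as plain operations on that table.
type Row = Record<string, unknown>;

const project = (rows: Row[], cols: string[]): Row[] =>
  rows.map((r) => Object.fromEntries(cols.map((c) => [c, r[c]])));

const rename = (rows: Row[], from: string, to: string): Row[] =>
  rows.map(({ [from]: v, ...rest }) => ({ ...rest, [to]: v }));

const extend = (rows: Row[], col: string, f: (r: Row) => unknown): Row[] =>
  rows.map((r) => ({ ...r, [col]: f(r) }));

// The "function" square(x) = x*x over a finite domain, as a table:
const square: Row[] = [1, 2, 3].map((x) => ({ x, y: x * x }));

// Querying the function is just querying the table:
const renamed = rename(square, "y", "square");
const widened = extend(renamed, "cube", (r) => (r.x as number) ** 3);
const result = project(widened, ["x", "cube"]);
console.log(result);
// [{ x: 1, cube: 1 }, { x: 2, cube: 8 }, { x: 3, cube: 27 }]
```

With those three operators, the shape-changing work that macros or generics usually do (adding a column, dropping one, renaming) becomes ordinary data manipulation.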
> Putting a string in a Vector{Int64} is simply not allowed.
In Java the same applies, but Java is a weird beast. While every value has a type, not all types are equal: there are primitive types like int and double, which are value types, and Integer and Double, which are class types. This is muddied further by generics, which were forced into the existing system via type erasure at compile time. So while you can't put a String into an array of Double, you definitely can put a String into a hash map of Double if you really want to: the hash map is implemented as a class, and the type parameters of a class are just an illusion enforced at compile time. No sane person does this, and everyone uses all the available tricks to eliminate the possibility, but it is definitely possible.
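Java isn't alone here; TypeScript erases its type parameters the same way, so the equivalent trick is a one-liner. A quick sketch of the same illusion:

```typescript
// The Map's type parameters exist only at compile time; an assertion is
// enough to defeat them, just like a raw-type cast in Java.
const scores = new Map<string, number>();
(scores as unknown as Map<string, string>).set("alice", "oops");

// At runtime nothing objects; the "number" was a compile-time fiction.
console.log(scores.get("alice")); // "oops"
console.log(typeof scores.get("alice")); // "string"
```

This is exactly the "illusion enforced during compile time" described above: the container itself never checks what you put in it.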
This seems interesting; have you looked into ECS at all? I'm not sure it would make sense, but it seems like you could store data in columnar form by type to get efficient vectorization and access patterns for operations over types, and systems might make for an interesting way of handling data. It's not exactly relational, but it's kind of similar. I've wondered what an ECS framework for normal server-side code would look like, where you essentially define an RPC's handling implicitly as a series of systems acting on it.
> but it seems like you could store data in columnar form by type to get efficient vectorization and access patterns for operations over types
I tried at first to be fully columnar, then caved and tried a hybrid, and now I'm doing mostly rows with 2d NDArrays.
The major reason is that going columnar flips everything, and then I need to recreate tons of APIs, which is especially costly with FFI or external APIs. I was looking into mimicking kdb+, and yes, that is what they ended up doing. It leads to a more insular community (which I don't have, but well, that is a worry!).
Ah yeah, that makes sense. Even just having typed data stored in arrays, though, could be better than the typical graph-like structure you might get from an OO language.
I have nothing super finished, this is for fun (until I get time or funding!).
But I bet it is far more reusable than normal code in most cases.
The reason is that the relational/array model deals in plain values: https://www.infoq.com/presentations/Value-Values/. Combined with structural types and the power of relational operators, you can eliminate a lot of the cases where macros or generics come in.
One major feature of a "values" language is that it is naturally introspectable: not just values, but also types and metadata. So this is NOT crazy:
    mod People:
        data Person:
            name: Str
            age: Int
        data Customer: Person // not subtyping but `SELECT * FROM Person INTO Customer`
            active: Bool
    end

    mod Auth:
        data User: People.Person // not subtyping but `SELECT * FROM People.Person INTO User`
            password: Password
            active: Bool
        data UserSafe: User ? deselect password // SELECT People.Person.*, active INTO UserSafe
    end

    // Accepts anything that matches `SELECT name, age FROM ?`. Critically, the
    // other fields are not removed, just invisible here, so you avoid type
    // conversions in unnecessary cases.
    fn print(p: Person ...)

    fn print_type_no_person(mod: Mod): String
        let types = mod ?select as_data(this) != Some(Person...) // anything is a relation, anything can be queried
        types ? union | into(String) // SELECT t1 UNION t2 ... + reduce()
In a lot of ways the Logic Programming languages are effectively relational (e.g. Prolog, Datalog, KIF) but the search behavior for relations that satisfy a query is a bit different than SQL-like relational languages.
On a previous project I embedded a SQL-like sub-language into a model language so that ETL pipelines and OLAP/OLTP processing could be generated to query aggregates and value lookups during inference. It is nontrivial to embed a relational language into another language without making some compromises but there are certainly contexts where it is quite useful. I think C#'s LINQ is a reasonable effort at this but I'm not much a fan of the rest of that language.
> At some point the only option is to kill it with fire turn it off and on again. Extinguish that spark of life and turn it back into a puppet.
That turns out to be the solution that nature has come up with, so while it might be possible to do an end-run around this constraint somehow and keep a dynamic system running forever, I'll give long odds against.
All "live systems" in nature (i.e. living things) eventually die. Life goes on not by creating things that live forever, but by reproducing, i.e. regularly rebooting from a previously vetted simpler state.
And this happens at all levels of abstraction. At higher levels it's not called "death" but "extinction", yet it amounts to the same thing: the wholesale destruction of previously accumulated state.
> Methods overridden at runtime, traces that end with a call to a closure that no longer exists, event handlers whose execution order depends on side-effects during module loading, stack-traces which contain multiple different versions of the same function.
My experience is that the more orthogonal the data, logic, and presentation in the system, and the more methodical the naming, the less the system itself demands attention, and the more creativity can be focused on the task rather than on dealing with the "personality" of the system.
A REPL is indispensable when prototyping and experimenting with ideas. The fact that everything is malleable and inspectable in a running Python / JS / elisp environment is very helpful at that stage.
But when you have chosen a shape, more rigid structures provide static guarantees that are the more welcome the larger your project grows.
>The Mote in God's Eye is a science fiction novel by Larry Niven and Jerry Pournelle, which explores the concept of the "moties," an alien species that has a unique method of starship construction. The moties build their ships in a modular way, with many different components that can be added or removed as needed. This makes their ships highly adaptable and able to respond to changing situations, but also somewhat unpredictable and complex.
>John Nagle's comment, "Late binding is programming for moties," suggests that he is drawing a parallel between the adaptable but complex nature of the moties' starship construction and the approach to programming that relies on late binding. He is likely implying that late-bound programming, like the construction of motie ships, can lead to systems that are highly flexible and capable of responding to a wide range of inputs, but at the same time, these systems can be more complex and harder to predict, maintain, or debug.
Ok so who gets to be the Crazy Eddie of late binding?
(Also, I've never seen a dataset cited before but I kind of love it, wow. But also, I absolutely detest that there is zero ability to go back & see what trained these weights, what source material these ideas & words came from.)
I didn't expect determinism's usefulness to ever be under question, but I guess there's a first time for everything.
In Elixir most of us use the REPL to sketch out an idea; to shorten the development cycle a bit and have something that looks to be working. Once that's happened then we write the code properly, with tests and all.
REPL and tinkering are just one tool to make your work more deterministic. It's not a personality that wins over all other personalities.
And stop looking for ghosts in the machine, it ain't happening. I like me a good sci-fi as well but general AI is quite far away still.
Google outlawing dynamic code in Web Extensions/MV3 is a travesty of the highest order. There's no place I want more liveness than in my agents. Yet my agents must all be dead. For shame, ye villains.
> It's not obvious what to do with long-running background tasks though.
You avoid them and chunk the work into pieces.
That's what every framework for long-running tasks does; that's what people who write them by hand end up doing after the first or second time they fat-finger a Ctrl-C.
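A minimal sketch of the chunk-and-checkpoint shape (names hypothetical; the checkpoint here is in memory, where a real system would persist it):

```typescript
// Instead of one long-running loop, process the work in small chunks and
// record a cursor after each one, so an interrupt (or a fat-fingered Ctrl-C)
// loses at most the current chunk.
type Checkpoint = { next: number };

function runChunk(items: number[], cp: Checkpoint, chunkSize: number): number[] {
  const end = Math.min(cp.next + chunkSize, items.length);
  const processed = items.slice(cp.next, end).map((n) => n * 2);
  cp.next = end; // a real system would write this to durable storage
  return processed;
}

const work = [1, 2, 3, 4, 5];
const cp: Checkpoint = { next: 0 };
const out: number[] = [];
while (cp.next < work.length) {
  out.push(...runChunk(work, cp, 2)); // each call is short; safe to stop between calls
}
console.log(out); // [2, 4, 6, 8, 10]
```

Stopping between calls and restarting from the saved `cp.next` resumes the work without redoing finished chunks, which is the whole point of the pattern.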
Just passing around closures in Java as method arguments—or “callbacks” because this is a webapp so of course—is too dynamic for me. Even if you are just using it to abstract something out and it isn’t really more dynamic than using methods directly.
Not if they're just values. They make it hard to reason about the system if they mutate variables - even then, mutating local variables would probably be ok, the problem is they mutate distant variables through references. In the language design the article is proposing, async promises would not cause trouble (though by the same token they would be quite limited in power).
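A small sketch of the distinction (plain TypeScript, nothing from the article): a closure over a value versus a closure that mutates distant state through a reference.

```typescript
// A closure over plain values is easy to reason about: nothing outside
// can change what add3 means once it is created.
const makeAdder = (n: number) => (x: number) => x + n;
const add3 = makeAdder(3);
console.log(add3(4)); // 7, always

// A closure that mutates shared state through a reference is not: every
// holder of `shared` observes the side effects, in whatever order calls happen.
const shared = { total: 0 };
const bump = () => { shared.total += 1; };
bump();
bump();
console.log(shared.total); // 2, visible to any other holder of `shared`
```

The first kind is what "just values" means above; the second is the mutation-at-a-distance that makes a system hard to reason about.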