I really wish the community had coalesced around gevent.
- no async/await, instead every possible thing that could block in the standard library is monkey-patched to yield to an event loop
- this means that you can write the same exact code for synchronous and concurrent workflows, and immediately get levels of concurrency only bounded by memory limits
- you'll never accidentally use a synchronous API in an async context and block your entire event loop (well, you can, if you spin on a tight CPU-bound loop, but that's a problem in asyncio too)
- the ecosystem doesn't need to implement libraries for asyncio and blocking code, everything just works
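The no-coloring property above can be sketched with the standard library alone. This is an illustrative simplification: OS threads stand in for gevent's greenlets here, and `fetch` is a made-up blocking function, not gevent API.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Ordinary blocking code -- no async/await, no special return types.
def fetch(n):
    time.sleep(0.05)  # stands in for a blocking socket read
    return n * 2

# Sequential use: just call it.
sequential = [fetch(n) for n in range(5)]

# Concurrent use: the *same* function, unchanged. Under gevent,
# monkey.patch_all() makes time.sleep/socket yield to the event loop so
# thousands of these can run on greenlets; OS threads are used here purely
# so the sketch runs with the stdlib.
with ThreadPoolExecutor(max_workers=5) as pool:
    concurrent = list(pool.map(fetch, range(5)))

assert sequential == concurrent == [0, 2, 4, 6, 8]
```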
There's a universe where the gevent patches get accepted into Python as something like a Python 4, breaking some niche compatibility but creating the first ever first-class language with green-threading at its core.
But in this universe, we're left with silly things like "every part of the Django ecosystem dealing with models must be rewritten with 'a' prefixes or be abandoned" and it's a sad place indeed.
While having the same code for sync and async sounds nice, monkey patching code at runtime seems hacky. Any library that wants to use a lower-level implementation of network calls would need to handle the monkey patching itself, I assume.
The idea would be that if this was accepted into the core as the way to go forward, it wouldn't be "monkeypatched" anymore, it would just be in core, officially supported.
Agreed. I chose gevent over asyncio for a backend in 2019, and we're still using it. Works pretty well. No plans to phase out gevent just yet.
Though the community has clearly centered on asyncio by now. So if I were to start a new backend today, it would reluctantly be asyncio. Unfortunately...
Java 1.1 had green threads, as does Java 21+ (though Java 21's green threads use an M:N threading model rather than the M:1 model of Java 1.1).
I worked quite a bit with gevent'd code 10+ years ago and also agree. Dealing with function coloring is incredibly unproductive. This is one of the things Go got right.
Libraries provide pure processing machines that pull data in through well-defined input functions (and produce output through return values), and then it's up to the call site (or host code, or whatever we want to call our code) to decide what color to paint the whole thing eventually.
I like `gevent` but I think it may have been too hacky of a solution to be incorporated to the main runtime.
"creating the first ever first-class language with green-threading at its core."
... isn't that what Go is? I think out of all languages I use extensively, Go is the only one that doesn't suffer from the sync/async function coloring nightmare.
I'm with you that function "coloring" (monads in the type system) can be unergonomic and painful.
> ... isn't that what Go is? I think out of all languages I use extensively, Go is the only one that doesn't suffer from the […] coloring nightmare.
Because it doesn't have Future/Promise/async as a built-in abstraction?
If my function returns data via a channel, that's still incompatible with an alternate version of the function that returns data normally. The channel version doesn't block the caller, but the caller has to wait for results explicitly; meanwhile, the regular version would block the caller, but once it's done, consuming the result is trivial.
Much of the simplicity of Go comes at the expense of doing everything (awaiting results, handling errors, …) manually, every damn time, because there's no language facility for it and the type system isn't powerful enough to make your own monadic abstractions. I know proponents of Go tend to argue this is a good thing, and it has merits. But making colorful functions wear a black-and-white trenchcoat in public doesn't solve the underlying problem.
One of the largest problems identified in the original "what color is your function" article ( https://journal.stuffwithstuff.com/2015/02/01/what-color-is-... ) is that, if you make a function async, it becomes impossible to use in non-async code. Well, maybe you can call "then" or whatever, but there's no way to take an async function and turn it into something that synchronously returns its value.
But in Go, it's very easy to do this; you can just do "result := <- ch" to obtain the value from a channel in synchronous code. (This blocks the thread, but in Go's concurrency model this isn't a problem, unlike in JavaScript.) Similarly it's very easy to take a synchronous function and do "go func() { ch <- myFunction() }()" to make it return its result in a channel.
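For contrast, Python's asyncio exhibits exactly the asymmetry the article describes: the sync-to-async bridge works only when no event loop is already running. A minimal sketch (`fetch` and `outer` are made-up names for illustration):

```python
import asyncio

async def fetch():
    return 42

# From plain synchronous code, bridging is possible at the top level:
assert asyncio.run(fetch()) == 42

# But inside a running loop, the same bridge fails: sync code called from
# an async context cannot re-enter the loop to wait on a coroutine.
async def outer():
    try:
        asyncio.run(fetch())  # RuntimeError: loop already running
        return False
    except RuntimeError:
        return True

assert asyncio.run(outer()) is True
```

In Go, blocking on a channel receive works the same way regardless of how deep in the call stack you are, which is the difference being pointed out above.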
> But in Go, it's very easy to do this; you can just do "result := <- ch" to obtain the value from a channel in synchronous code.
What you call "synchronous code" is really asynchronous. To actually have something that resembles synchronous code in Go you have to use LockOSThread, but this has the same downsides as the usual escape hatches in other languages. This is also one of the reasons cgo has such a high overhead.
Hm. You and parent comment have made me realize something: as much as I dislike how many useful abstractions are missing from Go, async for blocking syscalls is not one of them, since the "green thread" model effectively makes all functions async for the purposes of blocking syscalls. So I retract my "you have to do it manually" comment in this case. I guess that's part of why people love Go's concurrency.
Of course, as you said, stackful coroutines come with runtime overhead. But that's the tradeoff, and I'm sure they are substantially more efficient (modulo FFI calls) than the equivalent async-everywhere code would be in typical JS or Python runtimes.
My "you have to do it manually" comment comes from some other peeves I have with Go. I guess the language designers were just hyper-focused on syscall concurrency and memory management (traditionally hard problems in server code), because Go does fare well on those specific fronts.
I remember this article in 2015 being revelatory. But it turned out that what we thought was an insurmountable amount of JS code written with callbacks in 2015 would end up getting dwarfed by promise-based code in the years to come. The “red functions” took over the ecosystem!
With Python, I’m sure some people expect the same thing to happen. I think Python is far more persistent, though. So much unmaintained code in the ecosystem that will never be updated to asyncio. We’ll see, I suppose, but it will be a painful transition.
All goroutines are async in some sense, so generally you don't need to return a channel. You can write the function as if it's synchronous, then the caller can call it in a goroutine and send to a channel if they want. This does force the caller to write some code, but the key is that you usually don't need to do this unless you're awaiting multiple results. If you're just awaiting a single result, you don't need anything explicit, and blocking on mutexes and IO will not block the OS thread running the goroutine. If you're awaiting multiple things, it's nice for the caller to handle it so they can use a single channel and an errgroup.
This is different from many async runtimes because of automatic cooperative yielding in goroutines. In many async runtimes, if you try to do this, a function that wasn't explicitly designed as async will block the executor and lead to issues, but in Go you can almost always just turn a "sync" function into an explicitly async one.
> I like `gevent` but I think it may have been too hacky of a solution to be incorporated to the main runtime
gevent's monkey-patching might be hacky, but an official implementation of stackful coroutines (similar to Lua's) need not have been.
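The stackless/stackful distinction is what drives coloring, and a stdlib sketch can show it (the helper names here are hypothetical):

```python
# Stackless coroutines (Python generators, async/await) can only suspend in
# the frame that is itself a coroutine. A plain helper cannot suspend its
# caller; to participate in suspension it must be rewritten as a generator
# and delegated to via 'yield from' -- i.e. it gets "colored".
def helper():
    return "step"      # cannot pause the calling coroutine from here

def stackless():
    value = helper()   # suspension is only legal in this frame...
    yield value        # ...via an explicit 'yield' at this level

# Stackful coroutines (Lua, greenlet, goroutines) keep a real call stack,
# so a helper ten frames deep can suspend without rewriting anything above it.
assert list(stackless()) == ["step"]
```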
Instead, stackless coroutines were chosen - maybe on merit, but maybe also because C#'s async/await keywords were in vogue at the time and Python copied them.