Then cities need to invest into noise reduction, pollution reduction, etc. Maybe it's not that people don't value their time, but rather value not walking streets which smell like trash.
I too switched for several months, but am already back to using Chrome. Even with the performance improvements, it's still noticeably slower than Chrome.
I'm curious: what OS do you use and what sites are you visiting?
I've had a stellar experience on Arch Linux. The RAM usage is down. I can't say I really ever found Chrome slow, but nor is Firefox for me. I think they're both plenty fast, but I prefer sites without lots of garbage flying around on the page anyway.
macOS High Sierra. I generally have a lot of tabs open: two Inbox tabs, Facebook, StackOverflow chat + question pages for what I'm working on, many JIRA tickets, HN, sometimes Twitch, etc.
I think it has something to do with video rendering since opening Twitch/Youtube often causes the problems to start.
While the book is really top notch (I'm usually not a fan of programming language books), there's something to be said for not introducing all concepts at once. A scripting language has many of the same concepts, but not lifetimes, etc. Not to say objects are intuitive!
While this is true, those languages have stuff Rust doesn't as well. Take Ruby, for example: Rust doesn't have method_missing, or eigenclasses, or inheritance. "How do I know which method gets invoked" is much more complex.
Okay, I yield. I think you may be right. I've been thinking a lot about how I'll teach my children STEM topics and I've grappled with the idea of teaching Rust. One of the things I like about it is that you know the behavior at compile time, unlike Ruby or even C.
Would any sort of actor model language have the raw throughput for a AAA game? I get how this model is nice for the web, for example, but I'm wondering why it doesn't get used for high performance computing (that I know of).
There are languages, like Pony [1], that use the actor model for the sort of high performance you are talking about. Also check out the Anna paper; it describes and argues how and why they use the actor model in C++ for high throughput.
I would also say that, performance-wise, the actor model is usually better than low-level shared-memory multithreading, because it enforces a locality-friendly, contention-free architecture and fundamentally maps better to modern hardware.
Thanks for addressing the latter point. I forgot to draw that distinction. That's a curious point, however, and I'll have to check that out more for myself as you suggest.
For their AAA games' cloud infrastructure it does! In fact, it is a backbone for some of Microsoft's Halo games. Microsoft has a framework called Orleans which uses the actor pattern. They have used it in various other projects too.
Now if you are wondering about raw throughput for graphics and physics stuff in AAA games, that I don't know. I believe that to be a completely different beast, with different requirements, which may or may not benefit from this paradigm.
The Erlang VM (BEAM) is unusual in that it, like the languages it hosts, is very opinionated.
It is designed for robustness, scalability, concurrency, and distributed environments. And immutable data. So far as I know, you literally cannot implement a language with mutability on the VM.
So, raw performance will never be its thing.
In general, I think the actor model could achieve high performance, but perhaps only if messaging is syntactic instead of truly distributed with mailboxes, network transparency, etc.
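To make the mailbox model concrete, here is a minimal sketch of a mailbox-style actor in plain Java (the names and the poison-pill shutdown are my own invention for the example, not any particular framework's API):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

// A minimal mailbox-based actor: messages are enqueued and
// processed one at a time by a dedicated thread, so the actor's
// internal state needs no locks.
class CounterActor {
    private final BlockingQueue<Long> mailbox = new LinkedBlockingQueue<>();
    private final AtomicLong total = new AtomicLong();
    private final Thread worker;

    CounterActor() {
        worker = new Thread(() -> {
            try {
                while (true) {
                    long msg = mailbox.take();
                    if (msg < 0) break;      // negative value = poison pill, shut down
                    total.addAndGet(msg);    // all state changes happen on this one thread
                }
            } catch (InterruptedException ignored) {
            }
        });
        worker.start();
    }

    void send(long msg) {
        mailbox.add(msg);
    }

    long shutdownAndGet() {
        mailbox.add(-1L);
        try {
            worker.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return total.get();
    }
}
```

Note that every message here is a heap-allocated queue node (plus a boxed Long), which is exactly the per-message overhead being discussed upthread; "syntactic" messaging would compile this down to plain calls instead.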
I'm a beginner, so by all means correct me if I'm wrong, but as I understand BEAM is relatively good when it comes to 'soft realtime', and especially when 'latency' is a concern (in part due to per-process GC?).
Am I correct in thinking that this would be pretty okay for most games, AAA or not? Would an FPS be possible (quick updates, small messages)? Or a World Of Warcraft or Sea of Thieves style game with many actors that need sort-of-realtime performance but don't rely on it entirely?
I've been looking into creating a game, and I'm also learning Elixir, so I'm curious what would be realistic when combining both.
In performance terms, Erlang is broadly speaking a scripting language, in the 10-20x slower than C range. It would be unusable for a AAA game because even using Erlang without any concurrency, it's too slow. It will get even slower if you do what you might be inclined to do for a game and make a separate process for every entity in the game and communicate entirely by message passing. It will be a beautifully clean architecture, but while Erlang may have cheap concurrency, it does not have free concurrency, and if you work out realistic math on the sheer number of messages you'd have flying around the system, it should become clear that it will not be practical to have literally manifested "messages" in that quantity being continuously created and destroyed.
If we tune our sights down from "AAA game", there are two possibilities. One is you can create a less computationally-intensive game that can run Erlang on the desktop. I suspect you'll find you're a bit short of libraries for that use case, but with motivation you can pound through that. I'm not sure if this has ever been done. The other thing you can use Erlang for is being the backend server of a game system, and that is eminently practical, in the sense that it has been done: http://erlang.2086793.n4.nabble.com/Erlang-Who-uses-it-for-g... You'd encounter some bumps if you tried to scale it up, but that's not a very strong criticism since it's constant regardless of what tech you'd end up using.
Thanks for the info. I was definitely not thinking of using Erlang/Elixir for the actual game, but rather as a back-end. Glad to hear that wouldn't be a bad choice.
I'm afraid I don't know enough about game development to be of assistance.
I assume in general they're very graphics intensive, which is (based on a 20+-year-old education I never finished) very matrix mathy, the type of calculations you absolutely would not want to shove through the Erlang VM.
However, one architecture people have used with varying degrees of success is Erlang as a control layer (messaging, resilience) with C or something else compute-heavy doing the data manipulation. Erlang can call native code via NIFs or drive external binaries via ports.
Historically that's been a bit risky because the Erlang scheduler requires insight into its processes to do its job properly, but there have been improvements in recent releases.
So...maybe? Probably a question better suited to the Erlang users mailing list.
Ask yourself this question - "Am I doing lots of math and computationally intensive stuff"?
If so, don't use Erlang.
The corollary is "Am I mostly passing data around, doing I/O, etc?"
If yes, Erlang is probably a reasonable choice (and a GREAT one if you're doing a lot of concurrent stuff).
So games, backend server -can- work; it just depends what it's doing. You may need to mix and match for functionality if you're doing a mix of things, and then you have to ask if it's worth the effort and translation cost if you have to jump between languages.
>Am I correct in thinking that this would be pretty okay for most games, AAA or not?
No. Soft real time just means that the GC will not pause the whole process, so you won't get stuttering during garbage collection. In languages with GCs this is usually achieved by not allocating too many objects in your game loop.
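The "don't allocate in the loop" idea looks roughly like this (a sketch with a made-up mutable Vec3 type, not any engine's actual API):

```java
// Sketch of avoiding per-frame allocation: a hypothetical mutable
// Vec3 that is allocated once and reused every frame, instead of
// creating a fresh object per update (which would feed the GC).
class Vec3 {
    double x, y, z;

    void addInPlace(double dx, double dy, double dz) {
        x += dx;
        y += dy;
        z += dz;
    }
}

class GameLoop {
    static double simulate(int frames) {
        Vec3 pos = new Vec3();             // allocated once, outside the loop
        for (int i = 0; i < frames; i++) {
            pos.addInPlace(1.0, 0.0, 0.0); // no new objects per frame -> no GC pressure
        }
        return pos.x;
    }
}
```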
If you write the 3D engine in Erlang your game won't stutter, but it will have an incredibly low framerate.
The secret ingredient to high performance is primarily data locality and obviously a compiler/runtime that can actually translate the program into efficient code. You can make your compiler as good as you want, without data locality and control over the memory layout the performance simply won't be good enough.
What do I mean by data locality? Primarily these things.
Avoid indirection through pointers.
Java's ArrayList is a nice example. You cannot store raw "int"s in an ArrayList, only boxed "Integer" objects, each a separate object on the heap reached through a pointer. This means that if you want to calculate the sum of the ArrayList, the CPU has to read each element from main memory whenever it is not in the CPU cache, which is roughly two orders of magnitude more expensive than a single addition. When data cannot be found in the cache, that's called a "cache miss".
Of course, in Python, JavaScript, and Erlang almost everything is behind a pointer.
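For instance, summing boxed integers looks innocent but chases one pointer per element (a sketch; actual cache behavior depends on the JVM and heap layout):

```java
import java.util.ArrayList;
import java.util.List;

class BoxedSum {
    // Each list element is a reference to a separate Integer
    // object on the heap; the loop dereferences one pointer per
    // element, and each dereference is a potential cache miss.
    static long sum(List<Integer> xs) {
        long total = 0;
        for (Integer x : xs) {
            total += x;  // unboxing: follow the pointer, read the int field
        }
        return total;
    }
}
```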
Contiguous storage of data.
Java still has primitive arrays. You can declare an int[] array to avoid the above problem: the integers are stored directly inside the array. Well, what if the array is not in the cache? Wouldn't that incur the same problem as above?
A CPU is actually quite smart. It has a prefetching unit that can detect if you're loading data in a regular pattern and load the next piece of data according to that pattern.
If you're the CPU and see this pattern, what would you do to avoid a cache miss?
arr[0]
arr[1]
arr[2]
arr[3]
arr[x]
Of course you would start loading arr[4], arr[5], arr[6], arr[7] ahead of time!
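The same sum over a primitive array is one contiguous, regular-stride scan that the prefetcher can predict (again a sketch; the actual speedup depends on the hardware):

```java
class PrimitiveSum {
    // The ints live directly inside the array, so this loop walks
    // memory sequentially; the prefetcher can see the pattern and
    // pull the next cache lines in ahead of time.
    static long sum(int[] xs) {
        long total = 0;
        for (int x : xs) {
            total += x;
        }
        return total;
    }
}
```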
Efficient storage
An ArrayList stores a contiguous list of pointers. On a 64-bit architecture this means every pointer costs you 8 bytes of memory.
On top of that, every object in Java has a "header". I don't know the exact number, but let's say its size is 8 bytes. Then there are alignment restrictions; let's say each object is aligned to 8 bytes. 8 + 8 + 4 is 20, and the nearest multiple of 8 is 24.
We're storing a 4-byte number in 24 bytes' worth of memory. In other words, if we are memory-bandwidth constrained, we can increase our performance by another 6x just by reducing the amount of "useless" data we have to read.
Mutation
There are efficient immutable data structures, but these usually involve indirection through pointers.
Avoiding allocation of new memory avoids GC pauses or time spent in malloc/free.
Mutating only a small part of a datastructure is more efficient than creating a copy.
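As a tiny sketch of the difference: updating one slot of an array in place writes a few bytes, while the immutable style has to copy the whole structure to change one element.

```java
import java.util.Arrays;

class Update {
    // In-place: writes 4 bytes, allocates nothing.
    static void setMutable(int[] xs, int i, int v) {
        xs[i] = v;
    }

    // Immutable/persistent style (naive version): copies all n
    // elements to produce a new array differing in one slot.
    static int[] setImmutable(int[] xs, int i, int v) {
        int[] copy = Arrays.copyOf(xs, xs.length);
        copy[i] = v;
        return copy;
    }
}
```

(Real persistent data structures avoid the full copy with trees of small nodes, which is exactly the pointer indirection mentioned above.)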
Stack allocation
The primary benefit is that the top of the stack is usually in the L1 cache. Managing a stack is also more efficient than a GC or manual memory allocation: it's just a pointer that gets incremented or decremented.
Erlang doesn't give you control over any of these. Ponylang has actors with isolated GC heaps and gives you control over memory layout and data locality.
Take the context into consideration: this whole thread is about performance and the comment on mutability was speaking of mutability in place, because that matters for performance.
As far as I know, Ada code using its types of actors (tasks and protected objects) can be made very fast, and more importantly for AAA games, very predictable. The default scheduling methods are good enough for many cases, but with the right restrictions, scheduling can even be statically determined and, in principle, a cyclic executive could be generated by the compiler with the same semantics as the original actor code. Unsure if this is actually done in practice, though.
NaughtyDog used a job system with fibers to parallelize their engine [1]. I suppose that's not exactly the same as an actor system since the fibers don't necessarily own all their state nor do they necessarily communicate using messages.
Even in something like Go which has this kind of concurrency in mind, performance critical code is often written with the more traditional "threads & locks" approach with the goal of using the ideal number of goroutines to maximize hardware use but no more than that.
Not saying that other concurrency models don't have the throughput for a AAA game, but when your goal is to get the most out of the hardware of one desktop/console you're going to have different priorities than a server environment.