Hacker News | somenewacc's comments

The idea of employment also brings more stability to workers, letting them focus on their families and other things. Workers' rights at the beginning of the industrial revolution were terrible; no wonder communism rose to power in a country so fast.

In the end, social democracy was a compromise to maintain capitalism and avoid spreading revolution. And it worked.


ASP.NET does this: you use its components and it generates client-side JavaScript. In a more functional vein, there's another language that does this called Opa! http://opalang.org/


Yes, like the runat="server" kind of thing.


But comprehensions are also a functional idiom.


Yeah, sorry, I didn't mean to say that Python's not functional without a "map" function. More trying to guess where that perception comes from. Python's comprehensions are (I believe) inspired by Haskell, but, if you're not a Haskeller, you're probably more used to other ways of doing things.


The version Python has is awkward to work with, to the point that many Haskell programmers don't even realise it exists. The "do notation" style is much clearer than the "comprehension" style; unfortunately Python has no equivalent.


FP means referential transparency. For-comprehensions operate on iterators and iterators are not FP, being a very dirty and mutable protocol.

Do not confuse FP with declarative programming or with laziness. FP often implies declarative APIs and laziness but not vice-versa.


Python's comprehensions were lifted from Haskell, and its map and filter functions also operate on iterables. They can't be purely functional because of the surrounding language, but they do follow a functional style.

(expr for item in iterable if cond) is more or less another way to spell map(lambda item: expr, filter(lambda item: cond, iterable)), except readable.
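A quick sketch of that equivalence (the names and values here are just illustrative):

```python
# A comprehension and its map/filter spelling produce the same result:
# here, the squares of the even numbers.
nums = [1, 2, 3, 4, 5, 6]

via_comprehension = [n * n for n in nums if n % 2 == 0]
via_map_filter = list(map(lambda n: n * n,
                          filter(lambda n: n % 2 == 0, nums)))

assert via_comprehension == via_map_filter == [4, 16, 36]
```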

You could define "functional programming" to absolutely require referential transparency, but that's not what the rest of the thread is doing.


I don't mean to be pedantic but IIRC Python's comprehensions actually came from the SETL language.

Obviously it's ultimately the same thing either way, but that's the lineage (SETL -> ABC -> Python).

https://en.wikipedia.org/wiki/SETL


I didn't know about SETL, that's interesting.

https://docs.python.org/3/howto/functional.html#generator-ex... claims Python borrowed them from Haskell, but the ABC link seems more likely.


ABC didn't have list comprehensions though, in spite of being influenced by SETL.

As I recall, Python did get them from Haskell, which in turn got them from Miranda, which got them from KRC. Before that it gets fuzzy, but it's likely that SETL was eventually at the root of it.


Yes, but if you're allowed to timeout, then any system is trivially available, even a system where the nodes are always offline (simply timeout every request).


Perhaps you just need to make the Sapper -> military pun more obvious (my gut feeling was that it was just some ridiculous marketing)



For CPU-bound stuff, the 80%/20% rule is to learn that an array/vector is typically much faster than a linked list, and this is due to how CPU caches work. (It comes down to thinking about how much data is being moved around between the processor and RAM.)

Also, for parallel stuff: locks tend to be bottlenecks. Writing parallel code is tricky overall. Stick to libraries that provide high level lockless data structures like mpsc queues if possible.
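As a sketch of the "let the library handle synchronization" point: Python's standard library has no lock-free mpsc queue (queue.Queue actually uses a lock internally), but it shows the same idea of producers and a consumer sharing a high-level queue with no hand-rolled locking. The worker setup here is made up for illustration.

```python
import queue
import threading

q = queue.Queue()  # thread-safe; callers never touch a lock directly

def producer(worker_id):
    # Each producer pushes a few (worker, sequence) pairs.
    for i in range(3):
        q.put((worker_id, i))

threads = [threading.Thread(target=producer, args=(w,)) for w in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# A single consumer drains the queue after the producers finish.
items = [q.get() for _ in range(6)]
assert len(items) == 6
```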


Who uses linked lists? Outside of niche algorithms that splice lots of lists together there are practically zero occasions where linked lists beat dynamic arrays.


I think it's a cultural thing rather than a technical one. I haven't used a linked list in 15+ years, because in C++ a std::vector is the 'default'. Before that, I wrote (some) C for Gnome, and the 'default' in Glib is a linked list. I don't know/remember if there is a reasonable, easy to use data structure that wraps a dynamically allocated array in Glib, but most of the 'sample' code at the time used the glib list. So that's what everybody kept on doing.


I've seen LinkedList often used as the go-to list in Java, instead of ArrayList. They're also common in C, for example, a huge number of structures in the Linux kernel are linked lists.


> for example, a huge number of structures in the Linux kernel are linked lists.

Which allocator is used for these? I'm not familiar with the Linux kernel, but since there is no malloc() in the kernel, I would guess that they allocate from an arena that's going to be page-aligned and thus exhibit the same locality characteristics as a vector.


Oh, they have malloc, it's just kalled kmalloc.


I think GArray is the wrapper for a dynamically allocated array.


Ah yes it is, thank you. I should have known about this 15 years ago :)


What about any time you're working with frequent adds/deletes in arbitrary positions, rather than just at the end of the list?

That's not a "niche algorithm" to me, but rather a reasonable use case. The add and delete operations should be faster than for the array case.

(Not that I actually use linked list much myself, but I mean I see the use of them.)


If the list fits in cache it is often faster to insert into a dynamic array and shift the elements. How expensive creating a new list node is depends, of course, on the allocator you use.


That's a good point, thanks.


Even then, linked lists are only faster than arrays if the list has at least 100s of items.


That's pretty much a standard use case. IIRC the rule of thumb is that if the container isn't expected to hold many elements then the default choice is to use std::vector no questions asked. Otherwise, random insertions and deletions in a sizable container requires a std::list.


I use deques A LOT, e.g. interthread messaging, and decaying/windowed time series calculations.

Like, if you want a running mean of the last 250ms, you put a timestamp and value pair in the back a deque, and every update you check the front of the deque to see if it should be popped (i.e. the timestamp is older than 250ms), and the mean updated.

I suppose you could use a circular buffer with a vector as well, but you have to guess what your max size will ever be, or handle dynamic resizing. Maybe it would be worth it in some circumstances.
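A rough sketch of that windowed-mean pattern using Python's collections.deque (the class name is made up, and timestamps are plain numbers here for illustration):

```python
from collections import deque

class WindowedMean:
    """Running mean of the values seen in the last `window` time units."""

    def __init__(self, window=0.250):
        self.window = window
        self.buf = deque()   # (timestamp, value) pairs, oldest at the front
        self.total = 0.0

    def update(self, ts, value):
        # Push the new sample on the back of the deque.
        self.buf.append((ts, value))
        self.total += value
        # Pop stale samples off the front (older than the window).
        while self.buf and self.buf[0][0] < ts - self.window:
            _, old_val = self.buf.popleft()
            self.total -= old_val
        return self.total / len(self.buf)

wm = WindowedMean(window=0.250)
wm.update(0.0, 10.0)
wm.update(0.2, 20.0)
mean = wm.update(0.4, 30.0)   # the sample at t=0.0 has aged out
assert mean == 25.0
```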


If those lists are anywhere on your hot path, benchmarking with a deque built from two vectors, or a circular buffer as you suggested, would be worthwhile imho. The constants in linked lists are such that the O(1) operations are often slower than the O(n) operations on vectors, at least for lists that fit in cache, and depending on your allocator etc. Chasing pointers to more or less random memory locations is pretty much the worst-case workload for a modern CPU.


Pretty much anyone using a functional language or immutable data structures.


The default immutable "list" types in current mainstream functional languages (eg. Scala, Clojure) have much better performance characteristics than naive head:tail linked lists. A typical implementation is a n-ary balanced tree where n is typically on the order of 100, making operations "effectively O(1)" and also keeping cache behavior acceptable in many common cases.


Haskell still uses head:tail linked lists a lot. However, a lot of effort has been spent in optimizing away the data creation entirely. For example `sum [1..100]` will not actually allocate the list in memory.
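Python's range offers a loose analogy: it's a lazy sequence, so summing it never materializes a list either (though Python has nothing like Haskell's general fusion, where optimization eliminates intermediate structures across arbitrary pipelines):

```python
# range is lazy: sum() iterates it without building a list in memory.
total = sum(range(1, 101))

# The eager equivalent would allocate all 100 elements first.
eager_total = sum([i for i in range(1, 101)])

assert total == eager_total == 5050
```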


Linked lists are very common, especially in non-performance-critical code. Dynamic arrays either carry a large performance cost at expansion or are rather complicated to implement in an amortized way. Linked lists, in comparison, are simple, and since they don't require a large contiguous block of memory, the memory pressure is smaller.


> Who uses linked lists?

Isn't the linked list pretty much the main collection data structure in the Linux kernel?


A dynamic array has an average (amortized) insert time of O(1) but a worst case of O(n) – whenever the next insert exceeds the array's current capacity. Furthermore, it needs a lot of RAM while copying elements. A linked-list insert is always O(1) and its RAM usage is consistent (although not better – normally it is two or three times as big).

A linked list is basically only better on embedded systems, because there you have much better knowledge about the RAM and runtime used. It's also useful if you have a very large data structure and need real-time performance (a lag of a few hundred milliseconds is unacceptable for some applications, such as games or financial applications). But whenever your insert becomes too expensive, a linked list is probably not the best choice either, and you should consider why your data structure is so big.

Lastly, a linked list is rather simple to implement, but that's also only really useful for embedded stuff.


Please stick to a font already in my browser. My network is slow and webpages sometimes get stuck loading; the resources in your page took 40 seconds to load and your font didn't even load yet.


> I dread the day when someone finds and publishes the identity of Satoshi Nakamoto for you know, a "good story". I hope it doesn't happen

I hope this happens, but long after he is dead. Perhaps in a few centuries.


About 1: take a look at sit https://github.com/sit-fyi/sit and git-dit https://github.com/neithernut/git-dit

It would be nicer to work with an existing project than to roll your own.


Thanks for the suggestion.


As the author of one of these projects (sit.fyi), I'll be happy to show and explain its philosophy & benefits (some are quite interesting and yet to be possible in conventional options). I think that discussion can be of value to all. Feel free to reach me at me-at-yrashk.com if interested!

