Hacker News | arvinjoar's comments

"Doesn't make sense for us but mandated by policy" is a super common phenomenon that you'll sadly encounter all the time in the industry, especially when it comes to security. In this case it's at least motivated by something, peripheral as it is: onion services wanting to fit in with the browser ecosystem. Which, fair, maybe it doesn't make sense for browsers to bloat their designs by taking onion services into account, so onion services have to adapt to modern browser standards instead.


>"Doesn't make sense for us but mandated by policy"

It's way worse in the physical world than in the software world IMO.


"it would be cool if ..." runs into "but effort" and dies, while "it sucks that ..." is something that gnaws at you


Effort seems to be quite a malleable kind of experience. It can be enjoyed or despised depending on how you've trained your impulses around that particular kind of effort. This further speaks for using the "it would be cool if ..." narrative.

Trying to build upon "it sucks that ..." seems to create a bunch of weird self-sabotaging behaviours, in my experience. Since I tried moving away from that I've found it easier to motivate myself to do things that aren't immediately rewarding. And perhaps more importantly, the journey there seems more enjoyable.

And I think it makes a lot of sense that using a carrot instead of a stick on yourself produces more consistent behaviour. You move towards a carrot and you flee from a stick. You can flee in most directions.

Changing the way you think about yourself takes a lot of time though. Impulses are deeply rooted.


> carrot instead of a stick

But wanting the carrot and not wanting to get hit are both kinds of discontent. The donkey is dissatisfied at not having the carrot so it pulls the cart that would otherwise be still.


If you're playing Terraforming Mars you want to build your "engine": the resources or points generated per turn. The usual way to achieve this efficiently is to use up as much of your current resources as possible, so that they can compound over the course of the game. An innovative company should thus have costs exceeding revenue, since those costs should lead to future profits. It's of course possible to have costs that never end up generating any kind of profit, but it would be a bit strange to see a company that has valuable ways to deploy capital simply refuse to do so.
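A back-of-envelope sketch of the compounding point above (all numbers invented for illustration): if every turn's income is immediately reinvested, the "engine" grows geometrically, whereas hoarding only accumulates linearly.

```python
def engine_income(turns, rate=0.25):
    """Per-turn income after `turns` turns, assuming all income is
    reinvested each turn and each unit spent adds `rate` units of
    future per-turn income (rate is an assumed conversion factor)."""
    income = 1.0
    for _ in range(turns):
        income += income * rate  # spend everything, grow the engine
    return income

def hoarded_total(turns):
    """Total resources after `turns` turns if nothing is reinvested:
    the same 1.0 income per turn, growing only linearly."""
    return 1.0 * turns
```

With `rate=0.25` the reinvested income is simply `1.25 ** turns`, so after ten turns the engine produces more each single turn than the hoarder has accumulated in total not long after.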


https://picrankr.com - upload pictures and have friends rank them. I've already shown it to enough people to know about some of the issues the idea has, and will improve it whenever I find the time.

Some issues:

- most cases where people need to choose a pic are large-N problems (100s of pics, not a handful)

- no-one wants to bubble sort pics for large N

- people want qualitative feedback, not only quantitative

There are some other issues I know about as well, such as image compression, etc., but feel free to try it out and give additional feedback :)
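To put rough numbers on the large-N point above (a sketch, nothing from the actual site): fully ordering N pics by pairwise choices takes on the order of N² comparisons with a naive bubble sort, and still about N·log₂N with a comparison-efficient sort, so exhaustive pairwise ranking stops being fun well before the hundreds.

```python
import math

def bubble_sort_comparisons(n):
    # Worst-case pairwise comparisons to fully bubble-sort n pics.
    return n * (n - 1) // 2

def merge_sort_comparisons(n):
    # Rough upper bound (n * log2 n) for a comparison-efficient sort.
    return math.ceil(n * math.log2(n)) if n > 1 else 0

# For a handful of pics the gap is tolerable; for hundreds it isn't:
for n in (10, 100, 500):
    print(n, bubble_sort_comparisons(n), merge_sort_comparisons(n))
```

Even the efficient bound means thousands of clicks at N = 500, which is presumably why real systems fall back on sampled comparisons or rating models rather than full sorts.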


Yeah, it's a good example of the Lindy effect.


Pretty sure that is because of metals. Living things die when boiled anyway.


The temperature of a typical hot water tank is 120-140 degrees Fahrenheit, which is significantly less than the boiling point of water, 212 degrees Fahrenheit. If the water tank temperature is just a little lower, at 115 degrees, it's the prime temperature for these buggers.

Also, this might be a fun read for you: https://en.m.wikipedia.org/wiki/Thermophile


Hot water tanks aren't hot enough to sterilize. In fact, they're a perfect temperature to grow many things.


FYI: The author of this blog post also wrote "How to do research at the MIT AI Research Lab". While it's fair to suspect he may not be fully up to date with how things are done these days, he should have a pretty good overview.


My criticism is towards this paper, not necessarily the author. Surely, he knows something about AI (otherwise it would be impossible to write anything gaining such publicity) and philosophy (AFAIK it is his field).

Though, even if someone is an accomplished scientist in a given field, it does not mean they are incapable of making (to put it mildly) questionable statements: Noam Chomsky on data-driven NLP, Judea Pearl on deep learning, Roger Penrose on quantum measurement and consciousness, and, historically, Albert Einstein on quantum physics.

Yet, there are many claims here which won't be noticed by newcomers, but are demonstrably false to researchers and practitioners. That is dangerous, as novices may be prone to "appeal to authority" and mistake witty style for knowledge.

Don't get me wrong - I am all for sharing ideas, even half-baked ones. But I think it works much better when there isn't artificially boosted confidence.


>> (...) Noam Chomsky on data-driven NLP, (...)

If you can excuse the slightly combative tone, data-driven (i.e. statistical) NLP is a big potato and Chomsky was dead on the money: you can model text, with enough examples of text, but you can't model language. Because text is not language.

Which is why we have excellent dependency parsers that are useless outside the Brown corpus (if memory serves; might be the WSJ) and very successful sentiment classifiers for very specific corpora (IMDB), etc., but there is no system that can generate coherent language that makes sense in a given conversational context, and even the most advanced models can't model meaning to save their butts. And don't get me started on machine translation.

Like I say, apologies for the combative tone, but in terms of overpromising, modern statistical NLP takes the biscuit. A whole field has been persisting with a complete fantasy (that it's possible to learn language from examples of text) for several decades now, oblivious to all the evidence to the contrary. A perfect example of blindly pursuing performance on arbitrary benchmarks, rather than looking for something that really works.


I started with a combative tone, so, well, no apologies needed.

Well, still: current translation systems are data-driven, without exception; see http://norvig.com/chomsky.html.

And LSTMs are awesome at picking up grammar, even grammar beyond what's usually considered English grammar (line-breaking patterns, proper names, markup for links, etc.). See http://karpathy.github.io/2015/05/21/rnn-effectiveness/

There are other issues, like keeping track of context, at which they suck (as of now). Right now the quality is more like text-skimming than "understanding" of text.

For understanding meaning, it seems that text is not enough; we need embodied cognition. Not necessarily walking robots (though that might help), but being able to combine various senses. Some concepts are rarely communicated explicitly with words (hence learning even from an arbitrarily large text corpus may not suffice), but we get enough experience of them from vision, touch, etc.

Since I am mostly into DL for vision (though with some interest in cognitive science), I got a lot of insight into the current SOTA (and its limitations) in NLP from http://www.abigailsee.com/2017/08/30/four-deep-learning-tren.... See also:

> while word embeddings capture certain conceptual features such as “is edible”, and “is a tool”, they do not tend to capture perceptual features such as “is chewy” and “is curved” – potentially because the latter are not easily inferred from distributional semantics alone.
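The quoted limitation is easy to see in a toy model of distributional semantics (the corpus and all words below are invented for illustration): a word's vector is just the counts of its neighbouring words, so it can only encode what the text happens to say near the word. "apple" and "bread" come out similar because both co-occur with "edible"; nothing in the corpus says an apple is curved or chewy, so no amount of counting will put that into the vector.

```python
from collections import Counter
from math import sqrt

# Tiny invented "corpus"; each word's vector is the Counter of the
# words that appear within a small window around it.
corpus = ("the apple is edible . the bread is edible . "
          "the hammer is a tool . the knife is a tool .").split()

def context_vector(word, window=2):
    """Distributional vector: counts of neighbours within `window`."""
    ctx = Counter()
    for i, w in enumerate(corpus):
        if w != word:
            continue
        for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
            if j != i:
                ctx[corpus[j]] += 1
    return ctx

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

With this setup `cosine(context_vector("apple"), context_vector("bread"))` exceeds the apple/hammer similarity, purely because the edible things share contexts; perceptual facts never stated in the text are simply unreachable.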


On the one hand you make this sound extremely bad, while at the same time you describe it as just "making questionable statements".

Also, maybe I misunderstood the analogy, but I think you're being very unfair putting Albert Einstein who was wrong on quantum physics in the same basket as Roger Penrose with his view on consciousness, which may be questionable, but hasn't been disproved.


You are right that I shouldn't have put them in the same basket.

While Penrose's ideas on consciousness are not considered mainstream (neither by cognitive scientists nor by quantum physicists), they don't fall into the infertile basket of:

- people gravitating to the state of science they were "raised into"

- people talking about things they haven't mastered

In this case it is a healthy scientific peculiarity. And who knows, it may turn out true. Or false, yet fertile - like the idea of faster-than-light communication with quantum states, which was flawed, yet gave birth to quantum information (more on this story, and an interesting overlap of non-science and science, in http://www.hippiessavedphysics.com/).


You can't look only at the track record. AFAIK they have an unreasonable amount of infantry for purely defensive purposes. They're #3 globally in terms of firepower; of course they won't create trouble until they're reasonably sure they'll come out ahead. Once they're more powerful economically and militarily you'll have an unscrupulous nation, with a dictator for life, capable of very long-term planning, leading a pretty explicitly racist people in need of massive amounts of natural resources.

So yeah, even though they're not currently adventurous militarily, I wouldn't count on that being the case forever.


I got really stressed out by Trump winning, but I still think the backlash against social media and tech companies has a lot to do with the establishment being afraid of losing power. When the Arab Spring happens, the free flow of information is an unalloyed good, but when Trump gets elected we suddenly need to curtail "troll factories", and ensure Fake News is being fact checked, and see to it that personal data isn't being used to influence elections.

I'm the first person to admit that I'm not impressed by my peers when it comes to electing suitable leaders, but it worries me much more that we're now falling for the temptation of reasoning for them, and of protecting them from themselves. I think this straitjacket that we're so eagerly putting on our fellow man for his own good may very well end up put on us, or perhaps on the truly original thinkers that we don't yet know about.

So no, comments like these aren't infantile; they're crucial for maintaining the liberty that we've thus far enjoyed on the internet, and that I would very much like to keep enjoying, even if I don't always agree with how people tend to use that freedom.


The author can't have listened much to Thiel. He certainly doesn't speak without a filter: most of what he says he will repeat in interview after interview, for years. Every now and then he adds something new, but even that is seldom very off-the-cuff. Thiel is unorthodox, not unfiltered.

