
>This blog is “built” on cloudflare apparently.

You can read more about why that is here: http://teetotality.blog/posts/how-this-blog-was-made/


I found that too, as the linked article raised more questions than it answered. I was like "how can I be reading this if it's not using DNS?" And from the link, this explanation:

> It updates a dnslink pointer at Cloudflare, which allows Cloudflare's DNS to direct teetotality.blog HTTP traffic to the correct IPFS hash address via their IPFS Gateway. So even though you've (probably) reached this page through a regular old HTTP link that uses the teetotality.blog host name, there is in fact no server with that name - the content is stored on various IPFS nodes, including but not limited to Cloudflare's edge caches.


The blog is hosted on IPFS. To allow normal web browsers without IPFS support to view the website they use Cloudflare's IPFS gateway, which is a service that serves content from IPFS over normal HTTP.


I’d be a bit more precise. The blog is hosted by cloudflare. IPFS hosts all the versions of the blog, none of which are canonical.
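To make "content addressing" concrete, here's a minimal sketch in Rust -- using std's DefaultHasher as a dependency-free stand-in for the SHA-256 multihash IPFS actually uses (the strings are made up for illustration):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Content addressing in one function: the address is derived from the
// bytes themselves, not from a host name, so any node holding identical
// bytes can serve them.
fn content_address(bytes: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    bytes.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let post_v1: &[u8] = b"first draft of the post";
    let post_v2: &[u8] = b"edited draft of the post";

    // Same bytes always yield the same address, wherever they're hosted.
    assert_eq!(content_address(post_v1), content_address(post_v1));

    // Any edit yields a new address, i.e. a new immutable "version" --
    // which is why no single version is canonical.
    assert_ne!(content_address(post_v1), content_address(post_v2));

    println!("v1 -> {:016x}", content_address(post_v1));
}
```

The dnslink record just maps the host name to the latest such address, which is how the "no server with that name" trick works.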


It's pieces like these that remind me why good writing skills are important, and why one shouldn't stray from the basics unless they're fully aware of the trade-offs. For this article, that would be: write a better hook, and make sure to include a rudimentary thesis statement, because I wasn't able to deduce, within the first few paragraphs, what you were trying to persuade me of.

With a title like "Why build this blog -- or anything -- on IPFS?", you were trying to persuade me, right?

I had to read through what is essentially every single cooking recipe on the web before I got to the actual filling, i.e. a whole lotta aimless wandering and musing only tangentially related to the topic at hand. Similarly to cooking blogs, this page is 2/3 filler and 1/3 actually giving me what the title promised: "So……why IPFS?"

> 1. Ownership, control, censorship

The author goes on to chastise Medium's censorship practices, but just a few paragraphs earlier he mentioned self-hosted Wordpress and statically generated Github pages. Wordpress and Github pages get over these hurdles and are easier to set up than IPFS.

> 2. Resilience

Suffice to say, the point of this paragraph was "DNS and HTTP unrobust, webservers fail under unforeseen circumstances." Ok, well how does IPFS do things differently? You never explained how IPFS works, much less how it gets over any of the aforementioned issues you outlined.

> 3. Elegance

> But I will say that content addressing strikes me, and many software people who come across it, as obviously superior to host-based addressing along certain dimensions.

Never touched upon or elaborated.

> Plus, it's super cool. You should try it!

At least you have a call to action. Otherwise, this post fails to even come close to making me interested in IPFS.


> Suffice to say, the point of this paragraph was "DNS and HTTP unrobust, webservers fail under unforeseen circumstances." Ok, well how does IPFS do things differently? You never explained how IPFS works, much less how it gets over any of the aforementioned issues you outlined.

If DNS and HTTP are not working absolutely no one will care about some blog being up.


I think bloggers deserve some leeway in how they write; it's an informal medium and maybe the author's usual audience is arriving with a lot of shared assumptions.

Still, I wish IPFS had been defined within the post. I had to look it up on Wikipedia; presumably it's "Interplanetary File System".


Yes, it occurred to me that his audience already had a background in IPFS and that I wasn't his prospective audience. But then why craft the title to seem like it's meant for people who have no idea what IPFS is? Surely those with prior knowledge would already be able to answer "Why... IPFS?" for themselves.

I think my main problem with this blog post, and many others, is that they come off too "stream of consciousness," instead of something more structured and easily digestible.

It's obvious the author can write[0], but at risk of being presumptuous, it seems like it was hastily written and submitted to HN for the sole purpose of generating traffic.

[0] This is a much better piece, albeit short: http://teetotality.blog/posts/think-do-build/


My goodness, this is harsh. I was thinking it was a very well-written article and I appreciated its clarity.


This is very much misguided.

Many websites do have "hacked" (blackhat/shady) SEO, but these websites do not last long, and are entirely wiped out (see: de-ranked) with every major algorithm update.

The major players you see on the top rankings today do utilize some blackhat SEO, but it's not at a level that significantly impacts their rankings. Blackhat SEO is inherently dangerous, because Google's algorithm will penalize you at best when it finds out -- and it always does -- and at worst completely unlist your domain from search results, giving it a scarlet letter until it cools off.

However, the bulk of all major websites primarily utilize whitehat SEO, i.e. "non-hacked," i.e. "Google-approved" SEO to maintain their rankings. They have to, else their entire brand and business would collapse, either from being out-ranked or by being blacklisted for shady practices.

Additionally, Google's algorithm hasn't changed much at all from PageRank, in the grand scheme of things. If you can read between their lines, the biggest SEO factor is: how many backlinks from reputable domains do you have pointing at your website? Everything else, including blackhat SEO, amounts to small optimizations for breaking ties. Sort of like PED usage in competitive sports: when you're at the elite level, every little bit extra can make a difference.
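For the curious, the core idea behind "backlinks from reputable domains" can be sketched as a toy power iteration -- this is the textbook PageRank recurrence, not Google's actual production system, and the link graph below is made up:

```rust
// A page's score is the damped sum of the scores of the pages linking
// to it, each divided by that linker's out-degree. Reputable linkers
// therefore pass on more "juice" than obscure ones.
fn pagerank(links: &[Vec<usize>], iters: usize, d: f64) -> Vec<f64> {
    let n = links.len();
    let mut rank = vec![1.0 / n as f64; n];
    for _ in 0..iters {
        // (1 - d) / n is the "teleport" term every page gets for free.
        let mut next = vec![(1.0 - d) / n as f64; n];
        for (src, outs) in links.iter().enumerate() {
            for &dst in outs {
                next[dst] += d * rank[src] / outs.len() as f64;
            }
        }
        rank = next;
    }
    rank
}

fn main() {
    // Pages 0 and 1 both link to page 2; page 2 links back to 0.
    // Page 2, with the most backlinks, should come out "reputable".
    let links = vec![vec![2], vec![2], vec![0]];
    let r = pagerank(&links, 50, 0.85);
    assert!(r[2] > r[0] && r[2] > r[1]);
}
```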

Google's algorithm works for its intended purpose, which is to serve pages that will benefit the greatest number of people searching for a specific term. If you are more than 1 SD from the "norm" searching for a specific term, it will likely not return a page that suits you best.

Google's search engine is based on virality and pre-approval: "Is this page ranked highly by other highly ranked pages, and does this page serve the most people?" It is not based on accuracy or informational integrity -- as many would believe after the latest Medic update -- but simply "does this conform to normal human biases the most?"

If you have a problem with Google's results, then you need to point the finger at yourself or at Google. SEO experts, website operators, etc. are all playing a game that's set on Google's terms. They would not serve such shit content if Google did not: allow it, encourage it, and greatly reward it.

Google will never change the algorithm to suit outliers, the return profile is too poor. So, the next person to point a finger at is you: the user. Let me reiterate, Google's search engine is not designed for you; it is designed for the masses. So there is no logical reason for you to continue using it the way you do.

If you wish to find "deep enough" sources, that task is on you, because it cannot be readily or easily monetized; thus, the task will not be fulfilled for free by any business. So, you must look at where "deep enough" sources lay: books, journals, and experts.

Books are available from libraries, and a large assortment of them are cataloged online for free at Library Genesis. For any topic you can think of, there is likely to be a book that goes into excruciating detail that satisfies your thirst for "deep enough."

Journals, similarly. Library Genesis or any other online publisher, e.g. NIH, will do.

Experts are even better. You can pick their brains and get even more leads to go down. Simply, find an author on the subject -- Google makes this very easy -- and contact them.

I'm out of steam, but I really felt the need to debunk this myth that Google is a poor, abused victim, and not an uncaring tyrant that approves of the status quo.


> Google's algorithm works for its intended purposes, which is to serve pages that will benefit the highest amount of people searching for a specific term.

Does it? So for any product search, thrown-together comparison sites without actual substance but lots of affiliate links are really among the best results? Or maybe they are the most profitable result, and thus the one most able to invest in optimizing for ranking? Similarly, do we really expect results on (to a human) clearly hacked domains to be the best for anything, but Google will still put them in the top 20 for some queries? "Normal people want this crap" is a questionable starting point in many cases.


Over the long-term, Google's algorithm will connect the average person to the page most likely to benefit them, more than it won't.

There is no "best result."

Any page falling under "thrown-together comparison sites without actual substance but lots of affiliate links" are temporal inefficiencies that get removed after each major update.

Will more pop up? Yes, and they will take advantage of any inefficiency or edge cases in the algorithm to boost their rankings to #1.

Will they stay there for more than a few months? No. They will be squashed out, and legitimate players will over time win out.

This is the dichotomy between "churn and burn" businesses and "long term" businesses. You will make a very lucrative, and quick, buck going full blackhat, but your business won't last and you will consistently need to adapt to each successive algo update. Meanwhile, long-standing "legit" businesses will only need to maintain market dominance -- something much easier to do than breaking into the market from ground zero, which churn and burners will have to do in perpetuity until they burn out themselves.

If you want to test this, go and find 10 websites you think are shady, but have top 5 rankings for a certain search phrase. Mark down the sites, keyword, and exact pages linked. Now, wait a few months. Search again using that exact phrase. More likely than not, i.e. more than 5 out of 10 will no longer be in the top 5 for their respective phrases, and a couple domains will have been shuttered. I should note that "not deep info" is not "shady," because the results are for the average person. E.g. WebMD is not deep, but it's not shady either.

I implore people to try and get a site ranked with blackhat tricks and lots of starting capital, and see just how hard it is to keep it ranked consistently using said tricks. It's easy to speculate and make logical statements, but they don't hold much weight without first-hand experience and observation.


>Will they stay there for more than a few months? No. They will be squashed out, and legitimate players will over time win out.

This isn't true at all in my experience. As a quick test I tried searching for "best cordless iron", on the first page there is an article from 2018 that leads to a very broken page with filler content and affiliate links. [1] There are a couple of other articles with basically the exact same content rewritten in various ways also on the first page.

A quick SERP history check confirms that this page has returned in the top 10 results for various keywords since late 2018.

>It's easy to speculate and make logical statements, but they don't hold much weight without first-hand experience and observation.

This statement is a bit ironic given that it took me 1 keyword and 5 seconds of digging to find this one example.

[1] https://www.theironingroom.com/best-cordless-iron-reviews-of...


Would it be correct to say your brother is also not the type to network with others easily?

An automated system is the last-line in hiring; recommendations -- internal and external -- are the first-line. If one finds themselves in a position where they're manually submitting cold resumes, it's almost always more productive to start networking into the companies you want to work at, and getting the recommendation firsthand, turning your resume into a hot one.


That would be correct.

> recommendations -- internal and external -- are the first-line

I'm afraid this is a very tech-centric view. Outside of a few specific industries or the very top levels, this is essentially unheard of.

My brother's educational background is biomedical sciences so he's looking for essentially lab work doing analysis for a hospital, drug company, or similar. There are a fair few jobs doing it, but they are relatively low level, have no "community", no real way to facilitate referrals.

In tech it's easy to "network into companies" because companies are so open with their hiring – they hold events, they sponsor conferences that are priced so that people can pay their own entry, and there are community events where you can meet people from them. This is very far from the norm, until you get to the golf clubs where you can mingle with other execs.


No, this is how it works in almost every industry. Even if that's how they try to force hires into the pipeline, if you're simply accepting that instead of circumventing it, your success rate must be abysmal.

I don't work in tech. But I've got about a 60% lifetime success rate (Job offers to applications). 100% once I got to the interview stage. And that's in a variety of industries: EMS, academic research, the energy industry, and civil/environmental engineering.

I swear nobody has any hustle anymore. I've never bothered to make a LinkedIn or go to "networking" or "hiring" events. They're a waste of time. If you're really out of your existing network (you're probably already doing something seriously wrong if that's the case), you'd be better off figuring out where you want to work and then waiting at a nearby lunch spot for an obvious group of employees to come in around lunchtime (or after work drinks) and start chatting them up. (I actually landed a job doing that.) Or better yet, find a CrossFit gym some of them go to. Sweat and bleed and bond with someone a bit before you leverage them as a recommendation. There's a million ways into an organization if you want it badly enough. If nothing else you can get super good intelligence on how to craft your application to be desirable.

Do your research, know your shit, know exactly what they're looking for before you ever turn in an application. Become that person to the core. Get any new certifications you need to be that person. Make every document you turn in to apply for the position fit that profile. Make every searchable piece of information about you on the internet align with that profile. Know the way they conduct interviews before you get there, and practice and rehearse the questions and flow of your responses in broad ways. Leverage your contacts in the organization to get information about each of the interviewers and how they think and approach interviews.

You know, hustle.


> you'd be better off figuring out where you want to work and then waiting at a nearby lunch spot for an obvious group of employees to come in around lunchtime (or after work drinks) and start chatting them up

I'm going to go out on a limb and guess you're either a US citizen, or at least base this advice on the US.

I have only known 1, possibly 2 people who can make this sort of thing work here in the UK, people just don't do this.

Plus, CrossFit isn't really a thing here except in trendy bits of London. For many of these places there isn't a "lunch spot", people take their lunch in to their building in a business park where there's no lunch options or options for socialising.

Overall, while you don't work in tech, I think you're probably privileged enough to work in an industry that works pretty similarly. Most of your advice would be pretty good for me, but almost none of it would work for someone at the beginning of their career, aiming for a large company with out of town offices – a fairly typical starting point for many graduates.


Yeah, I am American, and I definitely understand there's certain informalities available to us here culturally.

I think the key for new graduates is to have been thinking about getting a job for the last 4-5 years. Don't start looking and preparing when you graduate, you're already behind.

I kept a job I started as a teen as a lifeguard for like 6 or 7 years even though there were much better opportunities available to me financially because I knew the stability was one of the best things I could bring to the table. Resume building.

But in addition to that, I started networking long before I left school. It's essential. Despite the fact that culture may differ in other countries, I don't think that fundamentally changes my advice. The tactics may differ, but the strategy is the same because human nature is the same.


> There are a fair few jobs doing it, but they are relatively low level, have no "community", no real way to facilitate referrals

I’m only tangentially exposed to the biomedical field but I’m fairly sure this isn’t broadly true. I’m a member of a single cell RNA sequencing slack group whose members host a meetup twice a month. Most members work in wetlabs and many conduct research into the effects of various drugs on cancer.

The groups are probably harder to find (than programming meetups) but I doubt there are none of them.


Curious, what areas did you find most enticing?


> “Why Reddit is like this” is a whole other essay; I think it’s built into reddit’s structure itself.

Classic Eternal September.

Around 60k subscribers, the cultural identity starts degrading, as the amount of "old guard" is outmatched by "new blood." Therefore, the old "monkey see, monkey do" phenomenon, where new users would slowly mimic the culture of the prevailing older users to "fit in," is replaced with new users mimicking other new users, and the culture shifting towards the platform's identity instead of retaining the sub's identity.

Generally, the type of person who posts on Reddit frequently enough has social cohesion problems that may preclude him from fulfilling his social needs through healthier avenues, like real life. The same is true for the majority of people who post online frequently. Usually the psychological profile that follows from that is one built on abrasiveness, distrust and aggression towards authority, an inability to adopt social manners and participate in the social contract, low emotional intelligence, etc.

All of the aforementioned behaviors culminate into the toxicity and vitriol you usually see -- and as well why it's so prevalent.


It might not be just reddit though. The author's summary points to a volatile combination of elements in Rust itself:

1) [T]he Rust project saw Rust as more than just the language...

2) unsafe... is a really important part of Rust, but also a very dangerous one, hence the name.

If a project is considered to be not just a project, but something closer to a cause, people are going to defend their understanding of that cause fervently.

And introducing the language of "safe" and "unsafe" isn't just descriptive, it's a value judgment. It has connotations of recklessness at least, and explicit threat at worst.

People who perceive themselves to be defending a cause against danger are going to react very strongly, much more so than people who are criticizing an implementation choice on purely technical grounds.


> And introducing the language of "safe" and "unsafe" isn't just descriptive, it's a value judgment. It has connotations of recklessness at least, and explicit threat at worst.

Is it really a value judgment? Coming from a formal PL background, I had just assumed that the "unsafe" keyword was referring to the PL concept of "safety", AKA "soundness", which has a specific technical definition, and not that it was necessarily a value judgment. In that context, "unsafe" just means "the compiler can't guarantee the behavior that it can normally guarantee".
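A minimal sketch of what "the compiler can't guarantee the behavior it can normally guarantee" means in practice, using `slice::get_unchecked`, one of the standard unsafe operations:

```rust
fn main() {
    let xs = [10u8, 20, 30];
    let i = 1;

    // Safe Rust: indexing is bounds-checked; the compiler guarantees
    // no out-of-bounds read (the program panics instead).
    assert_eq!(xs[i], 20);

    // `unsafe` lifts that guarantee: get_unchecked skips the bounds
    // check, and *we*, not the compiler, are asserting i < xs.len().
    // Nothing here is reckless per se -- the obligation has just moved
    // from the compiler to the programmer.
    let y = unsafe { *xs.get_unchecked(i) };
    assert_eq!(y, 20);
}
```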


You are correct.

That doesn't mean that people will incorrectly interpret it, though.


Ada’s language is probably clearer and less loaded: checked and unchecked.


Yeah, and interestingly, a lot of unsafe functions use "unchecked" in their names.

The issue was that by the time this was recognized, there was too much Rust code, and there was no clear alternative that people universally liked. This kind of conversation is the definition of bikeshedding. I submitted an RFC and it... didn't go well. (I think I picked "trustme" though.)


I don't think it's bikeshedding. It does seem to be contributing to the dogmatism I'm seeing from the Rust community here, and this community reaction is a huge problem for Rust. So it matters.


Bike shedding is a structural description, not a value judgement. It’s about technical complexity, and changing a keyword is one of the most minimally complex bits of language design.


My point isn't to argue over the definition of bike shedding. If the name of this keyword is contributing to this undesirable community outburst, then its name matters, and discussions about its name are important. That's all I'm saying. If that's what you originally meant, then we are on the same page :).


We are on the same page, yes.


Back then Rust did not have editions. I think it would be worth exploring renaming `unsafe` blocks to "sound", because when one writes `sound { ... }` what one is actually stating is that the code in the block has been proven sound.

The `unsafe` function type modifier can be left as unsafe, or renamed to unsound, since what that is doing is stating that a particular function is not always sound to call.


Points well taken, but I think "unsafe" turns it into a value judgment, especially (as samatman says adjacent) since it isn't necessarily really unsafe.


It is unsafe. There are just multiple definitions of unsafe being used here. I agree that it's unfortunate that the meaning of the keyword is easy to misinterpret.

Given the background of the people who designed Rust, I don't think it's reasonable to just assume that the keyword "unsafe" has an implicit value judgment.


It absolutely includes value judgement. You just described a form of value judgement too. You're saying that predictable generated code behavior is preferred to unpredictable generated code behavior.


> And introducing the language of "safe" and "unsafe" isn't just descriptive, it's a value judgment.

`unsafe` is a PL term that refers to _soundness_. In Rust, an `unsafe { ... }` block is required to perform an `unsafe` operation, and it precisely means "The code in this block has been proven _sound_". If the code in the block turns out to be _unsound_, e.g., because the proof is incorrect, or non-existent, then the whole program is unsound, and there is nothing that can be said about the execution of such program (usually known as "the execution exhibits undefined behavior").

For example, the Rust compiler has a lint that requires you to write a soundness proof on every `unsafe { ... }` block, explaining why that is sound, and all changes to the compiler are gated on that.

In your own projects, you can obviously do whatever you want, but for any non-trivial amount of unsafe code, without a proof, you are basically just building castles in the air.
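For illustration, the "proof" usually takes the form of a `// SAFETY:` comment -- the convention the standard library follows, and one that clippy's `undocumented_unsafe_blocks` lint can enforce. A toy example (the function is made up):

```rust
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        return None;
    }
    // SAFETY: `bytes` was just checked to be non-empty, so index 0 is
    // in bounds and `get_unchecked(0)` cannot read out of bounds.
    Some(unsafe { *bytes.get_unchecked(0) })
}

fn main() {
    assert_eq!(first_byte(b"abc"), Some(b'a'));
    assert_eq!(first_byte(b""), None);
}
```

Most real-world proofs are one-liners like this; the discipline is in writing them at all.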


That's an interesting way to look at it. But that's not how it works in practice. Almost no one in industry is going to write proofs for their unsafe code. It didn't happen for C or C++ and it won't happen for Rust.


In Rust you at least know where to look, whereas in C/C++ all code can be "unsafe". That said, writing unsafe Rust is much easier to screw up than C/C++, since you have to uphold more invariants in unsafe Rust than in C to avoid UB (like never having two mutable references to the same thing at the same time -- the very issue which was raised in the actix repository).
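As a small illustration, the no-two-`&mut` invariant is exactly what the borrow checker enforces in safe Rust; in unsafe code with raw pointers, upholding it becomes your job:

```rust
fn main() {
    let mut x = 5;

    let a = &mut x;
    // let b = &mut x; // rejected by the borrow checker: cannot borrow
    //                 // `x` as mutable more than once at a time
    *a += 1;

    // With raw pointers inside `unsafe`, the compiler no longer checks
    // this -- materializing two live `&mut` to `x` there would be
    // instant UB, which is the invariant at issue in the actix case.
    assert_eq!(x, 6);
}
```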


This is how it works in practice for the Rust compiler, for the Rust standard library, and for a lot of foundational crates on crates.io. (Pretty much every well-reviewed crate in cargo crate review either does this, or does not contain any unsafe code at all.)

We also have tools that check for this in very large projects (e.g. cargo-geiger), and tools that help you test your proofs (e.g. cargo-miri). For some unsafe components there are also proofs in Coq, and the proof systems for Rust unsafe code are making a lot of progress, both in defining the rules that unsafe code must uphold (in the unsafe-code-guidelines and the Rust spec) and in providing example proofs and a standard library of theorems that you can reuse for your own proofs.


I didn't say no one will do it. I said almost no one. The Rust compiler is about as far from a normal project as can be. I would like to see this changed, but really, I have never even seen a proof requirement when looking at Rust jobs. Not even once.


> I would like to see this changed but really I have never even seen a proof requirement when looking at Rust jobs. Not even once.

?

Any B.Sc. in CS can do most of the unsafe proofs in a one-liner. All crates I maintain require unsafe blocks to be commented with a proof; for most of them the proofs are trivial, and for all of them that weren't, the unsafe code was correct, and the correct one had a trivial proof.


It's fine that YOU do it. But how many unsafe blocks have a proof in the entire Rust ecosystem (read: all crates)? 5% maybe? I'm not sure what your point is that any B.Sc. in CS can do it.


Rust didn’t introduce this language, they adopted it. Memory safety has been called that as long as I’ve been programming.

Using the antonym of safe was a natural move, though with the benefit of hindsight, `danger` would have been a better keyword.

After all, good unsafe code isn’t unsafe! It is dangerous though, because it forgoes guarantees of memory and resource safety provided by the compiler.


> Generally, the type of person to post on Reddit frequently enough, has social cohesion problems that may preclude him from fulfilling his social needs through more healthy avenues, like real life.

You mean like making sweeping generalisations about a platform with millions of users?


Not to compare or anything, but I'm active at r/Golang and I find that the community is comparatively better there. I was shocked that someone would tell anyone to never write code again!

Maybe this is because some members of the Rust subreddit believe they are more intellectual than others? It seems like a cultural thing: Go people don't bother with seeming smart, and that possibly reduces the aggressive competition that naturally arises in places where intellect, high performance, etc. are very important metrics.


A forum being large doesn't have to lead to a loss of cultural identity.

My favourite counter-example is /r/AskHistorians. Over a million members (16x larger than 60k) and it has some of the highest quality posting and discussion I've seen anywhere on the internet. I think it serves as a prime example of how well a forum can scale, if done properly, so long as there are strict rules and moderators.


>distrust and aggression towards authority

Not an exclusive characteristic of a reddit user; there are many different levels and types of authority. Perhaps the aggression is towards all authority, or weighted towards certain personalities of authority rather than the ones in the background, especially the ones that don't really matter, i.e. internet forum moderators.

I agree with your general point that reddit fills a social vacuum in people's lives, like most social media does for others in different ways. Validation and expression etc.

Personally I think that's why society in general has become more toxic: everyone is really telling it straight to others but not being honest with themselves.


Psh, do you even know what time the narwhal bacons? /s

I think Reddit is interesting because its design is more resistant to Eternal September than other communities. See the trend of creating r/TrueX when r/X gets Septembered.

Unfortunately, this hits diminishing returns when more and more obscure subreddits need to be created. Tons of oddly specific niche subreddits have popped up and completely gone to shit in a flash. I think we're nearly due for the next migration (maybe a federated alternative?).


Person posting on online forum says people posting on online forums have social problems.

If that statement is true, you (and me, and everyone else here) have those same issues, surely?


> the [psychological] profile ... is one built on abrasiveness, distrust and aggression towards authority,

Hey hey hey, "aggression towards authority" is not a psychological trait, but at most a pattern of action. Also, I disagree with your implicitly maligning distrust of authority absent a general distrust of others.

> ... and participate in social contract,

The claim of the existence of a "social contract" is part of the self-justification of authoritarian aspects of social structures in capitalist democracies.


A community grows and eventually reaches the point where users cannot recognize who they're interacting with nearly every time, where the submission queue is trailing down too fast for any single reader to process. When that happens, it can no longer function as a cohesive community. It becomes about as personal as a magazine about the very same topic. Furthermore, the incentives to post change due to the higher potential for "karma" (and posts therefore become more like clickbait), and the barrier for the admissibility of submissions becomes lower as a result of the "expert" or "enthusiast" segment of the population becoming a tiny minority.

In my experience, these larger communities typically grow less tolerant of antisocial behavior, most likely due to acceleration of the process known as "dogpiling". Trolls get more exposure, and better reactions, in small-to-medium communities.

It's still conjecture, but I think I prefer my hypothesis.


> Armchair psychology, coupled with pure distilled condescension.

Please edit the name-calling out of your comments on HN, as the site guidelines ask: https://news.ycombinator.com/newsguidelines.html

Your comment would be fine without the first sentence.


The edit window for that post is sadly over, but I'll be more careful in the future.


If you want to edit it out, I've reopened the comment for editing.


The above was from personal experience and observation from having been a part of many online communities -- pre-dating reddit -- in all ways including: here-and-there member, first-hand maintainer and manager, and antisocial member causing a ruckus.

Armchair psychology? No, I've made many friends and acquaintances, both online and in real life. The ones that bubble around posting online heavily, have developed antisocial tendencies that were reinforced through social exclusion. A self-fulfilling prophecy. I know this, because I know those people well and because I was there at one point in my life as well.

Neither is it condescension. Perhaps I was inexact and may have offended some who post online very regularly, but there must be a distinction between what they consider "regularly" and what I consider "heavily." Heavily, in my observation, means someone who puts aside a significant amount of time, usually involuntarily, to do nothing but interact with online communities for the sole purpose of social interaction. No value judgements were made either, but those characteristics are common among the aforementioned group.

I won't address your hypothesis, because I'm not here to argue.


>Armchair psychology? No, I've made many friends and acquaintances, both online and in real life. The ones that bubble around posting online heavily, have developed antisocial tendencies that were reinforced through social exclusion. A self-fulfilling prophecy. I know this, because I know those people well and because I was there at one point in my life as well.

This would still qualify as anecdotal data, and thus armchair psychology. Now, it works perfectly well as a hypothesis and can be rigorously tested to determine whether the data reject or fail to reject it. But without peer-reviewed research, that last step hasn't happened.

It also sounds good. Makes sense. Fits our common-sense notion of how humans work. The problem is that psychology is filled with examples of where these kinds of intuitions are wrong.


You got me there.

My connotation of armchair psychology is more informal and doesn't match the more rigorous, APA definition.

I'll make one note: the psychological profile I wrote of, is based on first-hand experience, as well as pieces of mental notes recovered from internet-addiction and FBI profiling papers.


If I understand correctly, your anecdotal evidence is what makes this not "armchair psychology"?


What's wrong with simply sharing personal observations based on decades of related personal experience? Anyone with some training in science has learned to always be looking for patterns.


There's nothing wrong with it. Calling someone an armchair psychologist is antisocial behavior.


Nothing. I simply don't value this kind of loaded and facile analysis very much.


that is armchair psychology.


A lot of guesswork and generalization, right there.


Well said - sometimes I feel that HN is basically practice for how to reply to toxic emails from coworkers. "I should have framed it like this"


Is it entitled? Most would agree, "very" (but that's not my concern nor judgement to make)

Did the maintainer screw over people that were using Actix? Very likely.

I sympathize with both sides here. I've been the unthanked punching-bag contributor on a few notable projects, and I've been a user of software whose leadership got into drama and squabbles that ultimately fucked me over.

There were times that I pulled certain projects and essays that had greatly helped people, and took my ball home because I was fed up with the people I was catering to. There is no correct answer here when dealing with people and their emotions, but there is a way for both sides to handle things without contributing negatively to others' lives.


> Did the maintainer screw over people that were using Actix? Very likely.

How so? Just find a fork with a decent maintainer, or push your copy to your own repo. Let the Actix guy do his thing and move on with life.


That marketing babble is there for the same reason the majority of YouTube videos are >10 minutes and filled with the same filler: Google's ranking algorithm.

For YouTube videos it's mean watch time. Filler babble pumps up those numbers.

For Google, it's mean read time. Filler babble also pumps up this number. However, it also ranks higher directly, because it's considered a "long form" piece instead of a soundbite, and because it's more unique than similar pages (e.g., compare all news outlets covering the same story -- it's all relatively the same).

Blame Google, for its algorithms that are directly shaping our communications and culture. Hell, blame all tech companies that use algorithms to determine culture (see: Facebook, Twitter, Reddit, Quora, and yes, HN). Whether this is good or bad is someone else's argument to write.

Mozilla, and every other website owner, must play the game in order to drive organic traffic to its pages through search engine optimization. Otherwise, you might not be reading this article right now, and it would be solely shared by word-of-mouth from people who check the site or are on a newsletter.


> Blame Google, for its algorithms that are directly shaping our communications and culture. Hell, blame all tech companies that use algorithms to determine culture

> Mozilla, and every other website owner, must play the game

No. Mozilla and every other contributor still have a responsibility for what they put out there. Google and Facebook messing up the overall environment doesn't let individual contributors off the hook.


Hating the game vs. hating the player.

We all have to do things in our own self-interest, especially when there are higher powers bearing down upon us.

They're not off the hook, but they're not the biggest fish that needs to be fried. Going after them won't solve anything except the need to vent one's own emotions.

The cycle will continue to repeat for other websites and other people. We've established that it happens often, and we've even established the root cause. Idling on "who's at fault and what should their punishment be" is just that: idling.


Severely off-topic, but this is also how I feel about battling racism in America. The general notion of systems vs. specific agents was strongly reinforced for me by the Slovenian political writer Slavoj Zizek, in a book of his written after 9/11, wherein he remarks that Left politics were becoming more and more about targeting specific agents, when the processes that motivate and constrain those agents (as well as our own) are so much more anonymous that it's difficult for people to even parse their significance.


I doubt that Mozilla needs this page to be ranked high; it's more of a press release, not a landing page that should come up high in search results when you search for Firefox...


Ranking individual pages is vitally important in the website traffic game.

Suppose Mozilla's goal for its blog is to generate a lot of organic traffic so that people are aware of Moz Corp's continuing development efforts. If that were the case, individual ranking of pages matters.

A domain name has a certain "rank" that's based entirely on on-page SEO -- all of the indexed pages and the site as a whole -- and off-page SEO -- all of the domains and pages that link to it.

One of the algos in determining on-page SEO is cumulative ranking of all pages under that domain. That is, a single page's ranking is not an isolated variable based purely on that page and its content alone. The other pages that fall under that same domain impact every other page on said domain.

If Mozilla published a poorly SEO'ed blog article, that would negatively affect the ranking of every other article under the blog.mozilla.org name, and, more severely, any pages that link to it or are linked from inside it.
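As a toy sketch of that claim (emphatically not Google's actual algorithm, whose internals are private; just a simplified score-propagation model invented here for illustration), consider how one low-quality page can drag down the scores of the pages linking to it:

```python
# Toy model: a page's score is not isolated, because scores propagate
# along internal links. This is NOT Google's real ranking algorithm,
# only a simplified illustration of the "cumulative ranking" idea.

def propagate(quality, links, damping=0.5, iters=50):
    """Each page's score is its own quality plus a damped average of the
    scores of the pages that link to it."""
    scores = dict(quality)
    for _ in range(iters):
        new = {}
        for page in quality:
            inbound = [scores[src] for src, targets in links.items()
                       if page in targets]
            new[page] = quality[page] + damping * sum(inbound) / max(len(inbound), 1)
        scores = new
    return scores

# A hypothetical blog with three well-optimized posts linking in a cycle.
quality = {"post-a": 1.0, "post-b": 1.0, "post-c": 1.0}
links = {"post-a": ["post-b"], "post-b": ["post-c"], "post-c": ["post-a"]}
before = propagate(quality, links)

# Publish one poorly SEO'ed post, cross-linked with post-a.
quality["bad-post"] = 0.1
links["post-a"].append("bad-post")
links["bad-post"] = ["post-a"]
after = propagate(quality, links)
```

Running it, the three-post domain settles at a uniform score, and adding the low-quality post pulls down the average of the original three. The numbers mean nothing in themselves; the point is only that scores propagate along links, so no page ranks in isolation.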

That's only the first part. Suppose they wanted to generate traffic for this individual press release? Then that's a whole other ball of wax.


They could put the actual content in understandable form at the top, and leave the page of marketing babble for after.


Google's crawlers treat the first few paragraphs as an abstract and weigh them more heavily than the sections that follow.

The balance between SEO and readability has already been struck, and an equilibrium has been reached. Changes will only happen when Google ships new major algorithm updates. Those come relatively often, though, and always shake up the field.

If people were really interested in fixing this problem, they would start writing and campaigning for more transparency and democracy from Google. Do you think that would affect their own rankings? ;) Rhetorical question.


> the majority of YouTube videos are >10 minutes and filled with the same filler: Google's ranking algorithm.

Ironically now sometimes google shows video results suggesting to skip the first few seconds. I swear I've seen it a few times, but maybe I was being A/B tested


Also seen it when searching for some how-tos. More than a few times. I thought it was brilliant, but now that you mention it: why doesn't this show up more often?


Can you elucidate on that line of reasoning some more? I am unconvinced.

It has been my experience -- which some more "keepin it real" journalism concurs with -- that New York's outlandish infrastructure costs, as well as the majority of the USA's public projects, are primarily due to two main agents:

1. White collar bureaucracy

2. Blue collar "bureaucracy"

Both parties -- and the multitude of agents that operate within them -- are all too able to align their "piece" of the project towards their own interests.

These interests very much involve time and money.

For money, it's simply rerouting costs. And for time, it's "making one's mark" by misallocating resources to personally enriching matters that do not benefit the project as a whole, and usually hinder it.

It's late, and I'm not at 100% to go into detail, but surely this is well-known already?


I did structure my argument to say that the optimistic estimates precede the other problems. Before ordinary problems of inefficiency and corruption arise, the project has to be sold as a viable investment.

I take the argument about selection bias from Bent Flyvbjerg, who has been studying megaprojects for a fair while now. He calls it "survival of the unfittest"[0].

A similar resource is Merrow's Industrial Megaprojects[1], which is based on a large dataset of such efforts. He identifies many factors in project overruns, but essentially notes that most projects start off with fanciful estimates that will never be met. For a book about petrochemical plants, bridges and power stations, it's a fun read.

[0] https://arxiv.org/ftp/arxiv/papers/1409/1409.0003.pdf

[1] https://www.amazon.com/Industrial-Megaprojects-Concepts-Stra...


Yes.

Depending on the size of the microplastic particles, they will be absorbed through the roots. The most common sources are contaminated water and fertilizers; other sources include regular pollution (e.g., air and runoff).

