Hacker News | d4nyll's comments

Working on the metro makes a boring commute fly by - blink and you're at your destination already. 30 minutes each way is 1 hour a day. Over the year that's 250+ hours, or more than fifteen 16-hour days.

That's why whenever I move to a new city, I typically look to live somewhere at the end of a metro line. That way, on the morning commute I can _always_ get a seat.

That was one of the reasons I liked living in Hammersmith when I worked in Shoreditch/Old Street - it's at the end of the Hammersmith and City Line, so I didn't have to change lines. There's also the added bonus that the line is above ground until Paddington, which meant I had more than enough time to load up any tabs I needed before the Internet blackout.

In Hong Kong I worked at Central and lived in Tsuen Wan. Literally from one end of the line to the other. This had the added bonus that I was also guaranteed a seat on the way home as well.


Growing up in a culture similar to Japan's, where the expectations placed on kids are similar to what's seen in the video, I can say there are pros and cons.

The pros are that kids learn faster when it comes to STEM subjects, where brute hard work and repetition pay off (because until you reach university, you're pretty much limited to learning widely accepted truths, which just need to be understood and remembered).

The cons are that the same work ethic and discipline don't necessarily transfer to creative subjects. Yes - you need practice to be good at a musical instrument, but you also need the space to "mess around" and have some fun. In the video, the girl was obviously enjoying playing the cymbal, only to be told by her music teacher to be serious and stop messing around. It really hurt me to see her creativity and happiness smothered by that killjoy.

Watching the Op-Doc reminds me of the book Totto-chan, the Little Girl at the Window (窓ぎわのトットちゃん) (there's also a movie), which shows that some "problematic" children are really just curious children whose curiosity was never satisfied.


I kept waiting for a happy ending that never came. But I'm glad the Guardian ran this piece: it shows how powerful someone can be when they know how to "use" the law, and how powerless one can be when one doesn't know how to navigate it.


John Oliver has a quite informative video about why many carbon offset/carbon credit schemes are most likely not genuine in their attempts to offset carbon emissions.

https://youtube.com/watch?v=6p8zAbFKpW0


There is StoryGraph.

But what do you want from Goodreads that it isn't providing? What makes it "bad" in your opinion?

Functionally, it does everything I want it to.


Very enlightening posts. I appreciate how they tell you that Million is not for every situation - it's for sites with "Lots of static content with little dynamic content". Many frameworks boast about how they are the best at everything, without any nuance or justification for their claims.

Personally, I feel like we are getting to the point of over-optimization on the front-end. Is a 70% performance optimization, which may translate to, say, a 5% improvement in user experience, worth the added complexity and maintenance cost of yet another library integration? For most companies that aren't Amazon or Google - probably not, I think.


One question and one comment:

Q) What's the difference between a blocking comment and a "normal" comment + rejecting the PR?

C) Exposing metrics like "Review time per reviewer" may, in the wrong hands, incentivise the wrong behaviour. For example, a team lead may read a reviewer's long review times as slowness, when they are simply more thorough. Tracking the total review time for the PR (aggregated across all reviewers) is more useful.


> Q) What's the difference between a blocking comment and a "normal" comment + rejecting the PR?

Good question. The granularity of the states is a bit different. Rejecting the PR doesn't capture individual thread states, and you may even forget to revisit a thread later by mistake. With a blocking comment, the individual thread needs to be marked as addressed for the status check to go green. This helps ensure that the work gets done, and also makes it possible to leave comments like "LGTM after comments" or similar.

Another difference is cultural. At a couple of the companies I worked at, Request Changes/Reject on GitHub was viewed as passive-aggressive and went unused. Blocking and non-blocking comments help these teams be more explicit and block on smaller changes in a friendlier way.

> C) Exposing metrics like "Review time per reviewer" may, in the wrong hands, incentivise the wrong behaviour. For example, a team lead may read a reviewer's long review times as slowness, when they are simply more thorough. Tracking the total review time for the PR (aggregated across all reviewers) is more useful.

Visibly does both: it tracks cumulative time across all reviewers as well as the individual times. The reasoning is to show the effort reviewers are putting in - both to them and to the people whose code they review - and to understand, to some degree, whether a review was thorough. The metrics help show trends and surface more knowledge; they should still be a single input in a more holistic picture though.

A few examples of what becomes possible with these metrics: (1) You can show why you couldn't get to a task sooner if you spent the day reviewing a lot of PRs. (2) You can understand whether your PRs "cost" more than other PRs opened by the team. This can help you write smaller PRs or identify the differences between your work and others'.
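To make the distinction between the two metrics concrete, here is a minimal sketch of how per-reviewer and per-PR (cumulative) review times could be aggregated from the same records. The data and names are hypothetical, not Visibly's actual schema:

```python
from collections import defaultdict

# Hypothetical review-time records: (pr_id, reviewer, minutes spent).
reviews = [
    ("PR-1", "alice", 30),
    ("PR-1", "bob", 10),
    ("PR-2", "alice", 5),
]

# Per-reviewer totals: the granular metric that can be misread as "slowness".
per_reviewer = defaultdict(int)
# Cumulative time per PR: the aggregate metric, i.e. the total "cost" of a PR.
per_pr = defaultdict(int)

for pr, reviewer, minutes in reviews:
    per_reviewer[reviewer] += minutes
    per_pr[pr] += minutes

print(dict(per_reviewer))  # {'alice': 35, 'bob': 10}
print(dict(per_pr))        # {'PR-1': 40, 'PR-2': 5}
```

Both views are derived from the same underlying records; the question raised above is which of them should be exposed, and to whom.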

We also plan to use this time to help surface an estimated "It will take you X min to review this PR" in the future.


Thanks for clarifying. I agree that rejecting a PR can be seen as negative at some companies, but I'd also argue that viewing rejected PRs as something negative is a symptom of a bad culture. And using tools to mask a negative culture instead of surfacing it may not be wise.

> they should still be a single input in a more holistic picture though.

Whilst, in theory, no single metric should be used to determine performance, in practice it may be (especially if your lead is inexperienced). But this can be prevented altogether by exposing individual metrics only to the individuals themselves, and the less granular aggregate metrics to the team and team lead.

> You can show why you couldn't get to a task sooner if you spent time reviewing a lot of PRs on the day.

I think a culture where you have to justify AND PROVE why you couldn't do a task sooner shows a lack of trust, and the problem won't go away even if you can show, this time around, that the delay was justified.

I do think tracking the cost of a PR is important, as it will incentivize smaller PRs. But my point is that exposing the "wrong" metrics (like individual review times) isn't just a data point that people are free to ignore - it can be harmful, as it will incentivize the wrong behaviour (e.g. knowing the team lead values quick reviews, developers may be incentivized to review more quickly and thus less thoroughly).


I've quit most social media like Facebook, and I'm not active on Twitter or LinkedIn. But I've always struggled to quit Reddit.

But now, partially because of this (and partially because they've intentionally made the mobile web experience unusable over the last few years), I decided to quit Reddit a few days ago.

And it feels great. I've spent the time that I would have wasted on Reddit tackling my TO-READ list of books instead. And I feel much happier for it.


I have a degree in biochemistry. I would love to combine my passions for software and biology, but academic research is often funded by governments, which means the salary is (super) low.

It's the same reason why there's a lack of qualified computer science teachers in schools.


I agree with everything you said. The best interviews I've had are those where I felt like I was working together with the interviewer.

I think being able to handle criticism is an important trait to look out for. Many times interviewees will defend their method too aggressively.

But if you have to pick between two people who are equally pleasant to work with, and only one of them solved the problem, you'll probably pick the one who solved it. When the competition is fierce, candidates will probably still have to spend some time practicing these problems.


I interview candidates at a company with a relatively high volume of interviews and I use DSA criteria in my interviews. But here's what I do:

- I phrase the question in a way that has some semblance of day-to-day relevance. That is to say that at some point in the process of coming up with a solution, the ability to apply a relevant data structure will come up, but it will be in service of an end goal that looks like the deliverable of a sprint task.

- I come into the interview aware of multiple solutions and I am open to any of them.

- I pace feedback so that the candidate actually solves the problem by the end of the interview, no matter their level (which does mean, in some cases, literally spelling out the steps to unblock themselves).

The rationale is that solving a hard DSA question doesn't give me all that much signal in and of itself. Watching a candidate bang out something with a level of complexity a little higher than fizz buzz is usually sufficient to evaluate whether the candidate has familiarity with the language. The choice of idioms and APIs can tell me things about their relative level of expertise with the stack (i.e. it can generally be safely assumed that an already-employed candidate can hold their own, and the question for me is more along the lines of "to what extent").

During the course of an interview, I can usually pick up a distinct and noticeable difference in focus between candidates, especially surrounding topics related to proactiveness/curiosity (e.g. does the candidate have understanding of aspects one abstraction level lower than the API they usually use, are they aware of well known pros and cons of some specific idiom, does their argumentation seem derived from personal experience vs parroted from a hivemind, etc). This tends to correlate surprisingly accurately to how much autonomy and growth they demonstrate on the job.

"Hardcore" DSA evaluation only really comes in as a criterion for determining whether the candidate is of very high quality once most other criteria have already been evaluated as acceptable/desirable. These nice-to-have criteria come into play in some cases where I want to advocate for the candidate when the evaluation panel is split due to one seemingly bad session (possibly due to factors such as nervousness or mixed signals), or inversely when the role logically demands a higher bar but the panel is situationally incentivized to hire down to meet a quota.

I've been told by several candidates that they appreciate my interviewing style, and conversely, I feel like I get a much better feel for the candidate than strictly evaluating DSA skills and nothing else.

