
To me OP seems to assume a competent, unbiased, and well-meaning interviewer: one who always gives hints, knows exactly what they want, and will field and answer questions comprehensively. At Google's scale, with hundreds of thousands of interviews on the books and thousands of interviewers who, like all human beings, have good days, sick days, angry days, and depressed days, this will not always happen. What happens when it's the interviewer trying to have a "smartness" contest? There are too many testimonials of this happening at Google to just ignore it.


What happens is the hiring committee will disregard the interview report, and send feedback to the interviewer, saying "don't do that". (There are generally 4-6 interviewers on an interview panel, so if one interviewer does a bad job, there are other interviewers who will be providing signal to the hiring committee. A single bad interview report won't sink a candidate.)

Google's interview training is pretty specific about what is expected of an interviewer, and the interviewer has to write up fairly comprehensive reports about the questions that were asked, how the candidate answered them, what hints were given, etc. Most interviewers ask questions that they have asked multiple times before, so they will include things like, "The candidate (TC) took twice as long to code this very basic warmup question, which was designed to motivate a more in-depth distributed computing question; unfortunately, what a strong candidate could do in ten minutes, TC couldn't produce as working C code in 45 minutes". (Current best practice is to try to avoid revealing the gender of the candidate in the interview notes, hence the use of TC instead of he or she. The goal is to avoid triggering any unconscious bias on the part of the members of the hiring committee.)

Now, there are some screening questions that are asked by recruiters as part of an initial phone screen when hiring for SRE positions. Those questions are multiple choice and really basic, designed to screen out old-school operators who are quite good at mounting tapes (for example) but don't have any understanding of what TCP might be, and who think an SRE job is no different from a traditional system administrator position. Those are not asked by an engineer, but by a recruiter; the goal is to avoid wasting everybody's time.


Asking a question well takes skill. It is possible that an interviewer screwed up asking the question rather than that the candidate screwed up.

Most interview systems don't have a systematic, ongoing way of accounting for this, especially given that interviewers are self-reporting.

In my opinion, most interviewers are undertrained/underskilled at asking specific questions and at interviewing in general. They don't invest the resources to systematically improve but rather treat interviewing as a burden to be minimized.

There is still a lot of play in systems that allow for borderline smart-ass behavior. Do you think an interviewer will write up that they sneered at the candidate?

The interview process is a factory. The company is mostly concerned with hitting its output targets. The candidate is just a happenstance casualty.


It’s almost as if interviewers need to be tested as well. An interviewing contest where a faux candidate rates interviewers and suggests areas for improvement.

Of course, that would be ridiculously expensive, so at best some canned training is used instead. Still, it might make sense for higher-value teams.


After interviewers are trained, they will perform at least two "shadow" interviews, where they tag along with an experienced interviewer, watch them conduct the interview, and then are asked to write up an interview report. After they finish writing the report, they can see what the experienced interviewer wrote, so they can understand how the writeup should be done.

Afterwards, the new interviewer has to do a non-trivial number of interviews before they are considered "calibrated". While an interviewer is uncalibrated, their score won't be given much weight, and the hiring committee can see how their interview reports and scores compare against those of more experienced interviewers. This also gives the hiring committee an opportunity to send interview feedback (e.g., you're asking a banned question; the coding question is too simple, so it's not providing enough useful signal; don't ask "trick questions", which again don't provide much useful signal whether or not the candidate answers them correctly; etc.)

So there are certainly ways in which interviewers do get suggestions for improvement. And it doesn't have to be _that_ expensive. It's just a matter of making sure you don't have more than one uncalibrated interviewer per panel.
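
To make the weighting idea concrete, here is a minimal sketch of how a committee tool might aggregate a panel's scores; the scale, the 0.25 weight, and the function are all my own invention for illustration, not Google's actual system:

    # Hypothetical sketch (not Google's actual system): aggregate a panel's
    # scores while giving uncalibrated interviewers a fraction of a full vote.
    # The 1.0-4.0 scale and the 0.25 weight are invented for illustration.
    def aggregate_panel_score(reports):
        """reports: list of (score, is_calibrated) pairs."""
        UNCALIBRATED_WEIGHT = 0.25  # assumption: a quarter of a full vote
        total = weight_sum = 0.0
        for score, is_calibrated in reports:
            w = 1.0 if is_calibrated else UNCALIBRATED_WEIGHT
            total += w * score
            weight_sum += w
        return total / weight_sum if weight_sum else None

    # Four calibrated interviewers plus one uncalibrated outlier:
    panel = [(3.5, True), (3.0, True), (2.5, True), (3.5, True), (1.0, False)]
    print(aggregate_panel_score(panel))  # 3.0 -- the outlier barely moves it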


Sounds very ritualistic and cult-ish.

It won't be possible to extract any useful information from such interviews. But I guess it's possible to convince employees to haze candidates this way. You should feel bad for doing this to people though.


>It won't be possible to extract any useful information from such interviews. But I guess it's possible to convince employees to haze candidates this way. You should feel bad for doing this to people though.

Empirically this isn't the case. It would be much more interesting if you took the time to elaborate on what about this process is cultish, and why it won't provide useful signal. Without that, it just comes across as a mean-spirited complaint.


At Google, responses are calibrated against other interviews that interviewer has done. If they have some consistent skew, this can be adjusted for. If the interviewer sucks and the committee can tell, they can give that score less weight.
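
One way to picture the skew adjustment (purely illustrative; I don't know what Google actually computes) is to normalize each score against that interviewer's own history, so a habitually harsh or generous grader doesn't distort comparisons:

    # Illustrative only: correct for a consistent skew by expressing each
    # score relative to that interviewer's own history (a z-score).
    from statistics import mean, stdev

    def calibrated_score(raw_score, history):
        """history: that interviewer's past raw scores (needs >= 2 entries)."""
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return 0.0  # grader gives everyone the same score: no signal
        return (raw_score - mu) / sigma

    # A 3.0 from a harsh grader means more than a 3.0 from a generous one:
    harsh = [2.0, 2.5, 2.0, 2.5, 2.0]
    generous = [3.5, 3.0, 3.5, 3.5, 3.5]
    print(calibrated_score(3.0, harsh))     # roughly +2.9 sigma
    print(calibrated_score(3.0, generous))  # roughly -1.8 sigma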


So self-reporting by the interviewer?


I'm not sure what you are asking. Each interviewer on the interview panel writes up a large amount of notes on their interview: the C++/Java/C/Python code written by the interviewee, what hints were given, what blind alleys the candidate might have wandered down, how the candidate tested the code, how long it took the candidate to find a bug (with or without hints), etc.

For a design question, the interviewer will write up a sketch of the design, what tradeoffs were identified by the interviewee; what hints, if any, were needed, etc.

Then the interviewer will rate the candidate on various technical dimensions (coding efficiency, design, etc.) and non-technical dimensions (communication, leadership, etc.). For each of these ratings the interviewer has to justify the rating by pointing at examples from the interview notes.

Finally, the interviewer will be asked to score the candidate on a scale from "strong hire" to "strong no-hire", and again, the score must be justified with a paragraph. For the people on the hiring committee, the justification for the scores is often far more important than the actual rating given by the interviewer.
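
To make the shape of that concrete, here's a toy sketch of what such a report record might look like; every field name is my own invention, not Google's actual schema:

    # Toy sketch of an interview report record; field names are invented
    # for illustration, not Google's actual schema.
    from dataclasses import dataclass, field

    @dataclass
    class InterviewReport:
        raw_notes: str  # code written, hints given, blind alleys, etc.
        # dimension name -> (rating, justification), e.g. "coding" -> (3.5, "...")
        ratings: dict = field(default_factory=dict)
        overall_score: str = ""  # "strong hire" ... "strong no-hire"
        overall_justification: str = ""  # the paragraph the committee weighs most

        def is_complete(self):
            # Every rating, and the overall score, must be backed by prose.
            return bool(self.overall_justification) and all(
                justification for (_rating, justification) in self.ratings.values()
            )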

The hire/no-hire decision is not up to the interviewers; the hiring committee is a separate panel of engineers who review the interview reports from the interview panel, and the members of the interview panel write their reports without getting to see or hear from one another.


Every interviewer provides a lengthy write-up of each interview: several pages of notes, including the raw notes of what the candidate said, questions asked, plus an evaluation against a rubric for a number of criteria.

The system is not perfect (what system is?) but as someone who has conducted dozens of interviews as a Googler and at other tech companies, I can say it's one of the most rigorous and fair systems I've seen so far. Interviewers are trained to try to get candidates to a 'win', trained to be aware of their unconscious biases, and the hiring committee process reduces the significance of any one vote. In general, the process is designed to select the best candidates and give all candidates a positive experience.

It's not perfect -- indeed, I'm sure I've given 'failing-grade' interviews as an interviewer here and there -- but it's one of the least-worst human systems that I've encountered, from both sides of the process.


Can you imagine how much effort it would take to fabricate a convincing account of asking reasonable questions after you'd actually assaulted the candidate with ego-tripping trivia questions?

Besides being wrong and pathetic, why would anybody want to do that? The goal of maybe 90% of Google engineers is to spend as little time as the company allows on interviewing and writing feedback, and instead spend their time actually working on their projects.


I agree 100%; there are too many testimonials about people asking about things like Linux syscalls and the different bits in TCP headers for me to believe that all Google interviews are conducted without any tech pissing-contest shenanigans.



