Hacker News | wandering2's comments

What was the difference in the interviews?


It varied. The biggest difference is that the Canadian interviews covered a lot more ground and asked very specific, fact-based questions. I interviewed for a bioinformatics research position at some podunk sunflower genomics lab at UBC, and their several-hour-long interview consisted of questions that went pretty deep on C++, Perl, Linux sysadmin stuff, and molecular biology, followed by a round-table interview where ~15 people grilled me on, like, BS HR stuff (stereotypical "give us an example of how you used conflict resolution in the workplace" type questions). For a position that paid around $40K! That's probably the toughest interview I've ever had. In retrospect, it's mind-boggling that they're able to hire people at all.


Thinking about everything else said in this thread, I think the type of interviews these companies do sort of make sense. After all, all the good talent that knows its own value is already gone; all that's left is a mix of the incompetent and the undervalued, and they're trying to pick out the undervalued.

That requires a lot more rigour than the US process, which basically consists of "hey, we're offering [huge salary], compete for it."


>front-end features like form generation and HTML templating were really important, but now they're arguably better to avoid.

Why is that?


It's the DRY principle. If you have a single-page app, the frontend needs to know how to render everything (using JS), so rendering on your backend as well is, at best, duplicated work. There may also be complications at the boundary of those regimes, like how your frontend initializes on top of server-rendered HTML.


At most 5% of apps should be SPAs. It seems rash to pick your only web framework based on the needs of that 5%.


Not OP, but I feel like it's because JavaScript has evolved enough that some of the niceties they offered before don't make sense anymore. I like using Ember.js, and doing client-side validations is pretty darn easy without complicated JS. Additionally, having inherited a legacy Rails project that uses `remote: true` in its forms, I've found it causes more headaches than it solves.


Validating data should always be done on the server. You should never, ever, trust the client.

That's not to say you can't do (or shouldn't do) form rendering, processing, error/success handling, etc. on the client-side, but the server should always have the last word on the validity of what you send.
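To make the point concrete, here's a minimal, framework-agnostic sketch (the field names and rules are made up for illustration): the server re-runs its own checks on whatever arrives, no matter what the client-side form claimed to validate.

```python
# Minimal sketch of server-side validation. The server never assumes the
# client already validated anything; it re-checks every field itself.
def validate_signup(payload: dict) -> list[str]:
    errors = []
    email = payload.get("email", "")
    age = payload.get("age")
    if "@" not in email:
        errors.append("email: must look like an address")
    if not isinstance(age, int) or not (0 < age < 150):
        errors.append("age: must be an integer between 1 and 149")
    return errors
```

Client-side validation is still worth doing for UX (instant feedback), but a tampered request skips it entirely, so the server's answer is the one that counts.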


> I was shocked to learn that the binary search program that Bentley proved correct and subsequently tested in Chapter 5 of Programming Pearls contains a bug.

Huh? How did he prove it correct, then?


'Proofs' done by humans are done within a certain context; there's usually a lot that goes unstated, or is left unexamined at the finest level of detail.

Generally that isn't a problem and the 'small' details or assumptions don't 'leak' up to higher levels and do any real damage to the proof.

There's a movement to make proofs formally verified, to try to move towards eliminating this issue, but that's hard, because writing out every little detail, specifying every last piece of context & assumption & step, is laborious.


> because writing out every little detail, specifying every last piece of context & assumption & step is laborious.

Can code be written and subsequently formally verified that can then help in the process of formally verifying code?


It is not the verification that is laborious (although it can be computationally intensive); it is the specification. It is making sure that whatever you are proving is actually exactly and completely what you desire of the function.

In this case the proof did not properly account for the full range of inputs, or at least skipped over the fact that on real hardware the integer domains are finite.

There are a lot of formal verification tools out there, though, and they do make it easier; easy enough to at least properly verify the correctness of a simple sorting algorithm.


On an idealized machine with arbitrarily large integers, I assume.


How else do you "prove" trivial algorithms other than by removing the realities of an actual runtime environment?

I'm pretty sure you're right though. Integers must have been defined as infinite in the proof for it to have been a proof at all.


If you sufficiently specify the runtime environment the proof can still work: Since the array is composed of integers, many C compilers/machine architectures will dictate a maximum size of the array s.t. there can't be overflow.


Touché, and fair enough, although you would expect them to call out that limitation since it's both critical and highly un-general. Bounded integer types are comparable, small, and common enough to at least warrant an exclusionary statement in the proof.


This is why I prefer to use Python for my binary searches ;-)


A proof is, at bottom, only a convincing argument. Mathematicians are harder to convince than most people, but they can still miss things.


One possible scenario is that he proved it correct assuming the ints in the function were mathematical integers (..., -1, 0, 1, ...). In reality, all representations of numbers have an upper limit, beyond which the representation becomes corrupted or unsupported.
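That's exactly the widely cited bug here: with bounded integers, `mid = (low + high) / 2` can overflow even when `low` and `high` are both valid indices. A small sketch (Python's own ints don't overflow, so 32-bit wraparound is simulated) shows the failure and the standard fix:

```python
def i32(x: int) -> int:
    """Wrap x the way a signed 32-bit C int overflows."""
    x &= 0xFFFFFFFF
    return x - 2**32 if x >= 2**31 else x

low, high = 1_600_000_000, 1_800_000_000  # both fit in a signed 32-bit int

buggy_mid = i32(low + high) // 2    # low + high wraps negative: a bogus index
safe_mid = low + (high - low) // 2  # never exceeds high, so it never wraps

print(buggy_mid)  # -447483648
print(safe_mid)   # 1700000000
```

On the idealized integers assumed in the proof, both expressions are equal; only the bounded representation breaks the first one.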


>Python and Ruby get you where you need to be fast, and if you get far enough and need to scale, there are a huge number of options, from stalwart holdouts like Java and C# (now on Linux!) to newer languages on rock-solid run times like Clojure and Elixir, to whole new languages like Go and Rust.

To be clear, are you suggesting that it's reasonable to do a rewrite (or at least rewrite certain parts) from Python/Ruby when you need to scale? Is that reasonably achievable in many cases? Honest question; just wondering what you have in mind.


Rewriting systems in another language can certainly be accomplished. If possible, doing it in bits and pieces will help it actually happen; if you do it as a single big switch, you have to work a lot harder to avoid the second-system effect.

In this case, you probably first start by switching your message queuing to Erlang (RabbitMQ), since you had a problem with that anyway; then you can expand outwards from there if it makes sense or makes you happy: job running in Erlang, webservices in Erlang, putting the data in mnesia instead of postgres, etc. You can do the same with any language, of course.


So are they outdated now?


A lot of the commands they documented have become obsolete or out of fashion.

E.g. Apache 1, BIND, sendmail, cgi-bin, ifconfig, etc.


This I really don't understand. All it takes is a simple Google search for "what can I expect in a software interview?" and you'll see right away the importance of whiteboard interviews at so many companies, especially when you are coming out of college. I don't know how true the quote actually is, but for anyone, regardless of race, I feel like this is just basic due diligence if you want a job.

I've barely been to any career events at my school, and professors never mention whiteboard interviews. My friends in CS don't discuss this either. I just learned right away when I did some basic research on job searching.


Your experience seems very weird to me, but I don't have enough data to know if it's atypical. I graduated not too long ago, but my school had many industry veterans as teachers, not pure academics, and sometimes they'd tell stories, one of which involved formally proving to an interviewer, during a whiteboard interview, that the interviewer's expected answer was wrong. Besides that, the school (like, I assumed, many colleges these days) put an emphasis on helping students get hired post-graduation. Students themselves would also practice whiteboarding with each other, and I occasionally brought it up with friends in rants about how stupid the whiteboard hazing culture is.

Bad students or bad schools: either way, it's really surprising they can be that bad.


I'm a senior in CS at the moment; I declared somewhat late, and I never did any programming until my sophomore year. Before I got into CS, I was worried that classes would assume outside knowledge (beyond prerequisite classes), and that classmates who had programmed in high school or earlier would be way ahead in terms of ability.

In my experience (and this is at a "top" school/program), this has not been the case at all. In fact, a lot of the people who already had prior experience seemed to struggle when it came to the more difficult or theoretically/mathematically rigorous classes (e.g. algorithms and data structures). I suspect that a lot of those who had already programmed did a lot of little personal projects or hacks. Maybe they learned some different languages, played around on the command line, generally dabbled in different areas of CS and software, etc. But I think the kinds of skills gained from doing these sorts of things are mostly trivial. The types of problems solved in most typical apps and websites are not that difficult technically.

You learn about a lot of the difficult problems in software and computability through CS material. The other difficult problems are engineering ones: things like necessarily complex systems with lots of dependencies, large-scale or scalable systems, software with high technical demands, comprehensive testing, etc. But that kind of thing is learned on the job, or at least on some kind of capable team working on an important project, not by writing little scripts or apps on your own. Learning how to write readable, modularized code can be naturally (and fairly quickly) picked up in intro classes if you are mindful and dedicated to improving.

However, this is just my own experience and observations. And there is no doubt that there are plenty of people out there who started at a young age, and are also superb at computer science and/or are fantastic engineers. But I don't think that in itself is a great predictor of someone's capability.


Reinforcement learning is not strictly what is considered supervised learning in ML, but it's very much in the same vein. A supervised learning algorithm doesn't have any "knowledge" of the domain it's learning about either; it just adjusts its parameters based on pairs of training examples and their class/output labels. RL attempts to find the best actions to take to maximize a measure of cumulative reward, i.e. a signal that provides an objective measure of performance (much like the class or target output of a training example used in supervised learning).

RL is definitely not unsupervised learning, which in contrast, attempts to find some structure in unlabeled data.
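As a toy illustration of that distinction, here's a minimal epsilon-greedy bandit sketch (the arms and their rewards are made up): the agent never sees a labeled "correct action," only a scalar reward after each choice, yet its value estimates still converge on the better arm.

```python
import random

random.seed(0)

true_rewards = [0.1, 0.9]  # unknown to the agent
estimates = [0.0, 0.0]     # the agent's running value estimates
counts = [0, 0]

for _ in range(200):
    # Epsilon-greedy: usually exploit the best current estimate, sometimes explore.
    if random.random() < 0.1:
        arm = random.randrange(2)
    else:
        arm = max(range(2), key=lambda a: estimates[a])
    reward = true_rewards[arm]  # the only feedback: a reward, not a label
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean

best = max(range(2), key=lambda a: estimates[a])
```

Contrast with supervised learning, where each training example comes paired with the exact output the model should have produced; here the agent has to discover the better action through trial and error.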


What do you mean by "bliss states"?


One-pointed concentration leads to very well-described states called jhāna (dhyāna). In deeper states, there can arise an orgasmic bliss, probably better than anything you can take.

There's a verse from one of Miten & Deva Premal's songs that describes some of this bliss well:

    I stood alone at the gateless gate
    too drunk on love to hesitate
    To the winds I cast my fate
    and the remnants of my fear.
Concentration is not mindfulness meditation, though.


For me, a bliss state feels similar to a runner's high. As said, it's not the point of meditation, but it is enjoyable as a side effect.


What did you like about that AMA? As many of the comments expressed, most of the answers were very vague, and the whole thing was pretty much a tease.

>Basically, it is a specialized method of learning faster and retaining knowledge more comprehensively. Think about it -- what percentage of what you learn do you retain? In all likelihood, you are losing information almost as fast as you are gaining. Second, assemble the information into a useful format within your mind. Then, find out where inventions emerge within the mind. Turns out, you won't like the answer. Your mind invents in a place you may not be able to access. Break into this space and you will be inventing quickly, methodically, and reliably. To solve the learning problem and the thinking problem will take some years.

>I invent using a specific system that was developed by myself and a colleague when we were in college. The system allows one to invent in whatever field you want and methodically (you will definitely solve the problem more effectively than even the practitioners within the field). However, there are specific limitations. However, it is one of the few "systems" that is methodical and that can be taught. It is not random. My colleague has something like 60-70 patents and is also a successful inventor and intrapreneur. He did not like being independent so he has stayed at a large company. I went solo.

Everyone wanted to know the details of these ideas, but the OP refused to provide any specifics, not even a very general overview.

Despite this, there were a few interesting tidbits concerning patents and about how he generally approaches his career and problem solving. I'd really like to know more about his process though.

