Piero is now at Stanford :). Piero is definitely worth listening to. He's been in the weeds, knows the details and yet keeps the bigger picture in mind. Also one of the nicest people around.
I took this class and can vouch for it. They update the class every year to go over recent research - not an easy task in such a fast moving field. For example, this offering covers the Transformer architecture which has recently been used to obtain state of the art results across a wide range of NLP tasks.
Tangentially related if you're interested in keeping up to date with what's going on in the field:
Sebastian Ruder's blog has many good posts on recent advancements in NLP (literature reviews, conference highlights): http://ruder.io/
The posts are concise and accessible enough that you can skim through them quickly. Then you can go check out the paper directly if something piques your interest.
Since we're on the topic of tutorials to understand neural nets and modern deep learning, I will throw in Michael Nielsen's excellently written free online "book" on neural nets. It's really a set of 6 long posts that gets you from 0 to understanding all of the fundamentals with almost no prerequisite math needed.
Using clear and easy to understand language, Michael explains neural nets, the backprop algorithm, challenges in training these models, some commonly used modern building blocks and more:
This book opened my eyes to the power of textbooks written in such an easy-to-understand, clear style. I bet it took repeated revisions, feedback from others, and hours of work, but such writing is a huge value-add to the world.
I'm not too knowledgeable on how these deals work, but figured someone on HN would know:
A quick Google search shows that Twilio's market cap is currently $7.4 billion. Does this $2 billion "all-stock" transaction mean that they are giving away over a quarter of the company to pay for this acquisition? Or how else should I read this?
I haven't read the details, but not exactly. They will likely dilute their own outstanding shares by issuing the additional shares needed to acquire the company at that value. This lowers the percentage of ownership that each share represents, but now every share owns a piece of more assets.
So yes, they are creating new shares, making every other share worth proportionally less, but it isn't a direct transfer of existing shares. You'll notice companies get board approval to set aside a number of theoretical shares they could create if they wanted to, for things like this or secondary offerings, etc.
Most likely this will be done by issuing new equity. The press release states that they will exchange Twilio Class A common stock per share of SendGrid common stock. They defined the exchange ratio (that's why the price is ~$2B and not a specific number) and will issue as many shares as they need to based on that ratio:
> 0.485 shares of Twilio Class A common stock per share of SendGrid common stock
Technically, issuing new shares dilutes every existing shareholder's ownership. But, if the market deems this to be an intelligent combination that was valued correctly, the market cap of the combined company should be ~$9.4B or (ideally) more. So the goal is for existing shareholders to hold, at worst, the same amount of value in dollar terms as before or, at best, more value as a result of the acquisition.
The other commenters gave good answers, but to simplify (in case it helps): the issuance of new stock changes the arithmetic from 2 / 7.4 ≈ 27% to 2 / (7.4 + 2) ≈ 21%.
They will be creating stock to fund the purchase, so in essence, yes. The number of shares outstanding will rise by $2B (price) / $76 (Twilio price as of the transaction agreement) = ~26M new shares of Twilio issued to SendGrid owners.
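To make the arithmetic in the last couple of comments concrete, here's a quick back-of-the-envelope sketch. The figures are the rough ones quoted in this thread (~$7.4B market cap, ~$2B deal, ~$76/share), not exact deal terms:

```python
# Back-of-the-envelope dilution math for an all-stock acquisition.
# Figures are the approximate ones quoted in the thread, not exact deal terms.
deal_value = 2.0e9            # ~$2B acquisition price
acquirer_market_cap = 7.4e9   # ~$7.4B Twilio market cap
acquirer_share_price = 76.0   # ~$76/share at the agreement

# Naive reading: fraction of the *current* company handed over.
naive_fraction = deal_value / acquirer_market_cap

# Actual effect: new shares are issued, so the denominator grows too.
diluted_fraction = deal_value / (acquirer_market_cap + deal_value)

# Number of new shares issued at the agreed price.
new_shares = deal_value / acquirer_share_price

print(f"naive:   {naive_fraction:.1%}")             # ~27.0%
print(f"diluted: {diluted_fraction:.1%}")           # ~21.3%
print(f"new shares issued: {new_shares / 1e6:.0f}M")  # ~26M
```

The key point is that the denominator grows along with the share count, which is why existing holders give up ~21% of the combined company rather than ~27% of the old one.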
> Auto-encoders are overplayed, mostly because they're a pretty easy intro ML project.
I think you mean "normal" autoencoders, like denoising autoencoders or the identity autoencoder that are used for feature learning. Note that variational autoencoders are not really autoencoders in that sense. They are called “autoencoders” only because the final training objective that derives from the probabilistic setup does have an encoder and a decoder, and resembles a traditional autoencoder.
Traditional autoencoders are the common intro projects used for representation learning and to bootstrap other networks, not variational autoencoders.
The insight that made it possible for me to grasp VAEs was digging into the probabilistic setup that leads to this formulation. The neural networks are "just" powerful function approximators applied on top of this probabilistic framework.
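To make the contrast in the comments above concrete, here's a toy numpy sketch of the two training objectives. A plain autoencoder minimizes reconstruction error only; a VAE's loss adds a KL term that falls out of the probabilistic setup. Everything here (shapes, the linear "networks", the fixed log-variance) is made up purely for illustration, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))  # toy batch: 8 inputs of dimension 4

# --- plain autoencoder: encode, decode, reconstruction loss only ---
W_enc = rng.normal(scale=0.1, size=(4, 2))  # made-up linear encoder
W_dec = rng.normal(scale=0.1, size=(2, 4))  # made-up linear decoder
z = x @ W_enc
x_hat = z @ W_dec
ae_loss = np.mean((x - x_hat) ** 2)

# --- VAE-style objective: the encoder outputs a distribution q(z|x) = N(mu, sigma^2) ---
mu = x @ W_enc                      # mean head (reusing the toy encoder)
log_var = np.full_like(mu, -1.0)    # made-up fixed log-variance head
eps = rng.normal(size=mu.shape)
z_sampled = mu + np.exp(0.5 * log_var) * eps  # reparameterization trick
x_hat_vae = z_sampled @ W_dec
recon = np.mean((x - x_hat_vae) ** 2)

# KL( N(mu, sigma^2) || N(0, 1) ), averaged over the batch
kl = -0.5 * np.mean(1 + log_var - mu**2 - np.exp(log_var))

vae_loss = recon + kl
print(ae_loss, vae_loss)
```

The encoder/decoder pair makes the VAE loss *look* like an autoencoder, but the KL term and the sampling step are what tie it back to the probabilistic model.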
"Kentley-Klay, 43, is an improbable entrant into the crowded race to develop self-driving cars. He has no engineering degree, no background in computer science. Through his early 30s, he was a successful artist and designer—creating music videos and ads for major companies like McDonald’s and Birds Eye frozen vegetables."
"In a move that some will call devious and others will call ingenious, Kentley-Klay reached out to some of the biggest names in the field and told them he was making a documentary on the rise of self-driving cars. The plan was to mine these people for information and feel out potential partners."
The whole thing seems insane to me. I don't understand why you would fund someone with no tech or business background... I'm sure he hired some good people, but it still doesn't make sense.
As to why the investors booted him? Investors generally don't kick out the CEO unless they did something crazy. It doesn't look good for the investor, and it makes the investment look bad (and follow-on investment less likely).
I'm curious how likely it is that the driving sequence (on real roads, not the track) from the Bloomberg video is as real as it looks. If so, is that impressive? It certainly looks more advanced than other demos that have come out, but it's unclear what they left out of the video.
If that were the case, you'd have a better managed transition.
Ideally you'd have the CEO onboard with this from the start. Alternatively when it came time to boot them, you'd a) already have someone in place to take over b) be able to pay them off sufficiently to not make a fuss, and help you frame it as transitioning to a new role (i.e. they'd stay on as an advisor, move to a COO role etc.).
Yes, though I think throwing out the CEO of an established company with revenue is quite a bit different from throwing out the CEO of a pre-product startup.
With a startup, you've invested in the founding team. If you throw them out... you've effectively thrown out a significant part of what you invested in. It also calls into question your own judgement as an investor. And it reduces your standing in the eyes of companies looking for investment.
Overall, unless something really really bad happens you don't want to throw out the founder...
If you just don't have confidence? You either let the investment go, or you try and pump it and get someone to buy you out in a subsequent round.
The only possibility I can see is that someone wants to buy Zoox and made a really generous offer... but the Zoox CEO blocked the sale. The acquiring company is willing to take Zoox without the CEO to get Zoox out of the market and acquire the team for their own projects...
Ideally yes, likely, no. From the sparse details we do have it sounds like there were some fundamental disagreements between the board and CEO. The board may have seen firing the CEO as the path of least resistance. It seems harsh, but without extensive insider details, it's hard to know whether it was the right call.
Tim has lots of experience in business and tech. From scratch he built a self driving car company valued at $3.2 billion, whose autonomous OS is outperforming efforts from major automakers and tech companies and with a fraction of the resources. Zoox has gotten to where they are now on about $300 million. Others have spent far more and have a lot less to show for it.
Your comment history suggests that you have some relationship with Zoox that you're not disclosing.
You've commented on Zoox several times before in an overly enthusiastic manner. You've also commented several times before on autonomy and your comments have been called out for astroturfing in a couple of instances.
Readers please beware and take this comment with a grain of salt.
I've been accused of working for Waymo and Cruise too, because I defend them against the unfounded bullshit you guys spread about them. And about me, too, apparently. I'm a self driving car nerd, I moderate a subreddit dedicated to the subject under the same username I have here, and I've been following the industry, the technology and its players since the DARPA days. Relative to the rest of the industry Zoox is doing incredibly well, so if you want to challenge me about something, how about instead of making up teleological conspiracy theories, challenge me on the facts.
I'm interested in the facts of what they're doing so well - do they have deployed systems taking passenger rides? This is/was my industry, so I'm not just asking idly.
As an aside, the lack of clarity about who you do work for is probably what's contributing to the "teleological conspiracy theories".
Zoox did several years of closed course testing and started on public roads in San Francisco about 1 year ago with just 10 cars. Last fall they did a press event and took a few dozen journalists around for rides, and everyone had good things to say about the performance of their vehicles. Their first set of disengagement reports for 2017 had them at 1 every 430 miles, which is worse than Waymo or Cruise, but way ahead of everyone else, and especially impressive given how few test miles they had racked up at the time. It lends credibility to the claims some have made that Jesse Levinson is the brightest guy in the industry.
Ashlee Vance of Bloomberg did a big puff piece on Zoox a month ago. The video is pretty interesting; it's the first we've been able to see of their prototypes in action, and I had been waiting years to see whether they were actually following through with their original vision:
A couple days ago some pics of an unidentified AV test vehicle were spotted, and one of the smart guys in my subreddit called it out as a zooxmobile with a new sensor configuration arranged to match the configuration of their prototypes:
Rock on, dude. Eff the haters. People love to make claims like above or downvote as soon as a positive comment is posted on something they don’t like or someone else comes to their enemies defense.
I know a programmer who worked for BioWare in my home city who has been with Zoox for about a year. I met him once, years before he left for SF, because we have a mutual ex-girlfriend. So yeah, I'm right up in there.
Hey, as someone who is just observing this and doesn't have a dog in the fight -- thanks. (I am assuming that -- although you didn't say it -- this is a full disclosure of ALL your conflicts of interest.)
I am not quite sure what I believe about when it is appropriate to accuse someone of having undisclosed conflicts of interest. But I am certain that the best way to respond to such an accusation is with a full disclosure of all conflicts of interest. Regardless of whether the accusation was appropriate, the disclosure ends the issue. And conflicts of interest (to one degree or another) are perfectly normal and do not invalidate a person's opinions or eliminate them as a useful contributor to the discussion.
Accusing someone of astroturfing (or in your case, merely suggesting it) undermines the integrity of online discussion. It has a chilling effect on perspectives that may be viewed as controversial.
Just because someone is enthusiastic doesn’t mean they’re a shill. Even if you’re ultimately correct you shouldn’t wield that accusation without exceptional evidence - being an apologist for a company is not exceptional evidence. Cynicism has a place but you can’t just use it like a blunt instrument.
Valued at 3.2B USD, by investors who have just fired him...
It's a sign of how crazy things are when 300M USD can be considered a small amount of money to spend on technology development (particularly for a product that doesn't actually require anything inherently very-very expensive, aside from staffing costs).
For people that know more about web security than I: Is there a reason it isn't good practice to hash the password client side so that the backends only ever see the hashed password and there is no chance for such mistakes?
Realize the point of hashing the password is to make sure the thing users send to you is different from the thing you store. You'll still have to hash the hashes again on your end; otherwise anyone who gets access to your stored passwords could use them to log in.
In particular, the point is to make it so that the thing you store can't actually be used to authenticate -- only to verify. So if you're doing it right, the client can't just send the hash, because that wouldn't actually authenticate them.
But at least, with salt, it wouldn't be applicable to other sites, just that one. Better to just never reuse a password, though. Honestly, sites should just standardize on a password-changing protocol; that would go a long way toward making passwords actually disposable.
I don't think a password changing protocol would help make passwords disposable. Making people change passwords often will result in people reusing more passwords.
No, the point is for password managers. The password manager would regularly reset all the passwords... until someone accesses your password manager and locks you out of everything!
Ultimately, what the client sends to the server to get a successful authentication _is_ effectively the password (whether that's reflected in the UI or not). So if you hash the password on the client side but not on the server, it's almost as bad as saving clear text passwords on the server.
You could hash it twice (once on the server once on the client) I suppose, but I'm not entirely sure what the benefit of that would be.
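A minimal sketch of the "hash twice" idea, using Python's stdlib for illustration: the client sends a slow, salted hash of the password (so the server never sees the raw password), and the server hashes that value again before storing it (so a leaked database entry can't simply be replayed as a login credential). The salts, iteration counts, and example password here are all made up:

```python
import hashlib
import hmac
import os

def client_side_hash(password: str, user_salt: bytes) -> bytes:
    # Slow, salted hash done on the client; this is what goes over the wire.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), user_salt, 100_000)

def server_store(received: bytes, server_salt: bytes) -> bytes:
    # The server hashes the received value again; only this result is stored.
    return hashlib.pbkdf2_hmac("sha256", received, server_salt, 1_000)

# Registration (salts below are illustrative; use random per-user salts in practice)
user_salt = b"per-user-salt"
server_salt = os.urandom(16)
wire_value = client_side_hash("hunter2", user_salt)
stored = server_store(wire_value, server_salt)

# Login: the client recomputes the wire value; the server re-derives and compares.
attempt = client_side_hash("hunter2", user_salt)
ok = hmac.compare_digest(server_store(attempt, server_salt), stored)
print(ok)  # True

# Crucially, the stored value alone is NOT a valid wire value, so a leaked
# database row doesn't let an attacker log in directly.
print(hmac.compare_digest(stored, wire_value))  # False
```

This addresses the parent's question about the benefit: the raw password never reaches the server (limiting reuse damage across sites), while the second hash keeps the stored value unusable for login, as the sibling comments point out.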
A benefit would be that under the sort of circumstance in the OP, assuming sites salted the input passwords, the hashes would need reversing in order to acquire the password and so reuse issues could be obviated. But I don't think that's really worth it when password managers are around.
I'm imagining we have a system where a client signs, and timestamps, a hash that's sent meaning old hashes wouldn't be accepted and reducing hash "replay" possibilities ... but now I'm amateurishly trying to design a crypto scheme ... never a good idea.
Assuming that you are referring to browsers as the client here: one simple reason is that client-side data can always be manipulated, so it does not really make any difference. It might just give a false sense of safety but does not change much.
In case we are talking about multi-tier applications where probably LDAP or AD is used to store the credentials then the back end is the one responsible for doing the hashing.
I can't think of a good reason not to hash on the client side (in addition to doing a further hash on the server side -- you don't want the hash stored on the server to be able to be used to log in, in case the database of hashed passwords is leaked). The only thing a bit trickier is configuring the work factor so that it can be done in a reasonable amount of time on all devices that the user is likely to use.
Ideally all users would change their passwords to something completely different in the event of a leak. But realistically this just doesn't happen -- some users refuse to change their passwords, and others just change one character. If only the client-side hash is leaked rather than the raw password, you can greatly mitigate the damage by just changing the salt at the next login.
If you don't have control over the client, it's a bad idea: your suggestion means the password would effectively be the hash itself, and an attacker wouldn't need to know the original password.
For one, you expose your hashing strategy. Not that security by obscurity is the goal; but there's no real benefit. Not logging the password is the better mitigation strategy.