
Found the aggregate survey data:

https://goo.gl/forms/8bmb7dwWyBtS5nDM2

And it is pretty obvious the sample is biased, though take a look for yourself and comment if you notice anything too.


> And it is pretty obvious the sample is biased

Sure, who's claiming otherwise?


1/3 of respondents are still in school. 2/3 of respondents have been reading the blog for >1 year. 94% consider the blog "favorable".

Pretty skewed results.


What do you mean by "skewed"?

Obviously the SSC readership is a very long way from being an unbiased sample of (say) the whole world's population. No one would expect it to be, and in fact that's the point here: a survey of people who are obviously unusual in some respects (whatever combination of quirks turns someone into a likely SSC reader) turns out to be unusual in another respect with no obvious connection, namely containing substantially more firstborns than you'd expect.

Whatever it is that makes someone more likely to read SSC, it seems like it's probably a combination of things that surely can't correlate with birth order (e.g., being a native English speaker) and, broadly speaking, personality traits (e.g., being interested in the sort of thing Scott writes).

So the results show evidence of a link between birth order and personality, and (from the survey results) apparently a strong one. Which is interesting if true. And all of this only works because the SSC readership is far from typical of the population as a whole.
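
For what it's worth, the selection-effect logic can be sanity-checked with a toy simulation (all numbers here are made up; "trait" stands in for whatever makes someone a likely SSC reader):

    import random

    # Toy model: if the reader-selecting trait is independent of birth
    # order, readers show the population's base rate of firstborns.
    # Any substantial excess in a real survey therefore suggests the
    # trait correlates with birth order. All numbers are hypothetical.
    random.seed(0)
    firstborn_readers = []
    for _ in range(1_000_000):
        firstborn = random.random() < 0.4   # assumed base rate
        is_reader = random.random() < 0.05  # selection on the trait only
        if is_reader:
            firstborn_readers.append(firstborn)
    print(sum(firstborn_readers) / len(firstborn_readers))  # ~0.40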

So, again, what do you mean by "skewed"? And why is it a problem?


Past HN coverage of Differentiable Programming:

https://news.ycombinator.com/item?id=10828386


Stating the obvious, this clearly shows man is not the only animal to use fire as a tool.


oh god I thought it was a team name not an actual animal.


To be fair, "Firehawk Raptors" is a pretty cool name. Maybe even for a death metal band. Or the next JavaScript framework.


I genuinely thought it was an anarchist group burning strips of land preventively...


Not seeing a major difference between 3:2 and 16:9 aspect ratio displays. What am I missing?

http://www.imaging-resource.com/PRODS/panasonic-lx100/z-lx10...


Huge difference.

If you're in a country where they sell the Surface Book, go take a look and you'll immediately notice that there is a _lot_ more vertical space.


Did you even look at the graphic I linked to? If so, there’s no need to “go look” at anything, since that graphic very clearly shows there’s not a major difference.


The graphic minimizes the apparent difference by splitting it between top and bottom and maintaining the same diagonal size. 3:2 gives 18% more vertical space at the same width: https://www.popsci.com/gadgets/article/2013-02/lets-get-rid-...
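
A quick check of that figure (trivial Python sketch, comparing heights at equal width):

    # Height as a fraction of width for each aspect ratio.
    h_16x9 = 9 / 16
    h_3x2 = 2 / 3
    print(h_3x2 / h_16x9 - 1)  # 0.185..., i.e. ~18.5% taller at equal width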

If you're doing primarily vertical tasks (coding, web pages, etc.), the taller aspect ratio can be really helpful.

That said, I've mostly made my peace with 16:9. Write shorter functions (that's good anyway) and throw bars over to the side instead of top and bottom.


Aren't screen sizes typically based on that diagonal measurement (i.e., a 15" screen is 15" on the diagonal)? If so, doesn't that make parent's link more accurate?


Screen sizes are reported on the diagonal, but makers are not constrained to maintain the same diagonal size across different aspect ratios. For instance, the Pixel Chromebooks have 12.82" and 12.3" diagonal screens. I've never seen a 16:9 laptop with those sizes.

And specifically for an XPS 13 device, where they trim excess bezels, etc., the keyboard width becomes the limiting factor.
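
To make the diagonal-vs-aspect point concrete, here's a rough sketch of the geometry (the 13.3" 16:9 comparison panel is just an assumed typical size; the 12.3" figure is from the Chromebook above):

    import math

    # Physical width and height from a quoted diagonal and aspect ratio.
    def dims(diagonal, aw, ah):
        a = aw / ah
        h = diagonal / math.hypot(a, 1)  # d = sqrt(w^2 + h^2), w = a*h
        return a * h, h

    print(dims(12.3, 3, 2))    # ~10.23" x 6.82" (Pixel-style 3:2)
    print(dims(13.3, 16, 9))   # ~11.59" x 6.52" (typical 16:9 laptop)
    # The smaller-diagonal 3:2 panel is actually the taller one.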


The graphic you linked shows the different aspect ratios available to cameras from Panasonic's LX series. If there's "not a major difference" then why is the LX series so highly regarded for this feature?


Whatever you say.

Here's a better image though, rather than a really weird bunch of lines for camera sensors: https://wolfcrow.com/wp-content/uploads/2013/01/43169aspectc...

It's about 20% more vertical space, as you can see (and once you put the top and bottom together, the difference is a lot more impressive).


16:10 vs 16:9 is very noticeable for me at screen sizes below 15 inches and 3:2 is even better.


Just a bit more vertical space (a few? a dozen? more lines of code per screen) vs. a slightly wider screen (better for movies, and maybe games). It may be nitpicking; I've actually grown used to the 16:9 aspect ratio and don't really mind it.
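
To put rough numbers on the "a dozen more lines" guess (a toy estimate; the 1920 px width and 20 px line height are assumptions):

    # Visible editor lines on equal-width panels at an assumed line height.
    line_px = 20
    for name, width, height in [("16:9", 1920, 1080), ("3:2", 1920, 1280)]:
        print(name, height // line_px, "lines")  # 54 vs. 64: ten more lines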


i find 16:9 allows me to comfortably fit a text editor (with tree browser, minimap, and 100 columns of text) on the left and a terminal on the right. i need to shrink the text further than is comfortable for me if i want that layout on most 3:2 displays.


More vertical space = better for reading code. Still wide enough to split. I generally use 3 vertical splits (or sometimes more), so I'm personally not convinced, but that's the argument.


Why does the XPS keep putting the camera at the base of the screen instead of at the top, where it is on most laptops?


To have a smaller bezel at the top because most of their target audience doesn't use the webcam anyway.


Business folks have to put tape over the camera half the time anyway, so Dell puts only minimal effort into it.


And that's how we'd get the Dell XPS 13 with Notch™.


No space available there due to the InfinityEdge display.


There’s no single definition of cognitive complexity within that document, just a bunch of metrics.


Related HN discussion of your blog post benchmarking CPUs vs. GPUs on Google Compute Engine: https://news.ycombinator.com/item?id=15940724


>> “QUESTION: Look at the two photos below and see if you can figure out which person is real.”

>> “ANSWER: Sorry! This was a trick question. Both images were generated by computers.”

Not really a trick question when, even if you know they’re both fake, the only way to confirm you’re right is to answer wrong.


I’d also like to comment on the “ha, fooled you!” tactic used in this article, where the author asks the reader to choose the photo of the real person from two given photos and then reveals that, gasp, both are computer generated.

Whenever I run into this often-used tactic in papers and talks, I can’t help but feel – no, the author didn’t just convince me of their point. Instead they convinced me that they don’t value being trustworthy. Often I will just stop reading the article right then. Or if I do continue I will become unforgivingly skeptical of any claim that doesn’t provide a citation that is independently verifiable.

Use of the tactic feels particularly peculiar in an article which itself grasps towards the implications of a future in which photos and videos are no longer trustworthy, a future in which personal reputation will be more meaningful.


Yeah, I thought both looked a tiny bit off. I think it has to do with the reflection in the eyes which is a tiny bit inconsistent, among other things.


Maybe so (they fooled me), but you were already prepped to scrutinize them. To the point others have made, we’ll soon need to be constantly prepared to assume fakery.

The technology of fakery is rising to meet the “everything is fake news” moment.


I immediately picked the right image, because I saw whisker stubble on the left, and I already knew that image-generation AIs seem to have a thing for painting whisker stubble all over anything even remotely resembling a male face.

Surprise! Guess I should have considered the possibility of a trick question.


On the same note, it depends on how you define “scanning”, since they’re clearly scanning.

You can do the same thing they’re doing with a bunch of pinholes, a video feed with variations in the lighting, etc.


I think "scanning" would imply compositing info gathered at different instances of time to reconstruct 3-D models ... e.g., LIDAR captures a point cloud with points measured at different times and tries to reconstruct objects from that point cloud. Perhaps the time it takes LIDAR to gather the whole point cloud is small, but objects will drift during that time and the reconstruction algorithm might need to account for this. On the other hand, the article's method captures all info in one instantaneous image and reconstructs based on the known, recorded, bias of the irregular lens and the implied 3-D locations of objects.


If you want to be really technical, each data point is not collected at the exact same instant.


Even if that's the case, though, it wouldn't interfere with the functioning of this tech.


that's technically true for e.g. DSLRs, too.


