Hacker News | adobkin's comments

Suppose you give a test to a room full of perfectly average B-grade students who know they are average B-grade students. Most will get a B but a few will do a little bit better and a few will do a little bit worse.

Now, you focus on everyone who got a C, and you find that every one of them estimated themselves as a B student. From this you conclude that low performers overestimate their ability.

Then you look at the A students and find that they all also thought they were B students. You conclude that high performers underestimate their ability.

But this is just a statistical artifact! It's called regression to the mean, and this study does not account for it. If you isolate the low performers out of a larger group, you will almost always find that they expected to do better (which, given their actual ability, they were right to expect). You are just doing statistics wrong!
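The selection artifact described above can be reproduced in a few lines. This is a minimal sketch, assuming every student has the same true ability (a "B", here 80 points) and an unbiased test score; the 73-point cutoff and the noise level are made up for illustration:

```python
import random

random.seed(0)

# Every student is truly a B student (ability 80) and predicts exactly that.
# The test adds unbiased noise, so predictions are correct on average.
predictions, actuals = [], []
for _ in range(100_000):
    predictions.append(80)
    actuals.append(80 + random.gauss(0, 7))

# Now condition on the "low performers": everyone who scored below 73 (a C).
low = [(p, a) for p, a in zip(predictions, actuals) if a < 73]
mean_gap = sum(p - a for p, a in low) / len(low)

# Within this selected group, predictions exceed scores by more than 7 points
# on average -- pure selection, since no student was actually overconfident.
print(f"low performers overestimated by {mean_gap:.1f} points on average")
```

Running the same selection on the high scorers would show the mirror-image "underestimate"; the gap appears on both tails as soon as you condition on the noisy outcome.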


That's not what they're doing here. They're asking the students how confident they are that they got what they think they got. It doesn't matter what the C group actually got, or what they think they got: they were still more confident than the B group that their estimated grade matched their actual grade, while the A group was less confident than the B group.


To be honest, I misunderstood the study when I first read it. However, the study is also not saying what you're saying. The authors had a bunch of students take a test, predict their own score on it, and rate how confident they were in that prediction.

The study says "for low performers, the less calibrated their self-estimates were the more confident they were in their accuracy". By "calibrated" the authors mean that the actual and predicted scores were the same. In other words, the C and D students were very confident that they had gotten As and Bs.

The authors go on to explain:

"In other words, [for low performers] the higher the discrepancy between estimated score and actual scores, the greater participants' confidence that their estimated scores were close to their actual scores... As expected, high performers showed the opposite pattern. High levels of miscalibration predicted a decrease in SOJ [second-order judgment]..."
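As a rough sketch of what that correlation means: miscalibration is the gap between estimated and actual score, and the SOJ is the confidence rating attached to the estimate. Here is a toy calculation in which the scores, confidence ratings, and the 70-point cutoff are all invented to mirror the pattern the quote describes, not taken from the study:

```python
# Each row: (estimated score, actual score, confidence in the estimate, 1-10).
# All numbers are hypothetical.
students = [
    (85, 55, 9), (80, 60, 8), (82, 65, 7),   # low performers
    (78, 95, 4), (80, 92, 5), (83, 90, 6),   # high performers
]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def miscal_vs_soj(group):
    # Miscalibration: how far the estimate missed the actual score.
    miscal = [abs(est - act) for est, act, _ in group]
    soj = [conf for _, _, conf in group]
    return pearson(miscal, soj)

low = [s for s in students if s[1] < 70]
high = [s for s in students if s[1] >= 70]

r_low = miscal_vs_soj(low)    # positive: more miscalibrated, more confident
r_high = miscal_vs_soj(high)  # negative: more miscalibrated, less confident
print(round(r_low, 2), round(r_high, 2))
```

A positive correlation for the low group and a negative one for the high group is exactly the asymmetry the quoted passage reports.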

Suppose everyone in the class was a B student and knew it. After taking the test, most got Bs, but a few got As and a few got Cs and Ds.

Focusing exclusively on the D students (the low performers), we find that they all expected to get a B. For these low-performing students, the more miscalibrated they were, the more confident they were. This makes sense because they expected a B and had no reason to expect a C or D.

Now let's look at the A students. It makes sense that the more miscalibrated they are, the less confident they are, because they all expected to get a B.


This type of vulnerability can be used to aid phishing attacks, but it cannot be directly exploited by an attacker to obtain or modify user data. Phishing attacks are not listed as qualifying in the Program Rules (http://www.google.com/about/appsecurity/reward-program/), although they are evaluated as security issues on a case-by-case basis.

In this case a bug was filed, but it took some prodding to get it fixed.

