This is only true when there isn't enough information supplied. You can't expect a doctor to figure out what the problem is if all you tell them is that you have a pain in your side; the same goes for the lawyer scenario.


Exactly. It's not that the doctors know anything more here; they just don't (and can't) quantify their confidence, or reliably update it later on new evidence.

No offense to doctors/lawyers meant here; all human brains are bad at that.
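
To make "quantify and update confidence" concrete: this is essentially Bayes' rule. A minimal sketch in Python, where the base rate, sensitivity, and specificity are invented purely for illustration:

    # Bayesian-update sketch: how a quantified diagnostic confidence
    # should change as test evidence comes in.
    # All numbers below are made up for illustration.

    def posterior(prior, sensitivity, specificity, test_positive):
        """Update P(disease) given one test result via Bayes' rule."""
        if test_positive:
            likelihood_sick = sensitivity          # P(+ | disease)
            likelihood_healthy = 1 - specificity   # P(+ | no disease)
        else:
            likelihood_sick = 1 - sensitivity      # P(- | disease)
            likelihood_healthy = specificity       # P(- | no disease)
        numerator = prior * likelihood_sick
        return numerator / (numerator + (1 - prior) * likelihood_healthy)

    p = 0.01                                     # prior: 1% base rate
    p = posterior(p, 0.90, 0.95, test_positive=True)
    print(f"after one positive test: {p:.1%}")   # ~15.4%
    p = posterior(p, 0.90, 0.95, test_positive=True)
    print(f"after a second positive: {p:.1%}")   # ~76.6%

Note that even a fairly good test only moves a 1% prior to ~15% — the kind of counterintuitive result human brains routinely get wrong.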


Did you consider that that's not the point I'm making?

The confidences here provide misinformation. This is more harmful than no information.


How is it misinformation? A human doctor would not state a diagnosis in percentages, because we as humans have a hard time grasping probabilities intuitively. That doesn't mean a probabilistic diagnosis wouldn't be more accurate.
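
One way to make "more accurate" testable: probabilistic forecasts can be scored against actual outcomes, e.g. with a Brier score (mean squared error; lower is better). A toy sketch with invented forecasts and outcomes:

    # Sketch: scoring probabilistic vs. binary diagnoses with the
    # Brier score. Forecasts and outcomes are invented for illustration.

    def brier(forecasts, outcomes):
        """Mean squared error between predicted probabilities and 0/1 outcomes."""
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

    outcomes = [1, 0, 0, 1, 0]          # 1 = patient actually had the disease

    confident_human = [1.0, 0.0, 1.0, 1.0, 0.0]   # binary "yes/no" calls
    calibrated_model = [0.8, 0.1, 0.4, 0.7, 0.2]  # hedged probabilities

    print(brier(confident_human, outcomes))   # 0.2  (one outright miss)
    print(brier(calibrated_model, outcomes))  # 0.068

A well-calibrated hedge beats confident yes/no calls on this metric, even though the probabilities "feel" less decisive.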

The doctor's job is to give me as much information about the objective state of my physical condition as possible. But when it comes to making choices about my treatment, say accepting or rejecting an experimental drug with potentially nasty side effects, what to do with that information should be entirely my own value judgement.


I've learned a bit about Watson from internal IBM information, and this is something they understand and are working on. There are serious ethical concerns about what to tell someone even when the diagnosis is quite compelling; in other words, "you have 6 months to live" needs to come from a human. Obviously, the approach is to have it work as a tool for a doctor, not as a WebMD-style self-diagnosis service. There are all kinds of follow-up questions that you'd need to be a doctor even to answer, because they'd be couched in medical lingo, e.g. systolic/diastolic blood pressure.


They are only misinformation if what they represent is misunderstood.



