Hacker News

I use the heuristic that many of us here probably use, consciously or not, after our years of experience with math problems: if it's a math problem, as opposed to a problem in some other domain that ends up requiring math (science, accounting, carpentry, etc.), there will be some degree of artifice in the problem. Somehow, the numbers will just happen to end up being integers or perfect squares or exact multiples or whatever, so that there is an easy way to solve this specific problem (not the general problem of this sort, but this specific instance).

In this case, you examine the numbers and spot that they are both "one off from one" fractions, so the sum is roughly 1+1. The test writers then see to it that only one answer choice matches the result of the trick they were testing for.

Kids who get a lot of math internalize this heuristic, which actually trips them up briefly when they start having real science classes, because they think they've done something wrong if the answer turns out to be 5.6293 or 0.07291 instead of 4 or 9 or 5/8 or sqrt(10). They assume they missed the trick.



I've been tutoring middle school kids one-on-one (at the 8th grade level) in underprivileged areas, and what I find is that they have little understanding of what fractions are or represent at all. For example, I asked a student what the decimal value of 1/2 was (I had explained what decimal values were beforehand) and she didn't know. I was shocked (maybe it's because we're CS people that we have a special affinity for 2s). As a further test I gave her a piece of paper and asked her to fold it in half, and she did it instantly. I then asked where on the paper the 1/4 mark was. Again, she didn't know. This stemmed from a deeper problem: not understanding that (1/2)/2 is 1/4. After playing with the paper, folding it in so many ways, she started to internalize what these fractions meant.
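The paper-folding relation the parent describes (each fold halves the piece, so (1/2)/2 = 1/4) can be checked exactly with Python's fractions module; this is just an illustration, not anything the tutor used:

```python
from fractions import Fraction

half = Fraction(1, 2)
print(float(half))               # the decimal value of 1/2: 0.5

quarter = half / 2               # folding the half in half again
print(quarter)                   # 1/4
print(quarter == Fraction(1, 4)) # True

# Each fold halves the piece: 1/2, 1/4, 1/8, ...
piece = Fraction(1)
for fold in range(3):
    piece /= 2
    print(f"after fold {fold + 1}: {piece}")
```

Working with Fraction rather than floats keeps the arithmetic exact, which mirrors the physical intuition of the folded paper.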


When I did my undergraduate degree in physics, I think one of the best things I learned early on was estimation skills. I was used to doing things precisely and finding the tricks. Our professors joked about things just needing to be right "within an order of magnitude", and it took me two years to internalize that.

When you deal with the real world there is always a lot of error and uncertainty in measurement. Simply being within 10% of the right answer is generally sufficient, and quickly getting that answer beats the 99.99%-accurate answer if it takes you one-tenth the time.


This is something I find tremendously useful in programming, but at the same time find a lot of other developers amazed when it's used.

I don't care if the dataset in memory is 553MB or 632MB - what I really need to know is whether it's "a few tens of MB", "a few hundreds of MB", or "a few thousand MB".

I don't care if the API server can service 7321 simultaneous requests or 6578 - I just need to know if it's "a few hundred", "a few thousand", or "a few tens of thousands".

You can solve an enormous number of engineering and architecture problems with a reliable order-of-magnitude estimate - at the very least you can quickly exclude solutions that are vastly under (or over) provisioned for the problem you're trying to solve.

A good order-of-magnitude estimate is also a great error check for a more detailed calculation: if my quick estimate said "5000-ish, plus or minus 50%", and your calculation says "24,152", one of us has got something wrong.
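Both ideas above can be sketched in a few lines of Python (the numbers are taken from the examples; the helper names are mine):

```python
import math

def magnitude(x):
    """Order of magnitude: the power of ten a positive quantity falls in."""
    return math.floor(math.log10(x))

# 553 MB and 632 MB are the same answer at this resolution: "hundreds of MB".
print(magnitude(553), magnitude(632))    # 2 2

# 7321 and 6578 simultaneous requests are both "a few thousand".
print(magnitude(7321), magnitude(6578))  # 3 3

# Using a rough estimate to sanity-check a detailed calculation:
estimate, detailed = 5000, 24152
if not 0.5 * estimate <= detailed <= 1.5 * estimate:
    print("one of us has got something wrong")
```

The point is not the helper itself but the discipline: compare at the resolution the decision actually needs.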


I remember this from my first university physics class. We would derive the equation of motion for a cannonball, to find the angle that maximizes range. Everybody knew the answer of course, but we'd always just used the formula. This time we'd start with the obvious integration: motion plus attraction between two point masses, integrated over the flight time, finding the point where the trajectory crosses the ground plane.

And then the teacher just took the range from the integration and the formula, set them against each other, and put a ~= sign between them. I believe I actually stood up and said you can't do that, and we had the first of many discussions about exactness.
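The textbook result being approximated here can be checked numerically. Assuming flat ground and no air resistance (my assumptions, not necessarily the exact setup of that class), the range is v² sin(2θ)/g, which peaks at 45 degrees:

```python
import math

g, v = 9.81, 30.0  # gravity (m/s^2) and muzzle speed (m/s); illustrative values

def range_flat(theta_deg):
    """Projectile range on flat ground, no drag: v^2 * sin(2*theta) / g."""
    return v * v * math.sin(math.radians(2 * theta_deg)) / g

# Scan launch angles and find the one with maximum range.
best = max(range(1, 90), key=range_flat)
print(best)  # 45 -- the answer everybody knew
```

Once the two-point-mass integration is approximated by this flat-Earth formula, the ~= the teacher wrote is exactly this kind of "close enough over the cannon's length scale" step.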

That was scary.

That was my first run-in with what I then considered the central article of my faith: that you can derive the structure of the physical world from first principles. Throwing away terms in an equation in order to arrive at correct physical laws... I don't know, I considered it sacrilege or something. Of course I've since learned that deriving all of physics from its own basic laws doesn't work, and the way we fix that is to delete "inconvenient" terms in the equations when required. Deriving physics from a few mathematical laws is completely impossible. You can't even correctly derive the (mathematical) fields used in physics, so the very numbers one uses to do physics aren't actually valid mathematical numbers.

So the relation between physics and mathematics is not that one is based on the other; that was tried, it didn't work out, and people have almost completely given up. It was replaced by a marriage of convenience ("this works! Sure, it won't validate mathematically, but the numbers look really similar"), ignoring at least a dozen elephants standing in the way and acting like they don't exist.


You may enjoy Feynman's excellent talk "The Relation of Mathematics and Physics": http://www.youtube.com/watch?v=kd0xTfdt6qw#t=1m05s



