Hacker News

>he/she is using the same procedure: assume axioms; make hypothesis; prove or disprove using observations, deduction/induction; repeat till you have reached your goal

Absolutely! This problem-solving process is very similar to what we were asked to do in physics and chemistry on a nightly/weekly basis. I've found chemistry problem sets to be very similar to programming.

However, what you describe is not math, not as it is taught anywhere between first grade and Calc III. Being "good at math" is exactly equal to being good at manipulating symbols (first numbers, then variables) according to a memorized set of rules and procedures. The higher you go, the weirder the rules get. Symbol manipulation and the memorization of probability and geometry formulas are also the entirety of the "skills" represented by standardized tests of mathematics.

Programming is about coming up with the rules and procedures by which the computer should manipulate symbols. It's really completely different.

Excellent mathematical skills would make me a good compiler. Not a writer of compilers, but a compiler. Of course, that might be useful if I wanted to work on one (I'd be able to check its work by hand), but in practice, I'm sure that sort of work is done billions of times faster and with much greater accuracy by computers than by humans anyway.



> not as it is taught anywhere between first grade and Calc III

This is the problem with most programmers talking about math. Everyone seems to think that math after calculus is just a more abstract version of calculus, with more complicated symbols and more complicated rules for manipulating those symbols.

I once talked to an engineering student who, after finding out I'm a mathematician, boasted to me that he had taken both differential equations classes, so he was going to tell me some cool things about math. I was sitting with my friend who is getting her PhD in dynamical systems; the irony was rich and totally lost on him.

Anyone who says "being good at math is exactly equal to being good at manipulating symbols" is completely ignorant. I suggest you update your beliefs about mathematics, because right now you're the butt of the irony.


The qualifier was important.

If, by arguing that math skills are important to programming, you mean "skills in post-Calculus math," then say so. If you say "math," people will continue to believe that you mean math in the way that they experienced it, i.e. math ending at or shortly after calculus.

Lots of mathematicians freely admit that they're terrible at computation. Which is precisely my point - the two represent different skills. Being good at symbol manipulation (i.e. "math" as represented by your high school transcript, SAT, ACT, and college transcript if not a math major) is not a determinant of your aptitude as a programmer.

Is it useful to study math beyond calculus? Of course! I know (abstractly) that things like the typed lambda calculus and set theory exist and are the foundation for much of what we do. However, it is ignorant to talk about these things like they're related to your aptitude for pre-Analysis math.

But it isn't rational to use performance in lower math classes to screen (computer science undergrad applicants|interns|programmers), or to encourage people to choose or avoid programming as a career path based on their performance in mathematics classes.

Basically, I side with Lockhart's Lament. https://www.maa.org/external_archive/devlin/LockhartsLament....


Your comment made it very unclear whether you believed what you were saying about mathematics or not. Rereading, it still seems like you think conjecture and proof is more like physics than math. And you claim that math is irrelevant to compiler design. It's not "abstractly" relevant, it's directly relevant. Parsing is formal language recognition. Register allocation is literally graph coloring. These things are not third-generation applications of mathematics. You seem to think that mathematics makes you good at computing, when in truth most mathematicians are bad at computing things by hand.
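To make "register allocation is literally graph coloring" concrete: here is a hypothetical greedy coloring sketch (not any production allocator), where nodes are virtual registers, an edge means two values are live at the same time, and a "color" is a physical register:

```python
# Greedy coloring of an interference graph. A node is a virtual
# register; an edge means the two values' lifetimes overlap, so they
# cannot share a physical register ("color").
def color_registers(interference, num_physical):
    assignment = {}
    # Color highest-degree (most-constrained) nodes first.
    for node in sorted(interference, key=lambda n: -len(interference[n])):
        taken = {assignment[n] for n in interference[node] if n in assignment}
        free = [r for r in range(num_physical) if r not in taken]
        assignment[node] = free[0] if free else None  # None => spill to memory
    return assignment

# A path graph a-b-c needs only two physical registers:
graph = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(color_registers(graph, 2))
```

Real allocators add spill-cost heuristics and coalescing on top, but the core constraint is exactly the coloring one shown here.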

The root misunderstanding might be that most people wouldn't know math if it hit them in the head, but changing my definition of math to fit their misconceptions is certainly not going to fix the problem. I would even prefer it if people thought, "math? yeah I have no idea what goes on in that subject" to what you describe. Because distinguishing between "math" and "post-Calculus math" (the latter of which is almost all of math) won't help anyone.


Dammit: http://xkcd.com/435/

You are basically claiming that everything is math, which is true, but useless. Programming is more like chemistry or biology on the purity axis...not "like them," but at a similar level.

Writing parsers, which I do a lot, requires very little parsing theory...sure, the theory makes for good academic papers, but nothing beats good ole simple recursive descent for flexibility in error handling. And for register allocation you might use - gasp - an algorithm, but that doesn't dominate your work. Writing a compiler involves some math (among other tasks), but it is not a mathematical activity.
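"Good ole simple recursive descent" really is small. Here is a hypothetical sketch for arithmetic expressions: each grammar rule becomes one ordinary function, and error reporting is just a `raise` with whatever context you care to include, which is exactly the flexibility that is awkward to get from a generated parser:

```python
# Minimal recursive-descent parser/evaluator for "1+2*3"-style input.
# Grammar: expr -> term ('+' term)*, term -> atom ('*' atom)*, atom -> digit.
def parse_expr(tokens, pos=0):
    value, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] == "+":
        rhs, pos = parse_term(tokens, pos + 1)
        value += rhs
    return value, pos

def parse_term(tokens, pos):
    value, pos = parse_atom(tokens, pos)
    while pos < len(tokens) and tokens[pos] == "*":
        rhs, pos = parse_atom(tokens, pos + 1)
        value *= rhs
    return value, pos

def parse_atom(tokens, pos):
    if pos >= len(tokens) or not tokens[pos].isdigit():
        # Error handling is just ordinary code, with full context in hand.
        raise SyntaxError(f"expected a number at token {pos}")
    return int(tokens[pos]), pos + 1

print(parse_expr(list("1+2*3")))  # (7, 5)
```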

Mathematicians can be bad or good at programming, just like musicians can be...there is no strong correlation between their aptitude and their previous training.

> The root misunderstanding might be that most people wouldn't know math if it hit them in the head, but changing my definition of math to fit their misconceptions is certainly not going to fix the problem.

Math can be defined so broadly as to basically be a useless word. Is sociology math? Is chemistry math? Is physics math? Is engineering math? Is implementing an algorithm math? Meh, if so, then whatever, we haven't made any progress.


> Writing parsers, which I do a lot, requires very little parsing theory...

I am currently writing an Earley parser. When I'm done, you will indeed need little math to use it. However, I had to grasp several non-trivial mathematical insights to write this damn tool (most notably graph search). And I'm not even done: currently, my parser only handles grammars like this:

    A -> B C D
    A -> B 'x' E
    B ->
    etc.
I want to handle the full Backus Naur Form, however, so I will need to compile BNF grammars down to a set of production rules. This will involve things very close to lambda lifting, which is rooted in lambda calculus.
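For readers unfamiliar with the algorithm: a bare Earley recognizer for grammars of roughly this shape fits on a page. This is a hypothetical sketch, not the poster's tool; it builds no parse tree and ignores the classic nullable-rule subtlety (empty productions like `B ->` need the standard Aycock–Horspool fix):

```python
# A bare-bones Earley recognizer. A grammar maps a nonterminal to a
# list of alternatives; each alternative is a list of symbols, where
# grammar keys are nonterminals and anything else is a literal token.
def earley_recognize(grammar, start, tokens):
    # An item (head, body, dot, origin) records progress through one
    # production, plus the chart column where the match began.
    chart = [set() for _ in range(len(tokens) + 1)]
    for body in grammar[start]:
        chart[0].add((start, tuple(body), 0, 0))
    for i in range(len(tokens) + 1):
        worklist = list(chart[i])
        while worklist:
            head, body, dot, origin = worklist.pop()
            if dot == len(body):                       # completion
                for h, b, d, o in list(chart[origin]):
                    if d < len(b) and b[d] == head:
                        new = (h, b, d + 1, o)
                        if new not in chart[i]:
                            chart[i].add(new)
                            worklist.append(new)
            elif body[dot] in grammar:                 # prediction
                for alt in grammar[body[dot]]:
                    new = (body[dot], tuple(alt), 0, i)
                    if new not in chart[i]:
                        chart[i].add(new)
                        worklist.append(new)
            elif i < len(tokens) and body[dot] == tokens[i]:   # scan
                chart[i + 1].add((head, body, dot + 1, origin))
    return any((start, tuple(body), len(body), 0) in chart[len(tokens)]
               for body in grammar[start])

# S -> S '+' S | 'x'   (ambiguous, which Earley handles fine)
grammar = {"S": [["S", "+", "S"], ["x"]]}
print(earley_recognize(grammar, "S", list("x+x+x")))  # True
```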

Math is the main activity in writing this parser. The code is merely a formalization of such math.

I believe the result will be worthwhile: the error handling should be just as good as with manual recursive descent parsing (RDP). Parsing code using my framework should be about a tenth the length of RDP. It will not be limited to LL grammars, unlike RDP. And you will still need very little math to write a parser, just like with RDP. But that's because I will have swept the math under the rug.

---

I don't know what kind of compilers you write, but I'm surprised to hear you say math is not the main activity. We're talking about semantic-preserving transformations here, how could it not be math?

Or, you already know all the math, and applying it doesn't feel like "math" any more.


I do Bret Victor style interactive programming environments. I've developed a set of tricks over the years and they are all quite simple. It really is programming in the classic sense and not so much Greek symbols on the whiteboard.

Also, what I do is not very well understood, so there is no good theory for it yet and a lot of open questions to investigate. So it's more "experiment and measure" than "find a proof that will tell you for sure the right thing to do."


> I've developed a set of tricks over the years and they are all quite simple.

Actually, so is Earley parsing. The more I study this little piece of CS, the more I see how simple this really is. This is why it feels so much like math to me: hard at the beginning, then something "clicks" and everything becomes simpler.

Your "set of tricks" are probably similar. Knowing nothing about them, I'd bet their simplicity is rooted in some deep, abstract, yet simple math, just waiting to be formalized:

> what I do is not very well understood, so there is no good theory for it yet and a lot of open questions to investigate.

And how do you plan to further your understanding, or find good theories? It can't be just psychology and cognitive science. I'm sure there will be some math involved, including proofs.


My set of tricks is more like: trace dependencies for work as it is done, put work on a dirty list when a dependency changes, redo the work, plus some tricks to optimize those steps. It is really hard to interpret that as math, especially if the word "math" is to remain meaningful and useful. It is all math at some level, but so is everything.
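The dirty-list scheme described above can be sketched in a toy form (this is a hypothetical illustration, not the poster's actual system): record which nodes each computation reads, mark everything downstream dirty when an input changes, and redo work lazily on demand.

```python
# Toy dirty-list recomputation: when an input changes, everything
# downstream is marked dirty; work is redone only when asked for.
class Graph:
    def __init__(self):
        self.value = {}     # node -> cached value
        self.rule = {}      # node -> (dependencies, function)
        self.rdeps = {}     # node -> set of dependents
        self.dirty = set()  # nodes whose cached value is stale

    def define(self, name, deps, fn):
        self.rule[name] = (deps, fn)
        for d in deps:
            self.rdeps.setdefault(d, set()).add(name)
        self.dirty.add(name)

    def set_input(self, name, value):
        self.value[name] = value
        self._invalidate(name)

    def _invalidate(self, name):
        for dep in self.rdeps.get(name, ()):
            if dep not in self.dirty:
                self.dirty.add(dep)
                self._invalidate(dep)

    def get(self, name):
        if name in self.dirty:          # redo stale work on demand
            deps, fn = self.rule[name]
            self.value[name] = fn(*(self.get(d) for d in deps))
            self.dirty.discard(name)
        return self.value[name]

g = Graph()
g.set_input("a", 1)
g.set_input("b", 2)
g.define("sum", ["a", "b"], lambda a, b: a + b)
print(g.get("sum"))   # 3
g.set_input("a", 10)  # marks "sum" dirty
print(g.get("sum"))   # 12
```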


Thank you. This is what I was trying to say, but better articulated.

I would say that math done by mathematicians would be more indicative of skill, but most people who are/are trying to be programmers haven't really tried that, so it's impossible to assess them based on it.


If you still write parsers as recursive descent parsers by hand, then this is because NOT ENOUGH MATH IS APPLIED IN PRACTICE (by programmers who think they don't need it).


No, it's because incremental performance and error recovery are more important than raw performance and expressiveness. If you think Dr. Odersky doesn't know enough math...not to mention most production compilers out there, written by the best professionals in the field. Reality is a harsh mistress.

My parsers, by the way, do things only your parsers could dream of:

http://research.microsoft.com/en-us/people/smcdirm/liveprogr...


> No, its because incremental performance and error recovery are more important than raw performance and expressiveness.

As if the former would require less math than the latter… Really?


Yep. Well, see my managed time paper; basically we use no fancy algorithmic tricks and it all works out fine. There are two ways to do incremental parsing: the hard way, based on some fancy delta algorithm, and the easy way, based on simple memoization and conservative recomputation.

Academics especially often overthink these problems when the simple solution works fine, and performance-wise you'd have to mess up pretty badly before parsing becomes a noticeable bottleneck.


Well, I am working on a new parsing algorithm as well. Using lots of math. Not sure if your dreams are bigger than mine.


> I would even prefer it if people thought, "math? yeah I have no idea what goes on in that subject" to what you describe. Because distinguishing between "math" and "post-Calculus math" (the latter of which is almost all of math) won't help anyone.

Why not?

Most educated Americans did "math" for a minimum of 13 years of their lives. It would be immensely useful if people who control gates like employment and admission understood that the math relevant to computer science and the math they did/can measure in candidates are different things.

The fact that most mathematicians are bad at computing things by hand is the key takeaway. Hand computation skill doesn't mean you should be a mathematician or programmer. Lack of hand computation skill doesn't mean you shouldn't.


> However, what you describe is not math, [not as it is taught anywhere between first grade and Calc III]

I think this is where we disagree then. That is pretty much what Euclidean geometry is, and I would indeed file that in the 'math' cabinet. Euclidean geometry has very few axioms and all of the theorems are just consequences of those axioms and choosing and applying truth preserving operations on them (in other words, reasoning, or as you called it 'problem solving').

Say you are tasked with deciding whether the angle bisectors of a triangle all meet at a point inside the triangle (in programming you may be asked to find out whether a piece of code will ever crash). You seek out theorems that you think would be useful and then try to prove or disprove them. It might turn out that the theorem wasn't useful after all; then you try to prove another theorem that you now think will be more useful. In debugging that is pretty much exactly what you do, and the same goes for ensuring properties of code: the code will not crash, the pointer will not be null, etc.

I was not schooled in the US but I would guess it is not that different here.

> Programming is about coming up with the rules and procedures by which the computer should manipulate symbols.

How do you think mathematicians come up with a set of axioms and how to operate on them? Think about how mathematicians came to use imaginary numbers; it is quintessentially the same process, i.e. "coming up with the rules and procedures ...[to]... manipulate symbols". Is it really as fundamentally different as you make it out to be? I am hesitant to say that programming and math are one and the same, but I would claim that their methods are the same: that the process of programming, no matter what application you are programming, is indeed at least a subset of the fundamental processes of mathematics.

Furthermore, what you are talking about is one aspect of programming, the synthesis part of it. The other is debugging, the deductive aspect. It is no less a part of programming, and again the methods are indeed the same. Whether you call them theorems or not, whether you write them with symbols on paper or not, when you are debugging you are indeed manipulating symbolic objects, and proving or disproving propositions based on rules, exactly as in math. Example: "if my assumption about the initial conditions and the function foo() is correct, then 'a' ought to be 42. If it is not, either my reasoning is wrong, or the function implementation is, or the initial condition is. OK, so it was not 42..." And you keep going like this, manipulating your conjectures and observations using the rules of mathematics, more precisely logic, with '&', '|', 'for all', 'there exists'.

Consider writing tests and choosing which tests to write: it's the same process. Consider drawing conclusions from the tests; consider using the type system to encode properties you desire in the code: again, it's fundamentally the same deal. One may be aware of it, or one may be doing it implicitly without being aware of it, but regardless, it's still the same process.
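The "42" reasoning above can be written down literally. In this hypothetical snippet (foo, its bug, and the expected value are all invented for illustration), the hypothesis becomes a check whose failure falsifies at least one of the three premises:

```python
def foo(x):
    # A hypothetical implementation under suspicion (off by one).
    return x * 2 + 1

# Hypothesis: if the initial condition (x == 20) and foo are both
# correct, then a ought to be 42.
a = foo(20)
if a != 42:
    print(f"hypothesis falsified: a == {a}; so the reasoning, foo, "
          "or the initial condition is wrong")
```

A test suite is exactly this, mechanized: each test is a conjecture of the form "given these premises, this value follows."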

I would argue the match with math is better than the match with sciences like chemistry or physics, because there the rules have been set by nature. In programming you choose the rules and try to do something interesting with them, much like in mathematics. For actual computers you do have to co-opt nature into executing those rules.

=============

EDIT: @superuser2 replying here as I dislike heavily nested threads with cramped widths.

> I have never been asked to do anything like this in a math class.

I upvoted your comment, and now I understand more of where you are coming from; it seems that there is a difference in the way math is taught in schools where you are from [I am guessing the US]. We would typically do this stuff in grade 7, and it is taught in school, so anyone with a school education would be aware of it (the quality of instruction varies, of course; in fact it varies wildly).

> Geometry is an exception, as you state. However, geometry is one year of many, and was extremely easy for me. I'm talking about Algebra, Algebra II/Trig, and Calculus.

I think this sheds more light. For me at least the most difficult homework and tests were in geometry, and also the most gratifying; most of my schoolmates would agree. I think we had geometry for 3 years (I cannot remember), very lightweight compared to Russian schools. I would, however, encourage you to think about solving simultaneous linear equations: it is again deduction at work; the only difference is that its scope is so narrow, and we know the procedure so well, that we can do it by rote if we want to. We also had coordinate geometry, which was about proving the same things but with algebra, though we had that much later, in grade 11.

BTW, post-high-school 'analysis' is different, though; it is not devoid of such logical reasoning, but its focus is different.

..and thanks to you I now know a little bit more about the methods of Chemistry.


> and proving or disproving theorems based on rules, exactly like in math, example: "if my assumption about initial conditions and the function implementation is correct then 'a' ought to be 42. If it is not, either my reasoning is false or the function implementation or the initial condition is wrong. Ok so it was not 42..."

Chemistry is like this. "Cadmium would be consistent with the valence electrons required for the bond to work. If the mystery element is Cadmium, then the mass would be xxx, but it's yyy, so we can rule that out..." Of course I've forgotten the specifics, but the method is just like it is in programming.

I have never been asked to do anything like this in a math class. Equations are sometimes called theorems, but that's the extent to which this sort of analytical thinking ever factored into a math class up through Calculus. (There was a little of this in geometry, and a really fun two weeks in propositional logic, but geometry was still mostly formula-driven - numbers of vertices and such.) I gather that's the sort of thing Analysis is about, but I'm not talking about math for math majors, I'm talking about math as most people experience it.

You memorize some formulas. You're given some formulas. You recognize the information you're given and apply the procedure you were taught to convert it to the information you're asked for. That's it. (Geometry is an exception, as you state. However, geometry is one year of many, and was extremely easy for me. I'm talking about Algebra, Algebra II/Trig, and Calculus.)

Memorizing and drilling steps is not how you become good at debugging - thinking is. Holding the program in your head, tracing it, asking yourself how you would design it and then figuring out where the problems in the logic are. This is nothing close to recognizing the situation as belonging to a particular category and then blindly cranking the computation for that category of exercise to reach the answer.


> I have never been asked to do anything like this in a math class.

This is a tragic comment. It is also true. American children waste their time and their brains in American math classes for about 8 years each. They learn to multiply and maybe solve a quadratic equation. "Equations are sometimes called theorems," and that's it. And it's not getting better; as a culture, we don't understand or respect math so we don't support it.

Modern mathematicians don't memorize or drill steps. We don't just find a bigger number each day by adding one. We hold our problems, trace them, ask ourselves how the proof should be designed and figure out where the problems in the logic are. If you've got 2n points in a line, how many ways can you draw curves from point to point such that you match them all up and none of the curves intersect? What about if they're in a circle -- is the answer different? How do you characterize a straight line if you're on a non-Euclidean surface? What's the right definition of straight -- should straight mean "direction of derivative is in direction of line" or "shortest distance" or what? And if we're talking about points on a grid, how should distance be measured anyway? Do we have the right definitions to solve our problem? Distance from streetcorner to streetcorner by cab in Manhattan is different than distance as the crow flies is different than "distance" through time and space; how are the concepts consistent, then?
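For what it's worth, the 2n-points puzzle has a tidy answer: noncrossing matchings of 2n points on a line (curves drawn on one side), and likewise noncrossing chord matchings of 2n points on a circle, are both counted by the Catalan numbers, computable by the standard recurrence:

```python
# Catalan numbers via the standard recurrence:
# C(0) = 1,  C(m+1) = sum over i of C(i) * C(m - i).
def catalan(n):
    c = [1]
    for m in range(n):
        c.append(sum(c[i] * c[m - i] for i in range(m + 1)))
    return c[n]

print([catalan(n) for n in range(6)])  # [1, 1, 2, 5, 14, 42]
```

The recurrence itself mirrors the geometric argument: whatever point the first point pairs with splits the remaining points into two independent smaller instances.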

Formulas are just shorthand for relationships. We don't teach students the relationships or the thinking in the US. That's why my Romanian advisor would ask our graduate algebra class, "You should have learned this in high school, no? No... Hmph." American school math is like prison, a holding cell until release that merely increases the likelihood you'll do poorly in the future.



