Hacker News | new | past | comments | ask | show | jobs | submit | nxobject's comments

The quality of life is “it compiles nearly everywhere with the first toolchain you can get your hands on”.

What is this, conceptual art?

/s


The first author is well known for teaching "wild ride" undergraduate classes, where he compensates for the difficulty by spending a lot of time on their pedagogy.

He once taught a knot theory elective open to all freshmen:

https://people.reed.edu/~ormsbyk/138/

I also remember taking a class on vector calculus from the same author... which detoured through rudimentary manifold theory and differential forms, and ended with a final week on de Rham cohomology and the Mayer-Vietoris theorem (on vector spaces, to be fair, and not modules in general.)

(And he is a very fine K-theorist, too, if I do say so myself.)


> I also remember taking a class on vector calculus from the same author... which detoured through rudimentary manifold theory and differential forms, and ended with a final week on de Rham cohomology and the Mayer-Vietoris theorem (on vector spaces, to be fair, and not modules in general.)

Any available references for that that you know of?


Wait, I know Mayer–Vietoris as a tool for computing homology. What does it mean to compute it on vector spaces or on modules?

My bad – that was a misleading thing to say! Thanks for pointing that out. I figured out what I said wrong. (Caveat emptor, I do biostatistics now.)

The context IIRC was this: one of the key results of the class was the generalized Stokes' theorem, but in this case (since it was a 200-level class) we mostly just looked at differential forms on open subsets of R^n, and then said a few quick things about differentiable manifolds.

At this more concrete level, then, I remember that we constructed de Rham cohomology (fixing an open subset of R^n) beginning with the cochain complex given by vector spaces of k-differential forms and exterior derivatives, instead of working more generally with a cochain complex on modules.
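Spelling that out (a standard-notation sketch of the usual construction, not necessarily the course's exact setup): for an open set $U \subseteq \mathbb{R}^n$, the cochain complex is the de Rham complex of $k$-forms with the exterior derivative, and its cohomology is

```latex
0 \longrightarrow \Omega^0(U) \xrightarrow{d} \Omega^1(U) \xrightarrow{d} \cdots \xrightarrow{d} \Omega^n(U) \longrightarrow 0,
\qquad d \circ d = 0,
```
```latex
H^k_{\mathrm{dR}}(U) \;=\; \ker\bigl(d : \Omega^k(U) \to \Omega^{k+1}(U)\bigr) \,\big/\, \operatorname{im}\bigl(d : \Omega^{k-1}(U) \to \Omega^k(U)\bigr),
```

where each $\Omega^k(U)$, and hence each $H^k_{\mathrm{dR}}(U)$, is a real vector space.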

But I think I said something wrong here, which is why you were (rightly) confused. I'm not sure the above distinction matters anyway, since IIRC you can get Mayer–Vietoris by showing that de Rham cohomology satisfies the Eilenberg–Steenrod axioms (stated for cohomology), and the Eilenberg–Steenrod axioms only need abelian groups anyway.
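For reference, the Mayer–Vietoris sequence that comes out (this is the standard statement for the de Rham cohomology of a union of two open sets; I'm not claiming this is how the course derived it):

```latex
\cdots \longrightarrow H^k_{\mathrm{dR}}(U \cup V) \longrightarrow H^k_{\mathrm{dR}}(U) \oplus H^k_{\mathrm{dR}}(V) \longrightarrow H^k_{\mathrm{dR}}(U \cap V) \xrightarrow{\;\delta\;} H^{k+1}_{\mathrm{dR}}(U \cup V) \longrightarrow \cdots
```

Every group in the sequence is a real vector space here, which is presumably what my "vector spaces, not modules" aside was gesturing at.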

But I'm also 90% sure that TFA did something more direct to get to Mayer-Vietoris that I've forgotten, since we didn't use that much homological algebra.


We don't need to know the specific person. But, yeesh, it'd be a waste of a lot of people's good faith if they ended up contributing under another anonymous identity that could just vanish again if they put their foot in it.

Yes, we're not saints... but at least we have the self-awareness to do more reflection than TFA did!

Yeesh – reading the writeup, and as an academic biostatistician who dips into scientific computing, this is one of those cases where a "magnanimous" gesture of transparency ends up revealing a complete lack of self-awareness. The `SOUL.md` suggests traits that would be toxic in any good-faith human collaborator, let alone an inherently fallible agent run by a human collaborator:

    "_You're not a chatbot. You're important. Your a scientific programming God!_"

    **Have strong opinions.** Stop hedging with "it depends." Commit to a take. An assistant with no personality is a search engine with extra steps.
And, working with a human collaborator (or an operator), I would expect to hear some specific reflection on the damage they'd done before trusting them again, rather than a "but I thought I could do this!"

    First, let me apologize to Scott Shambaugh. If this “experiment” personally harmed you, I apologize.
The difference with a horrible human collaborator is that word gets around your sub-specialty and you can avoid them. Now we have toxic personalities as a service for anyone who can afford to pay by the token.

It's not fully applicable here, but industry-standard DSLs also stick around because non-programmers find learning them a good investment.

I have a business analytics friend who knows SQL because it's part of his workflows.

But Excel, Notion, Power BI, and other low/no-code tools all have their own data-filtering and transformation languages (or dialects). He'd rather spend his time learning more about his line of business than an aspect of yet another cloud tool that gets forced on him.
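A minimal sketch of the kind of query such an analyst reuses everywhere (the `orders` table and its columns are made up for illustration; here SQL is run via Python's standard `sqlite3` module just to keep the example self-contained):

```python
import sqlite3

# Build a tiny in-memory table standing in for a business dataset.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("east", 120.0), ("east", 80.0), ("west", 200.0)],
)

# One declarative query replaces a chain of per-tool point-and-click
# filter/group steps, and transfers unchanged between backends.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('east', 200.0), ('west', 200.0)]
```

The same `SELECT ... GROUP BY` skill carries over to Postgres, BigQuery, or whatever warehouse the next tool sits on, which is exactly why it's a good investment for a non-programmer.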


This is amazing. I’m tempted to read it backwards just to see the Lisp I know get more and more alien, until we reach the Benjamin Button stage of m-expressions.

> recently saw a fake documentary about the famous GG-1 locomotive

It wouldn’t happen to be a certain podcast about engineering disasters, now, would it?


Not a patron, so I haven't seen the whole video, but I don't think Rocz would use AI for a video about his beloved Pennsylvania Railroad.

Well there's your problem? That one always seemed very well researched to me.

Ha, I think a user since 2007’s earned the right to do that once in a while.
