That compound command issue is infuriating, and really what drove me to make this.
FWIW, the prompt is easily adjustable in a `.claude/PERMISSION_POLICY.md` file in the project.
It's also quite easy to remix the script with claude to meet your needs. Right now it prompts the user and runs the script, so it's a race. If you added a delay in the script and increased the hook timeout in `.claude/settings.json` you should be able to accomplish what you're looking for pretty easily.
I've been thinking about minimal models of evolution. I concluded that you need information to be copied with some transformation, some death, and a way for the information in question to encode its ability to avoid death.
In trying to simulate that, neural networks were a good fit since they are universal function approximators. I definitely took some inspiration from NEAT[0], though I'm not using any form of crossover.
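The three ingredients described above (copy-with-transformation, death, and information that encodes its own survival) can be sketched in a few lines. This is a minimal illustration, not the actual NEAT-inspired implementation; all names and the `sum()` stand-in for a network are assumptions:

```python
import random

random.seed(0)  # deterministic for illustration

def mutate(genome, rate=0.1):
    # Copy with transformation: each weight is perturbed a little.
    return [w + random.gauss(0, rate) for w in genome]

def fitness(genome, target=1.0):
    # The information encodes its ability to avoid death: genomes whose
    # "output" lands near the target survive. sum() stands in for a tiny
    # network's forward pass.
    return -abs(sum(genome) - target)

population = [[random.gauss(0, 1) for _ in range(4)] for _ in range(20)]
initial_best = max(fitness(g) for g in population)

for generation in range(50):
    # Death: cull the worst half; survivors copy themselves with mutation.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(g) for g in survivors]

final_best = max(fitness(g) for g in population)
```

Even with nothing but perturbation and culling, the population drifts toward genomes that avoid death, which is the point of the minimal model.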
This is a reasonable conceptualization, IMO. However, the problem isn't that we can't access the information in a black hole (there are other places in the universe where information becomes inaccessible).
The problem is that black holes evaporate. If the particles released via evaporation don't contain the information about the particles that entered, information is lost when the black hole is completely gone.
The proposed solution is that the information is encoded onto the surface of the black hole and thus into the Hawking radiation being released from that surface.
This idea in physics that information is conserved (neither created nor destroyed, just transformed) seems awfully similar to a computer to me. A classical computer isn't really the right metaphor when you think of the universe as a possible computational process, but the parallels are striking to me.
I suspect it may be just necessarily true that information is preserved in a consistent universe. I don't know though, maybe someone could come up with a model for a consistent universe with information loss, but it seems to me that would lead to physically possible states that are not derivable from consistent laws of physics.
I'm tying loose ends in my head that probably have long been tied in other ones...
Reading this next to the comment making a parallel between the black hole event horizon and the cosmological horizon...
Wouldn't this give credence to the holographic model of the universe?
Could the so-called heat death of the universe and black hole evaporation be identical phenomena seen from either the inside or the outside of the boundary?
Flutter is Canvas-based on the web, so this post could well mean a switch to Flutter. It's hard to see how a custom, non-Flutter Canvas renderer for Google Docs makes sense.
Flutter web can use both DOM and Canvas, but the default is Canvas. To this day I haven't seen any disadvantages of using Canvas, except possibly speed on slow mobile devices.
Whilst it's obviously powerful, I often find myself wishing math used syntax even half as easy to understand as any decent programming language.
I suppose it's a result of being developed on a chalkboard, but math seems to value _terseness_ above all else. Rather than a handful of primitives and simple named functions, it's single Greek characters and invented symbols. Those kinds of shenanigans would never pass a code review, but somehow when we're talking about math they're "elegant" and "powerful".
However I'd like to add that often in mathematics, we are discussing very generic situations. For instance, we are not talking about the radius of some specific circle, which perhaps should be named `wheelRadius`, but about the radius of an arbitrary circle or even an arbitrary number.
I wouldn't really know a better name for an arbitrary number than `x`. The alternative `arbitraryNumber` gets old soon, especially as soon as a second number needs to be considered -- should it be called `arbitraryNumber2`? I'll take `y` over that any day :-)
Also there are contextually dependent but generally adhered to naming conventions which help to quickly gauge the types of the involved objects. For instance, `x` is usually a real number, `z` is a complex number, `C` is a constant, `n` and `m` are natural numbers, `i` is a natural number used as an array index, `f` and `g` are functions, and so on.
My favorite symbol is by the way `よ`, which denotes the Yoneda embedding and is slowly catching on. All the other commonly used symbols for the Yoneda embedding clashed with other common names. This has been a real nuisance when studying category theory.
As professional programmers, we use i, x, y, etc. as variable names all the time.
So you're sort of arguing against a straw man there; almost no programmer would expect you to name such a concept 'arbitraryNumber2'. We would also name it x or y if it made sense in the code.
Sorry, I didn't want to argue against a straw man. You are right. I just wanted to indicate that in mathematics, we are much more often in such generic situations than in programming. Accordingly, I wanted to argue that the increased brevity in mathematics is to some extent to be expected.
"My favorite symbol is by the way `よ` which denotes the Yoneda embedding" which was named after its mathematician inventor/discoverer Yoneda [1]
The character is the syllable "yo" in Japanese hiragana, and although not everyone knows hiragana, it is still mnemonic for "Yoneda" rather than being wholly arbitrary. [2]
Every programming language overloads the same few ASCII non-alpha-numeric characters to have multiple meanings. A colon : can mean multiple things in most computer languages, or it gets combined. Even a symbol like less-than < gets extremely different meanings depending upon context: comparison, template, XML, << operator (bonus if overloaded), <- operator (ugggh mixed with minus for bonus confusion) etcetera.
I think saying programming languages are better than Mathematics is just due to your familiarity.
Oooh don't even get me started on how they name things after people instead of anything remotely descriptive or helpful. Imagine if you named functions after yourself.
And then there is the Hann window function, sometimes called ...
"named after Julius von Hann, and sometimes referred to as Hanning, presumably due to its linguistic and formulaic similarities to the Hamming window." Wikipedia, window function
Krebs' cycle, HeLa cells and Lugol's solution in medicine as well; and also an Allen key, a Phillips screwdriver, and many of the physical units (Ampere, Volt, Tesla, Weber, Newton, ...).
It happens in all fields. For that matter, the Hebrew word for masturbation is named after Onan, who was described in the bible as having done so.
I find it a good thing to name something after the person who discovered it or pioneered a branch of the field. Sometimes it makes things confusing, but most of the time the name reference makes it very easy to remember as well.
If that were true, then why don't we do the same thing for programming? We only do that for languages and algorithms (which I'm still not a fan of), but everything else tries to have descriptive names because then it's easier to understand how it fits together with other concepts. If we called for loops "bobs" and while loops "jims", how are you supposed to know how they are related, structurally?
Mathematical concepts are more general and abstract, so a short description is sometimes hard or inconvenient to give without overlapping dozens of other concepts. Wherever it makes sense, there are all sorts of descriptive names: loops, knots, and so on.
Comparing programming languages to maths doesn't really make sense because they serve to express vastly different things. Programming languages need to unambiguously describe how to transform input data into output data. Maths language is more like a natural language and is used to communicate. It evolves the same way natural languages evolve, and an attempt to codify it precisely is futile because there will always be idiomatic expressions, exceptions to the rules, and heavy dependence on context.
You use maths language to write a story or talk with a friend about what you did last night; you use a programming language to build a shed or bake bread.
It might be awful from an outsider's perspective, but so is a foreign language you never learned. It's hard to complain about, though: if you want to know what others are talking about, there is no way around learning it. It won't change to make it easier for you; it will change to make it easier for the speakers.
A codebase easily contains thousands of identifiers - sometimes millions. You need a verbose, (hopefully) unique way to refer to them, because otherwise you will never find out which variable refers to what.
On the other hand, in a typical math textbook, the kind that will take you a full year to read through, the list of "all the symbols ever used in this book" usually fits in a single page.
There's no point in writing "CircumferenceRatio" when π does the job. Imagine solving a partial differential equation with CircumferenceRatio appearing five times each line.
Algebraically manipulating stuff (factoring, rearranging, cancelling, expanding, simplifying, etc.) without the terseness of math notation sounds like a nightmare, regardless of whether I'm using a chalkboard or an endless sheet of paper.
> Those kind of shenanigans would never pass a code review
Yes, because code is used in very different ways to a mathematical expression. When you see code in a repository or a textbook, I doubt you find yourself copying it out over and over again in your own work.
Both math(s) and (almost?) all programming languages have their quirks and inconsistencies. What immediately comes to mind is (a thing I've recently learned) the template syntax of C++ where if you get a single character wrong you get dozens of lines of error messages (there's a code golf on exactly this).
At least with programming, you generally don't see different semantics depending on the value of something! With math, there's sin^2 as in:
sin^2 theta + cos^2 theta = 1, which reads as the square of the sine of theta, etc.
But then there's
sin^-1
which means the inverse sine AKA arcsine, and NOT 1 / sine, which would be consistent with previous usage.
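The inconsistency is easy to demonstrate with Python's `math` module, where the inverse sine is `asin`:

```python
import math

theta = 0.5  # an arbitrary angle in radians

# sin^2(theta) reads as "the square of sin(theta)":
assert math.isclose(math.sin(theta) ** 2 + math.cos(theta) ** 2, 1.0)

# sin^-1(x), by contrast, is the inverse function, not the reciprocal:
x = math.sin(theta)
inverse = math.asin(x)         # what sin^-1 actually means
reciprocal = 1 / math.sin(x)   # what the sin^2 convention would suggest
```

Here `inverse` recovers the original angle, while `reciprocal` is an entirely different number, despite "sin^-1" looking like it should follow the "sin^2" pattern.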
The use of invisible operators is obnoxious because it means symbol names must all be atoms. Why is yz (multiply y z) but 23 is (toint (cat “2” “3”))? A great deal of mathematical syntax is actually ambiguous as written too. Plenty of it is fine, but it’s intellectually dishonest to deny that many common notations have no merit beyond widespread historical usage. Which in case it isn’t clear means yes of course the student should learn them for the benefit of reading great works of the past.
In computer science, people can use pseudocode or descriptive variable and function names, and sometimes do, but still often fall back on math notation and Greek letters.
Sometimes the terseness, and leaving certain details implicit, actually adds to clarity rather than hurting it. The eye can only take in so much at one time.
My main frivolous gripe with math notation is how everyone uses radians by default, to the point where your first visual clue that something is an angle is not any kind of unit, but rather the fact that it's being multiplied or divided by some multiple or fraction of pi. I think that the most sensible universal angle unit is "rotations". So, 360 degrees is 1, 45 degrees is 1/8, and so forth. Radians are only useful for a few special cases, like determining how far a car rolls if its 10-inch-radius tire rotated by 300 radians. (I wonder if somewhere, there's a mathematician who has modded their car's tachometer to output radians per second rather than revolutions per minute, just to make the math work out easier...)
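A rotations-based convention is trivial to layer on top of a radians-based library; `sin_turns` below is a hypothetical helper, not a standard function:

```python
import math

def sin_turns(t):
    # Angle in rotations ("turns"): 1.0 is a full circle, 1/8 is 45 degrees.
    return math.sin(2 * math.pi * t)

quarter = sin_turns(0.25)   # sin of a quarter turn (90 degrees) is 1
eighth = sin_turns(0.125)   # sin of an eighth turn (45 degrees) is sqrt(2)/2
```

The pi still shows up, but only once, inside the helper, rather than scattered across every angle in the program.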
Anyways, programming languages generally follow math notation, and use radians for trig functions and so on. Usually that's not too much of a problem, but when applied to file formats like VRML which were meant to be human readable, the results are ugly.
For the most part though, I think math notation is pretty good. At least when compared to something like standard music notation, which is full of weird rules and historical accidents.
Algorithms for calculating trig functions would probably not look good using degrees. Maybe it might look OK with what I assume is what's usually used (lookup tables + interpolation?), but for the Taylor series expansion you have to multiply by powers of pi/180 everywhere.
Calculus is generally worse with degrees. The derivative of sin(pi/180 x) is pi/180 cos(pi/180 x). That's pretty inconvenient, especially if you're writing any sort of models that need to solve differential equations. Same reason base e is preferred for exponents.
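The pi/180 factor from the chain rule can be checked numerically; the function and step size here are illustrative:

```python
import math

def sin_deg(x):
    # Sine of an angle given in degrees.
    return math.sin(math.pi / 180 * x)

# Central-difference estimate of the derivative at x0 = 30 degrees:
x0, h = 30.0, 1e-6
numeric = (sin_deg(x0 + h) - sin_deg(x0 - h)) / (2 * h)

# The chain rule says the derivative is (pi/180) * cos(pi/180 * x):
analytic = (math.pi / 180) * math.cos(math.pi / 180 * x0)
```

The two values agree, and both carry the awkward pi/180 factor that simply vanishes when you work in radians.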
Radians vs. degrees isn't notation, it's a convention. You even say the reason why it is the convention: multiplying the radius by the angle in radians gives you the arc length. It is the only representation of angles with this special property. I mean, why should 360 represent 1 rotation? Why not use rotations itself? That way 1/4 is 1/4, 1/8 is 1/8, and so forth.
Multiple configurations are a maintenance burden. So, all else being equal, it's better not to have them. That said, not all else is equal, and configurability can indeed improve the user experience. But if your defaults are so good that very few people take advantage of the settings, you're better off dropping them.
At what point does user experience trump the ability to use the software at all in the first place?
UX designers are often seduced by reductionism because it makes their lives easier, and, combined with various hand-wavy aesthetic rationalizations, they proceed to ignore several immutable aspects of reality:
Developers of an open source project usually have little insight into how their project is actually being used out in the real world. They only know about their use cases and (if they're lucky) that of their co-maintainers and bug reporters.
If at some point in time, an option was added to make some piece of the project do "x" instead of "y", it is overwhelmingly likely that the option _was put there for a very good reason_. Just because you don't know the reason doesn't mean one doesn't exist.
If the project has more than a few dozen users worldwide, then it is extremely likely that for every UI widget, command-line option, or configuration setting, there is a non-trivial subset of users who rely on that thing being there for their workflow. Remove it, and at best you lose those users. But more often you make them angry as well.
When looking at others' code and designs, it is a very human trait to assume that the person who created it didn't know what they were doing and that the way _you_ would do it is better. I get this and I fall into this trap more often than I would like to admit. _But it is almost always wrong_. We often look at the code in a vacuum and don't see the context under which it was written and why it is still there.
I've watched this happen time and time again with various open source projects: a person (or team) creates a successful project, maintenance of the project shifts to a new person or group who decide that the old project was "unmaintainable" and opt for a ground-up rewrite into what they _think_ is going to be a better version of the same thing. When in reality, all they have done is write a new, far less useful version of a somewhat similar thing.