Having been writing a lot of AWS CDK/IaC code lately, I'm looking at this with the "spec" being the infrastructure code and the implementation being the services deployed from that code.
It would be an absolute clown show if AWS could take the same infrastructure code and perform the deployment of the services differently each time, i.e. non-deterministically. There are already all kinds of external variables other than the infra code which can affect the deployment, such as existing deployed services which sometimes need to be (manually) destroyed for the new deployment to succeed.
Hmm, but that sounds pretty much like how I currently understand our brains to work. I'm not sure how factual this is, but I remember watching a video about how our brains essentially lie to us.
I think there was a ping pong example in the video. It said something like: you think you watch the ball come towards you, and you think you are making a decision and acting to move the paddle into the ball's trajectory, but what really happens is that most of that is pre-observed, pre-decided and pre-acted upon subconsciously.
So the subconscious part does most of the work, and when your conscious part catches up and you feel like you are doing the reacting, it's actually your subconscious lying to you that this was your observation and your decided reaction.
Again, not sure how factual any of that is, but it made sense to me when I thought about how complex the task of observing + deciding + acting is in e.g. ping pong, and how very little time there is to actually do all of it. Is it really possible to consciously observe, decide and act on a ping pong ball in the tiny amount of time available?
So based on that, it does seem like we are the observer and our subconscious is the actor, which also lies to us to make us feel like the actor is us.
I can introspect, but that could just be my subconscious doing it and lying to me that it was my own conscious introspection.
I'm Finnish, and in Finnish we translate "call" in the function context as "kutsua", which translated back into English becomes "invite" or "summon".
So at least in Finnish, the word "call" is taken to mean what it means in a context like "a mother called her children back inside from the yard", instead of "call" as in "Joe made a call to his friend" or "what do you call this color?".
In German, we use "aufrufen", which means "to call up" if you translate it fragment by fragment, and which in pre-computer times would (as far as I know) only be understood as "to call somebody up by their name or number" (like a teacher asking a student to speak or get up) when used with a direct object (as it is for functions).
It's also separate from the verb for making a phone call, which would be "anrufen".
Interesting! Across the lake in Sweden we do use "anropa" for calling subprograms. I've never heard anyone in that context use "uppropa" which would be the direct translation of aufrufen.
'Summon' implies a bit of eldritch horror in the code, which is very appropriate at times. 'Invite' could also imply it's like a demon or vampire, which also works!
> Structure and Interpretation of Computer Programs (SICP) is a computer science textbook by Massachusetts Institute of Technology professors Harold Abelson and Gerald Jay Sussman with Julie Sussman. It is known as the "Wizard Book" in hacker culture.[1] It teaches fundamental principles of computer programming, including recursion, abstraction, modularity, and programming language design and implementation.
> "Be, and it is" (Arabic: كُن فَيَكُونُ; kun fa-yakūn) is a Quranic phrase referring to the creation by God's command.[1][2] In Arabic, the phrase consists of two words; the first word is kun for the imperative verb "be" and is spelled with the letters kāf and nūn. The second word fa-yakūn means "it is [done]".[3]
> [Image caption: the phrase at the end of verse 2:117]
> Kun fa-yakūn has its reference in the Quran cited as a symbol or sign of God's supreme creative power. There are eight references to the phrase in the Quran:[1]
My 5 cents would be that LLMs have replaced all those random generators (e.g. CSS, regex, etc.), Emmet-like IDE code completion/generator tools, as well as having to Google for arbitrary code snippets which you'd just copy and paste in.
In no way can AI be used for anything larger than generating singular functions or anything that would require writing to or modifying multiple files.
Technically you might be able to pull off having AI change multiple files for you in one go, but you'll quickly run into a sort of "Adobe Dreamweaver" type of issue, where your codebase is dominated by generated code which only the AI that generated it is able to properly extend and modify.
I remember when Dreamweaver was a thing: you essentially had to make a choice between sticking with it forever for the project or not using it at all, because it would basically convert your source code into its own proprietary format due to it becoming so horribly messy and unreadable.
Regardless, AI is absolutely incredible and speeds up development by a great deal, (even) if you only use it to generate small snippets at a time.
AI is also an absolute godsend for formatting and converting stuff from anything to anything - you could e.g. dump your whole database structure into Gemini and ask it to generate an API against it; a big task, but since it is basically just a conversion task, it will work very well.
This seems to me like one of those things that make me wonder why it is considered such a problem that it needs a "proper" solution like this.
If I have a function written in TS which takes a string-typed parameter called "hash"... isn't it already obvious what the function wants?
Furthermore, when the function in question is a hash-checking function, it is working exactly as intended when it returns something like "invalid hash" when there is a problem with the hash. You either supply a valid, well-formed hash and the function returns success, or you supply any kind of non-valid hash and the function returns a failure. What is the problem?
In case the function is not a hash-checking function per se, but e.g. a function which uses the hash for something, you will still need to perform some checks on the hash before using it. Or it could be a valid hash with nothing to be found under it, in which case once again everything works exactly as it should and you get nothing back.
It's like having a function checkBallColor which wants you to supply a "ball" to it. Why would you need to explicitly define that you need to give it a ball with a color property in which the color is specified in a certain way? If someone calls that function with a ball that has a malformed color property, then the function simply returns that the ball's color property is malformed. You will, in most cases, probably have to check the color property at runtime anyway.
I've used TS-based graphics libraries, and they often come with functions like setColorHex, setColorRGB, etc., so that you know how the color should be given. If you supply the color in the wrong way, nothing happens, and I think that is just fine.
Sorry for the rant, but I just don't get some of these problems. Like... you either supply a valid hash and all is fine, or you don't, and then you simply figure out why it isn't valid, which you will have to do with this branding system as well.
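For context, here is a minimal sketch of the kind of branding pattern being discussed (all names here are hypothetical, not from any particular library). The runtime check doesn't disappear; branding just funnels it through one validating constructor so downstream code doesn't re-validate:

```typescript
// A "branded" string: structurally still a string, but the phantom
// brand stops arbitrary strings from being passed where a Hash is expected.
type Hash = string & { readonly __brand: "Hash" };

// The single choke point where the runtime check actually happens.
// (Assuming a 64-char lowercase hex hash, e.g. SHA-256.)
function parseHash(s: string): Hash | null {
  return /^[0-9a-f]{64}$/.test(s) ? (s as Hash) : null;
}

// Downstream code can then accept Hash and skip re-checking.
function shortId(h: Hash): string {
  return h.slice(0, 8);
}
```

So the trade-off is exactly the one in the rant: you still figure out at the parse boundary why a hash isn't valid; the brand only keeps unchecked strings from wandering deeper into the code.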
Isn't the whole point of the term "invariant" that it describes something as unchanging under specific circumstances?
e.g.
The sum of the angles of a triangle is 180 degrees in the context of Euclidean geometry. However, if we project a triangle onto a sphere, this no longer holds. So the angle sum is an invariant under Euclidean geometry.
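For the spherical case, Girard's theorem even quantifies how the invariant fails:

```latex
% Girard's theorem: for a triangle on a sphere of radius R enclosing
% area A, the angle sum exceeds \pi by the "spherical excess" A/R^2:
\alpha + \beta + \gamma = \pi + \frac{A}{R^2}
% As R \to \infty the sphere flattens out, the excess vanishes, and the
% Euclidean invariant (angle sum = \pi, i.e. 180 degrees) is recovered.
```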
On the other hand, the value of pi is a constant, because it stays the same regardless of the circumstances. That's why the numbers themselves are all constants as well - the number 5 is the number 5 absolutely always.
So if you have a value that changes over time, it is definitely not a constant. It could be an invariant if you, e.g., specify that the value does not change as long as time does not change. Your value is now an invariant in the context of stopped time, but it can never be a constant if there is any context in which it does change.
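The same distinction shows up in everyday code (a sketch, not tied to any particular codebase): a constant is the same in every context, while a loop invariant only holds under a specific circumstance, namely at the top of each iteration.

```typescript
// Constant: the same in absolutely every context.
const PI = 3.141592653589793;

function total(items: number[]): number {
  let sum = 0;
  for (let i = 0; i < items.length; i++) {
    // Invariant: at this point, sum equals the sum of items[0..i).
    // It holds only under this circumstance (the loop head), not at
    // arbitrary points in the program -- and sum itself is not constant.
    sum += items[i];
  }
  return sum;
}
```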
Surely their research is not about shipping broken software, though; then your goals would be in conflict.
I suppose your point is that the researcher's goal is not exactly to ship working software either, but wouldn't that make the researcher's goals at worst neutral, then?
Likewise, your goal is not to publish research, but it is also not to actively work against it. From their standpoint you're also at worst neutral.
It's also worth pointing out that the motivating factor doesn't necessarily have to be the same for each party to have a common goal. I'd argue that this is how it actually is most of the time.
I do work for a customer so that my boss doesn't fire me and I get my paycheck, whereas my boss does work for the customer to bring money to their company and to not go bankrupt. The customer helps with the work we're doing for them because they want something out of the project that is their money's worth. Our motivations are different, but the goal is the same - to build a thing that works.
My point is that it just seems very likely that some common ground can be found between you and the researchers, regardless of your individual motivations and since there's really no inherent conflict either.
> regardless of your individual motivations and since there's really no inherent conflict either.
Within organizations, there's an inherent conflict between any orthogonal goals.
In theory there's no conflict, but in practice, there is constant competition for time and resources. This creates conflict between any groups whose goals are not aligned, including groups whose goals are completely unrelated. This is also why organizational politics is the way it is.
You stated the condition/assumption - that the goals are orthogonal.
I believe these are not. Researchers publishing how to create good software and developers creating software can be seen as goals as aligned as fixing an incident and publishing a post-mortem, or as writing an RFC and implementing it. Or as publishing a post on how you remodeled your infrastructure.
They diverge at some point, yes, but that's not orthogonal at all.
> researchers publishing how to create good software
I disagree that this is what researchers are doing (at least from my point of view). This is actually an area where I agree with the article, there's a gigantic gulf between researchers and practitioners. The things that academia puts out are not, generally, what I would consider to be good software.
I think this fundamentally comes down to a difference in the definition of "good" between the two camps. So far as I can tell (not being an academic), the academic definition of "good" seems to revolve around the software having certain provable characteristics. My definition of good software involves the software exhibiting useful characteristics. And those are, generally speaking, orthogonal, if not sometimes inhibiting each other.
But, of course I would think this, I'm a practitioner. The academics probably have similar complaints about me.
I can see this is going to be one of those HN discussions that goes back and forth forever, maybe somebody should do some research on how orthogonal the goals really are!
If the researcher makes a neutral contribution and someone else you could hire with the same resources would make a strongly positive contribution you're always going to favor the strongly positive contribution.
I've been following a data science course called "The Data Science Course 2023: Complete Data Science Bootcamp" at Udemy.
The course starts all the way back at basic statistics, goes through things like linear regression, and supposedly arrives at neural networks and machine learning at some point.
So I don't know if something like this is exactly what you're looking for, but I think that, in general, if one wants to learn about (the history of) AI, it might be a good idea to start from statistics and learn how we got from there to where we are now.
On another note, I think something like this should come with some kind of zero-knowledge proof that the future encrypted thing is what it is claimed to be, so that one doesn't have to wait for a long time just for the secret to decrypt into nothing.
I don't know if I disagree with this, but it reminded me of one thing that, for some reason, once stuck in my mind.
It was about statistics and probabilities. I think I was talking with ChatGPT about superpositions or something, and somehow we got to talking about how, e.g., it might be impossible to make a system which could predict everyone's favourite flavour of ice cream.
Interestingly enough, though, it is perfectly possible to gather data about people's favourite ice cream flavours (and we could even go as far as to say that we could ask every single human on the planet) and make a statistical model which is able to answer what is most probably a given person's favourite ice cream flavour.
I find this really interesting. We could think of one person's flavour as essentially random and impossible to predict, but when we gather enough of these random data points, we are somehow able to build a relatively accurate system for guessing someone's favourite flavour. I don't think this is obvious at all.
Anyway, I didn't have any real point here. I just wanted to share one interesting example of what I think is "emergent behaviour", and a seemingly magical one at that.
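A tiny sketch of the point (the flavour split here is completely made up): a single draw is unpredictable, but the mode of a large sample stabilizes on the population's most common flavour, which is essentially the law of large numbers at work.

```typescript
// Hypothetical population: 50% vanilla, 30% chocolate, 20% strawberry.
// One person's preference looks like a random draw from this distribution.
function samplePreference(rand: () => number): string {
  const r = rand();
  return r < 0.5 ? "vanilla" : r < 0.8 ? "chocolate" : "strawberry";
}

// Collect n "survey answers" and return the most frequent flavour.
// For large n this almost surely matches the true population favourite,
// even though every individual answer was unpredictable.
function modalFlavor(n: number, rand: () => number): string {
  const counts: Record<string, number> = {};
  for (let i = 0; i < n; i++) {
    const f = samplePreference(rand);
    counts[f] = (counts[f] ?? 0) + 1;
  }
  return Object.entries(counts).sort((a, b) => b[1] - a[1])[0][0];
}
```

With, say, 20,000 simulated answers, the sample mode comes out as "vanilla" with overwhelming probability, even though any individual draw could have been anything.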