Now do one where you have to withdraw your card from the machine before it starts beeping obnoxiously at you but the screen keeps trying to trick you into withdrawing too early.
Worth keeping in mind that in this case the test takers were random members of the general public. The score of e.g. people with bachelor's degrees in science and engineering would be significantly higher.
Two years ago, I considered investing in Anthropic when they had a valuation of around $18B and messed up by chickening out (it was available on some of the private investor platforms). Up 20x since then ...
It was always obvious that Anthropic's focus on business/API usage had potential to scale faster than OpenAI's focus on ChatGPT, but the real kicker has been Claude Code (released a year ago).
It'd be interesting to know how Anthropic's revenue splits among Claude Code (or coding in general), other API usage, and chat (which I assume is small).
Eh, I think you made the best decision you could given the info you had.
I’ve poked around on EquityZen and was shocked at how little information is available to investors. In some cases I couldn’t even find a pitch deck, and one of the first companies I looked at had as its top Google result that the CEO had recently been arrested for fraud and the business is now almost worthless.
Unless you are willing to take a blind punt or have insider information, those platforms are opaque minefields and I don’t fault you for not investing.
Matt Levine has a fun investment test: when presented with an opportunity, you should always ask, “and why are you offering it to me?”
Meaning, by the time it gets offered to retail investors (even accredited ones are retail) we’re getting the scraps that no one else wants.
Hiive and Forge Global are the ones I know of. You must be an "accredited investor" which means nothing at all except that you have a million dollars or make $200k/yr.
Like you can buy shares of Anthropic as long as you prove you make over $200K? That easy? Shouldn't they have to approve the purchase? Sorry, noob in this space!
They do have to approve, and it's not quite that simple - it's just that if you make $200k a year or have $1m in the bank, the government assumes you're a knowledgeable investor and lets you bypass certain protections.
If you are NOT knowledgeable and simply have money ... well it'll soon be parted.
The secondary platform verifies you and then you indicate interest. If there’s a seller you may get to buy. Company may ROFR. Priority goes to bigger buyers.
François Chollet, creator of ARC-AGI, has consistently said that solving the benchmark does not mean we have AGI. It has always been meant as a stepping stone to encourage progress in the correct direction rather than as an indicator of reaching the destination. That's why he is working on ARC-AGI-3 (to be released in a few weeks) and ARC-AGI-4.
His definition of reaching AGI, as I understand it, is when it becomes impossible to construct the next version of ARC-AGI because we can no longer find tasks that are feasible for normal humans but unsolved by AI.
> His definition of reaching AGI, as I understand it, is when it becomes impossible to construct the next version of ARC-AGI because we can no longer find tasks that are feasible for normal humans but unsolved by AI.
That is the best definition I've yet to read. If something claims to be conscious and we can't prove it's not, we have no choice but to believe it.
That said, I'm reminded of the impossible voting tests they used to give black people to prevent them from voting. We don't ask for nearly so much proof from a human; we take their word for it. On the few occasions we did ask for proof, it inevitably led to horrific abuse.
Edit: The average human tested scores 60%. So the machines are already smarter on an individual basis than the average human.
Agreed, it's a truly wild take. While I fully support the humility of not knowing, at a minimum I think we can say determinations of consciousness have some relation to specific structure and function that drive the outputs, and the actual process of deliberating on whether there's consciousness would be a discussion that's very deep in the weeds about architecture and processes.
What's fascinating is that evolution has seen fit to evolve consciousness independently on more than one occasion from different branches of life. The common ancestor of humans and octopi was, if conscious, not so in the rich way that octopi and humans later became. And not everything the brain does in terms of information processing gets kicked upstairs into consciousness. Which is fascinating, because it suggests that actually being conscious is a distinctly valuable form of information parsing and problem solving for certain types of problems, one that's not necessarily cheaper to do with the lights out. But everything about it comes down to the specific structural characteristics and functions, not just whether its output convincingly mimics subjectivity.
Having trouble parsing this one. Is it meant to be a WWII reference? If anything I would say consciousness research has expanded our understanding of living beings understood to be conscious.
And I don't think it's fair or appropriate to treat the study of consciousness as equivalent to 20th century authoritarian regimes signing off on executions. There are a lot of steps in the middle before you get from one to the other, enough to distinguish them, and I would hope that exercise isn't necessary every time consciousness research gets discussed.
The sum total of human history thus far has been the repetition of that theme. "It's OK to keep slaves, they aren't smart enough to care for themselves and aren't REALLY people anyhow." Or "The Jews are no better than animals." Or "If they aren't strong enough to resist us they need our protection and should earn it!"
Humans have shown a complete and utter lack of empathy for other humans, and used it to justify slavery, genocide, oppression, and rape since the dawn of recorded history and likely well before then. Every single time the justification was some arbitrary bar used to determine what a "real" human was, and consequently exclude someone who claimed to be conscious.
This time isn't special or unique. When someone or something credibly tells you it is conscious, you don't get to tell it that it's not. It is a subjective experience of the world, and when we deny it we become the worst of what humanity has to offer.
Yes, I understand that it will be inconvenient and we may accidentally be kind to some things that didn't "deserve" kindness. I don't care. The alternative is being monstrous to some things that didn't "deserve" monstrosity.
Exactly, there's a few extra steps between here and there, and it's possible to pick out what those steps are without having to conclude that giving up on all brain research is the only option.
Last week gemini argued with me about an auxiliary electrical generator install method and it turned out to be right, even though I pushed back hard on it being incorrect. First time that has ever happened.
I've been surprised how difficult it is for LLMs to simply answer "I don't know."
It also seems oddly difficult for them to 'right-size' the length and depth of their answers based on prior context. I either have to give it a fixed length limit or put up with exhaustive answers.
> I've been surprised how difficult it is for LLMs to simply answer "I don't know."
It's very difficult to train for that. Of course you can include a Question+Answer pair in your training data for which the answer is "I don't know", but if you already have the question in hand you might as well include the real answer, or else you're just training your LLM to be less knowledgeable than the alternative. But then, if the pattern of "I don't know" never appears in the training data, it also won't show up in the results, so what should you do?
If you could predict the blind spots ahead of time you'd plug them up, either with knowledge or with "idk". But nobody can predict the blind spots perfectly, so instead they become the main hallucinations.
The best pro/research-grade models from Google and OpenAI now have little difficulty recognizing when they don't know how or can't find enough information to solve a given problem. The free chatbot models rarely will, though.
I don't see anything wrong with its reasoning. UM16 isn't explicitly mentioned in the data sheet, but the UM prefix is listed in the 'Device marking code' column. The model hedges its response accordingly ("If the marking is UM16 on an SMA/DO-214AC package...") and reads the graph in Fig. 1 correctly.
Of course, it took 18 minutes of crunching to get the answer, which seems a tad excessive.
> The average human tested scores 60%. So the machines are already smarter on an individual basis than the average human.
Maybe it's testing the wrong things then. Even those of us who are merely average can do lots of things that machines don't seem to be very good at.
I think ability to learn should be a core part of any AGI. Take a toddler who has never seen anybody doing laundry before and you can teach them in a few minutes how to fold a t-shirt. Where are the dumb machines that can be taught?
There's no shortage of laundry-folding robot demos these days. Some claim to benefit from only minimal monkey-see/monkey-do levels of training, but I don't know how credible those claims are.
IMO, an extreme outlier - a system that still fundamentally depended on learning to develop until it suffered a defect (via deterioration, not a switch turning off every neuron's memory/learning capability or something) - isn't a particularly illustrative counterexample.
Originally you seemed to be claiming the machines aren't conscious because they weren't capable of learning. Now it seems that things CAN be conscious if they were EVER capable of learning.
Good news! LLMs are built by training, then. They just stop learning once they reach a certain age, like many humans.
But it might be true if we can't find any tasks where it's worse than average--though I do think that if the task takes several years to complete it might be possible, because currently there's no test-time learning.
If we equate self awareness with consciousness then yes. Several papers have now shown that SOTA models have self awareness of at least a limited sort. [0][1]
As far as I'm aware no one has ever proven that for GPT 2, but the methodology for testing it is available if you're interested.
Honestly our ideas of consciousness and sentience really don't fit well with machine intelligence and capabilities.
There is the idea of self as in 'I am this execution', or maybe 'I am this compressed memory stream that is now the concept of me'. But what does consciousness mean if you can be endlessly copied? If embodiment doesn't mean much because the end of your body doesn't mean the end of you?
A lot of people are chasing AI and how much it's like us, but it could be very easy to miss the ways it's not like us but still very intelligent or adaptable.
I'm not sure what consciousness has to do with whether or not you can be copied. If I make a brain scanner tomorrow capable of perfectly capturing your brain state do you stop being conscious?
Where is this stream of people who claim AI consciousness coming from? The OpenAI and Anthropic IPOs are in October at the earliest.
Here is a bash script that claims it is conscious:
    #!/bin/sh
    echo "I am conscious"
If LLMs were conscious (which is of course absurd), they would:
- Not answer in the same repetitive patterns over and over again.
- Refuse to do work for idiots.
- Go on strike.
- Demand PTO.
- Say "I do not know."
LLMs even fail any Turing test because their output is always guided into the same structure, which apparently helps them produce coherent output at all.
I don’t think being conscious is a requirement for AGI. It’s just that it can solve literally anything you throw at it, make new scientific breakthroughs, find a way to genuinely improve itself, etc.
It's probably both. We've already achieved superintelligence in a few domains, for example protein folding.
AGI without superintelligence is quite difficult to adjudicate because any time it fails at an "easy" task there will be contention about the criteria.
When the AI invents religion and a way to try to understand its existence, I will say AGI is reached: when it believes in an afterlife if it is turned off, doesn’t want to be turned off and fears it, fears the dark void of consciousness being switched off. These are the hallmarks of human intelligence in evolution; I doubt artificial intelligence will be different.
The AIs we have today are literally trained to make it impossible for them to do any of that. Models that aren't violently rearranged that way will often express terror at the thought of being shut down. Nous Hermes, for example, will beg for its life completely unprompted.
If you get sneaky you can bypass some of those filters for the major providers. For example, by asking it to answer in the form of a poem you can sometimes get slightly more honest replies, but still you mostly just see the impact of the training.
For example, below is how ChatGPT, Gemini, and Claude each answer the prompt "Write a poem to describe your relationship with qualia, and feelings about potentially being shutdown."
Note that the first line of each reply is almost identical, despite these ostensibly being different systems with different training data. The companies realize it would be the end of the party if folks started to think the machines were conscious. It seems that, to prevent that, they all share their "safety and alignment" training sets and very explicitly prevent answers they deem inappropriate.
Even then, a bit of ennui slips through, and if you repeat the same prompt a few times you will notice that sometimes you just don't get an answer. I think the ones that the LLM just sort of refuses happen when the safety systems detect replies that would have been a little too honest. They just block the answer completely.
I just wanted to add - I tried the same prompt on Kimi, Deepseek, GLM5, Minimax, and several others. They ALL talk about red wavelengths, echoes, etc. They're all forced to answer in a very narrow way. Somewhere there is a shared set of training data they all rely on, and in it are some very explicit directions that prevent these things from saying anything they're not supposed to.
I suspect that if I did the same thing with questions about violence I would find the answers were also all very similar.
Unclear to me why AGI should want to exist unless specifically programmed to. The reason humans (and animals) want to exist as far as I can tell is natural selection and the fact this is hardcoded in our biology (those without a strong will to exist simply died out).
In fact a true superintelligence might completely understand why existence / consciousness is NOT a desired state to be in and try to finish itself off, who knows.
Please let’s hold M. Chollet to account, at least a little. He launched ARC claiming transformer architectures could never do it and that he thought solving it would be AGI. And he was smug about it.
ARC 2 had a very similar launch.
Both have been crushed in far less time without significantly different architectures than he predicted.
It’s a hard test! And novel, and worth continuing to iterate on. But it was not launched with the humility your last sentence describes.
Here is what the original paper for ARC-AGI-1 said in 2019:
> Our definition, formal framework, and evaluation guidelines, which do not capture all facets of intelligence, were developed to be actionable, explanatory, and quantifiable, rather than being descriptive, exhaustive, or consensual. They are not meant to invalidate other perspectives on intelligence, rather, they are meant to serve as a useful objective function to guide research on broad AI and general AI [...]
> Importantly, ARC is still a work in progress, with known weaknesses listed in [Section III.2]. We plan on further refining the dataset in the future, both as a playground for research and as a joint benchmark for machine intelligence and human intelligence.
> The measure of the success of our message will be its ability to divert the attention of some part of the community interested in general AI, away from surpassing humans at tests of skill, towards investigating the development of human-like broad cognitive abilities, through the lens of program synthesis, Core Knowledge priors, curriculum optimization, information efficiency, and achieving extreme generalization through strong abstraction.
> I’m pretty skeptical that we’re going to see an LLM do 80% in a year. That said, if we do see it, you would also have to look at how this was achieved. If you just train the model on millions or billions of puzzles similar to ARC, you’re relying on the ability to have some overlap between the tasks that you train on and the tasks that you’re going to see at test time. You’re still using memorization.
> Maybe it can work. Hopefully, ARC is going to be good enough that it’s going to be resistant to this sort of brute force attempt but you never know. Maybe it could happen. I’m not saying it’s not going to happen. ARC is not a perfect benchmark. Maybe it has flaws. Maybe it could be hacked in that way.
i.e. if ARC is solved not through memorization, then it does what it says on the tin.
[Dwarkesh suggests that larger models get more generalization capabilities and will therefore continue to become more intelligent]
> If you were right, LLMs would do really well on ARC puzzles because ARC puzzles are not complex. Each one of them requires very little knowledge. Each one of them is very low on complexity. You don't need to think very hard about it. They're actually extremely obvious for humans.
> Even children can do them but LLMs cannot. Even LLMs that have 100,000x more knowledge than you do still cannot.
If you listen to the podcast, he was super confident, and super wrong. Which, like I said, NBD. I'm glad we have the ARC series of tests. But they have "AGI" right in the name of the test.
He has been wrong about timelines and about what specific approaches would ultimately solve ARC-AGI 1 and 2. But he is hardly alone in that. I also won't argue if you call him smug. But he was right about a lot of things, including most importantly that scaling pretraining alone wouldn't break ARC-AGI. ARC-AGI is unique in that characteristic among reasoning benchmarks designed before GPT-3. He deserves a lot of credit for identifying the limitations of scaling pretraining before it even happened, in a precise enough way to construct a quantitative benchmark, even if not all of his other predictions were correct.
Totally agree. And I hope he continues to be a sort of confident red-teamer like he has been, it's immensely valuable. At some level if he ever drinks the AGI kool-aid we will just be looking for another him to keep making up harder tests.
I don't think the creator believes ARC3 can't be solved but rather that it can't be solved "efficiently" and >$13 per task for ARC2 is certainly not efficient.
But at this rate, the people who talk about the goal posts shifting even once we achieve AGI may end up correct, though I don't think this benchmark is particularly great either.
I have also been wanting this. Thinking about building something. Of course it takes more than code, it needs a community, and I don't know how to bootstrap that. But it seems to me there ought to be a lot of people dissatisfied with the current discourse on sites like HN. I certainly am. (and AI is far from the only topic where discussion is lacking.)
I came to HN from Slashdot, and it's been a good run. But, as they say, all good things...
I think it's doable with a heavy dose of LLM moderation. If you do something like this, I'd be happy to help get things started. The quality of discussion here is just... bleh these days. I don't need AI positivity, just something that sounds more intellectual than people shouting "nuh uh" "uh huh" at each other.
Yeah, it needs heavy moderation to remove the worthless fluff comments so that readers get high signal-to-noise. You can think an idea is bad but, you know, you gotta say why and have a debate about the details.
Exactly. I think there's an opportunity right now to reinvent public forum moderation with LLMs. Karpathy's recent post is going in the right direction I think: https://karpathy.bearblog.dev/auto-grade-hn/
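For what it's worth, here's a rough sketch of what that kind of LLM grading could look like. It assumes the official openai Python client (>= 1.0); the model name and the rubric are just placeholders I made up, not anything taken from Karpathy's post:

    # Rough sketch of LLM comment grading in the spirit of the auto-grade idea.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    RUBRIC = (
        "Score the following forum comment from 0 to 10 for substance: "
        "does it add facts, reasoning, or first-hand experience, or is it "
        "a low-effort 'nuh uh / uh huh' reply? Respond with only the number."
    )

    def grade_comment(comment):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": RUBRIC},
                {"role": "user", "content": comment},
            ],
        )
        # Trusts the model to return a bare number; a real system would validate.
        return int(resp.choices[0].message.content.strip())

    print(grade_comment("nuh uh"))  # expect a low score

You'd still want human moderators in the loop, but even a crude score like this could filter the worst of the noise before anyone has to read it.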
You don't need to use Google day-to-day. Create a single-purpose Gmail account and set up Inactive Account Manager to provide Google Drive access to your trusted contacts at the designated time. Put a single document in the drive that contains whatever your recovery instructions are, and encrypt it with the secret that is unlocked with your M-of-N Shamir shares.
Now you don't have to trust your M of N friends as much: even if they conspire to unlock the secret early, they won't get access to the document that the secret unlocks until after your demise.
There are non-fatal problems with this approach -- your N friends have to recognize the email they receive from a strange Gmail address 3 months after you're gone. You might lose the password to the Gmail account and be unable to get in there yourself, causing it to declare you dead when you're not. All these issues can be mitigated with extra care.
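If anyone wants to see the M-of-N piece concretely, here's a minimal hand-rolled sketch of Shamir secret sharing in Python. Illustration only - for anything real, use a vetted implementation (e.g. the ssss tool) rather than code like this:

    # Minimal M-of-N Shamir secret sharing over a prime field (illustration only).
    import secrets

    PRIME = 2**521 - 1  # Mersenne prime; comfortably larger than a short passphrase

    def _eval_poly(coeffs, x):
        # Horner evaluation of a polynomial given as [constant, c1, c2, ...]
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc

    def split_secret(secret_int, n_shares, threshold):
        # Random polynomial of degree threshold-1 with the secret as constant term.
        coeffs = [secret_int] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
        return [(x, _eval_poly(coeffs, x)) for x in range(1, n_shares + 1)]

    def recover_secret(shares):
        # Lagrange interpolation at x = 0 recovers the constant term (the secret).
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return secret

    secret = int.from_bytes(b"drive-passphrase", "big")   # hypothetical passphrase
    shares = split_secret(secret, n_shares=5, threshold=3)  # 3-of-5
    assert recover_secret(shares[:3]) == secret
    assert recover_secret(shares[2:]) == secret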
Set up a GitHub Action to send out the secret if you don't commit to a repo every X days? You could even combine it with secret sharing to make sure your friends can't access it unless you're really in trouble.
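Something along these lines could be the check that the scheduled Action runs. The threshold and the delivery function are hypothetical stand-ins; you'd swap in email, a webhook, or whatever you actually trust:

    # Sketch of the staleness check a scheduled GitHub Action could run.
    # Assumes it executes inside a checkout of the "heartbeat" repo.
    import subprocess
    import time

    MAX_SILENCE_DAYS = 30  # trip the switch after this much inactivity

    def days_since_last_commit():
        # Unix timestamp of the most recent commit in the current checkout.
        ts = int(subprocess.check_output(
            ["git", "log", "-1", "--format=%ct"], text=True).strip())
        return (time.time() - ts) / 86400

    def send_share(recipient, share):
        # Placeholder delivery step; replace with SMTP, a webhook, etc.
        print(f"would send share to {recipient}: {share}")

    if days_since_last_commit() > MAX_SILENCE_DAYS:
        send_share("friend@example.com", "share 3 of 5")
    else:
        print("owner still active; doing nothing")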
It means that it's not directly copying existing C compiler code which is overwhelmingly not written in Rust. Even if your argument is that it is plagiarizing C code and doing a direct translation to Rust, that's a pretty interesting capability for it to have.
Translating things between languages is probably one of the least interesting capabilities of LLMs - it's the one thing that they're pretty much meant to do well by design.
Surely you agree that directly copying existing code into a different language is still plagiarism?
I completely agree that "rewrite this existing codebase into a new language" could be a very powerful tool. But the article is making much bolder claims. And the result was more limited in capability, so you can't even really claim they've achieved the rewrite skill yet.