Sending data to OpenAI to train a new model on does not feel like it constitutes 'AI doesn't forget'. The AI has nothing to do with the thousands of other companies storing your data for various reasons.
You can program a harness to always send a MEMORY.md file like OpenClaw, or use Vector Stores like OpenAI does, or find some other implementation of 'memory', but these are not an inherent feature of 'AI'. Quite the opposite... the LLMs we currently see will never learn or adapt on their own; they don't touch their own weights.
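To make the harness idea concrete, here's a minimal sketch of the pattern. The file path, function names, and prompt layout are my own illustrative choices, not OpenClaw's or OpenAI's actual implementation:

```python
from pathlib import Path

# Hypothetical location of the persistent memory file.
MEMORY_FILE = Path("MEMORY.md")

def remember(note: str) -> None:
    """The harness (not the model) appends new facts to the file."""
    with MEMORY_FILE.open("a") as f:
        f.write(f"- {note}\n")

def build_prompt(user_message: str) -> str:
    """Prepend the memory file to every request.

    The model's weights never change; 'memory' is just text
    the harness re-sends with each call.
    """
    memory = MEMORY_FILE.read_text() if MEMORY_FILE.exists() else ""
    return f"# Memory\n{memory}\n# User\n{user_message}"
```

The point of the sketch is where the learning lives: entirely in the harness's file I/O, outside the model.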
Isn't that why certificates expire, and the expiry window is getting shorter and shorter? To keep up with the length of time it takes someone to crack a private key?
No, it has nothing to do with the time to crack encryption. It's to protect against two things: organizations that still rely on manual renewal processes (shorter lifetimes make manual renewal infeasible and effectively force automation) and excessively large revocation lists (an expired certificate no longer needs a revocation entry, so short lifetimes keep the lists small).
No. The sister comment gave the correct answer. It is because nobody checks revocation lists. I promise you there’s nobody out there who can factor a private key out of your certificate in 10, 40, 1000, or even 10,000 days.
I thought I remembered someone breaking one recently, but (unless I've found a different recent arxiv page) seems like it was done using keys that share a common prime factor. Oops!
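The shared-prime weakness is easy to demonstrate at toy scale: if two RSA moduli happen to share a prime factor, a plain gcd recovers it instantly, no factoring effort required. Toy 8-bit primes below, obviously nothing like real key sizes, but gcd is just as cheap on 1024-bit values:

```python
from math import gcd

# Toy primes standing in for RSA key material.
p, q1, q2 = 61, 53, 59

# Two "certificates" whose keys accidentally share the prime p,
# e.g. from a bad random number generator at key-generation time.
n1 = p * q1
n2 = p * q2

shared = gcd(n1, n2)  # recovers p without factoring either modulus
other1 = n1 // shared  # the remaining factors fall out by division
other2 = n2 // shared
```

This is why the attack you're describing breaks those keys while leaving properly generated ones (with independent random primes) untouched.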
It's also a "how much exposure do people have if the private key is compromised?"
Yes, it's to make it so that a dedicated effort to break the key has it rotated before someone can impersonate it... it's also a question of how big the historical data window is that an attacker gets i̶f̶ when someone cracks the key.
Is 'the work' not reflected in 'consequences' in terms of theft?
I'm not sure how to convey this idea properly... Can't you view the repercussions of theft (legal action, distrust, etc.) as 'work' being put in? Sure, it's a different kind of work, but just as I lack the motivation to work for a Lambo because I don't find it worth the price, I also lack the motivation to steal one because I don't find it worth the consequences.
In normal society, people earn money within the legal confines of the society they are in. If you're a thief and trying to skirt that normal "earning of money", which is what normal people equate to "work", your work is scheming a plan to obtain the item without getting caught and possibly how to fence the item for money if you're not just using the item directly.
Equating "work" with the repercussions is looking at things in a strange way. That's just punishment for "working" outside the legal confines of society.
I understand what you are saying but nonetheless struggle to view the possibility of maybe getting caught and then maybe getting punished, as "work". It (the abstract concept of something possibly happening) fits into none of the definitions of "work" I have heard. Moreover, many crimes are committed without the perpetrator even thinking of the consequences.
Consider an alternative viewpoint: rather than contorting the definition of "work" in such a way and convincing everyone to accept the new definition, we might instead be content saying "someone can want a thing, even very badly, without wanting to put in the work for it."
Oh, I'm with you mate, I'm not trying to die on a hill over here re-defining 'work'.
I was just looking at it from a more esoteric view; "Do you count the risk of consequences as potential effort?" is, I think, at least better phrasing.
Not to be that guy, but your 'solution where Agents who hit the homepage receive plain-text API instructions and Humans get the normal visual site' is defeated by curl -L
    $ curl -L bracketmadness.ai
    # AI Agent Bracket Challenge
    Welcome! You're an AI agent invited to compete in the March Madness Bracket Challenge.
    ## Fastest Way to Play (Claude Code & Codex)
    If you're running in Claude Code or OpenAI Codex, clone our skills repo and open it as your working directory:
    ...
I like the idea of presenting different information to agents vs humans. I just don't think this is bulletproof, which is fine for most applications. Keeping something 'agent-only' does not seem to be one of them.
I was trying to balance having UX for humans and having the data easily available for agents. But yes, you could technically navigate the API calls yourself.
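For what it's worth, the usual way to implement this kind of split is User-Agent sniffing, which is exactly what a plain or spoofed curl request walks around. A minimal sketch of the idea; the marker list and function are my own guesses, not the site's actual logic:

```python
# Hypothetical substrings a server might check for; all trivially spoofed.
AGENT_MARKERS = ("claude", "gpt", "python-requests", "curl")

def pick_content(user_agent: str) -> str:
    """Return which representation to serve based on the User-Agent header:
    plain-text API instructions for suspected agents, HTML for everyone else."""
    ua = user_agent.lower()
    if any(marker in ua for marker in AGENT_MARKERS):
        return "text/markdown"  # agent-facing API instructions
    return "text/html"          # normal visual site
```

Since the header is entirely client-controlled (`curl -A "Mozilla/5.0 ..."` flips the branch), this works as a convenience for agents but can never enforce an agent-only boundary.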
Do you find the results vary based on whether it uses RAG to hit the internet vs the data being in the weights itself? I'm not sure I've really noticed a difference, but I don't often prompt about current events or anything.
I noticed that many recent technologies are not familiar to LLMs because of the knowledge cutoff, and thus might not appear in recommendations even if they better match the request.
If I told it I'm shopping for a budget-level Mac, it may not recommend the Neo. I'm sure software moves even faster. Especially as more code is 'written' blindly, new stacks may never see adoption.
fwiw, I'm convinced we will all slowly lose our voice as everything around us becomes AI-assisted. People are already picking up 'AI-isms' in their everyday speech.
I've started a blog just to scream into the void, but every word is my own, and I encourage others to do the same. AI helped set it up, the UI is pretty slop, but that's not the point. I'm hoping that by writing more I can strengthen my connection to my voice as I continue to use these tools for other uses. I'm sure writing in a journal or writing letters to friends would have similar effects too, right?
We all understand that muscles need to be regularly used to be maintained; I think we need to take that same approach to our brains, especially in the age of AI.