I have gigabit synchronous fiber at home thanks to a group of local tech folk who built out the network. The biggest change for me is that I rely more on my NAS at home over a Wireguard tunnel for things I would have used the cloud or a hosting service for before.
Going to work? No worries about forgetting a USB stick or portable SSD. I can always just fire up Wireguard and grab it from home.
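For anyone curious, the client side is only a few lines of WireGuard config. Everything below is a made-up placeholder (keys, addresses, endpoint), just to sketch the shape of it:

    # /etc/wireguard/home.conf -- hypothetical example, not my actual setup
    [Interface]
    PrivateKey = <laptop-private-key>
    Address = 10.0.0.2/24

    [Peer]
    PublicKey = <home-server-public-key>
    Endpoint = home.example.net:51820
    AllowedIPs = 10.0.0.0/24   # only route the home subnet through the tunnel
    PersistentKeepalive = 25   # keep NAT mappings alive

Then "wg-quick up home" and the NAS is reachable as if I were on my LAN.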
Sharing Jellyfin access with family and friends has also been fun.
I wouldn't call it "better", but the least-effort path among hobbyists and low-end gear is often 12 V or 24 V sent over a pair with ground, plus a forgiving voltage regulator on the other end.
I'm the mythical customer who went from a 1700X in a B350 motherboard near launch day to a 5800X3D in the same board (after a dozen BIOS updates). Felt amazing. Like the old 486DX2 days.
Same! I kept checking back for BIOS updates, and even years later they kept announcing more support! Truly crazy.
Other than the speed, it's a very good reason to go with AMD: the upgrade scope is massive. On AM5 you can start with a 6-core and, soon, go all the way to a 24-core with the new Zen 6.
I think there's demonstrably very little difference at all between human and AI outputs, and that's exactly what freaks people out about it. Else they wouldn't be so obsessed with trying to find and define what makes it different.
The thesis of "Everything Is a Remix" is that there is no difference in how any culture is produced. Different models will have a different flavor to their output, in the same way that different people contribute their own experiences to a work.
> I think there's demonstrably very little difference at all between human and AI outputs
Bold claim, as the internet is awash with counterexamples.
In any case, as I think this conversation is trending towards theories of artistic expression, “AI content” will never be truly relatable until it can feel pleasure, pain, and other human urges. The first thing I often think about when I critically assess a piece of art, like music, is what the artist must have been feeling when they created it, and what prompted them to feel that way. I often wonder if AI influencers have ever critically assessed art, or if they actually don’t understand it because of a lack of empathy or something.
And relatability, for me, is the ultimate value of artistic expression.
> In any case, as I think this conversation is trending towards theories of artistic expression, "AI content" will never be truly relatable until it can feel pleasure, pain, and other human urges. The first thing I often think about when I critically assess a piece of art, like music, is what the artist must have been feeling when they created it, and what prompted them to feel that way.
I recently watched "Come See Me in the Good Light", about the life and death of poet Andrea Gibson. I find their poetry very moving, precisely because it's dripping with human emotion.
Or at least, that's the story I tell myself. The reality is that I perceive it to be written by a human full of emotion. If I were to find out it was AI, I would immediately lose interest, but I think we're already at the point where AI output is indistinguishable from human output in many cases, and if I perceive art to be imbued with human emotion, the actuality of it only matters in terms of how it shapes my perception of it.
I'm not really sure where we'll go with that from here. Maybe art will remain human-created only, and we'll demand some kind of proof of its provenance of being borne of a human mind and a human heart. Or maybe younger generations will really care only about how art makes them feel, not what kind of intelligent entity made it. I really don't know.
> Bold claim, as the internet is awash with counterexamples.
What do you consider a counterexample? Because I've been involved in local politics lately, and can say from experience that any foundation model is capable of more rational and detailed thought, and more creative expression, than most of the beloved members of my community.
If you're comparing AI to the pinnacle of human achievement, as another commenter did by pointing to Shakespeare, then I think the argument is already won in favor of AI.
> I think there's demonstrably very little difference at all between human and AI outputs
Counterexamples range from em-dashes and "not this, but that" constructions to people complaining about AI music on Spotify (including me) that sounds vaguely like a genre but is missing all of the instrumentation and motifs common to that genre.
The rest of your comment I don’t even know how to respond to, to be honest.
You’re really going to make the claim that there are no counterexamples of human and AI output being indistinguishable on the internet? At least make the counterclaim that “those are from old models, not the newest ones”, that’s more intellectually invigorating than the comment you just provided.
> claim that there are no counterexamples of human and AI output being indistinguishable on the internet?
Is that a claim I've made? I don't see it anywhere. I think a lot of people think that because they can get the AI to generate something silly or obviously incorrect, that invalidates other output which is on-par with top-level humans. It does not. Every human holds silly misconceptions as well. Brain farts. Fat fingers. Great lists of cognitive biases and logical fallacies. We all make mistakes.
It seems to me that symbolic thinking necessitates the use of somewhat lossy abstractions in place of the real thing, primarily limited by the information which can be usefully stored in the brain compared to the informational complexity of the systems being symbolized. Which neatly explains one cognitive pathology that humans and LLMs share. I think there are most certainly others. And I think all the humans I know and all the LLMs I've interacted with exist on a multidimensional continuum of intelligence with significant overlap.
I hereby rebuff your crude and libelous mischaracterization of my assertion. How's that? :)
You said AI works were easily distinguishable via em-dashes and "not this, but that."
I said I have witnessed humans using that metric to accuse other humans of being AI here on Hacker News. Q.E.D.
You've asserted that they are easily distinguished. Practitioners in the field fail to distinguish using the same criteria. Is that not dispositive? Seems like it to me.
I claimed much earlier in the thread "I think there's demonstrably very little difference at all between human and AI outputs" which is consistent with "I think all the humans I know and all the LLMs I've interacted with exist on a multidimensional continuum of intelligence with significant overlap."
Two ways of saying the same thing.
Both of them suggesting that sometimes you may be able to tell it's the output of an AI or Human, sometimes not. Sometimes the things coming out of the AI or the Human might be smart in a way we recognize, sometimes not. And recognizing that humans already exist on quite a broad scale of intelligences in many axes.
I was not saying that LLMs cannot produce something like the pinnacle of human achievement. I was saying we cannot quantify the difference between Shakespeare and something commonplace, because doing so requires the ability to feel.
> demonstrably very little difference at all between human and AI outputs
Is there "demonstrably" a lot of difference between Shakespeare and an HN comment?
The point is exactly that there is no such difference, and that this enables slop to be sold as art. That is exactly the danger. But another point is that we had this even before LLMs; LLMs just make it more explicit and possible at scale.
Conrad Gessner had the very same complaint in the 16th century, noting the overabundance of printed books, fretting about shoddy, trivial, or error-filled works ( https://www.jstor.org/stable/26560192 )
Agreed. It really requires an understanding of not just the software and computer it's running on, but the goal the combined system was meant to accomplish. Maybe some of us are starting to feed that sort of information into LLMs as part of spec-driven development, and maybe an LLM of tomorrow will be capable of noticing and exploiting such optimizations.
Absolutely. I have written a small but growing CAD kernel which is seeing use in some games and realtime visualization tools ( https://github.com/timschmidt/csgrs ) and can say that computing with numbers isn't really even a solved problem yet.
All possible numerical representations come with inherent trade-offs around speed, accuracy, storage size, complexity, and even the kinds of questions one can ask (it's often not meaningful to ask if two floats equal each other without an epsilon to account for floating point error, for instance).
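To make that concrete, here's a minimal Rust sketch of epsilon comparison. The tolerance value is purely illustrative, not what csgrs or any particular kernel uses:

    // Asking whether two floats are exactly equal is usually meaningless;
    // ask whether they are equal within a tolerance instead.
    const EPSILON: f64 = 1e-9; // illustrative only; real kernels tune this

    fn approx_eq(a: f64, b: f64) -> bool {
        (a - b).abs() < EPSILON
    }

    fn main() {
        let x = 0.1 + 0.2;
        assert!(x != 0.3);          // exact equality fails: x is 0.30000000000000004
        assert!(approx_eq(x, 0.3)); // comparison within a tolerance succeeds
    }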
"Toward an API for the Real Numbers" ( https://dl.acm.org/doi/epdf/10.1145/3385412.3386037 ) is one of the better papers I've found detailing a sort of staged complexity technique for dealing with this, in which most calculations are fast and always return (arbitrary precision calculations can sometimes go on forever or until memory runs out), but one can still ask for more precise answers which require more compute if required. But there are also other options entirely like interval arithmetic, symbolic algebra engines, etc.
One must understand the trade-offs else be bitten by them.
Back in the early, early days, the game designer was the graphic designer, who was also the programmer. So, naturally, the game's rules and logic aligned closely with the processor's native types, memory layout, addressing, arithmetic capabilities, even cache size. Now we have different people doing different roles, and only one of them (the programmer) might have an appreciation for the computer's limits and happy paths. The game designers and artists? They might not even know what the CPU does or what a 32-bit word even means.
Today, I imagine we have conversations like this happening:
Game designer: We will have 300 different enemy types in the game.
Programmer: Things could be really, really fast if you could limit it to 256 types.
Game designer: ?????
That ????? is the sign of someone who is designing a computer program who doesn't understand the basics of computers.
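That 256 isn't arbitrary: it's how many values fit in a single byte, so an enemy-type id can be one byte instead of four or eight. A quick Rust sketch (the names are invented for illustration):

    // With at most 256 variants, the discriminant fits in one byte, so
    // large per-entity arrays stay small and cache-friendly.
    #[repr(u8)]
    #[derive(Clone, Copy)]
    #[allow(dead_code)]
    enum EnemyKind {
        Slime,
        Bat,
        Skeleton,
        // ... up to 256 variants total
    }

    fn main() {
        assert_eq!(std::mem::size_of::<EnemyKind>(), 1);
        let horde = [EnemyKind::Slime; 10_000]; // one byte per entity: 10 KB
        assert_eq!(std::mem::size_of_val(&horde), 10_000);
    }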
I wrote the Mattel Intellivision Roulette cartridge game back in the 1970s. It was all in assembler on a CPU with a 10-bit (!) instruction word. In order to get the game to fit in the ROM, you had to do every feelthy dirty trick imaginable.
I wish I'd kept a listing of that and other projects I worked on. But that never occurred to me.
A friend of mine wrote the Mattel Intellivision poker game. I was playtesting it (a very boring job), and got suspicious. I walked over to his desk and said the program was cheating. It was looking at my hole cards. He sighed and asked how I knew, and I replied it was obvious. He said he didn't have room to add code to improve its play otherwise. I don't know if he fixed it or not.
Not all, or even most, games are made by billion-dollar studios. Overlapping roles are still the norm in small studios. And even studios that do have bespoke designer roles would likely benefit from telling designers that computers have certain limitations where trade-offs in game design need to be selected for, because many AAA games run like shit. Many times for reasons other than the game design, sure, but also sometimes in ways that could be worked around more easily if the game design accommodated the trade-offs.
The P40 is Pascal architecture, which is no longer receiving driver or CUDA updates, and it's only available as used hardware. Fine for hobbyists, startups, and home labs, but there is likely a growing market of businesses too large to depend on used gear from eBay, but too small for a full rack solution from Nvidia. Seems like that's who they're targeting.
I suppose if I rent a cloud GPU and just let it sit there dark and do nothing then I wouldn't have to move any data to it. Otherwise, I'm uploading some kind of work for it to do. And that usually involves some data to operate on. Even if it's just prompts.
So you also believe when you rent a server you are sharing your data with the cloud? AWS and GCP are copying all private data on servers? Give me a break. There's a big difference between renting a server and using an API.
> So you also believe when you rent a server you are sharing your data with the cloud [hosting provider]?
Only if you upload your data to that cloud server you rented. Then, by definition, you are.
> AWS and GCP are copying all private data on servers?
Every computer copies data when moving it. Several times, in fact. Through network card buffers, switches, system memory, disk caches, and finally to some form of semi-permanent storage.
I don't have to think Amazon is stealing my data to be aware that Amazon S3 buckets containing privileged information are routinely found open. I don't have to think that Google is spying on me to know that operating equipment my business owns on prem and does not share requires me to trust fewer people and less complex systems than doing the same work from the cloud.
You are very quick to make foolish assumptions and assign them to others.
My understanding is that Abraham Lincoln literally had all the nation's telegraph lines routed through DC during the Civil War, and AT&T has been an honorary branch of the US government ever since.