
The difference for me personally: Web3/NFT/Crypto: what? why? DALL-E/ChatGPT: wow!


Web3/NFT/Crypto promises: Very clever and advanced tech; people will stop using the old stuff because they hate the government and want to save pennies.

Web3/NFT/Crypto realities: Using it is expensive and hard, there are scams left and right, and the government actually arrives but doesn't remedy the damages. People are not in it to save pennies on transactions but to get rich quick.

DALL-E/ChatGPT promises: A statistical model that can generate text and images that are impressive but not always accurate. Also, the tech is not that magical, we just used so much data to train it.

DALL-E/ChatGPT reality: Wows everyone, people actually use it tirelessly for writing code, creating artwork, recreationally etc.

We will probably hit the limits soon and won't have AGI next year, but the stuff already delivered is useful. The crypto stuff might become useful, but it's nowhere near the hype.


> DALL-E/ChatGPT promises: A statistical model that can generate text and images that are impressive but not always accurate. Also, the tech is not that magical, we just used so much data to train it.

> DALL-E/ChatGPT reality: Wows everyone, people actually use it tirelessly for writing code, creating artwork, recreationally etc.

That's a pretty generous if not biased take, is it not? For me personally, ChatGPT is underwhelming and hasn't done anything remotely impressive when I have used it. And there's a lot of fervor around people using it, but are there interesting use cases outside of advertising? And are there not a ton of harmful use cases? The internet is going to be absolutely filled to the brim with GPT generated junk.


It's really good at things that are not hard but boring, which makes it extremely valuable.

For example, I needed a tag cloud in SwiftUI. Very easy but very boring task. I guess there are tons of libraries for it, but I don't like using libraries for this kind of thing.

As if it were my junior, I told ChatGPT to generate a tag cloud and it did. It got the main things right, but I needed it to loop through an array of custom structs, so I gave it the structure of the struct and told it to modify the algorithm. ChatGPT did it very well, as if it understood the structure of my custom struct type.

Then I needed the tags to be clickable. I told it to make them clickable and it did, correctly guessing how to connect a property from my custom type to the click action. I told it to change the colors, size, etc., and it was able to do it all. Sometimes it generated incorrect syntax, but that wasn't a problem because I told it to fix it and it did.

If having a junior developer as an assistant isn't valuable, I don't know what is.

I also use it to drill down on my curiosities. Yesterday I was curious how a torrent client can connect to peers to share a file, and I made ChatGPT explain to me step by step how NAT works and what strategy developers use to overcome issues with it (if curious: apparently they use this thing called a STUN server, which is basically a remote machine the client can connect to in order to learn its own public IP address). So it's not just a junior but also an expert in a domain that can answer questions conversationally. Much better than Googling keywords, because Google is ridden with spam and tries to be clever about your queries without actually being clever. Once I learned the technical keywords like "STUN server", Google became useful again.
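To make the STUN bit concrete: the client sends a small Binding Request over UDP to a STUN server, and the response carries the public IP/port the server saw. Here's a minimal sketch of building that request packet per RFC 5389 (the function name and structure are my own illustration, not anything from the comment):

```python
import os
import struct

# STUN message header (RFC 5389): 2 bytes message type,
# 2 bytes attribute length, 4 bytes magic cookie,
# 12 bytes random transaction ID -- 20 bytes total.
STUN_BINDING_REQUEST = 0x0001
STUN_MAGIC_COOKIE = 0x2112A442

def build_binding_request() -> bytes:
    """Build a 20-byte STUN Binding Request with no attributes."""
    transaction_id = os.urandom(12)
    header = struct.pack("!HHI", STUN_BINDING_REQUEST, 0, STUN_MAGIC_COOKIE)
    return header + transaction_id

request = build_binding_request()
```

A client would send `request` over UDP to a STUN server (conventionally port 3478) and parse the XOR-MAPPED-ADDRESS attribute out of the reply to learn its public address as seen from outside the NAT.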


My concern would be the same as with a Junior Dev. It's fine if they don't know something but please don't bullshit/make something up. Maybe tech questions are immune to this but it doesn't seem to have any conception of true and false.

This is a silly example but I asked for a list of Star Trek references in Seinfeld and it gave 6-7 examples that all sounded genuine (Jerry makes a joke about transporters in the Contest episode) but were 100% made up. If I wasn't super familiar with the show I wouldn't have been able to tell most were invented. With code generation that's less important because we have ways of testing code for "truth" but I would worry about relying on any factual statements from the thing.


Sure, humans are not obsolete just yet! People are needlessly freaking out about losing their jobs, or even their reason for existence, to AI. The reason for employing developers or designers is not that we need someone to write loops and draw lines.

It's a tool that makes some things significantly easier, and it does have risks. It will replace only people who are doing jobs not suitable for people.


The problem is that most of us work for companies ACTIVELY TRYING to use this stuff to replace human beings, INCLUDING THE ACTUAL DECISION MAKING and slapping black boxes everywhere in their bureaucratic processes because "machine says no" is a SUPER beneficial thing to a business, especially the modern massive corporation that doesn't really have competition and is mostly profitable because it ignores problems.

I give it five years until even we here are unable to get the attention of a tech company employee to fix our wrongly locked out account because not only are the primary touch points """automated""" but the appeals and appeals of appeals are also entirely automated.

Everyone here will get to enjoy "machine says no" a hundred times more often. Every tech support or bill support or anything support will put you through half an hour of terrible "AI" interaction before you are even allowed to be routed to a human, and businesses will use this to get rid of even more call center employees.

Hate not being able to understand the thick accent in your support call? Get ready for the 1% of the time the AI in the phone call puts together sounds that don't actually form words, or just straight up misunderstands what you say, and nobody will believe you because "it's so accurate". Get ready to be gaslit by your fucking phone.


The prevalence of “computer says no” or “machine says no” in our modern society makes me scream and can honestly send me into depression and anxiety. It creates the most helpless feeling.

I have an increasing pile of issues with companies’ services that just persist because fixing them requires getting in contact with a human and a human who actually knows something and isn’t just a terminal for the computerized system.


20 years ago, we could have had the same conversation about outsourcing customer support. It was rife with problems and limitations, just as AI today is.

But that didn't stop companies from doing it anyway. The C-suite isn't listening to researchers and the general public, they're looking at what their shareholders think they 'should' do. Once McKinsey and BCG whip out their "AI Digital Transformation" powerpoints, it's over.


This points out the most dangerous part of ChatGPT. It's a highly confident idiot. When I told it it was explicitly wrong, it basically responded with "I'm sorry it wasn't clear".


So they invented the ultimate "YES man"


Not ChatGPT, but GitHub Copilot just wrote some surprisingly good documentation for a library I'm working on. All I had to do was change some wording and clean up the formatting; it probably saved me a couple of hours.


>The internet is going to be absolutely filled to the brim with GPT generated junk.

It is already filled with human generated SEO junk that is largely worse than GPT junk. Smart people who like to shit on this stuff forget how fucking dumb most people and things are.


I didn't forget, and yes, of course it's already filled with junk. That's no reason to be excited about doubling down; in the end it's humans providing the impetus either way.


Humans already struggle to tease value out of the noise that is the SEO and ad and marketing filled internet. Making producing junk, noise, and spam easier can only possibly make that worse.

This is like turning on your microwave while playing a shooter on WiFi because "the space my router is in is already full of 2.4ghz noise".


We all (students and staff) use it a lot. It'll write lectures, help think up interesting worksheet ideas, help students code, help them think of things to write about, how to structure assignments.

I needed a lecture on regex; ChatGPT wrote it for me in 30 seconds. Then I asked it for some problems for the students to solve, and it wrote those too.

It may not be relevant to you, but for some of us it is changing the way we work. That hasn't happened since the dawn of the internet, or social media, or mobile phones.

The next generation are using it, and using it a lot.


Ah, good, as if the profession of teaching needed to be hollowed out even more. Now, instead of a carefully assembled and designed curriculum, students will be fed literal autogenerated nonsense.

If you do something in your area of expertise half-way with AI, someone somewhere WILL try to do it all the way and market replacing your expertise with just more black boxes for cheaper, and the people who pay you who aren't experts in your field, WILL be sold on that offer.

Nobody will listen to you when you talk about how it is problematic or wrong or error-prone. Think of how angrily people argue about Tesla's camera-based self-driving, and of all the bad takes about how "it's safer than a real person" despite the lopsided and bad statistics behind that claim. Instead of arguing this with strangers on the internet about an extremely rare outcome that could theoretically kill a random person, you are now having this exact style of conversation with your boss, about how the AI model he says will replace you has an entire class of errors that humans aren't familiar with and don't seem to notice very well, and that will absolutely hurt things, but only rarely, in the future.

Get ready for a future where pretty much everything inexplicably fails 1 out of 50 times and nobody will ever be able to tell you why, they won't be able to fix it, and companies prefer this anyway.


If it's anything like my experiences so far, that lecture is likely to be riddled with plausible but incorrect statements. Any chance you could paste it here?


I know how to code and how regex works (I do teach it after all) so I'm pretty ok with what it wrote.


I have never had ChatGPT tell me anything correct, so consider me "wowed" that people are using it to write lectures and test knowledge.


> The next generation are using it, and using it a lot.

So we should expect the Flynn effect to reverse even harder the coming years? The modern classroom is already making kids dumber, this might actually reverse many of the effects of education and put us back very far.


> Then I asked it for some problems for the students to solve and it wrote those too.

And your students will use it to solve the problems! In the end no one will learn anything and nothing of value was created. Rinse and repeat.

I guess this is my concern - if everyone starts using LLMs for stuff like this, then what's the point of anything?


> It'll write lectures

Because higher education needed more shitty materials for students to parse. Truly a Visionary.


> The next generation are using it, and using it a lot.

Chat.openai.com is how you are using it in class? They haven't licensed it out in some fashion yet have they?


ChatGPT is pretty good at coding. I have been using its LaTeX capabilities, where bugs don’t matter.


OpenAI's actual promises are "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity."

A chatbot that's relatively accurate at interpreting and summarising stuff compared with previous-generation chatbots, and an image generation algorithm that's actually pretty good, are a damned sight more useful than NFTs, which is why I agree with others that the comparison isn't helpful. But I don't think it's realistic to characterise OpenAI and AI enthusiasts in general as under-promising.


I agree, but it can be "wow that's good" without "wow that's going to replace every single creative job in the world by 2024", which a lot of people seem to expect.

It's a bit like thinking autonomous parallel parking will automatically bring you fully autonomous cars soon


I don’t think it’s that people expect it. It’s that some people absolutely despise the idea of artists and have some irrational desire to erase their jobs and automate it all.

These people usually aren’t people signing checks—just angry people online, and oftentimes tied in with political motivations. I don’t think many companies are licking their lips at the idea of firing artists (yet). They’re probably thinking about how they can use this to assist artists to get more done faster and at higher quality.


>It’s that some people absolutely despise the idea of artists and have some irrational desire to erase their jobs and automate it all.

I’ve seen very few people who believe this. On the contrary, most prognostications about Stable Diffusion et al. killing artists’ livelihoods have come from artists themselves, e.g. https://waxy.org/2022/11/invasive-diffusion-how-one-unwillin...


> I talked to Hollie Mengert about her experience last week. “My initial reaction was that it felt invasive that my name was on this tool, I didn’t know anything about it and wasn’t asked about it,” she said. “If I had been asked if they could do this, I wouldn’t have said yes.”

The main issue here is that someone is using her name. Luckily she does have legal protections available to her! She can trademark her name!

> “I feel like AI can kind of mimic brush textures and rendering, and pick up on some colors and shapes, but that’s not necessarily what makes you really hireable as an illustrator or designer. If you think about it, the rendering, brushstrokes, and colors are the most surface-level area of art. I think what people will ultimately connect to in art is a lovable, relatable character. And I’m seeing AI struggling with that.”

Which is spot on! AI is fantastic at rendering (which tends to be everyone's least favorite part of the process) and not so great at everything else.

> “As far as the characters, I didn’t see myself in it. I didn’t personally see the AI making decisions that that I would make, so I did feel distance from the results. Some of that frustrated me because it feels like it isn’t actually mimicking my style, and yet my name is still part of the tool.”

Good art is and will always be about the end result after a series of creative decisions made by the artist.

Professional artists have had assistants that paint large portions of their works since at least the Renaissance. In these cases they rely on their assistants for basic rendering tasks like backgrounds and folds on clothing. What makes it their painting is that they made the creative decisions.


I come from a very creative family, but even I can see the writing on the wall. Join any Discord server for a high-quality image network and you will see incredible art pieces appear all the time, based on someone's description of what they want. Here is a random image I found: https://cloud.nwcs.no/index.php/s/TmzWzBW6fae4pkp

And this is very important: we've only had these tools for a very short time. Imagine what they'll be able to do in 10 years. Or in 6 months.

What about ChatGPT, imagine it in 10 years? Every single task that simply requires reiterating known information can be replaced if desired. That doesn't mean I desire it. But I also know what kind of world we live in. The economy comes first, people second.

That people are angry I can completely understand. I can also understand trying to argue that "this means nothing", and "its a grift", like someone working in oil hearing about climate change. Personally I am in awe that this is possible, but also sad at what will inevitably happen.


I’m an artist and I’m not worried at all, because I see more potential in the hands of artists than I do amateurs with zero skill.

AI translation has been around for decades now. It’s pretty damn good these days. But translators still find plenty of work and most of them will tell you that they use AI translation to improve their work efficiency. As with any job, the last 10% of work is always the hardest. AI can get a pretty good “sketch” of what someone wants, but a real artist can polish out the details and make it even better.


The thing to realize is that this means translators are being replaced by AI. If a translator can cut time per translation by 50% because they're just doing a proofread at the end, they can do twice as much work, so half as many translators are needed.


In translation AI has replaced mainly use cases where a human translator would never have made financial sense. It replaced crappy automated translation.


People were cheering for automation in the 60s, saying we wouldn't have to work more than 3 hours a day by the 80s, I think people should reboot their crystal balls before prophesying the end of artists


The same thing will happen to artists that happened to factory workers with the rise of factory automation: most of their jobs will go away, and only a very few artists will be able to make a cent off their work. It will be worse than the version of that problem that exists today, since now there won't even be a skill barrier to filter people out.

The artists who currently make x$ a year off their work will lose that income and have to find new careers. Then a giant conglomerate will buy all the companies that create digital works, and its PR arm will tell us how much better off we all are now that art is cheap and automated, and once again we will see individual income stagnate for fifty years as GDP doubles again and wealth inequality reaches even stupider heights.


I wish history didn't repeat like that this time. Oh well.


The main driver is technological advancement and the generated value is _obviously_ net positive.

Not saying that there aren't issues that need to be discussed, nor am I saying that there isn't any (unnecessary) hype. But the comparison to Web3 is a stretch.

The author addresses this:

> One last thought -- don't overindex on the web3 <> LLMs comparison. Of course web3 was pure hot air while LLMs is real tech with actual applications -- that's not the parallel I'm making. The parallel is in the bubble formation social dynamics, especially in the VC crowd.

So aren't they just saying "hype is hype"?

(edited an incomplete sentence)



