Hacker Newsnew | past | comments | ask | show | jobs | submit | more bun_at_work's commentslogin

That's true - their revenue from VR/MR hardware is less than 2.5% of revenue though. Meanwhile, ads make up the other 97.5% of their revenue. 97.5% of everything Meta does is to hoover up the data and sell it. It's effectively their entire business, while the VR/MR stuff is a little side project.


> Users must take responsibility for knowing how software works and the motivations of its creators.

This doesn't seem reasonable. Let's try to apply the logic elsewhere:

> Patients must take responsibility for knowing how medicine works and motivations of its creators/prescribers.

Requiring everyone to have deep technical knowledge about anything they use would prevent everyone from using more than the things they are experts in. So, there needs to be either a technological regression, or something to help defend users from unethical practices. The only entity really in a position to do that is a government, for better or worse.


> Patients must take responsibility for knowing how medicine works and motivations of its creators/prescribers.

This is true. If you blindly trust whatever your doctor says, you are going to have a bad time in the current medical system. Doctors are incentivized to push pills because they get kickbacks from the pharma industry. This is pretty well known (https://www.propublica.org/article/doctors-prescribe-more-of...)

When it comes to elective surgeries, prescriptions, etc., you need to do your own research into how these things work and make an informed decision for yourself. Ultimately, if you're an adult, you are responsible for your own body and your own equipment.

It's not a matter of deep technical knowledge, it's shallow technical knowledge and political knowledge of what institutions are trustworthy.


side-step flamebait like Winnie the Pooh


Oh, bother


I really think the value of this for Meta is content generation. More open models (especially state of the art) means more content is being generated, and more content is being shared on Meta platforms, so there is more advertising revenue for Meta.


Meta makes their money off advertising, which means they profit from attention.

This means they need content that will grab attention, and creating open source models that allow anyone to create any content on their own becomes good for Meta. The users of the models can post it to their Instagram/FB/Threads account.

Releasing an open model also releases Meta from the burden of having to police the content the model generates, once the open source community fine-tunes the models.

Overall, this move is good business move for Meta - the post doesn't really talk about the true benefit, instead moralizing about open source, but this is a sound business move for Meta.


I am not sure I follow this.

1. Is there such a thing as 'attention grabbing AI content' ? Most AI content I see is the opposite of 'attention grabbing'. Kindle store is flooded with this garbage and none of it is particularly 'attention grabbing'.

2. Why would creation of such content, even if it was truly attention grabbing, benefit meta in particular ?

3. How would proliferation of AI content lead to more ad spend in the economy? Ad budgets won't increase just because there is more AI content.

To me this is a typical Zuckerberg play: attach Meta's name to whatever is trendy at the moment, like the (now forgotten) metaverse, cryptocoins, and a bunch of other failed stuff that was trendy for a second. Meta is NOT a Gen AI company (or a metaverse company, or a crypto company), as he is scamming (more like colluding with) the market to believe. A mere distraction from slowing user growth on ALL of Meta's apps.

ppl seem to have just forgotten this https://en.wikipedia.org/wiki/Diem_(digital_currency)


Sure - there is plenty of attention grabbing AI content - it doesn't have to grab _your_ attention, and it won't work for everyone. I have seen people engaging with apps that redo a selfie to look like a famous character or put the person in a movie scene, for example.

Every piece of content in any feed (good, bad, or otherwise) benefits the aggregator (Meta, YouTube, whatever), because someone will look at it. Not everything will go viral, but it doesn't matter. Scroll whatever on Twitter, YouTube Shorts, Reddit, etc. Meta has a massive presence in social media, so content being generated is shared there.

More content of any type leads to more engagement on the platforms where it's being shared. Every Meta feed serves the viewer an ad (for which Meta is paid) every 3 or so posts (pieces of content). It doesn't matter if the user dislikes 1 in 5 posts or whatever; the number of ads served still goes up.


> it doesn't have to grab _your_ attention

I am talking in general, not about me personally. No popular content on any website/platform is AI generated. Maybe you have examples that lead you to believe it's possible on a mass scale.

> look like a famous character or put the person in a movie scene

what attention-grabbing movie used gen-AI persons?


i'd say reddit is a pretty great example, twitter, even instagram or facebook comments, where bot generated traffic and comments are a norm.

you have plenty of bot or "AI/LLM" generated content, that is consumed -- up to and including things like "news".

as for the comment about movies, i'm confused -- CGI has been a thing for a long time, and "AI" has been used to convey aging or how a person might look given some conditions, on screen, as well as a whole host of things.

while this might not be an LLM, it is certainly computer generated, predictive, and artificially generated.


I think the biggest part of it is just that they were behind but also betting on it. This allowed them to gain a lot of traction and support and be a notable player in the race whilst still retaining some control. Chances are, if someone is going to have a front-row seat monetizing this, it's still them.


AI moderators too would be an enormous boon if they could get that right.


It would be good, but the cost per moderation is still really high for it to be practical.


Creating content with AI will surely be helpful for social media to some extent, but I think it's not that important in the larger scheme of things; there's already a vast sea of content being created by humans, and the differentiation is already in recommending the right content to the right people at the right time.

More important are the products Meta will be able to make if the industry standardizes on Llama. They would have the front seat, not just in access to the latest unreleased models but also in setting the direction of progress and what the next-gen LLM optimizes for. If you're Twitter or Snap or TikTok, or otherwise compete with Meta on product, then good luck trying to keep up.


> Meta makes their money off advertising, which means they profit from attention. This means they need content that will grab attention

That is why they hopped on the Attention is All You Need train


This is a great point. Eventually, Meta will only allow Llama-generated visual AI content on its platforms. They'll put a little key in the image that clears it with the platform.

Then all other visual AI content will be banned, if that is where legislation is heading.


> "We want to make sure you understand that if you don't sign, it could impact your equity," one rep told an outgoing employee, according to Vox.

The article is doing a lot of click-bait stuff here. The quote, without the source, is shown at the top of the article, near Sam Altman's face and a title referencing Sam Altman. However, the quote is not from Sam Altman, which makes its placement a bit disingenuous.

I am not invested in defending Altman, but this type of journalism is trash clickbait and a far bigger issue than a tech company threatening employees' vested equity unless they comply in some sort of non-NDA NDA scheme.


That's looking purely at the headline and a single quote that they chose for it. There is a lot more if you read the actual article.

They make a case that Altman signed documents where it's quite clear the intention is to be able to claw back vested equity. And that quote in the headline isn't meant to try and attribute it directly to Altman -- but to show just how well-known the policy was.


I read the whole article, that's where I found the quote attribution. Like I said, I'm not here to defend Altman, or attack him. It's just disingenuous journalism - or click-bait journalism.


Until people complain that Apple is being anti-competitive by not making vision tracking open, or allowing third-party eye-tracking controls, etc. etc.


I have a similar view to you and not much to add to your comment, other than to reference a couple books that you might like if you enjoyed 'Thinking, Fast and Slow'.

'The Righteous Mind' by Jonathan Haidt. Here, Haidt describes a very similar 2-system model he describes as the Elephant-rider model.

'A Thousand Brains: A New Theory of Intelligence' by Jeff Hawkins. Here Jeff describes his Thousand Brains theory, which has commonality with the 2-system model described by Kahneman.

I think these theories of intelligence help pave the way for future improvements to LLMs, so I just wanted to share.


Don't people often fall into the "vaccines cause autism" trap from Google?


> OK I have a table in postgresql and I am adding a trigger such that when an insert happens on that table, an insert happens on another table. The second table has a constraint. What happens to the first insert if the second insert violates the constraint?

How can I get help with this now?

Google result 1: https://stackoverflow.com/questions/77148711/create-a-trigge...

Google result 2: https://dba.stackexchange.com/questions/307448/postgresql-tr...

Like 90% of my questions like this are going to ChatGPT these days.

I can figure it out via the docs, but ChatGPT is SO convenient for things like this.
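For questions like this, a tiny self-contained experiment settles it. Below is a minimal sketch using SQLite through Python's sqlite3 as a stand-in (the table and trigger names are made up for the demo; PostgreSQL behaves the same way here, since a trigger runs inside the same transaction as the statement that fired it): when the trigger's insert violates a constraint, the original insert fails and is rolled back too.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (id INTEGER PRIMARY KEY);
CREATE TABLE b (id INTEGER PRIMARY KEY,
                val INTEGER NOT NULL CHECK (val > 0));
-- Mirror the setup from the question: an insert on a triggers an insert on b.
CREATE TRIGGER a_to_b AFTER INSERT ON a
BEGIN
    INSERT INTO b (val) VALUES (-1);  -- always violates the CHECK constraint
END;
""")

try:
    conn.execute("INSERT INTO a (id) VALUES (1)")
except sqlite3.IntegrityError as e:
    print("first insert failed:", e)

# The triggering insert was rolled back along with the trigger's insert.
print(conn.execute("SELECT COUNT(*) FROM a").fetchone()[0])  # 0
```

The same experiment translates directly to a scratch Postgres database with a trigger function and a CHECK constraint on the second table.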


Well, were it possible, I'd say go back in time and study your tools so that you're not spending the journeyman period of your career ricocheting between tutorials and faqs.

Failing that, read the documentation. Failing that, stand up a quick experiment.

Somehow, we survived before ChatGPT and even before saturated question boards. Those strategies are still available to you and well worth learning.


The "good old days" weren't always good. I'm tired of limiting myself to the information I have at the top of my head; the LLMs are really helping me be creative and stretch out to do things that are just beyond my bread and butter, or things that I do infrequently.


Exactly this. I -could- become an expert in the intricacies of every tool I touch, or I could use ChatGPT and move on to solving the next problem.


LLMs are the great equalizer of our time.


I see your point but the world changes so fast. Back in my day you just needed to learn C, understand algorithms and so on and then you could get deeper in an area or two. Today, you need to understand and be able to proficiently use so many technologies that you can feel lost.

And this is what happens when, say, you lose a job you've been doing for 10-15 years. You need to re-learn the world, and a lifetime is not enough to do it the way we used to.


Yeah, not all of us have memories that work like that. I’ve studied my tools but often forget the little details. My productivity has increased since GPT has come out.


Stuff changes too. There are things that are worth learning and being fluent in: regex, SQL. But even then, there are always edge cases or weirdness that someone has solved before. LLMs are just much better for this than wading through forum posts.


We also survived before the internet and indoor plumbing and fire, and yet life is so much better now.


I'd go straight to the experiment, create the tables on a local postgresdb and try to get it to work.


Agreed that chatgpt is great for this kind of thing - a coworker is working on this GPT specifically for postgres https://chat.openai.com/g/g-uXYoYQEFi-sql-sage

But with it being down, my biggest advice would be to try it and see. Something like dbfiddle.uk is perfect for these kinds of tests.


So ChatGPT says -- to me, a minute ago, ymmv -- it will roll back the first insert. Now what? Do you believe it? Cool. I wouldn't. I would confirm its claim, either by Googling or by trying it myself.

Also, when I asked it "what if I use PostgreSQL's non-transactional triggers", which I thought I just made up, it told me it wouldn't roll back the first insert: Non-transactional triggers are executed as part of the statement that triggered them, but they don't participate in the transaction control. So now I don't know what to think.


> What happens to the first insert if the second insert violates the constraint?

Try it and see? Why do you need an AI to help with this?


Why do you use an internet search engine when you can walk to the library?


The question at hand is pretty easy to test manually and the information you get is much more useful. You will get to see the exact behavior for yourself, can easily build on the test case as related questions come up, and you know the information you are getting is correct rather than a hallucination.

Copying information from ChatGPT is the newer version of blindly copying answers from StackOverflow. It often works out OK and at times makes sense to do, but it can easily lead to software flaws, and it doesn't do much to build a better understanding of the domain, which is necessary to solve more difficult challenges that don't fit into a Q&A format well.


In my experience, I encounter more issues and waste more time when I fiddle on my own and try stuff, compared to doing the same but using ChatGPT.

There is a lot of knowledge that I don’t want to have expertise with. Sure, I could carefully read the PostgreSQL documentation about triggers and implement it myself, or I could get the job done in a few minutes and procrastinate on HN instead.


> The question at hand is pretty easy to test manually and the information you get is much more useful.

This approach can be hazardous to the health of the product you're building. For example, if you take this approach to answer the question of "what happens if I have two connections to a MySQL database, start a transaction in one of them and insert a row (but don't commit) and then issue a SELECT which would show the inserted row", then you will see consistent results across all of the experiments you run with that particular database, but you could easily end up with bugs that only show up when the transaction isolation level changes from how you tested it.

Whereas if you search for or ask that question, the answers you get will likely mention that transaction isolation levels are a thing.

You might also be able to get this level of knowledge by reading the manual, though there will still be things that are not included in the manual but do come up regularly in discussions on the wider internet.
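As a toy illustration of how the observed answer shifts with isolation settings, here is a rough sketch using SQLite's shared-cache mode from Python (not MySQL, and SQLite's locking model differs; the database name `isodemo` and table `t` are made up): the same SELECT against an uncommitted row either errors out or does a dirty read, depending on the reader's isolation pragma.

```python
import sqlite3

# Two connections to one shared-cache in-memory database, a stand-in for
# the two-connection MySQL experiment described above.
uri = "file:isodemo?mode=memory&cache=shared"
writer = sqlite3.connect(uri, uri=True)
reader = sqlite3.connect(uri, uri=True)

writer.execute("CREATE TABLE t (x INTEGER)")
writer.commit()

# Insert without committing: the row is pending in an open transaction.
writer.execute("INSERT INTO t VALUES (1)")

# At SQLite's default isolation the reader cannot even read the table
# while the writer holds its lock...
try:
    reader.execute("SELECT COUNT(*) FROM t").fetchone()
except sqlite3.OperationalError as e:
    print("default isolation:", e)  # typically "database table is locked"

# ...but with read-uncommitted enabled, the same query does a dirty read.
reader.execute("PRAGMA read_uncommitted = 1")
count = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print("read-uncommitted sees", count, "row(s)")
```

An experiment run at only one setting would have shown just one of these outcomes, which is exactly the trap described above.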


> you could easily end up with bugs that only show up when the transaction isolation level changes from how you tested it.

In fact it's very likely you would. You have to understand the transaction semantics and test with all the isolation levels and database platforms you intend to support. If you don't know this, you need to learn more about relational databases before building a product on top of them.


> If you don't know this, you need to learn more about relational databases before building a product on top of them.

And now extend this principle to everything in the stack.


You should at least have some basic ideas about your stack.


Agreed. But also if you suddenly find that the precise behavior of one of the parts of your stack matters, you would be well advised to search the internet about how exactly that bit works in practice and whether there are any nonobvious footguns in addition to your empirical testing and the stuff that the manual claims is true.


You can also, uh, just try it with a trivial test and see what happens.

