moofight's Hacker News comments

Yes, and it's quite challenging. For instance: https://sightengine.com/detect-ai-generated-images


Sightengine | Paris, France + Remote | Multiple roles | Full time

At Sightengine (https://sightengine.com) we build AI models for Trust & Safety, including multi-modal content moderation and abuse detection.

We are hiring for multiple roles. We are currently growing the Computer Vision team, looking for software engineers / ML engineers with solid experience.

https://sightengine.com/careers


Given a random set of realistic-looking real and AI images, we have found that humans usually score in the 65-80% accuracy range. You can give it a try here: https://sightengine.com/ai-or-not


I was pretty dead on with photos of people. Especially if they're in color.

And it's not just a hand thing. There's often an element of surreal excess or a kind of uncanny valley/plasticky thing going on. If I had to point something out, it would be skin. AI seems to be bad at generating skin; it has a slightly cartoony look to it. If I were to venture a guess, it's because of the number of photos out there filtered to shit.

I was the worst at macro(?) landscape photography. I think that's what it is. Whatever it is when you essentially take a picture from far away, but zoom in and focus so the foreground and background are both in focus. That's close to 50/50.


Yes, zooming in on skin usually helps a lot to recognize raw GenAI outputs.

The examples chosen in this test were not collected to be very adversarial, and no additional processing was done.


There are many aspects to this, but it seems that the most lasting and strong effects are due to visual content (especially raw content, violence...) rather than text (even though text can be violent).

Which is why automated image/video moderation solutions (such as Google Vision, AWS Rekognition, Sightengine.com, Hive) will continue to grow. Not only because they are cheaper/faster, but because they are becoming a necessity, or at least as a first filter to weed out the "worst" content.


At $previousJob I had some tangential contact with professionals who track child pornography, trying to identify and free the kids (people involved in catching https://en.wikipedia.org/wiki/Christopher_Paul_Neil). They felt that automation was of little help for what they were doing, and that every image had to be looked at by at least one human (most of the images by more than one). They had a few tricks (apparently viewing the images in B/W helped lessen the trauma), but they did not find value in the automated tools we tried to build to help them.

Now, they felt much more empowered than Facebook's moderators: they kept going because the goal was to stick cuffs on the wrists of the guys who were doing this and get those kids away from them, and they could put up with all the rest for that goal. They were treated as rockstars by the people they interacted with, because they were the ones who got kids away from the predators. They had frequent opportunities to take breaks and could set their own schedule, driven only by the guilt that the longer they delayed, the more time the kids spent in the predators' hands.

Ultimately, feeling empowered to make a difference in the world is key, and if Facebook treated screening as an important job and gave their moderators more power to set their own working conditions I suspect that it would improve their mental health by quite a bit.


Good point about empowerment.

I hope they are investing in an army of shrinks/psychologists/sociologists to study, improve, and supervise these centers, because this stuff is not going away just by deleting content.


[flagged]


I used to take down pedophile rings online with a few associates. This was strictly a black-hat endeavor and I never had to look at anything. My motivation was that it's fun to use these skills to ruin someone's day (or life, in this case), but it's only really moral to do this sort of thing against people like that.

I once found a confirmed child porn possessing target with a smart home I was able to access. That guy must have thought he was in a Black Mirror episode.


That seems like a bad faith accusation with no evidence or proof for an organization focused on catching child predators.


They told us that robots would save humans from doing dangerous work in hostile environments. Who knew the danger and hostility would be entirely psychological!?


We just saw Tumblr try that and discover that trying to automate it can destroy your platform.

The problem is that context is even more important in visual content than in textual content, and we still don’t have any algorithms that can parse context as successfully as humans can.


It seems like you would essentially need a general AI to detect stuff like CP. There are (non-digitized) photos of me taking a bath as a small child that were taken by family. If I stumble across one while sifting through photos at my house, it's an innocent document of my childhood. If they're being passed around some CP forum, it changes the nature of the images quite a bit. We're a long way from having an algorithm that understands why it matters who is holding a photo.


Disrupt their appeal and outreach by:

- no longer showing the faces / names / manifestos of suicide bombers in the media

- showing the people who stand up, who demonstrate, who fight for our values rather than showing the panic, chaos, people running for their lives

would that help?


What would help is to stop making such a giant fuss about it every time something happens. The media just "like" these kinds of events because they make emotions run high.

In reality, terrorism accounts for less than 0.01% of premature deaths, yet nations spend orders of magnitude more surveilling their own citizens and fighting stupid wars (to no apparent avail) than they spend fighting various other things that kill more than a thousand times (> 1000x) more people.

Here's a nice chart visualizing the disproportional response: https://i1.wp.com/thinkbynumbers.org/wp-content/uploads/2008...


>In reality terrorism accounts for less than 0.01% of premature deaths,

Sure, death by terrorism is only a blip. But the amount of money spent on spreading terrorist ideology is insane.

The problem is, our ally, Saudi Arabia, is spending billions on Wahhabist terrorism propaganda, aka Petro-Islam. In fact, money trails show the KSA funded 90% of Wahhabist terrorism propaganda (Petro-Islam) around the world through mosques and literature.

From Wikipedia:

>Wahhabism has been accused of being "a source of global terrorism", inspiring the ideology of the Islamic State of Iraq and the Levant (ISIL), and for causing disunity in Muslim communities by labelling Muslims who disagreed with the Wahhabi definition of monotheism as apostates (takfir) and justifying their killing. It has also been criticized for the destruction of historic mazaars, mausoleums, and other Muslim and non-Muslim buildings and artifacts.

>Saudi Arabia is called the "cradle of Wahhabist Terrorism". In fact, Saudi Arabia funded an estimated "90% of the expenses of the entire faith [wahhabism]", throughout the Muslim World, according to journalist Dawood al-Shirian.

>It extended to young and old, from children's madrasas to high-level scholarship. This spending has done much to overwhelm less strict local interpretations of Islam, according to observers like Dawood al-Shirian and Lee Kuan Yew, and has caused the Saudi interpretation (sometimes called "petro-Islam") to be perceived as the correct interpretation – or the "gold standard" of Islam – in many Muslims' minds.

>The Salafi movement is often described as being synonymous with Wahhabism.


Then you change the media from capitalist goals to propaganda.

While everyone agrees that media that cares for views and nothing else can be nasty, it's the only 'objective' measure of what media should show that we can get. When media editors start using their influence for (in this case, very noble) political goals, we find ourselves at their mercy.

There's a constant theme in modern democracy: when an institution tries just to be a projection of its customers' wishes, it can reveal that those wishes aren't that good. Like Airbnb hosts being racist - is it a problem with individual hosts, or should Airbnb be responsible? Same with media and views: if people react better to sensationalism and terrorist manifestos, should the media filter our disgusting desires, like a big caring brother that knows what's best for us? Or should it do whatever we want, showing us all the gore and ugliness?

I don't have the answers, and most importantly, I don't think I have a logical framework for these problems. Do you?


> - no longer showing the faces / names / manifestos of the suicide bombers on media

The UK authorities tried that approach through the 1980s regarding the situation in Northern Ireland; PM Mrs Thatcher's infamous "oxygen of publicity" policy was a response to a general feeling that the media were advertising for the terrorists.

However it was worse than ineffective; no-one receiving their primary news of the situation from TV was likely to volunteer for action, and it became symbolic of the Government's seeming powerlessness, that they had to hide the bad news behind censorship.

Aside: I remember being disappointed when hearing Gerry Adams' actual voice for the first time, having heard him dubbed by an actor throughout my childhood. The actor sounded more authoritative!


Self-censoring the media seems completely wacko to me; it's just going to fuel conspiracies and increase the reach of far-right ideology.

Keep calm and keep going... which also means keep reporting.


In my opinion, rather than a terrible caption, a standard "keep calm and carry on" would be much more hurtful to the terrorists' aims than spreading the news and the "terror" (which is what they want...).


The underlying tech is the following:

- reprogram cells (could be any cells, here they are considering skin cells as they are easy to get) to become embryonic stem cells capable of growing into different kinds of cells

- use signaling factors that occur in nature to guide those stem cells to become eggs or sperm

This has been done with mice.


Did it "work" with mice? I.e., were the created eggs and sperm able to combine and start multiplying when put inside the appropriate part of a mouse?

That would be great, however the backlash from the more... conservative population will be equally huge, I guess :)


The work by Hayashi published last year had derived precursor egg cells, but the sperm was from normal mouse testes as usual. As described in the article, they did successfully create healthy offspring, though the success rate was very small.


tl;dr

- this is not a Google announcement but a Google patent. Not sure if this will be implemented

- the idea would be to use two devices at the same time to improve Google Assistant's understanding of voice commands (reduce transcription mistakes, "hear" the difference between "of" and "off", etc.)


Still quite interesting if it becomes a reality.

Let's see what others think.


Let's say I have an EBS volume of 500GB with 300GB of data. What happens if I mistakenly resize the EBS volume to 200GB? Do I get an error message or does part of my data get wiped out?


Good news: you won't lose any data from resizing your EBS down.

Bad news: that's because you can't make your EBS smaller.

  > You can now increase volume size, adjust performance, or change the volume type while the volume is in use. 
Note the absence of 'increase OR decrease volume size'.


Yes, you are absolutely right. "EBS volumes can only be increased, not decreased".
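To make the asymmetry concrete, here is a rough sketch of the relevant CLI calls (the volume ID, sizes, and device name are hypothetical placeholders, not from the announcement):

```shell
# Grow an in-use EBS volume to 600 GB (volume ID is a placeholder):
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 600

# Track the modification until it reaches "optimizing" or "completed":
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0

# The block device is now larger, but the filesystem still has to be grown
# separately (ext4 shown; XFS would use xfs_growfs):
sudo resize2fs /dev/xvdf

# Shrinking is simply not supported: asking for a smaller size is rejected
# by the API with an error instead of truncating your data.
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 200
```

Note that growing the volume and growing the filesystem are two separate steps; EBS only ever sees the block device.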


There are third party solutions that support automatic/transparent rightsizing - both increase/decrease (FittedCloud).


And I think that's fair. Decreasing would be way too risky.


I feel there are better ways to protect your customers from data loss than forbidding a potentially destructive action.

I'm quite happy to lose data, or manage my data's physical location on disk, and do online decreases… but I can't!


I suspect it's the underlying systems that aren't good at handling online decreases. But the good news is, if you're happy to lose data, you can just delete the volume and create a smaller one.

(Yes I know that's probably not what you meant, but it does highlight the question of what exactly is non-valuable data?)


I'm too lazy to attach new storage, sync data, and swap their mount points in place.

  > what exactly is non-valuable data?
Caches, mirrors, backing volumes for redundant data stores or processing infrastructure that indicates to try again on another node on failure.
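For what it's worth, the manual shrink path described above (attach new storage, sync data, swap mount points) can be sketched roughly like this; all IDs, device names, and paths are made up for illustration:

```shell
# Create and attach a smaller volume in the instance's availability zone:
aws ec2 create-volume --size 200 --availability-zone us-east-1a --volume-type gp2
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 \
    --instance-id i-0123456789abcdef0 --device /dev/sdg

# Format it, mount it, and copy the data across, preserving attributes:
sudo mkfs.ext4 /dev/xvdg
sudo mkdir -p /mnt/new
sudo mount /dev/xvdg /mnt/new
sudo rsync -aHAX /data/ /mnt/new/

# Swap the mount points (and update /etc/fstab to match):
sudo umount /data /mnt/new
sudo mount /dev/xvdg /data
```

Tedious, as the parent says, but it is the only way to end up with a smaller volume short of restoring from a snapshot.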


For those who don't have the time / capacity to develop this internally, and who feel they need to moderate visual content, we have a SaaS offering. We flag offensive images and videos (including live-streams in private beta) using Deep Learning.

We are Sightengine (https://sightengine.com)


Sightengine | Junior Developer | Paris France | INTERN, https://sightengine.com

Sightengine is an Artificial Intelligence company that helps developers and businesses Moderate and Understand user-submitted images and videos.

Our powerful technology is built on proprietary state-of-the-art Deep Learning systems and is made available through simple and clean APIs.

We are looking for a Junior Developer to help us build and ship small projects and tools that will help our community of users and customers make the most of their content.

This is a great opportunity to get exposed to multiple challenges - both front-end and back-end - while getting hands-on experience with popular languages such as Python, Go, Node.js, PHP.

Apply here: https://sightengine.com/apply?position=2p_a1fkeA

