pera's comments | Hacker News

Meanwhile MI6 offers an onion service for secure communications:

mi6govukbfxe5pzxqw3otzd2t4nhi7v6x4dljwba3jmsczozcolx2vqd.onion

https://www.youtube.com/watch?v=OYB129pGq0k


Yeah so that they can MITM it.


Please provide us with:

As many personal details as possible



Remind me never to look at Twitter replies again; they're by far the most counterproductive threads I've seen.


70% is bot traffic, the rest is brain-damaged, terminally online human shells.

It really isn't representative of the real-world average human's intelligence and capacity to debate or even discuss ideas.


I hope the bots get the vote eventually. /s


In a sane world I would have agreed, but in the US at least I am not certain this is still true: in Bartz v. Anthropic, Judge Alsup expressed the view that the work of an LLM is equivalent to that of a person. See around page 12, where he argues that a human recalling things from memory and AI inference are effectively the same from a legal perspective.

https://fingfx.thomsonreuters.com/gfx/legaldocs/jnvwbgqlzpw/...

To me this makes the clean-room distinction very hard to assert; what am I missing?


If a human reads the code and then writes an implementation, that is not clean room, and an LLM would in most cases be equivalent to that.

Clean room requires that the person writing the implementation have no special knowledge of the original implementation.


Could you share a source for this definition? As far as I know it means not having access to the code during the implementation of the new project.


Clean room reverse engineering always involves a wall of some kind between the person figuring out how the tech works, and the person creating the new tech. Usually the wall is only allowing one-way communication via a specification of the behavior of the old tech, perhaps reviewed by a lawyer to ensure nothing copyrighted leaks across.

https://en.wikipedia.org/wiki/Clean-room_design

https://en.wikipedia.org/wiki/Chinese_wall


Thanks for those links. I read a couple of the cited comments on those cases and still cannot find any mention of restrictions on engineers who at some point in the past had access to the code.

Nordstrom Consulting v. M&S Technologies, which is possibly the most relevant case, describes a process for developing under a clean-room environment, and from what I understand it seems to focus on the isolation of engineering teams and resources (except when required for interoperability). I did not find any mention of assessing the cohort of engineers for prior access to the copyrighted material, but if I have missed that please let me know.

I also wanted to say that I am not asking this because I am thinking of starting an unethical license-laundering business; I am only trying to understand what it means to make LLMs legally equivalent to human workers.


That's strange, which OS? I am on Arch and also on 145 and I get the "Ask an AI Chatbot" in the context menu. The settings used to work in the past so I am not sure what's going on.

I believe these are all the settings I have disabled for AI:

browser.ml.chat.enabled

browser.ml.chat.menu

browser.ml.chat.page

browser.ml.chat.page.footerBadge

browser.ml.chat.page.menuBadge

browser.ml.chat.shortcuts

browser.ml.chat.sidebar

browser.ml.enable

browser.ml.linkPreview.enabled

browser.ml.pageAssist.enabled

browser.tabs.groups.smart.enabled

browser.tabs.groups.smart.userEnable

browser.tabs.groups.smart.userEnabled

extensions.ml.enabled

sidebar.notification.badge.aichat

Am I missing anything?
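For what it's worth, a sketch of how that list could be pinned so it survives upgrades: put the prefs in a `user.js` file in the Firefox profile directory, which is re-applied on every startup. This assumes the pref names above are current for FF 145 (they do change between releases), and only covers the boolean ones:

```javascript
// user.js — place in the Firefox profile directory; applied on every startup.
// Pref names taken from the list above; they may be renamed in future releases.
user_pref("browser.ml.enable", false);
user_pref("browser.ml.chat.enabled", false);
user_pref("browser.ml.chat.menu", false);
user_pref("browser.ml.chat.page", false);
user_pref("browser.ml.chat.page.footerBadge", false);
user_pref("browser.ml.chat.page.menuBadge", false);
user_pref("browser.ml.chat.shortcuts", false);
user_pref("browser.ml.chat.sidebar", false);
user_pref("browser.ml.linkPreview.enabled", false);
user_pref("browser.ml.pageAssist.enabled", false);
user_pref("browser.tabs.groups.smart.enabled", false);
user_pref("extensions.ml.enabled", false);
user_pref("sidebar.notification.badge.aichat", false);
```

Note that a pref toggled only in about:config can be flipped back by an update, whereas `user.js` re-asserts these values each launch.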


Seems whatever I had disabled earlier is still disabled on my install of FF 145.

I do have these additional settings.

browser.ml.chat.maxLength=0

browser.ml.chat.prompt.prefix="{}"

browser.ml.chat.prompts.0="{}"

browser.ml.chat.prompts.1="{}"

browser.ml.chat.prompts.3="{}"

browser.ml.chat.prompts.4="{}"

browser.ml.chat.shortcuts.custom=false

browser.ml.linkPreview.longPress=false

browser.ml.modelHubRootUrl="example.com"


As far as I can see, that's it. Or at least I'm not seeing anything else related that I've disabled.


I had to go out. When I'm back home in a few hours, I'll try to look up all I've disabled.


There is a fundraiser for that, organised by their union (IWGB Game Workers):

https://actionnetwork.org/fundraising/support-rockstar-worke...


Why does this look horribly wrong to me? Why does a union need a fundraiser? Shouldn't they have tightened their belts and built up a significant war chest for this? Or collected extra fees from new members?


It does feel wrong, because in our society having access to more financial resources often translates to better representation in the courtroom. This is similar to how donating to organizations like the EFF can help provide justice to those who are not multibillion-dollar corporations.


I really wish they would also let you disable those very annoying modal popups announcing yet another chatbot integration twice a week: my company is already paying for your product, just let me do my work ffs...


That's just, like, your opinion, man. I see it through rose-coloured glasses, as a poem from more naive times, back when some folks still had some hope... This was way before vulture capitalism fucked everything up, you know. Or at least that's how I remember it, but I was like 10.

Not everyone was into this hopeful vision of cyberspace, though; Masters of Doom comes to mind.


You’re right (as someone a bit older but also with rose-tinted glasses).

There was a feeling of hope on the Internet at the time that this was a communication tool that would bring us all together. I do feel like some of that died around 9/11 but that it was Facebook and the algorithms that really killed it. That is where the Internet transitioned from being about showcasing the best of us to showcasing the worst of us. In the name of engagement.


s/Doom/Deception/



Preprint from back in May:

https://arxiv.org/abs/2405.03675


Heh, stockholders are not hallucinating: they know very well what they are doing.


Retail investors? No way. The fever dream may continue for a while, but eventually it will end. Meanwhile we don't even know our full exposure to AI. It's going to be ugly, and beyond burying gold in my backyard I can't even figure out how to hedge against this monster.


Yeah no, I didn't mean retail investors (OpenAI is not publicly traded), but yeah, I do share your concern...

