Hacker News | roflmaostc's comments

Good initiative!

The problem is: publication is based on reputation. Reputation takes time and effort from the entire community.

I feel like modern infrastructure (Google Scholar, AI research tools, LinkedIn, etc.) has helped decrease the importance of high-impact journals such as Nature. Researchers no longer rely on highly curated printed journals in their physical mailbox to stay informed about what's happening. You can just use tools to scrape content much faster.

But still: it can be career-decisive if a researcher lands a publication in a for-profit journal such as Nature.

The CS community has a much nicer publishing pipeline, where most top journals/proceedings are attached to non-profit conferences and the fee is 0 (besides a conference fee).

I wish more fields would work like this: you publish in conference proceedings and give a talk about your paper at the conference.

Researchers are themselves responsible for typesetting, advertising, etc. This and removing for-profit stakeholders can reduce the costs a lot.


A difficulty, too, is that choice of publishing venue is based on visibility and readership. And in my experience, EU-administered projects around scholarly publishing like these are well-meaning, but make baffling choices about focus, organization, and scope that hobble them.

Consider that this is a journal whose scope is defined not by field, but by funding initiatives. It places an astoundingly small emphasis on making research visible: contrasted with most major journals, with websites that might be split between research articles proper and editorial articles, but are still heavily focused on presenting articles, Open Research Europe doesn't have a single non-truncated article title on its front page, and devotes the vast majority of the page to journal administration and self-advertisement. The current lead highlight of PNAS is a section of rotating blurbs about articles, both research and editorial, for example. The current highlight of Open Research Europe is a description of Open Research Europe and logos of associated groups, including a second copy of the European Commission logo, in addition to the one on the top of the page. For that matter, the journal has a three-letter domain name, ore.eu, that it uses entirely to talk about itself, with only a single, small, text link to the journal itself. Why publish at a journal where your research seems to be far down their list of priorities?

With that said, I'm hopeful that CERN taking this over is a good sign. Zenodo is a great asset to the research community, and I feel like CERN is better situated to understand what will make a journal where researchers will want to publish. And I'd note, unlike Open Research Europe, Zenodo's front page is primarily a list of recent uploads, complete with partial abstracts.

>Researchers are themselves responsible for typesetting, advertising, etc. This and removing for-profit stakeholders can reduce the costs a lot.

That can depend on how the proceedings are published. Dagstuhl Publishing, for example, does do some typesetting and proofreading work for proceedings they publish, they just have it arranged in an extremely efficient way (everyone submits LaTeX using their class, so they're mostly fixing mistakes). They also do charge (an extremely small) publishing fee to the conference.


Conference papers should be abolished, because they require international travel, which is getting more difficult every year. 10–20 years ago, when travel was cheap and easy, it was mostly just people from developing countries who could often not attend. Typically because they could not afford it or get a visa in time. But today almost everyone is impacted by wars, international tensions, travel restrictions, immigration policies, and the overall uncertainty.

I've attended three international conferences in the past year. In each of them, there were plenty of people missing. People who would usually have attended but could not, due to issues that did not exist in the 2010s.


Partially agree. However, this problem has existed with scam e-mails since the 90s.

For me the solution is in signed e-mails and signed documents. If the person invites me to an online meeting with a signed e-mail, I can trust that it's really them.

Same for footage of wars, etc. The journalist taking it basically signs the videos and verifies their authenticity. If it is AI generated, then we would lose trust in that person and wouldn't use their material anymore.
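To make the idea concrete, here's a minimal Python sketch of tamper-evident signing. It uses a symmetric HMAC purely as a stand-in for the asymmetric signature (e.g. Ed25519) a real journalist-signing scheme would need; the key and clip bytes are made up for illustration:

```python
import hashlib
import hmac

def sign(footage: bytes, key: bytes) -> str:
    # Symmetric stand-in for a real digital signature: the principle
    # is the same, any edit to the bytes changes the tag.
    return hmac.new(key, footage, hashlib.sha256).hexdigest()

def verify(footage: bytes, key: bytes, tag: str) -> bool:
    # Constant-time comparison, so attackers can't probe byte by byte.
    return hmac.compare_digest(sign(footage, key), tag)

key = b"press-pool-key"           # hypothetical shared key
tag = sign(b"raw clip bytes", key)
```

Any later edit to the clip makes `verify` fail, which is the property the comment relies on. A real deployment would publish a public key so anyone can verify without being able to forge.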


How do you prove the signature isn't fake?

Ultimately ID requires either a government ID service, a third party corporate ID service, or some kind of open hybrid - which doesn't exist.

All of those have their issues.


I think he was referring to a cryptographic signature, possibly using the "web of trust" to get the key. I'm not convinced we need central authority to solve this.

people at my org were gleeful when they learned they could hook LLMs into Slack. Even if we had some reliable, well-used signature system, I think people would just let AI use it to send emails on their behalf.

If the AI age has taught me anything, it's that most people do not care what their output is. They'll put their name on anything, taste or quality does not matter in the least. It's incredibly depressing.

Enshittification never stopped; we just stopped talking about it because it became normal. Quality does not matter anymore. I agree it's depressing, seeing AI slop being pushed and no one even putting in the time or effort to say this is bad and you should feel bad.

That's a different problem though. It's doing it on their behalf, not on behalf of a scammer who's impersonating them.

Until their computer is taken over....

Well we should treat that as their own output. If it's crap, treat it the same way you would if they produced the crap themselves.

> Ultimately ID requires either a government ID service, a third party corporate ID service,

These are valid approaches to the problem, but they are not necessary.

> or some kind of open hybrid - which doesn't exist.

PGP has existed for decades. It doesn't have a great UX, and it isn't used outside of its narrow niches, but it exists and does exactly this.


Picture this: your grandma calls you in a panic, and you tell her, "Drop me your public PGP key so I can verify the signature." PGP is dead outside of niche geek circles exactly because key management is basically an unsolvable problem for the average person.

> PGP is dead outside of niche geek circles exactly because key management is basically an unsolvable problem for the average person

Can this problem be solved with better software?

I believe it can; it's just that the average person doesn't need PGP. No demand for software solving this problem, therefore no software for it.

The problem can be solved with something like a store of known PGP public keys together with their history: where each key was acquired, plus a simple algorithm that calculates trust in the key as a probability of it being valid (or whatever term cryptographers would use in this case).

You could start with PGP keys of people you know, getting them as QR codes offline and marking them as "high trust", and then pull the keys stored on their devices (lowering the trust levels along the way). There are some issues with how to calculate the probability, because when we pull keys from different sources we can't know whether their reported trust levels are independent variables or not, but I believe you can deal with that by pulling the whole chain of transfers of the key, starting from the key's owner and ending at your device.

This is just a rough idea of how it could be done. Maybe other solutions are possible. My point is: the ugliness of PGP is a result of it being made by nerds, for nerds. There is no demand for PGP-like solutions outside of nerd communities. But maybe the LLM-induced corrosion of trust will create demand?
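A toy sketch of the trust-combination step, under the strong assumption (flagged in the comment itself) that the observations are independent; the function name and the example trust values are hypothetical:

```python
from math import prod

def combined_trust(observations: list[float]) -> float:
    # Treat each sighting of the key as an independent estimate that
    # it is genuine; the key is fake only if every observation is wrong,
    # so the combined score is 1 minus the product of the error terms.
    assert all(0.0 <= p <= 1.0 for p in observations)
    return 1.0 - prod(1.0 - p for p in observations)

# One in-person QR exchange (0.95) plus two keys pulled from
# acquaintances' devices (0.6 each):
score = combined_trust([0.95, 0.6, 0.6])  # 0.992
```

The hard part the comment identifies, correlated sources, is exactly what breaks this formula; handling that would require tracking the full chain of transfers rather than multiplying scores.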


PGP works if you vouch for keys in person, both of you are honest and can be trusted to act in good faith when not in person, have good key chain and rotation hygiene, and the private keys can't be exfiltrated.

Yeah, there is no silver bullet solving the problem of trust completely and perfectly. People can lie and we can't make them stop; everything else is just a workaround.

GP's point was that any such system will require a central authority; PGP shows that you don't need one. I didn't claim that PGP is a perfect or good-enough solution, just that it exists and works for some people.

> both of you are honest and can be trusted to act in good faith when not in person

I believe it is not strictly necessary for the scheme to work. It is a limitation of OpenPGP and other implementations that they do not allow converting multiple independent observations of a public key (finding it from different sources, or encountering it used to sign messages) into a measure of trust in the key.

It is not a silver bullet either, but it can alleviate the problem and make it tractable.

The only doubt I have is how this system would stand against multiple actors trying to undermine it, but I still believe you can get something better than nothing, and probably better than a central authority.


The same way security cameras prove that their recordings are authentic and have not been modified: if the video is modified, it will no longer match the signature that was generated with it.

> If the person invites me to an online meeting with a signed e-mail, I can trust that it's really them.

In the interview scenario, generating an email signature is hardly beyond what an AI can do.

You have no prior knowledge of this person or their signature. It's not some government-issued ID; it's in essence just random data unless you know the person to be real.


As with any problem, scale changes its nature.

With cash, you can only steal so much (or have transactions up to a certain size) until you run into geographical and physical constraints. With cryptocurrency, it’s possible to lose any amount.

With humans writing scam emails, you can only have so many of them until one blows the whistle. With LLMs, a single person can distribute an arbitrary amount.

At some point, quantity becomes a new quality, and drawing a parallel becomes disingenuous because the new quality has no precedent in human history.


> (or have transactions of up to certain size)

And by that you mean tens of millions to billions right? Bank transfer scamming/fraud is a thing.


The highlighted parallel is usually drawn between cryptocurrency and cash, not between cryptocurrency and banks. With both cash and cryptocurrency, as is the idea behind the analogy, 1) there’s no intermediary and 2) once it’s gone, it’s gone. Obviously, the banking system is not immune to fraud (not sure why you think I made that claim, unless your definition of “cash” includes electronic transfers), but banks and/or payment systems can (and do) resolve these cases and have certain KYC requirements.

> If it is AI generated, then we would lose trust in that person

You are assuming that only you can generate fake AI videos of yourself.


OP was talking about journalists attesting to the authenticity of video they produce

Spam emails in the 90’s don’t come remotely close to the operations people can set up by themselves with AI now. It doesn’t even compare.

There are people hosting agents online to talk to other agents etc. on their behalf. How difficult is it to just instruct such an agent to do the tasks you mentioned? You're assuming it's done by "bad actors" while it's most likely just going to be done by "everyone" that knows how to do it.

I mean, emails were and still are a huge security risk. Sometimes I'm more scared of employees opening and engaging with emails than I am of anything else.

It doesn't surprise me it happens within the Elsevier ecosystem. Elsevier has a long tradition of scientific misconduct and scientifically immoral behavior (see Wikipedia).

The operating margin of Elsevier is around 40%, which is huge! In the end it is mostly paid with taxpayer money.

Personally, I never review or publish with Elsevier.


You are in very, very good company. The British mathematician Timothy Gowers also famously boycotts Elsevier:

https://gowers.wordpress.com/2012/01/21/elsevier-my-part-in-...


Huge numbers of academics have signed up to the Elsevier boycott, see http://thecostofknowledge.com/


I am skeptical that this is a problem isolated to Elsevier. Given that the LLM craze now prioritizes open access, https://andrewpwheeler.com/2025/08/28/deep-research-and-open..., it would not surprise me if people start gaming MDPI in the same way, for example.


MDPI is gamed by design. I think that while Elsevier is awful, MDPI is even worse, with hundreds of special issues where you are guaranteed to land a publication in journals with quite a nice IF (which is inflated by publishing a large proportion of reviews and less original research).


I wonder if the term "published" as a binary distinction applied to a piece of writing is a term and concept that is reaching the end of its useful life.

"Peer reviewed" as a binary concept might be as well, given that incentives have aligned to greatly reduce its filtering power.

They might both be examples of metrics that became useless as a result of incentives getting attached to them.


Both metrics are supposedly binary but in reality have always depended heavily on surrounding context. Archival journals have existed all along. Publication is useful as an immutable entry in the public record made via a third party. Blog posts have a tendency to disappear over time.


I'm certain that the comment you responded to never claimed that it was "isolated to Elsevier" in the first place, nor is it very compelling to speculate about how in the future something even worse might emerge.

Right now Elsevier is by far the biggest offender, and also happens to be the topic of the conversation and the article.


Exactly. Elsevier is a dominant company. Of course it's going to have a huge share of anything that goes into journals. They probably also have a huge share of the Nobel prize winning papers too.

That being said, I'm happy to encourage open access.


This is one of the reasons why universities in Germany were able to collectively negotiate better open-publishing deals with Wiley and Springer, while Elsevier flat out refused to agree to any better terms for three years.

(See Project DEAL: https://deal-konsortium.de/en/agreements/elsevier)


Happened in other countries as well, see e.g. https://www.timeshighereducation.com/news/elsevier-boycott-l...


I’m not sure why I’ve never really concerned myself with Elsevier, but that makes a lot of sense, knowing a rather vile and slimy con artist snake that works/ed for them.


I remember recent discussions about their somewhat rudimentary physical server infrastructure. I would be a bit scared for a serious large project.

https://news.ycombinator.com/item?id=46132901


This can have pros and cons. They will get much more vCPU per dollar on bare metal. And they can develop great operational discipline if they do it right.

On the downside, I don’t see them yet taking ops seriously. They are getting a lot of attention, but not yet establishing SLAs (at least not publicly). And their donations don’t seem to be scaling to the continued and expected demand.


I am not so skeptical about AI usage for paper writing, as the paper will often be public days later anyway (on pre-print servers such as arXiv).

So yes, you use it to write the paper but soon it is public knowledge anyway.

I am not sure if there is much to learn from the draft of the authors.


lol, at 0:15 someone is literally testing the vapes with their mouth. I hope they don't do that all day long

Later at 6:45 they show more people testing them


It’s hard to know for sure what’s acceptable when it comes to working conditions in China. The information we get is incredibly limited. Most of what makes it through is propaganda.

That said, it wouldn’t surprise me if he does it all day long, 6 days per week.


They are; there's a video on YouTube you can find where they interview someone with that job, and they test 10,000 a day. Then they mention that they go home and vape some more.


Isn't that what happens in Europe with most rooted phones and banks too? At least I can remember my banking apps stopped working.


There are no laws banning this in any European country that I'm aware of, except maybe Hungary? It's just banks being stupid, consumer-hostile, and anti-competitive.


Well, I've built a bunch of mobile banking apps, and we did detect if the phone was rooted, was in dev mode, etc., and it was not because we were "stupid, consumer-hostile, and anti-competitive".

If someone steals the secrets from a rooted phone and steals a customer's money, the bank is on the hook, so banks do everything they can to minimize this risk.

There is no way to store customers' secrets securely in a PC browser, so all the "dangerous" transactions were outright prohibited in the web app or made available only via temporary QR login.

All this is just a negative side effect of customer-protection laws.


These practices are strengthening the Google/Apple hegemony and are ultimately damaging user freedoms and consumer protections. I'm sure that's not your employer's intention, but it is a negative thing that they're contributing to. And because of how essential banking is, banks have a big thumb on this particular scale, and I wish they'd use it for good rather than for enriching and entrenching evil.


I understand (but vehemently oppose) the argument for root detection. What risks do banks see from having developer settings enabled?


Great, so the no-name iPhone clone in China passes your test but EOS doesn't.

There's no way to assess the security of a ROM from an app, and it's about time that banks learned this reality.

Software on mobile is even more fragmented and less standardized than on desktop


> If someone steals the secrets from a rooted phone and steals a customer's money, the bank is on the hook, so banks do everything they can to minimize this risk.

Now that's just not true, is it? Sure, the lawyers told you that (the ones that get paid to tell you that), but nowhere in the EU was a bank actually fined for not root-checking a device.

They were plenty fined for being utterly incompetent with security practices and implementing them poorly, like trying to inject weird .so files to do the root detection you're defending.


Literally three days ago: https://www.complianceweek.com/regulatory-policy/eu-agrees-r...

"Payment service providers (PSPs) operating in the EU will have to cover customers’ losses from fraud if their fraud protection regimes are inadequate or poorly implemented under new EU rules."

Other places like the UK had such rules already.


Note how this says nothing about root lockout.

The fact that no root lockout means "inadequate protection" is something you projected onto this statement and that's the part I'm addressing in my comment.

No one actually got fined for root protection specifically.


Regulators love vague standards like "inadequate protection" because it means they can implement a ratchet effect without needing to understand anything or constantly rewrite the laws. If someone gets hurt they just look around at whatever the competition is doing, pick the most extreme thing, and declare that any other standard is inadequate.

So sure, if you want to not use security tactics your competitors are using and then try to lawyer out of it by arguing, "it didn't specifically say we had to do that" in front of the EU Commission, go ahead. But don't blame the banks that are more realistic about how this works.


Yeah, so you admit there's no real legal basis for those kinds of restrictions.

Which any of us who have worked with banks, mobile, banking security, and their legal teams already knew. They're the source of greatest security hits like "let's use SMS as the only auth for web banking," after all.

But what's really hiding behind all your fluff is something else: abusing users with root lockouts is EASY for the programmers at banks. The auditors have a checkbox "root lockout" and they tick the box. Legal ticks the box. The CISO ticks the box. Everyone is happy; who cares about the user. That's what this is all about. The insulting thing is trying to sell it as some kind of security feature.


The regulations are the "real" legal basis. The fact you don't like them or how they're written doesn't make them any less real. And you're not arguing with me or my "fluff", you're arguing with the entire banking industry.

If you really think this is all just fluff, by all means, go get yourself employed inside a bank's security team and convince them to turn all this stuff off. Let us know how it goes.


No bank got fined for not root checking, correct. However banks are on the hook for unauthorized transactions. And "unauthorized" means different thing in different countries.

In some jurisdictions, if the bank can prove that a transaction was made with the customer's key, then the customer cannot demand their money back. That's the best case, but there are only a few such jurisdictions, and even there the burden of proof is on the bank and it costs a lot.

In other jurisdictions the bank must reverse a transaction even if it was proven that the transaction was signed with a legitimate key, because the key _may_ have been stolen.

In some jurisdictions (e.g. the U.S.) banks are required to reverse a transaction at a customer’s request, even if the customer does not dispute having made the transaction.

In any case dealing with all this is too expensive and risky.


> In any case dealing with all this is too expensive and risky.

[Citation needed]

How much does it cost? How risky?


Let's say you are a bank and you make $10 on each $100K transfer. If a customer disputes a transaction and you must return the money, you lose the whole amount, plus twice as much again on the lawyers, internal audit, and compliance people working on the case. With this math you can't afford the risk if it is more than 1 in 30,000.
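The arithmetic above can be written out directly; the dollar figures are the commenter's illustrative assumptions, not real bank numbers:

```python
# Assumed numbers: $10 fee on a $100K transfer; a disputed transaction
# costs the full amount plus twice as much again in lawyers, internal
# audit, and compliance.
fee_per_transfer = 10
loss_if_disputed = 100_000 + 2 * 100_000  # $300K per dispute

# The bank breaks even when the expected loss equals the fee,
# i.e. at a dispute rate of fee / loss:
breakeven_dispute_rate = fee_per_transfer / loss_if_disputed  # 1 in 30,000
```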

For many European banks the math is even more brutal.


Why don't banks just make desktop computer applications?


It's practically impossible to store secrets in a desktop app too. Besides, customers would not be willing to install a desktop app, and those who would will require support.


PC platforms don't have working remote attestation infrastructure.


And surprisingly I can pay securely using my PC, fully rooted, on FOSS software. Hardware tokens have been a thing for decades. There are more second (or third) factor authentication and signing solutions than I can enumerate.

Do people get defrauded using online banking? Sure. But usually not in a way that would be stopped by secure attestation.


The hardware token is itself a form of remote attestation. The reason you need extra hardware is because the PC can't do it.


Most banks don't know hardware tokens are a thing. They want everyone to use their app.


Is this yet more evidence of how utterly broken US banks are? Assuming you are referring to US banks.

For the past 20 or so years, every bank I've been with in Belgium has provided me with one of three types of hardware token:

1. An OTP token that's just a screen displaying a new 6-digit code every couple of seconds (haven't seen one of these in a few years now). This was used to supplement username/password on login and to verify every bank transfer.

2. A token with a screen and a keypad, which generates OTPs based on input. E.g. for a payment the bank would tell me to enter the amount + the last N digits of the bank account; the token then generates an OTP, which I can use to confirm the payment. That's what 2 of my 3 banks currently use. They have separate modes for logging in, for signing bank transfers, for signing 3D Secure online payments, etc.

3. A card reader where I just slot in my card. I can then log in or sign payments using the card's chip & PIN. This is what my third bank uses. There are a couple of variants on this, such as models which connect with USB and models which can read QR codes from your screen so you don't have to tap in anything except for your PIN.
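The first kind of token is essentially the standard HOTP/TOTP construction (RFC 4226/6238), which is small enough to sketch with the Python standard library. This is a generic illustration of the algorithm, not any specific bank's implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the 8-byte big-endian counter,
    # then "dynamic truncation" down to a short decimal code.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, now=None) -> str:
    # RFC 6238: the same construction, with the counter derived
    # from the current time, so the code rolls over every `step` seconds.
    t = time.time() if now is None else now
    return hotp(secret, int(t // step))
```

The second token type (enter amount + account digits) is the same idea with transaction data mixed into the input, which is what binds the OTP to one specific payment.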


They used to, and some still kind of do, but no longer for consumers.


Most banking apps use a third-party security solution. They then often implement Google Play Integrity.


Beef (red meat) is classified as a probable carcinogen, while chicken (white meat) is safe according to current research.


Have fun eating 2kg of broccoli to get 50g of protein.


there's also lots of water to wash them.

The problem is the same: the relative concentration of CO2 in air is less than 0.05% (~450 parts per million). In water it's much less.

