
Funny how, if you'd kept reading before commenting, you'd have seen they addressed that point specifically:

> We were cautious to only run after each model’s training cutoff dates for the LLM models. That way we could be sure models couldn’t have memorized market outcomes.


Very cool project! It would be really nice to have support for the other assistants that Microsoft released to use in place of Clippy (I'm particularly fond of the dolphin that was used in the Japanese version of Windows) https://en.wikipedia.org/wiki/Office_Assistant#Assistants


Is that why the name changed to simply Hands? Makes sense, but I hadn't realized the company was sold. I haven't noticed it affect the stores in any way yet.


Yes, once Tokyu sold Tokyu Hands the store dropped the Tokyu name. I also haven’t noticed any changes in terms of product selection or customer service, with the caveat that I’m a tourist who goes to Japan 2-3 times per year, not a resident, so there may be changes I’m unaware of.


That's only income tax. There are also social contributions, including pension, health insurance, and unemployment insurance. These add up to around an extra 20% in tax (although pension contributions are capped, so it could be less for very high salaries).
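To make the cap effect concrete, here's a back-of-the-envelope sketch with purely illustrative, made-up rates and a hypothetical cap (not any particular country's actual figures): because the pension piece stops growing above the cap, the effective contribution rate falls as the salary rises.

    # Purely illustrative rates and cap -- NOT any country's actual figures.
    PENSION_RATE = 0.09           # employee share (assumed)
    PENSION_CAP = 8_000_000       # salary above this accrues no extra pension contribution (assumed)
    HEALTH_RATE = 0.10            # assumed
    UNEMPLOYMENT_RATE = 0.006     # assumed

    def social_contributions(gross):
        pension = PENSION_RATE * min(gross, PENSION_CAP)
        return pension + (HEALTH_RATE + UNEMPLOYMENT_RATE) * gross

    for gross in (5_000_000, 10_000_000, 30_000_000):   # arbitrary salary units
        c = social_contributions(gross)
        print(f"{gross:>12,}  contributions ~{c:>12,.0f}  effective rate {c / gross:.1%}")
    # Effective rate falls past the cap: ~19.6% -> ~17.8% -> ~13.0% with these made-up numbers.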


Also, VAT


Re your last sentence: why do you disapprove of funding research into AI safety?


Not OP, but I work on reducing harms of deployed AI systems at a big tech company.

When most EA/Rationalist folks discuss AI Safety, they're talking about AGI, a hypothetical construct that we have no way of proving is even possible.

Now there’s nothing inherently wrong with funding research of that sort.

The problem is that many EA/Rationalist folks begin downplaying legitimate, tangible risks on the horizon (I've had prominent EA folks tell me that Climate Change is not an existential risk), or that many of these AI Safety researchers actively ignore how to reduce harms from the AI we have deployed today.

This existential measuring contest is just so odd and unnecessary. I really don't care if you think AI Safety is a legitimate problem; I personally don't. But it's when they begin to downplay other, actual catastrophes that many of us take issue.


The atom bomb was once a hypothetical construct. The Einstein-Szilard letter was talking about a danger that they had no way of proving was even possible. Proof is irrelevant outside of mathematics; the only relevant question is cost to expected benefit - or expected harm.

With dangerous research, by the time a conclusive demonstration of possibility is achieved, it is usually too late to do much in the way of prevention. That's why we don't expect the FDA to "prove" that harm from a new medication "is even possible" before regulating it; rather, we expect the pharmaceutical company to provide evidence of its expected harmlessness before we even make the attempt. Think what you like of the FDA; this is certainly a better standard than putting the burden of proof on those claiming harm.


Part of the problem with this is that conclusive demonstration that AGI is possible is not even in the same ballpark as a conclusive demonstration that it would have consequences like the EA folks seem to believe.

From everything I've read about their views, they seem to believe that either the moment an AGI is created or very, very soon thereafter, it will achieve the Singularity and "ascend" to something near godhood from our perspective, and it will perceive humanity as a threat to it and immediately seek our destruction—which, because of its near-godhood, it will be able to achieve far faster than we have the ability to respond to. Thus, what we have to do is either prevent AGI from ever coming to pass (because as soon as it exists, it's effectively too late), or make absolutely certain that by the time it does, we have strong measures in place to either fight back against it, or get humanity the hell out of Dodge.

This is built on such a foundation of shaky assumptions that giving it credibility is ludicrous.


I agree with your description of the AGI/ASI position.

My take on it is the other way around: every step of the argument seems plainly obvious.


Plainly, that is not the case, as you are choosing not to spend your every waking moment stopping the antichrist AI from coming into existence.

So either it's less obvious than you're claiming, or you are taking the threat of creating an evil AGI not seriously enough according to your own beliefs.


I don't spend every waking moment stopping lots of things that could kill us all. Nuclear war, climate change, gain-of-function research.

I just don't care that much about things that kill everyone. I don't think almost anyone does. In the Cold War, the vast majority of the population was not doing absolutely everything to achieve bilateral disarmament. This would lead you to believe that they didn't really expect nuclear war. But maybe they were just apathetic and helpless.


> The atom bomb was once a hypothetical construct. The Einstein-Szilard letter was talking about a danger that they had no way of proving was even possible.

This isn't an accurate representation, really. The letter was written precisely because it was clear that a fission bomb was very likely within short-term reach of the technological and industrial capability of the time. This is very much not the case with AGI.


That's a difference of degrees, not kind. You can always ask for a higher class of proof.


How is it a difference of degrees? This is just plain inaccurate:

> The Einstein-Szilard letter was talking about a danger that they had no way of proving was even possible.

Supporting your argument with an inaccurate thing is not 'a difference of degrees'. It's a difference between a good argument and a bad one.


The disagreement is about the difficulty of "proving". I could say that unfriendly AI is clearly possible because evil humans exist. You could say that unfriendly AI is not proven possible until one has actually been built. But if you applied that standard to the letter, it would likewise fail. Now, I agree that the case for AI is weaker, maybe much weaker, than the case for nuclear fission when the letter was written: there is, for instance, no clear research agenda that preeminent researchers in the field expect to lead to it, though there are research groups, such as DeepMind, that have AGI as their explicit goal. But that also has nothing to do with being "proven" possible.

My argument is that for contested and dangerous outcomes, proof of possibility is an unreasonable standard, in good part because it massively underspecifies the goalposts.


> The disagreement is about the difficulty of "proving"

It's not; it's just a turn of phrase you've latched on to. There's no discussion of a standard of proof, etc. The comparison between the Einstein-Szilard letter and 'AGI safety' is specious, and it's one you brought up! You have to outright misrepresent what the letter was about just to make it. It was not a letter about a 'hypothetical danger they had no way of proving'. There just isn't any reasonable reading of the letter, the context, or the history in which that's an accurate statement.


My whole point, and the reason I brought it up, was that yes, the letter was about a "hypothetical" danger they had no way of "proving".

By any reasonable standard, at that point the possibility of nuclear fission was well established. But the argument is that there is no central committee that defines the reasonable standard and checks theories against it. It's all just opinion, and that specific opinion can be arbitrarily goalshifted.

I'm not saying the letter was about a hypothetical danger they had no way of proving; I don't think that. But that's specific to my interpretation of those terms, and I could easily see the exact same argument leveled against the letter if that debate had been conducted in public.


> I’ve had prominent EA folks tell me that Climate Change is not an existential risk

Have you seen anything to indicate otherwise? The IPCC reports don't contain even a whiff in that direction.

To the best of my knowledge, zero people who know anything about climate change think it's existential.


Is there an accepted unambiguous definition of "existential" here?

Is it:

1. All life on the planet dies?

2. All advanced life on the planet dies?

3. All humans die?

4. Advanced civilization is destroyed irretrievably?

5. Advanced civilization is destroyed for a long period of time?

etc

And I'm not sure the group "people who know anything about climate change" is necessarily better equipped to answer this question - it's a complex systems question.

I'm not actually sure who is equipped to answer it accurately.


It looks like there is research trying to define precisely this notion of existential risk in the context of climate change; see e.g. https://link.springer.com/article/10.1007/s10584-022-03430-y


I'm not sure if this would actually count as "existential", but I think that it's well worth being very, very concerned about a consequence on the order of

"Millions to billions of people die; large percentages of the rest are displaced and/or have their lives made significantly shorter, harsher, and less certain as war, famine, disease, and natural disasters become vastly more common around the globe."

And there's really very little question that that's where climate change is leading if we don't get it under control soon.


It's always 2 or 3 in my experience.


> The problem is that many EA/Rationalist folks begin downplaying legitimate, tangible risks on the horizon (I've had prominent EA folks tell me that Climate Change is not an existential risk), or that many of these AI Safety researchers actively ignore how to reduce harms from the AI we have deployed today.

I'm sure some people do this, and it's bad.

But to be clear, when you say some researchers are actively ignoring how to reduce harms of AI today... I'm not sure what you mean by "actively" ignore, but don't most researchers just by default ignore this, except for the few who are active in this field?

In other words, I think it's totally legit to focus on long-term AGI risk, and totally ignore current short-term AI risk, as a personal career-choice, just like it's ok to not even work on AI at all. Unless someone is actively trying to cause less resources to go into short-term AI risk, what's the problem?

> They’re talking about AGI, a hypothetical construct that we have no way of proving is even possible

I mean, we exist as (one would hope) intelligent beings. Why wouldn't AGI be possible? Your frame makes it seem like the default is that it isn't possible, which is wrong IMO.


They are right. Climate change is absolutely not an existential risk.

“In a high-emissions scenario where little is done to curb planet-heating gases, global mortality rates will be raised by 73 deaths per 100,000 people by the end of the century”

https://www.theguardian.com/us-news/2020/aug/04/rising-globa...


> Climate change is absolutely not an existential risk.

That's an assessment based on what we know, but there's a lot we don't. For instance, it's possible that climate change could lead to a large scale collapse of food chains that could pose an existential risk to humanity.

It's also plausible that climate change will lead to resource shortages that result in wars between nuclear powers, and the nuclear calamity that follows kills us all.


"By the end of the century" is doing a lot of work in that statement.


> They’re talking about AGI, a hypothetical construct that we have no way of proving is even possible

I really don't understand AGI skeptics. Unless you think reality has some non-physical character that can't be reproduced outside of biology, it obviously follows that a physical machine can reproduce what another physical machine (human body) is doing.


The biggest threat of AGI, IMO, is people wanting to make the leap from human to digital. It is less The Matrix and more like plastic surgery or some other lifestyle enhancement. At first they won't, just like people wouldn't jump in an unlicensed taxi, because that's silly, right?


At this point I think that giving $5 to a homeless person in SF (even if they go straight to doing another shot) might be more productive than this self-aggrandizing, vapid BS about some theoretical risk that might come in the future.

Right now what's threatening the most people is war, famine, and disease. Climate change is a tangible future threat, but I'd bet a couple of dollars that the JSO organizers are full EA proponents and are flying private jets to convince people to throw soup at works of art.


> BS about some theoretical risk that might come in the future

Like increased rate of hurricanes due to climate change?

One does need to seriously consider low probability, high impact events, like pandemics, in one's future planning and resource allocation.


> Like increased rate of hurricanes due to climate change?

This is a bad analogy. The mechanisms and risks of climate change are reasonably well understood as are the solutions available to us now and even some possible future solutions.

When it comes to AGI, what's the risk? Something bad might happen? The mechanism by which AGI would arise is also unknown, since it has never happened before and it's not even known whether it's possible. Finally, what's the solution? Prevent AGI research? Make sure every AGI project has a kill switch? How do you prevent something when you don't know whether it's possible or how it would arise?

Compare this to the concrete, measurable things we can do to slow down climate change and "AGI safety" seems like a make-work money pit.


> why do you disapprove of funding research into AI safety

This is what I'm replying to. And yes, Climate Change is a worrying problem, but it won't be solved by people throwing tantrums in museums.


I'm not the poster, but there is no reason to believe that we possess the tools to contemplate AI safety; worse, it may be a fool's errand to contemplate how to control something much smarter than you are.

Ergo, at this point in time, money spent paying people to contemplate it is almost certainly entirely wasted.


It’s much like spending millions on studying how to save humanity from the sun when it expands in 5b years.


Interestingly, the sun would kill us much sooner than that: in less than 1 billion years the sun's increased output will heat up the Earth's surface to the point of becoming unlivable.

Related to the topics of AI apocalypse and putting things off because they seem far away: I personally find it amusing to contemplate a strong AGI being developed in 2035 and wiping out humanity. Why? Because it would eliminate the Y2038 problem, retroactively vindicating all the engineers who only allocated 32 bits to timestamps. It would mean the justification of "oh, it's so far off, the world will have changed unimaginably by then", would for once turn out to be correct!
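As an aside, the overflow moment is easy to check: a signed 32-bit Unix timestamp maxes out at 2^31 - 1 seconds after the 1970 epoch, which lands on 19 January 2038. A quick check in Python:

    from datetime import datetime, timezone

    # Largest value a signed 32-bit time_t can represent.
    max_seconds = 2**31 - 1  # 2,147,483,647

    # The instant a signed 32-bit Unix timestamp overflows.
    print(datetime.fromtimestamp(max_seconds, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00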


I quite like the idea of strong AI being developed in 2038 just in time for its plans to be thwarted by time disorientation...


Maybe we should try to acquire the tools before we build systems much smarter than we are


Current AI seems to be just a box of math churning out correlations that we can put to effective use but not understand. Its ability to actually act is entirely reliant on us deliberately wiring the machine into the world in a way that lets it have negative effects, or any effect at all, and those negative effects are entirely and trivially predictable and avoidable. This is true whether it's a robot that accidentally breaks a kid's finger or a system that bakes the inherent racism of prior decisions into future sentencing or mortgage approvals. In most cases the logical thing is simply: don't use AI for that.

It's not at all clear that future actual AI will be GPT-7, just with 10,000x as much processing power. In fact, not much seems to be clear at all. It would seem that the insight needed to control future tools will come from the experience of building those tools, with little chance that they will just accidentally, suddenly become Skynet. If we doom ourselves this way, it will be a long, laborious process with years of striving, setbacks, and thousands of people involved.

We should focus on the basic science and let it become clear what avenues exist over the probable decades it will take to even reach the vicinity of our goal. Money spent specifically on paying PhDs to imagine how to secure a technology we don't have and don't remotely understand is probably wasted.


None of this seems to argue against focusing on safety tools now, given the enormous downside risk. And maybe you're wrong about the probable decades.

The possible outcomes here are "probably we waste a lot of money" and "the world ends."


I've never heard of this. What's the long tail pipe argument?

edit: I see. I definitely have heard this, just not called as such https://en.wikipedia.org/wiki/The_long_tailpipe


> Say the first thing that comes to mind when you think “Coca Cola.” Say it out loud, don’t edit

sugary drink


> no gold selling or similar shenanigans

So the marketplace you're describing is not RMT (real-money trading), then? You're facilitating in-game trades with in-game currencies, and making money some other way? (Ads?)


I've been working for a large US company in Tokyo for several years now. No need to know Japanese for work; I know a lot of people who speak it very poorly (though I wouldn't recommend that if you're staying here long term).

The pay is good (~35M JPY/300K USD at L5) but there are very few companies with this level of compensation so the real cost is in the limited opportunities.

Like everywhere, there are good and bad parts to living in Tokyo, but I've enjoyed it for the most part. It's friendly, clean, safe, and relatively cheap. On the other hand, there's sometimes discrimination against foreigners (especially in housing), and I don't appreciate how cramped apartments and houses are.


Are you in FAANG? That level of pay ($300K) is in the higher range even for SV. It seems really rare to land such a role, given that all the other reported salaries are around $60K. I'd love to know if your situation is a fluke or if it's realistically possible for me to land such a role myself.


The high-paying tech companies in Tokyo that I'm aware of are Google, Amazon, Indeed, Woven Planet, Stripe, and Doordash. Those last two are just starting out here so still have a limited presence.

Those are basically your options. Outside of these, TC will go down quickly


Very cool work! Paradoxically, I've been slacking off on practicing Japanese diligently ever since I moved to Tokyo. I've been meaning to get back into a good routine so I'll give jpdb a shot.

I'm curious how you did this: "We have 16785 prebuilt decks with vocabulary from 1124 different anime waiting for you." Did you write a script that calls subs2srs?


Well, essentially yes, it is based on text analysis, but it's a lot more complex than "a script that calls subs2srs". (:

(My whole codebase is over 100k lines of code, all written by me.)

I have a unified morphological analysis engine that I use for every type of media that I have in my database, and I use it to generate stats and vocabulary lists.
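(For readers unfamiliar with the idea, here's a rough sketch of what "morphological analysis to build a vocabulary list" can look like in miniature, using the off-the-shelf fugashi tokenizer (MeCab bindings with UniDic features) for Python. This is only an illustration of the general technique, not jpdb's actual engine, and it assumes fugashi plus a UniDic dictionary are installed.)

    # Rough illustration only -- not jpdb's engine. Assumes fugashi + unidic are installed.
    from collections import Counter
    from fugashi import Tagger  # MeCab bindings exposing UniDic features

    tagger = Tagger()

    def vocab_from_lines(lines):
        """Count lemmas across subtitle lines, skipping particles, auxiliaries, and punctuation."""
        counts = Counter()
        for line in lines:
            for word in tagger(line):
                if word.feature.pos1 in ("助詞", "助動詞", "補助記号"):
                    continue
                counts[word.feature.lemma or word.surface] += 1
        return counts.most_common()

    print(vocab_from_lines(["猫が魚を食べた", "猫は寝ている"]))
    # roughly: [('猫', 2), ('食べる', 1), ('魚', 1), ('寝る', 1), ('居る', 1)]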


I didn't mean to imply that what you're doing is simple; I was just curious.

I'll be honest, I have no idea what a unified morphological analysis engine is. Something to look into tonight


I didn't take it that way, so don't worry. (:

Sorry, I might have gotten carried away with all of the fancy words. (: The "morphological" part comes from linguistics (see the "Morphology (linguistics)" article on Wikipedia), by "unified" I meant that I apply it to every kind of text, and an "engine" is just another word for "software" (think "game engine" - a reusable piece of software that can be used for many things within a single domain).

