joshellington's comments

I assume this is already happening, with incompetence within state-actor systems being the only hurdle. The incentives and geopolitical implications are too great to NOT do it.

I just pray incompetence wins in the right way, for humanity’s sake.


To throw two pennies in the ocean of this comment section: I'd argue we still lack a schematic-level understanding of what "intelligence" even is or how it works, not to mention how it interfaces with "consciousness" and how the two likely relate. That kinda invalidates a lot of predictions/discussions of "AGI", or even "AI" in general. How can one identify artificial intelligence/AGI without a modicum of understanding of what the hell intelligence even is?


The reason it's so hard to define intelligence or consciousness is that we are hopelessly biased, with a data point of one. We also wrap it in an unjustified amount of mysticism.

https://bower.sh/who-will-understand-consciousness


I don't think we can ever know that we are generally intelligent. We can be unsure, or we can meet something else which possesses a type of intelligence that we don't, and then we'll know that our intelligence is specific and not general.

So to make predictions about general intelligence is just crazy.

And yeah yeah I know that OpenAI defines it as the ability to do all economically relevant tasks, but that's an awful definition. Whoever came up with that one has had their imagination damaged by greed.


All intelligence is specific, as evidenced by the fact that no universal definition of "common sense" exists.


Common is not the same as general. A general key would open every lock. Common keys... well they're quite familiar.


My point was that all intelligence is based on an individual's experiences, therefore an individual's intelligence is specific to those experiences.

Even when we "generalize" our intelligence, we can only extend it within the realm of human senses & concepts, so it's still intelligence specific to human concerns.


So if you encounter an unknown intelligence, like, I dunno, some kind of extra-dimensional pen pal with a wildly different biology and environment than our own... Would you be open to the possibilities:

- despite our difference we have the same kind of intelligence

- our intelligences intersect, but there are capacities that each has that the other doesn't

?

It seems like for either to be true there would have to be some place of common ground into which we could both generalize independently of our circumstances. Mathematics is often thought to be such a place, for instance; there's plenty of sci-fi about beaming prime numbers into space as an attempt to leverage that common ground. Are you saying there aren't such places? That SETI is hopeless?


It's certainly possible that we may encounter other alien lifeforms whose intelligence intersects our own.

It's just not guaranteed.


If we assume this about intelligence:

> Even when we "generalize" our intelligence, we can only extend it within the realm of human senses & concepts, so it's still intelligence specific to human concerns.

...then we might fail to recognize them as intelligent when we meet them. Same goes for emergent artificial doohickeys. A theory that allows for generalization might never find an example of it, but it's still better than a theory that doesn't, because the second sort surely won't.


When you make the term "general intelligence" so broad that it expands beyond the realm of human senses & concepts, statements about it become unfalsifiable because you, a human, can't conceive of a way to test said statement.

Unfalsifiable statements are worthless because they can't be tested.

So, at the very least, there's no point in humans trying to theorize about intelligence so general that it expands beyond human comprehension.

Basically, in the context of universal intelligence, I'm an atheist & you're agnostic.


A universal definition of “chair” is pretty hard to pin down, too…


What are your sources for that claim?


Ontology

https://en.wikipedia.org/wiki/Ontology

Or: just try, then try your best to find ways your definition fails. You should find it challenging, to put it mildly, to create a bulletproof definition, if you’re really looking for angles to attack each definition you can think of. They’ll end up being too broad, or too narrow. Or coming up short on defining when exactly a non-chair becomes a chair, and vice-versa, or what the boundaries of a chair are (where chairness begins and ends).

And if that one is tricky…


How would I know when my definition is too broad?


Exactly. Do exactly what you’re doing now, but to your own definitions of “chair”. You get it.


Hold on. You're the one saying that a definition can be too broad & acting like that actually means something important.

So I'm asking how you define a definition as "too broad".

Because my perspective is that definitions that are in fact too broad are unimportant because no one uses them.


Useful definitions! Yes, easy.

Universal definitions? Extremely hard.


Do you know of a human culture in which a chair is defined as something other than an elevated seat with a back?


This so much this. We don’t even have a good model for how invertebrate minds work or a good theory of mind. We can keep imitating understanding but it’s far from any actual intelligence.


I'm not sure we or evolution needed a theory of mind. Evolution stuck neurons together in various ways and fiddled with them till they worked, without a master plan, and the LLM guys seem to be doing something rather like that.


LLM guys took a very specific layout of neurons and said “if we copy paste this enough times, we’ll get intelligence.”


Mmm, no, because unlike biological entities, large models learn by imitation, not by experience.


> we still lack schematic-level understanding of what “intelligence” even is or how it works. Not to mention how it interfaces with “consciousness”, and their likely relation to each other

I think you can get pretty far starting from behavior and constraints. The brain needs to act in such a way as to pay for its costs - and not just day-to-day costs, but also the ability to receive and pass on that initial inheritance.

From cost of execution we can derive an imperative for efficiency. Learning is how we avoid making the same mistakes and adapt. Abstractions are how we efficiently carry around past experience to be applied in new situations. Imagination and planning are how we avoid the high cost of catastrophic mistakes.

Consciousness itself falls out of the serial action bottleneck. We can't walk left and right at the same time, or drink coffee before brewing it. Behavior has a natural sequential structure, and this forces the distributed activity in the brain to centralize on a serial output sequence.

My mental model is that of a structure-flow recursion. Flow carves structure, and structure channels flow. Experiences train brains and brain generated actions generate experiences. Cutting this loop and analyzing parts of it in isolation does not make sense, like trying to analyze the matter and motion in a hurricane separately.


I did the math some years ago on how much computing is required to simulate a human brain. A brain has around 90 billion neurons, each with an average of 7,000 connections to other neurons. Let's assume that's all we need. So what do we need to simulate a neuron - one CPU? Or can we fit more than one per CPU? Say 100, and we're down to roughly a billion CPUs, with something like 630 trillion messages flying between them every, what, millisecond?

Simulating that is a long way away - so the only possibility is that brains have some sort of redundancy we can optimise away. Computers are faster than brains, though, so maybe it's possible - how much faster? Say a neuron does its work in 1 ms and we can simulate that work in 1 µs, i.e. a thousand times faster - that's still a lot. Can we get to a million times faster? Even then it's still a lot. Not to mention the power required for all this.

Even if we can fit a thousand neurons in a CPU, that's still 90 million CPUs. Say only 10% are active: still 9 million CPUs. A thousand times faster: 9,000 CPUs. Nearly there, but still a while away.
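
For the curious, here's that arithmetic as a runnable back-of-envelope sketch (TypeScript; every constant is an assumption from the estimate above, not a measurement):

    // CPUs needed to simulate a brain, under the rough assumptions above.
    const NEURONS = 90e9;            // ~90 billion neurons
    const SYNAPSES_PER_NEURON = 7e3; // ~7,000 connections each
    const NEURONS_PER_CPU = 1e3;     // assumed simulation capacity per CPU
    const ACTIVE_FRACTION = 0.1;     // assume only 10% of neurons fire at once
    const SPEEDUP = 1e3;             // assume silicon is 1,000x faster (1 ms -> 1 µs)

    const messagesPerTick = NEURONS * SYNAPSES_PER_NEURON; // ~6.3e14, i.e. ~630 trillion
    const cpusRaw = NEURONS / NEURONS_PER_CPU;             // 90 million CPUs
    const cpusActive = cpusRaw * ACTIVE_FRACTION;          // 9 million
    const cpusFinal = cpusActive / SPEEDUP;                // 9,000

    console.log({ messagesPerTick, cpusRaw, cpusActive, cpusFinal });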


We don't even have an accurate, convincing model of how the functions of the brain really work, so it's crazy to even think about simulating it like that. I have no doubt the cost would be tremendous if we could do it, but I don't think we even know what to do.

The LLM stuff seems most distinctly to not be an emulation of the human brain in any sense, even if it displays human-like characteristics at times.


That would require philosophical work, something that the technicians building this stuff refuse to acknowledge as having value.

Ultimately this comes down to the philosophy of language and the history of specific concepts like intelligence and consciousness - neither of which exists in the world as a specific quality; they are more just linguistic shorthands for a bundle of various abilities and qualities.

Hence the entire idea of generalized intelligence is a bit nonsensical, other than as another bundle of various abilities and qualities. What those are, specifically, never seems to be clarified before the term AGI is used.


> I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description ["<insert general intelligence buzzword>"], and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the <insert llm> involved in this case is not that.

https://en.wikipedia.org/wiki/I_know_it_when_I_see_it


Without going too deep into the rabbit hole, one could argue that, to first order, intelligence is the ability to learn from experience toward a goal. In that sense, LLMs are not intelligent. They are just a (great) tool at the service of human intelligence. And so we're still extremely far from machine intelligence.


Without an enemy (or two), the entire post-1945 identity construct of the USA (and its allies) does not have a purpose. It's pretty plain and simple.


There's some truth in that, but it's a much more apt description of post-Soviet Russia. They celebrate their WW2 win like a bona fide religious event every year.


Yes, but then that would require consensus on protocols, encoding/decoding, etc. - aka the things large companies are worst at aligning on.


Plus you have to defend yourself from the neighborhood kid who thinks that it'd be a laugh riot to randomize everyone's clocks at 5:15 in the morning.


That would absolutely have been me, wardriving at 16 (back in the early aughts), sending nonsensical print jobs and then setting everyone's clocks to 4:20.


I'm in my mid-30s and I still regularly tell unattended Google Homes and Alexas to set an alarm for 4 AM.


I'm not that malicious, I just like picking up friends' unattended phones and snapping a goofy selfie for them to discover later.


There is only one government


I get it - the service model always shines brightest in the eye of the revenue calculator. But I'm immediately skeptical they'll be able to execute at a competitive level. Their core competency has always been manufacturing and production, not services. It's a big rock to push up a tall hill.


Genuine question (I don't know much about cloud stuff): how is providing a cloud service/platform (at scale) even remotely as hard as designing, manufacturing, and selling GPUs (including drivers and firmware) at massive scale?

It feels like reading that setting up something like Facebook would be extremely challenging for a company like SpaceX.


It's certainly possible to start new cloud providers - there are a bunch of smaller-scale VM providers. In the GPU cloud business there are companies like lambdalabs and runpod.

But the fattest profit margins are in selling to big corporations. Big corporations who already have accounts with the likes of AWS. They already have billing set up, and AWS provides every cloud service under the sun. Container registry? Logging database? Secret management? Private networks? Complicated-ass role management? Single-sign-on integration? SOC2/PCI/HIPAA compliance? A costs explorer with a full API? Everything a growing bureaucracy could need. Getting your GPU VMs from your existing cloud provider is the path of least resistance.

The smaller providers often compete by having lower prices - but competing on cost isn't generally a route to fat profit margins. And will folks at big corporations care that you're 30% cheaper, when they're not spending their own money?

nvidia could definitely launch a focused cloud product that competes on price - but would they be happy doing that? If they want to get into the business of offering everything from SAML logon integration to a managed labelling workforce with folks fluent in 8 languages, that could be a great deal of work.


I like this question. You raise a very good point. Occam's razor tells me the simplest explanation is "core competency". Running AI SaaS is just a very different business from creating GPUs (including the required software ecosystem). As a counterpoint, look at Microsoft. Traditionally, they have been pretty good at writing (and making money from) enterprise software, but not very good at hardware. (Xbox is the one BIG exception I can think of.)


Except for the place where they're good at hardware, they're not good at hardware? I mean that's true, but a bit of a twist of logic, wouldn't you say?


Microsoft's mice and keyboards were also exceptionally good.


Genuine answer: Setting up Facebook WOULD be extremely challenging for a company like SpaceX. There's a reason Facebook is worth about 10x what SpaceX is worth, and most of that value doesn't come from the ability to build software. Facebook isn't even particularly good at building software.

To give an example in a closer domain: look at how long Google lost money on cloud services through 2022 (over $15B in losses), and how it now only makes money by creative accounting (bundling "cloud services" together versus breaking out GCP; Microsoft does something similar with Office 365 and Azure).

Like many potential customers, I would not consider GCP because:

1) Google "support" is a buggy, automated algorithm which randomly thwacks customers on the head

2) Google randomly discontinues products

3) I've seen a half-dozen to a dozen instances where buying from Google was penny-wise and pound-foolish, and so have many other engineers I've worked with.

Google's overall attitude is that I'm a statistic defined by my value to Google. Google can and will externalize costs onto me. That attitude is 100% right for adwords and search, which are defined by margins, but not for something like GCP. If I am going with a cloud service/platform, I'll go with Amazon, Microsoft, or just about anyone else, for that matter.

That's not to say Google is a bad company. Google actually did have the skill set to build the software and data centers for a very, very good cloud provider. It's just that Google's core competencies lie very far from providing reliable service to customers, customer support, or all the things which go into providing me with stability and business continuity.

"Fixing" this would require a wholesale culture, value, and attitude change, and developing a core competency very far from what Google is good at.

I put "fixing" in quotes since if you develop too many core competencies, you usually stop being good at any of them. Focus is important, and there's a reason many businesses spin out units outside of their domains of focus. If Google is able to become good at this, but in the process loses their edge in their current core competencies, that's probably a bad deal.

FWIW: I haven't yet formed an opinion on NVidia's cloud strategy. However, their core competencies appear to be very much in the "hard" domains like silicon, digital design, and machine learning, rather than "soft" ones. Another relevant example of what can happen when hard skills are de-emphasized at engineering-driven companies is Boeing (if you've been following recent stories; if not, watch a documentary).


>There's a reason Facebook is worth about 10x what SpaceX is worth, and most of that value doesn't come from the ability to build software. Facebook isn't even particularly good at building software.

One is a publicly listed business with as much of an objective look at real time "worth" as possible in today's world, and the other is a private business with confidential financials.

Seems like you would be unable to even calculate SpaceX's net worth, much less compare them to a business with the most objective measure of "worth".


SpaceX raised $750M at a valuation of $137B in January 2023.

A private investment at this scale should involve a lot more transparency and due diligence than SEC disclosures. If I were investing $750M, I'd have engineers under NDA review SpaceX technologies, financial auditors, legal auditors, etc.

Secondary sales place it a little bit higher (but those typically have all the issues you describe).


Fair enough, didn’t know about that recent round. Still, I would assume that number is higher than it would be if the business were publicly listed, but the $140B should be close enough.


Eh, I mean Google moved GCP's revenue around because Microsoft was doing the same to make Azure look bigger than GCP. If you can't beat 'em, join 'em. Google's got long-term contracts with a lot of companies and the government, so GCP isn't going to shut down anytime soon. Their consumer products division has problems with product longevity, but we're not paying them corporation-level money or signing serious contracts when buying a Stadia subscription. So it's just business.

What I've heard is Azure is a pain in the ass, and things take three times as long to set up there, for some reason. There's also Oracle Cloud, but you hear way less about them.


Having been down this road a few times, this:

> What I've heard is Azure is a pain in the ass, and things take three times as long to set up there, for some reason.

Doesn't matter. The cost here is a rounding error. What does matter is something like this:

https://developer.chrome.com/docs/extensions/develop/migrate...

https://workspaceupdates.googleblog.com/2021/05/Google-Docs-...

https://workspace.google.com/blog/product-announcements/elev...

https://killedbygoogle.com/

https://www.tomsguide.com/news/g-suite-free-shutdown

Etc.

These sorts of behaviors take out whole swaths of businesses at a stroke. It's random, and you never know when it will happen to you.

It's the difference between managing a classroom with:

- an annoying kid throwing spitballs every day (Azure)

- the quiet kid who, one day, brings an assault rifle, a few extra mags, and starts spraying bullets into the cafeteria (Google).

Yes, one is a constant source of annoyance, but really, it's very manageable when you consider the alternative.

(Oracle, in the school analogy, is the mean kid who spreads false rumors about you. As far as I can tell, there is never a sound, long-term business reason to pick Oracle. Most of the reason Oracle is chosen is they're very good at setting up offerings which align to misaligned incentives; they're very often the right choice for maximizing some quarterly or annual objective so someone gets their bonus. In return, the firm is usually completely milked by Oracle a few years down the line. By that point, the decision maker has typically collected their bonus, moved on, and it's no longer their problem.)


Name one product that was killed by Google while they had an agreement with the US government not to. Who cares about random, unprofitable products like Google+ rightfully being shuttered? There is no killedbyapple because they barely take risks on products, especially software products.


I share your intuition, perhaps unfairly, that it's indeed not as hard in absolute terms. However, it certainly requires a different set of skills and organisational practices.

Just because an organisation is extremely good at one thing doesn't mean it can easily apply that to another field. I would guess that SpaceX probably does have the talent on hand to throw together a Facebook clone, but equally I think they would struggle to actually compete with Facebook as a business at scale.


Well, motivation would be an obvious thing lacking. People who want to work on rockets, I would guess, would find working on Facebook to be a "boring" solved problem.


This doesn't seem to be true. They have been running GeForce Now for a long time, and it's one of the best game-streaming services. It seems they are doing it in partnership with other regional companies, but nobody says they can't use the same partners. Running games at low latency seems more complicated than LLMs on CUDA.


Hey, if it doesn't work out they can always return to what they are - a consumer-first company, but now with world-class hardware/software.

An AI graphics service that renders photorealistic games in the cloud, concluding its epic journey as an amazing graphics card company.

Kinda cool when you think about it.


They have been building partnerships with ISPs and service providers all over the world through their GeForce Now game-streaming service. They could continue and expand this by providing a similar backend for LLM services.


Clickbait marketing blog post. Doesn’t belong on FP IMO.


My immediate assumption was that the marketing team has defined a "conversation" session time very broadly - "time the tab was open", etc.


Having written production CSS for 15+ years across everything from global e-commerce Magento sites to new-age Next.js RSC projects, I like Tailwind (after much conflict). The piece I feel is missing in most explanations of why it "works" is the flow you can achieve with it. It's a distinct difference. Having essentially a hammer of utility classes, all provided in one package, is a novel experience that does stick.

As many will say, it’s biased towards component-based code, which I agree is where it shines the brightest. But even in my legacy Rails projects, I can’t shake the want for more utility/generalized classes.
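
To make that concrete, here's a minimal sketch of the utility-class flow in a hypothetical React/TypeScript component (the component is made up; the class names are stock Tailwind utilities):

    // Layout, spacing, color, and hover state all live inline as
    // Tailwind utility classes - no context switch to a stylesheet.
    export function Card({ title, body }: { title: string; body: string }) {
      return (
        <div className="max-w-sm rounded-lg bg-white p-4 shadow-md hover:shadow-lg">
          <h2 className="text-lg font-semibold text-gray-900">{title}</h2>
          <p className="mt-2 text-sm text-gray-600">{body}</p>
        </div>
      );
    }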


When I was 10, my parents also put me on a cross-country flight by myself from PDX to Atlanta (to visit family). Note, this was in 1999, not some bygone era. I imagine there would be an Amber Alert nowadays if someone noticed me sitting by myself on an airplane.


Airlines literally have a special thing that costs a ton extra and results in the kid being monitored constantly by an airline employee. (I got to see the jet bridge being "driven" up to the airplane because the guy who was watching me never had his relief show up!) https://www.aa.com/i18n/travel-info/special-assistance/unacc...


https://www.aa.com/i18n/travel-info/special-assistance/unacc...

It still exists, and there are policies around it.

Amtrak does the same but starts at 12: https://www.amtrak.com/unaccompanied-minors-policy


My parents divorced when I was very young and my dad ended up getting a job in a different state. So by age 4-5 I was flying alone between parents semi-regularly. This was in the late 80s to early 90s.


1999 was pre-9/11, and also 23 years ago. Most people didn't have cellphones, and many people didn't even have the internet.

It's a bygone era.


This is very common.


Wow, they're using Stripe for payments. Here's their API key: pk_live_1vI9jQQVPUd9XXtXEXxRBMDL

Just reported them through the generic Stripe contact form (all I could quickly find).


In these cases it might be better to commit it to a public GitHub repo which has real-time secret scanning and partnerships with a lot of providers to immediately invalidate detected secrets.


I think this is the public one that's generally posted in the HTML for the client-side Stripe portion - not a secret.


Yeah definitely. The public keys start with pk_, the private ones start with sk_.
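
Roughly how the split works in practice (TypeScript sketch; the key strings, env var, and function name are placeholders, and the two halves would live in separate server/client modules):

    // Server side (Node): the secret key (sk_live_...) authorizes real
    // money movement and must never ship to the browser.
    import Stripe from 'stripe';

    const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!); // sk_live_..., server only

    export async function createPayment() {
      return stripe.paymentIntents.create({
        amount: 1999,   // in cents
        currency: 'usd',
      });
    }

    // Client side (browser, separate module): the publishable key is
    // designed to appear in page source, which is why pk_live_...
    // strings show up in any Stripe-powered site's HTML:
    //
    //   import { loadStripe } from '@stripe/stripe-js';
    //   const stripeJs = await loadStripe('pk_live_XXXXXXXXXXXX'); // placeholder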


My first question was how they were collecting these "high risk" payments.

In general, Stripe describes a 7-14 day payout schedule, but has shorter ones for many countries.

Presumably it takes a fair amount of identity info to get to the 2 business day accelerated payout speed available to low-risk businesses in the US.

https://stripe.com/docs/payouts#payout-schedule
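
If I'm reading those docs right, the delay is modeled as a per-account schedule setting; roughly like this (TypeScript sketch for a Connect account, with a placeholder account ID):

    import Stripe from 'stripe';

    const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

    // Funds become available `delay_days` after the charge, then pay
    // out on the chosen interval.
    await stripe.accounts.update('acct_XXXXXXXXXXXX', { // placeholder account ID
      settings: {
        payouts: {
          schedule: { interval: 'daily', delay_days: 7 },
        },
      },
    });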


I would be really surprised if the scam is taken down in just 14 days (without the media's attention), so they're typically able to get a couple of payouts at least.

Maybe this is just a single occurrence in a large scheme with lots more websites & separate payment providers.

