ChuckMcM's comments

Pretty awesome. The only thing I would change is to put a USB battery between the USB wall power and the D1 mini. That way, for power outages of less than a couple of days, your clock will be fine.
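For scale, here is a rough sketch of how long a battery pack could carry the clock. The D1 mini's average draw and the pack's conversion efficiency below are assumed numbers for illustration, not measurements:

```python
def runtime_hours(pack_mah, avg_draw_ma=70, conversion_eff=0.85):
    """Usable pack capacity divided by average current draw."""
    return pack_mah * conversion_eff / avg_draw_ma

# An ordinary 10,000 mAh USB pack, with an assumed ~70 mA average draw
# for an ESP8266 that keeps WiFi up intermittently:
print(round(runtime_hours(10_000) / 24, 1), "days")  # → 5.1 days
```

So "a couple of days" is conservative with a mid-size pack, though the real figure depends heavily on how often the WiFi radio is awake.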

Why bother? Just get NTP time when the power comes back. It's syncing every 15 minutes anyway.

Because you want to know what time it is when the power is out?

Okay, I'm thinking of a very Shenzhen kind of gizmo for your car that projects a bright red laser "keep out" box on the road in front of your car, adjusted in size for your current speed.

We have something like that in the EU with road markings, both for clear weather and for fog/rain. They mark some of the lines differently and tell you how many lines you should keep between you and the car in front. I think they were first trialed and then painted in several places.

Cool. But I'm thinking this box floats in front of your car on the road in real time. See, you're driving, and ahead of you on the road is this box. At night it might interfere with your night vision; might have to workshop that a bit.

There's a couple of bits of motorway in England with that, I'm pretty sure the M6 and the M1. There are white chevrons painted on the road and you keep two of them between you and the car in front.

Also "Keep Two Chevrons Apart" is going to be the name of my specialist Citroën breaker's yard.


And on the M5 between Gloucester and Bristol.

I think a lot of people would just consider that a challenge.

On the occasion when I am towing our travel trailer, it is really incredible how unsafe that makes other drivers act around me. They will jam themselves in front of me at all costs, with no consideration for physics.


I see this happen to semi trucks on the highway. People interpret big open space as a place to merge. As you say, people have no consideration for why there might be a large space in front of a semi. A 50k lb+ truck hitting the back of a ~4k lb vehicle is not pretty.
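The physics behind that warning is easy to quantify: at the same speed, kinetic energy scales linearly with mass, so the ratio between the two vehicles is just the mass ratio. A quick sketch using the weights quoted above (the 65 mph speed is an assumption for illustration):

```python
LB_TO_KG = 0.45359237
MPH_TO_MS = 0.44704

def kinetic_energy_j(weight_lb, speed_mph):
    """KE = 1/2 m v^2, with US-customary inputs converted to SI."""
    mass_kg = weight_lb * LB_TO_KG
    speed_ms = speed_mph * MPH_TO_MS
    return 0.5 * mass_kg * speed_ms ** 2

truck = kinetic_energy_j(50_000, 65)  # ~9.6 megajoules
car = kinetic_energy_j(4_000, 65)
print(round(truck / car, 1))  # → 12.5: at equal speed, just the mass ratio
```

All of that energy has to go somewhere in a rear-end collision, and most of it goes into the lighter vehicle.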

See for a truck it could say "DEATH ZONE KEEP CLEAR" which would be accurate. Given that it's projected it could rotate through various languages too.

There was a bike light that projects a bike lane onto the road, not sure why they are not more popular.

Can't wait to get blinded by lasers when cars are going over bumps and speed humps.

I know you were probably writing tongue in cheek, but that is one of those "solutions" that doesn't stop bad actors and makes good actors more miserable than usual.


Like LED headlights :-). It would kind of be a concern, except that geometry is in your favor. The downward angle they would have to shine at, plus the box size being tied to speed, would mean the lasers pretty much always hit the street, except perhaps if you were at the top of Gough[1].

[1] SF drivers will get that.
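The geometry argument can be sketched with some assumed numbers; the 0.6 m projector mounting height and the two-second-rule box distance below are both made up for illustration:

```python
import math

def depression_angle_deg(mount_height_m, box_distance_m):
    """How far below horizontal the projector aims in order to
    hit the road surface at the given distance ahead."""
    return math.degrees(math.atan2(mount_height_m, box_distance_m))

# The box distance grows with speed (two-second rule), so the aim gets
# shallower as you go faster, but on flat road it never reaches horizontal:
for speed_ms in (10, 20, 30):
    distance_m = 2 * speed_ms
    assert depression_angle_deg(0.6, distance_m) > 0  # still pointed at pavement
```

On flat road the beam always terminates on the pavement; the failure case is a crest where the road drops away under the beam, which is the top-of-Gough scenario.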


I really resonate with this post; I too appreciate "Good Code"(tm). In a discussion on another forum I had a person tell me that "Reading the code that coding agents produce is like reading the intermediate code that compilers produce; you don't do that because what you need to know is in the 'source.'"

I could certainly see the point they were trying to make, but I pointed out that compilers produce code from abstract syntax trees, and they create those abstract syntax trees by processing tokens defined by a grammar. Further, the same tokens in the same sequence will always produce the same abstract syntax tree. That is not the case with coding 'agents'. What they produce is, by definition, an approximation of a solution to the prompt as presented. I pointed out that you could design a lot of things successfully just assuming that the value of 'pi' was 3. But when things had to fit together, they wouldn't.

We are entering a period where a phenomenal amount of machine code will be created that approximates the function desired. I happen to think it will be a time of many malfunctioning systems in interesting and sometimes dangerous ways.
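The pi-equals-3 analogy is easy to put in numbers: a belt cut for a one-metre wheel using pi = 3 comes up short by a fixed, predictable amount (a toy illustration):

```python
import math

def belt_length_m(wheel_diameter_m, pi_value):
    """Circumference computed with whatever value of pi you designed to."""
    return pi_value * wheel_diameter_m

exact = belt_length_m(1.0, math.pi)
approx = belt_length_m(1.0, 3)
error_pct = 100 * (exact - approx) / exact
print(f"{(exact - approx) * 100:.1f} cm short, {error_pct:.1f}% error")
# → 14.2 cm short, 4.5% error
```

Each pi-equals-3 part works fine on its own; the 4.5% error only bites when two independently designed parts have to mate, which is the point being made about approximated code.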


> you could design a lot of things successfully just assuming that the value of 'pi' was 3. But when things had to fit together, they wouldn't.

Apt analogy. I’m gonna steal it!


I like to think of it as the period when software engineering finally grows up to be a respected discipline with a body of actual theory.

I love this thread so much.

I expect this is the crux of the problem.

There aren't any "AI" products that have enough value.

Compare to their Office suite, which had 100-150 engineers working on it; every business paid big $$ for every employee using it, and once they shipped install media their ongoing costs were the employees. With a 1,000,000:1 ratio of users to developers and an operating expense (OpEx) of engineers/offices/management, that works as a business.

But with "AI", not only is it not a product in itself, it's a feature of a product, and it has OpEx and CapEx costs that dominate the balance sheet based on their public disclosures. Worse, as a feature, it demonstrably harms the business with its hallucinations.

In a normal world, at this point companies would say, "Hmm, well, we thought it could be amazing, but it just doesn't work as a product or a feature of a product, because we can't sell it for enough money to cover its operation, its development, and the capital expenditures we need to make every time someone signs up." So a normal C-suite would make some post about "too early" or whatever and shelve it. But we don't live in a normal world, so companies are literally burning the cash they need to survive the future in a vain hope that somehow, somewhere, a real product will emerge.


For most software products I use, if the company spent a year doing nothing but fixing P2 bugs and making small performance improvements, that would deliver far, FAR more value to me than spending a year ham-fistedly cramming AI into every corner of the software. But fixing bugs doesn't 1. pad engineers' resumes with new technology, or 2. give company leadership exciting things to talk about to their golfing buddies. So we get AI cram instead.

I think it is more externally driven as well: a prisoner's dilemma.

I don't want to keep crapping out questionable features, but if competitors keep doing it, the customer wants it -- even if infrastructure and bug fixes would actually make their life better.


Last time I saw the results of a survey on this, it found that for most consumers AI features are a deciding factor in their purchasing decisions, just not in the way vendors hope. That is, if they are looking at two options and one sports AI features and the other doesn't, they will pick the one that doesn't.

It’s possible AI just seems more popular than it is because it’s easy to hear the people who are talking about it but harder to hear the people who aren’t.


I think this may have been Dell?

Dell reveals people don't care about AI in PCs (https://www.techradar.com/computing/windows-laptops/dell-rev...)


Consumers are nice, but far more important are the big corporate purchases. There may be a lot of people there too who don't want AI, but they all depend on decisions made at the top, and AI seems to be the way to go, because of expectations and also because of the mentioned prisoner's dilemma: if competitors gain an advantage it is bad for your org; if all fail together it is manageable.

My job is like that, although it's mostly driven by my direct boss and not the whole company. Our yearly review depends on reaching out to our vendors, seeing if an AI solution is available for their products, and then doing whatever is necessary to implement it. Most of the software packages we support don't have anything where AI would improve things, but somehow we're supposed to convince the vendor that we want and need that.

>It’s possible AI just seems more popular than it is because it’s easy to hear the people who are talking about it but harder to hear the people who aren’t.

I think it's because there's a financial motivation for all the toxic positivity that can be seen all over the internet. A lot of people have put large quantities of money into AI-related stocks, and to them any criticism is a direct attack on their wealth. It's no different from cryptobros who put their kids' entire college fund into some failed and useless project and now need that project to succeed or else it's all over.


I’m not sure that really explains how people get onto hype trains like this in the first place, though. I doubt many people intentionally stake their livelihoods on a solution in search of a problem.

My guess is that it’s more of a recency bias sort of thing: it’s quite easy to assume that a newer way of solving a problem is superior to existing ways simply because it’s new. And also, of course, newfangled things naturally attract investment capital because everyone implicitly knows it’s hard to sell someone a thing they already have and don’t need more of.

It’s not just tech. For example, many people in the USA believe that the ease of getting new drugs approved by the FDA is a reason why the US’s health care system is superior to others, and want to make it even easier to get drugs approved. But research indicates the opposite: within a drug class, newer drugs tend to be less effective and have worse side effects than older ones. But new drugs are definitely much more expensive because their period of government-granted monopoly hasn’t expired yet. And so, contrary to what recency bias leads us to believe, this more conservative approach to drug approval is actually one of the reasons why other countries have better health care outcomes at lower cost.


Currently if someone posts here (or in similar forums elsewhere) there is a convention that they should disclose if they comment on a story related to where they work. It would be nice if the same convention existed for anyone who had more than say, ten thousand dollars directly invested in a company/technology (outside of index funds/pensions/etc).

A browser plugin that showed the stock portfolios of the HN commenter (and article-flagger) next to each post would be absolutely amazing, and would probably not surprise us even a little.

That’s because so much experience with AI is completely crap and useless.

The perception may be that anything AI related will be obsolete in months. So why pay to have it built into a laptop?

I doubt obsolescence anticipation has anything to do with it. That’s how tech enthusiasts think, but most people think more in terms of, “Is this useful to me?” And if it’s doing a useful thing now then it should still be doing that useful thing next year as long as nobody fucks with it.

I would guess it’s more just consumer fatigue. For two reasons. First, AI’s still at the “all bark and no bite” phase of the hype cycle, and most people don’t enjoy trying a bunch of things just to figure out if they work as advertised. Where early adopters think of that as play time, typical consumers see it as wasted time. Second, and perhaps even worse, they have learned that they can’t trust that a product will still be doing that useful thing in the future because the tech enthusiasts who make these products can’t resist the urge to keep fucking with it.


I strongly felt this way about most software I use before LLMs became a thing, and AI has ramped the problem up to 11. I wish our industry valued building useful and reliable tools half as much as chasing the latest fads and ticking boxes on a feature checklist.

This is exactly what I was thinking about my current place of employment. Wouldn't all of our time be spent better working on our main product than adding all these questionably useful AI add ons? We already have a couple AI addons we built over the years that aren't being used much.

To you – yes. But have you thought about the shareholders?

100% agree. Office and Windows were hugely successful because they did things that users (and corporations) wanted them to do. The functionality led to brand recognition, and that led to increased sales. Now Microsoft is putting the cart before the horse and attempting to force brand recognition before the product has earned it. And that just leads to resentment.

They should make Copilot/AI features globally and granularly toggleable. Only refer to the chatbots as "Copilot," other use cases should be primarily identified on a user-facing basis by their functionality. Search Assistant. Sketching Aid. Writing Aid. If they're any good at what they do, people will gravitate to them without being coerced.

And as far as Copilot goes, if they are serious about it as a product, there should be a concerted effort to leapfrog it to the top of the AI rankings. Every few weeks we're reading that Gemini, Claude, ChatGPT, or DeepSeek has topped some coding or problem-solving benchmark. That drives interest. You almost never hear anything similar about Copilot. It comes off as a cut-rate store-brand knockoff of ChatGPT at best. Pass.


>Now Microsoft is putting the cart before the horse and attempting to force brand recognition before the product has earned it. And that just leads to resentment.

I'm surprised that they haven't changed the boot screen to say "Windows 11: Copilot Edition".


I thought Copilot was just ChatGPT - isn't that the whole point of Microsoft's massive investment in OpenAI ?

They somehow made it worse and use a less capable version with a smaller context window.

The only potential upside for businesses is that it can crawl OneDrive/SharePoint and act as a glorified search machine for your mailbox and files.

That's the only thing really valuable to me; everything else is not working as it should. The Outlook integration sucks, the PowerPoint integration is laughably bad to the point of being worthless, and the Excel integration is less useful than Clippy.

I actually prefer using the "ask" function of GitHub Copilot through Visual Studio Code over using the company-provided Microsoft Copilot portal.


Someone somewhere understands that ChatGPT as a brand is too valuable to have it ruined by middle management. Hence Copilot.

Depends on the flavor. Now has Claude, as well. And Copilot Studio can extend to any model AI Foundry supports.

I think this is a really good take, and not one I’ve seen mentioned a lot. Pre-Internet (the world Microsoft was started in), the main expense for a software company was R&D. Once the code was written, it was all profit. You’d have some level of maintenance and new features, but really, the cost of sale was super low.

In the Internet age (the likes of Google and Netflix), it’s not much different, but now the cost of doing business is increased to include data centers, power, and bandwidth - we’re talking physical infrastructure. The cost of sale is now more expensive, but they can have significantly more users/customers.

For AI companies, these costs have only increased. Not only do they need the physical infrastructure, but that infrastructure is more expensive (RAM and GPUs) and power hungry. So the cost centers have gone up in expense by orders of magnitude. Yes, Anthropic and OpenAI can still access a huge potential customer base, but the cost of servicing each request is significantly higher. It’s hard to have a high profit margin when your costs are this high.

So what is a tech company founded in the 1970s to do? They were used to the profit margins from enterprise software licensing, and now they are trying to make a business case for answering AI requests as cheaply as possible. They are trying to move from low CapEx + low OpEx to a market that is high in both. I can’t see how they square this circle.

It’s probably time for Microsoft to acknowledge that they are a veteran company and stop trying to chase the market. It might be better to partner with a new AI company that is better equipped to manage the risks than to try to force a solo AI product.
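The margin squeeze described above can be shown with a toy calculation. All the per-seat numbers below are hypothetical, chosen only to contrast the two cost structures:

```python
def gross_margin(monthly_price, monthly_variable_cost):
    """Fraction of each subscription dollar left after serving the user."""
    return (monthly_price - monthly_variable_cost) / monthly_price

# Classic packaged/enterprise software: near-zero marginal cost per seat.
boxed = gross_margin(monthly_price=30.00, monthly_variable_cost=0.50)

# AI assistant: every request burns GPU time, so marginal cost is large.
assistant = gross_margin(monthly_price=30.00, monthly_variable_cost=18.00)

print(f"boxed: {boxed:.0%}, assistant: {assistant:.0%}")
# → boxed: 98%, assistant: 40%
```

Even before the CapEx of building the data centers is counted, the per-request serving cost alone cuts the gross margin roughly in half versus the old model.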


> cost of doing business is increased to include data centers, power, and bandwidth

Microsoft Azure was launched in 2010. They've been a "cloud" company for a while. AI just represents a sharp acceleration in that course. Unfortunately this means the software products have been rather neglected and subject to annoying product marketing whims.


They've had cloud products for a long time, but I don't think that Microsoft fundamentally changed. I still see them organized and treated as an Enterprise software company. (This is from my N=1 outside perspective.)

ChatGPT says that "productivity and business processes" is still the largest division in Microsoft with 43% of revenues and 54% of operating income (from their FY2025 10K). The "intelligent cloud" division is second with 38% revenue and 35% operating income. Which helps to support my point -- their legacy enterprise software (and OS) is still their main product line and makes more relative profits than the capital heavy cloud division.


Yeah. Hyperscalers who are building compute capacity have become asset-heavy industries. Today's Google, MSFT, and META are completely different from 10 years ago, and the market has not repriced that yet. These are no longer asset-light businesses.

ITT: we assume that "computer rooms", mainframes, and other dev tools weren't a thing for software companies pre-cloud

I see no one who assumes that.

They bet the company on AI. If their AI push fails, everything else does not matter anymore. What you are seeing is desperation and Hail Marys.

My guess is every team's metric is probably reduced to tokens consumed through the products owned.


take it a step further: the global market is stagnant, and the big gains of the 90s-2010s are gone.

you either hail-mary AI or you watch your margins dwindle; capitalism does not allow for no growth.


> But with "AI", not only is it not a product in itself, it's a feature of a product, and it has OpEx and CapEx costs that dominate the balance sheet based on their public disclosures. Worse, as a feature, it demonstrably harms the business with its hallucinations.

I think it depends on how the feature is used? I see it mostly as yet another user interface in most applications. Every couple of years I forget the syntax and formulas available in Excel. I can either search for answers or describe what I want, let the LLM edit the spreadsheet for me, and just verify.

Also, as time passes, the OpEx and CapEx are projected to come down, right? It may be a good thing that companies are burning through their stockpiles of $$$ in trying to find out the applicability and limits of this new technology. Maybe something good will come out of it.


The thing about giving your application a button that costs you a cent or two every time a user clicks on it is, then your application has a button that costs you a cent or two every time a user clicks on it.

For the use case of "How do I do thing X in Excel?" you could probably get pretty far with just a small, local LLM running on the user's machine.

That would move the cost of running the model to the end user, but it would also mean giving up all the data they can collect from running prompts remotely.

It would probably also make Office users more productive rather than replacing them completely and that's not the vision that Microsoft's actual customers are sold on.


Fair. But I sure wish we could instead solve this problem the way we did 20 years ago: by not having Web search results be so choked off by SEO enshittification and slop that it’s hard to find good information anymore. Because, I promise you, “How do I do thing X in Excel?” did not used to be nearly so difficult a question to answer.

To be fair, MS Office product defects should be regarded as just as harmful as hallucinations. Try a lookup in Excel on fields that might contain text.

For coding, AI is amazing and getting better.

Spell checking is also good, grammar better than me lol

And pumping out fake news and propaganda, way worth it when you do it


Your premise that the leaders of every single one of the top 10 biggest and most profitable companies in human history are all preposterously wrong about a new technology in their existing industry is hard to believe.

AI is literally the fastest-growing and most widely used/deployed technology ever.


Yup, I've been here before. Back in 1995 we called it "The Internet." :-) Not to be snarky here, as we know the Internet has, in fact, revolutionized a lot of things and generated a lot of wealth. But in 1995, it was "a trillion dollar market" where none of the underlying infrastructure could really take advantage of it. AI is like that today, a pretty amazing technology that at some point will probably revolutionize a lot of things we do, but the hype level is as far over its utility as the Internet hype was in 1995. My advice to anyone going through this for the first time is to diversify now if you can. I didn't in 1995 and that did not work out well for me.

The comparison to the dotcom bubble isn't without merit. As a technology in terms of its applications though I think the best one to compare the LLM with is the mouse. It was absolutely a revolution in terms of how we interact with computers. You could do many tasks much faster with a GUI. Nearly all software was redesigned around it. The story around a "conversational interface" enabled by an LLM is similar. You can literally see the agent go off and run 10 grep commands or whatever in seconds, that you would have had to look up.

The mouse didn't become some huge profit center and the economy didn't realign around mouse manufacturers. People sure made a lot of money off it indirectly though. The profits accrued from sales of software that supported it well and delivered productivity improvements. Some of the companies who wrote that software also manufactured mice, some didn't.

I think it'll be the same now. It's far from clear that developing and hosting LLMs will be a great business. They'll transform computing anyway. The actual profits will accrue to whoever delivers software which integrates them in a way that delivers more productivity. On some level I feel like it's already happening, Gemini's well integrated into Google Drive, changes how I use it, and saves me time. ChatGPT is just a thing off on the side that I chat randomly with about my hangover. Github Copilot claims it's going to deliver productivity and sometimes kinda does but man it often sucks. Easy to infer from this info who my money will end up going to in the long run.

On diversification, I think anyone who's not a professional investor should steer away from picking individual stocks and already be diversified... I wouldn't advise anyone to get out of the market or to try and time the market. But a correction will come eventually and being invested in very broad index funds smooths out these bumps. To those of us who invest in the whole market, it's notable that a few big AI/tech companies have become a far larger share of the indices than they used to be, and a fairly sure bet that one day, they won't be anymore.


I started working in 1997. Cisco was one of our big customers, so I knew a lot of engineers there. Cisco stock hit $80 in 2000. In 2002 it was at $10.

https://finance.yahoo.com/quote/CSCO/

I knew people who exercised their options but didn't sell, and based on the AMT (Alternative Minimum Tax) they had tax bills of millions of dollars, based on the profit IF they had sold on the day they exercised. But then it dropped to $10, and even if they sold everything they couldn't pay the tax bill. They finally changed the law after years, but those guys got screwed over.
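The trap works roughly like this. This is a simplified sketch: the share count and prices are invented, and the 28% figure is the top AMT rate of that era applied naively, ignoring exemptions and brackets:

```python
def amt_on_exercise(shares, strike, fmv_at_exercise, amt_rate=0.28):
    """AMT is assessed on the paper spread at exercise,
    even though no shares were sold and no cash came in."""
    spread = shares * (fmv_at_exercise - strike)
    return spread * amt_rate

# Hypothetical: exercise 100,000 ISOs at a $5 strike while the stock
# trades at $80, then the stock falls to $10 before you sell anything.
tax_bill = amt_on_exercise(shares=100_000, strike=5.0, fmv_at_exercise=80.0)
value_after_crash = 100_000 * 10.0

print(f"${tax_bill:,.0f} owed vs ${value_after_crash:,.0f} of stock left")
# → $2,100,000 owed vs $1,000,000 of stock left
```

The tax bill was locked in at exercise time, so even liquidating everything after the crash can leave you a seven-figure debt to the IRS.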

I was young and thought the dot-com boom would go on forever. It didn't. The AI bubble will burst too, but whether it is 2026, '27, '28, who knows. Bubble doesn't mean useless, just that investors will finally start demanding a profit and a return on their investment. At that point the bubble will pop, and lots of companies will fail or lose a lot of money. Then it will take a couple of years to sort out, and companies will have to start showing a profit.


I have zero doubt that AI will eventually make many people lots of money. Just about every company on earth is collecting TBs of data on everyone, and they're sure they can use that information against us somehow, but they can't possibly read and search through it all on their own.

I have quite a few doubts that it'll be a net positive for society, though. The internet (for all of its flaws) is still generally a good thing for the public. Users didn't have to be convinced of that; they just needed to be shown what was possible. Nobody had to shove internet access into everything against customers' wishes. "AI", on the other hand, isn't something most users want. Users are constantly complaining about it being pushed on them, and it's already forced MS to scale back the AI in Windows 11.


What do you mean exactly by "diversify"? Money/investment-wise?

Sell the risky stock that has inflated in value from hype cycle exuberance and re-invest proceeds into lower risk asset classes not driven by said exuberance. "Taking money off the table." An example would be taking ISO or RSU proceeds and reinvesting in VT (Vanguard Total World Stock Index Fund ETF) or other diversified index funds.

Taking money off the table - https://news.ycombinator.com/item?id=45763769 - October 2025 (108 comments)

(not investing advice)


What tomuchtodo said. When I left Sun in 1995 I had 8,000 shares, which in 1998 would have paid off my house, and by the time I sold them, when Oracle bought Sun, after a 3:1 reverse split, the total would not even buy a new car. Can be a painful lesson; it certainly leaves an impression.

Heh, I was at Netscape when the Sun-Netscape Alliance was created. Tip of the hat to a fellow gray beard. ;)

How do you diversify now? I presume you don't refer to stock portfolio, do you?

Stocks are fine for diversification, just stocks that have different risk factors. So back in the '90s I had been working at Sun, then did a couple of startups, and all of my 'investment' savings (which I started with stock from the employee purchase plan at Sun) were in tech of one kind or another. No banking stocks, no pharmaceutical stocks, no manufacturing-sector stocks. Just tech, and more precisely Internet technology stocks. So when the Internet bubble burst, every stock I owned depreciated rapidly in price.

One of the reasons I told myself I "couldn't" diversify was because if I sold any of the stock to buy different stock I'd pay a lot of capital gains tax and the IRS would take half and now I'd only be half as wealthy.

Another reason was my management telling me I couldn't sell my stock during "quiet" periods (even though they seemed to) and so sometimes when I felt like selling it I "couldn't."

These days, especially with companies that do not have publicly traded stock, it is harder than ever to diversify. The cynic in me says they are structured that way so that employees are always the last to get paid. It can still work, though. You just have to find a way to option the stock you are owed on a secondary market. Not surprisingly, there are MBA types who really want a piece of an AI company and will help you do that.

So now I make sure that not everything I own is in one area. One can do that with mutual funds, and to some extent with index funds.

But the message is if you're feeling "wealthy" and maybe paying your mortgage payments by selling some stock every month, you are much more at risk than you might realize. One friend who worked at JDS Uniphase back in the day just sold their stock and bought their house, another kept their stock so that it could "keep growing" while selling it off in bits to pay their mortgage. When JDSU died they had to sell their house and move because they couldn't afford the mortgage payments on just their salary. But we have a new generation that is getting to make these choices, I encourage people in this situation to be open to the learning.


The blockchain hype bubble should probably be pretty near in memory for most people, I would suspect. I thought that was a wild, useless ride until AI took it over.

No one has ever used blockchain. Consumer AI apps have billions of MAUs; how is this even remotely comparable, dude?

> at some point will probably revolutionize a lot of things we do

The revolution already happened. I can't imagine life without AI today. Not just for coding (which I actually lament) but just in general day to day use. Sure it's not perfect but I think it's quite difficult to ignore how the world changed in just 3-4 years.


It makes me sad trying to imagine what it's like to not be able to imagine life without AI.

That's just so strange to me. In my experience, it hallucinates and makes things up often, and when it's accurate, the results are so generic and surface level.

Yes but I use it as a substitute friend, gf, therapist, dumb questions like "how 2 buy clothes and dress good and is this good and how to unclog my toilet shits"

> Your premise that the leaders of every single one of the top 10 biggest and most profitable companies in human history are all preposterously wrong about a new technology in their existing industry is hard to believe.

Their incentives are to juice their stock grants or other economic gains from pushing AI. If people aren't paying for it, it has limited value. In the case of Microsoft Copilot, only ~3% of the M365 user base is willing to pay for it. Whether enough value is derived for users to continue to pay for what they're paying for, and for enterprise valuation expectations to be met (which is mostly driven by exuberance at this point), remains to be seen.

Their goal is not to be right; their goal is to be wealthy. You do not need to be right to be wealthy, only well positioned and on time. Adam Neumann of WeWork is worth ~$2B following the same strategy, for example. Right place, right time, right exposure during that hype cycle.

Only 3.3% of Microsoft 365 users pay for Copilot - https://news.ycombinator.com/item?id=46871172 - February 2026

This is very much like the dot com bubble for those who were around to experience it.

https://old.reddit.com/r/explainlikeimfive/comments/1g78sgf/...

> In the late 90s and early 00s a business could get a lot of investors simply by being “on the internet” as a core business model.

> They weren’t actually good business that made money…..but they were using a new emergent technology

> Eventually it became apparent these business weren’t profitable or “good” and having a .com in your name or online store didn’t mean instant success. And the companies shut down and their stocks tanked

> Hype severely overtook reality; eventually hype died

("Show me the incentives and I'll show you the outcome" -- Charlie Munger)


Your premise that the leaders of every single one of the top 10 biggest and most profitable companies in human history are all preposterously wrong about a new technology in their existing industry is hard to believe.

It's happened before.

Your premise that companies which become financially successful doing one thing are automatically excellent at doing something else is hard to believe.

Moreover, it demonstrates both an inability to dispassionately examine what is happening and a lack of awareness of history.


> It's happened before.

source?


Seriously? Have you just emerged from a hundred-year sleep in a monastery on the top of a mountain?

should be really easy to conjure up examples then. where every single business leader has been wrong about a new technology to the tune of hundreds of billions of dollars.

I find it very easy to believe. The pressures that select for leadership in corporate America are wholly orthogonal to the skills and intelligence needed to leverage novel and revolutionary technologies into useful products that people will pay for. I present as evidence the graveyard of companies and careers left behind by many of those leaders who failed to innovate despite what seemed, in retrospect, to be blindingly obvious product decisions.

The product is the stock price, not Office or Windows. From that perspective they are doing it right.

And this is the broken mindset tanking multiple large companies' products and services (Google, Apple, MS, etc). Focus on the stock. The product and our users are an afterthought.

Someone linked to a good essay on how success plus Tim Cook's focus on the stock has caused the rot that's consuming Apple's software[0]. I thought it was well reasoned and it resonated with me, though I don't believe any of the ideas were new to me. Well written, so still worth it.

0. The Fallen Apple - https://mattgemmell.scot/the-fallen-apple/


Microsoft has done the worst of any Mag 7 stock since the day before ChatGPT's release: https://totalrealreturns.com/n/AAPL,MSFT,AMZN,GOOGL,META,TSL...

Is sacrificing everything for short term gains really the right move in any situation?

Dunno, hard question, but I think the payoff to executives is tied to stock performance in such a way that messes with the equation a lot.

What is at stake in the long term? Their legacy? Both in terms of feeling good about it and of getting the next job, if they are not at the end of their career.


That's an excellent question, but the answer would depend on goals and the evaluation system used.

It seems to me that CEOs have a different opinion than anyone who cares instead about actual people.


The investor being the customer rather than actual paying customers was something I noticed occurring in the late 90s in the startup and tech world. Between that shift in focus and the influx of naive money the Dot Bomb was inevitable.

Sadly the fallout from the Dotcom era wasn't a rejection of the asinine Business 2.0 mindset but instead an infection that spread across the entirety of finance.


In particular it's the short term stock price. They'll happily grift their way to overinflated stock prices today even though at some point their incestuous money shuffle game will end and the stocks will crash and a bunch of people who aren't insider trading are going to be left with massive losses.

Stock price increases that don't lead to higher dividends eventually are indistinguishable from Ponzi schemes after the fact.

Buybacks lead to stock price increases and are indistinguishable from dividends in theory, and in practice they are better than dividends because of taxation.
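To make that equivalence concrete, here's a toy back-of-the-envelope sketch (made-up numbers, pre-tax) showing that a shareholder ends up with the same value per original share whether the company pays a dividend or repurchases shares at the market price:

```python
# Toy comparison of a dividend vs. a buyback (illustrative numbers only).
shares = 100
firm_value = 1000.0   # pre-distribution market cap
cash_out = 100.0      # cash the company returns to shareholders

# Dividend: every share receives cash, and the price drops by the dividend.
div_per_share = cash_out / shares                   # 1.0
price_after_div = (firm_value - cash_out) / shares  # 9.0
holder_value_div = price_after_div + div_per_share  # 10.0 per original share

# Buyback: the company repurchases shares at the pre-distribution price.
price = firm_value / shares   # 10.0
bought = cash_out / price     # 10 shares retired
price_after_buyback = (firm_value - cash_out) / (shares - bought)  # 10.0

# A holder who sells a pro-rata slice into the buyback walks away with the
# same total value as the dividend recipient; the practical difference is
# when, and at what rate, the gain gets taxed.
```

The pre-tax outcomes are identical; the buyback's edge in practice comes from deferring the tax event until the holder chooses to sell.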

The problem I have with that logic is that it still doesn't really give any sensible reason for why the stock should have any economic value at all. If the point is that the company will pay for it at some point, it makes more sense for it to be a loan rather than a unit of stock. I stand by my claim that selling a non-physical item that does nothing other than hopefully get bought again later for more than you sold it for is indistinguishable from a scam.

> top 10 biggest and most profitable companies in human history are all preposterously wrong

There's another post on the front page about the 2008 financial crisis, which was almost exactly that. Investors are vulnerable to herd mentality. Especially as it's hard to be "right but early" and watch everyone else making money hand over fist while you stand back.

https://news.ycombinator.com/item?id=46889008


This was the top 10 of the S&P in 2008:

1. Exxon Mobil
2. General Electric (GE)
3. Microsoft
4. Procter & Gamble
5. Chevron
6. Johnson & Johnson
7. AT&T
8. Walmart
9. JPMorgan Chase
10. Berkshire Hathaway

1 financial institution.

8 of the top 10 currently are tech companies. It's completely different.


Every time these companies make a mistake and waste billions of dollars, it is well publicized. So there is plenty of data showing that they are frequently and preposterously wrong.

Name a technology that every single top tech company has invested billions of dollars in and that has then flopped. The metaverse does not count unless Google, Amazon, Microsoft, etc. were also throwing billions into it.

Weird goalpost.

By that logic, financial crashes wouldn't happen.


Were you around in 2008?

This industry has seen several bubbles in its existence. Many previously top companies didn't even survive them.

The mistake is simple. It is like the difference between giving you many tools to use vs making you the tool.

https://www.youtube.com/watch?v=LRq_SAuQDec


I get the feeling that a lot of people using AI, feeding it their private data, and trusting what it tells them are certainly being tools.

Doesn't matter what the leaders think if the users hate it and call it slop

https://futurism.com/artificial-intelligence/microsoft-satya...


Right, because Copilot is bad, that must mean no one uses ChatGPT, or Claude Code, or Gemini. They only have billions of MAUs; people must really hate it.

Sadly the media calls the lawful use of a warrant a 'raid' but that's another issue.

The warrant will have detailed what it is they are looking for. French warrants (and the French legal system!) are quite a bit different from the US's, but in broad terms they operate similarly. It suggests that an enforcement agency believes there is evidence of a crime at the offices.

As a former IT/operations guy I'd guess they want on-prem servers with things like email and shared storage, stuff that would hold internal discussions about the thing they were interested in, but that is just my guess based on the article saying this is related to the earlier complaint that Grok was generating CSAM on demand.


It is a raid in that it's not expected, it relies on not being expected, and they come and take away your stuff by force. Maybe it's a legal raid, but let's not sugarcoat it: it's still a raid, and whether you're guilty or not it will cause you a lot of problems.

I mean, it's not like people get advance notice of search warrants, or that police ask pretty please. I agree that, the way people use the term, it's a fine usage, but the person using it is trying to paint a picture of a SWAT team busting down the door by calling it that.

> I'd guess they want on-prem servers with things like email and shared storage

For a net company in 2026? Fat chance.


Agreed, it's a stretch. My experience comes from Google: when I worked there and they set up a Chinese office, they were very carefully trying to avoid having anything on premises that could be searched/exploited. It was a huge effort, one that wasn't made for the European and UK offices, where the government was not an APT. So did X have that level of hygiene in France? Were there IT guys in the same vein as the folks that Elon recruited into DOGE? Was everyone in the office "loyal"?[1] I doubt X was paranoid "enough" in France not to have some leakage.

[1] This was also something Google did: change access rights for people in the China office who were not 'vetted' (for some definition of vetted), on the theory that they could be an exfiltration risk. Imagine a DGSE agent under cover as an X employee who carefully puts a bunch of stuff on a server in the office (without triggering IT controls) and then lets the prosecutors know it's ready, and they serve the warrant.


Part of the prosecution will be to determine who put the content on the server.

Under GDPR, if a company processes European user data, it is obligated to make a "Record of Processing Activities" available on demand (an umbrella term for a whole bunch of user-data/identity-related material). They don't necessarily need to store the records onsite, but they need to be able to produce them. Saying you're an internet company doesn't mean you can just put the stuff on a server in the Caribbean and shrug when the regulators come knocking on your door.

That's aside from the fact that they're a publicly traded company under obligation to keep a gazillion records anyway like in any other jurisdiction.


> publicly traded company

Which company is publicly traded?


> They don't necessarily need to store them onsite but they need to be able to produce them.

... within 30 days, right? The longest "raid" in history.


Who has on prem servers at an office location?

I'm guessing you're asking this because your picture of a 'server' is a thing in a large rack? Nearly every tech business has a bunch of machines, everything from an old desktop to last year's laptop, which have been reinstalled with Linux or *BSD and are sitting on the network behaving, for all intents and purposes, as 'servers' (they aren't moving or rebooting or running local sessions, etc.).

I've worked in several companies and have never seen this. Maybe for a small scale startup or rapidly growing early stage company. I would be pretty shocked to see an old desktop acting as a server nowadays.

I'm curious about why they delisted it, having run operations for Blekko (another search engine, one that would backfill with Yahoo/Bing results when we didn't get a lot of hits in our own index). Of course, people like DDG could index it themselves, like they do Wikipedia and some other sites.

While Blekko was active there really were only three reasons we could be "forced" to de-index a site: it was being used by a 'bad' country (N. Korea, Iran, etc.), it was serving up CSAM, or it was participating in ad fraud. Microsoft also would delist places that were in the criminal underworld, so they wouldn't index the <random-string>.ru sites and things like that. They should be able to give you an answer, though, unless they have an NSL that says they can't talk about it.

That makes me wonder if web sites that have "anti government / anti ICE" content will start getting delisted by US web indexes.


Yeah, the user 'o4c' appears to be a bot that reposts things that have been previously popular.


Hello, first of all, sorry. I am not a bot, but a human user.

I was searching for open source DIY microscope projects and found the OpenFlexure Microscope as the first search result. After reading through the project and finding it technically interesting, I submitted it to Hacker News. Fortunately, it reached the front page approximately five days after submission.

If you search for the term “open source microscope,” you will see the same link appearing as the top result.

https://www.google.com/search?q=open+source+microscope&oq=op...

From my observation, information related to precision engineering is not widely known and can be difficult to find. Because of this, overlapping submissions can sometimes occur. I apologize if this caused any repetition. Detailed teardowns of precision instruments such as gauges, metrology tools, and scientific equipment are relatively rare, which contributes to this situation.


Welcome to HN


That's a solid set of lessons. My favorite is that software doesn't advocate for you; people do.


Discussion systems all the way down :-). This is a fair assessment of the GitHub issues system. I suspect that because git(1) can be a change control system for anything, there is never any hope of making an effective issue tracker for the particular thing it is being used to manage changes on. The choice the project made, to let the developers determine when something is an issue, essentially adds a semantic layer on top of issues that customizes it for this particular corpus of change management.

