Vibe coding, in my opinion, is analogous to borrowing on a credit card to gamble on a startup.
Occasionally you hear the feel-good story of how Fred Smith gambled the last $5,000 to save FedEx and so on, but most people with that mindset end up crashing out.
Vibe coding a product runs the risk of acquiring too much tech debt before the project is successful.
Product-market fit is very hard; you need to keep enough room for pivots. Changes in direction will always accumulate debt, even when the tech is well written. It is far more difficult when you accumulate debt quickly.
The counterpoint is that procrastinating, over-engineering prematurely, or building a lot of unrelated tooling and losing focus can also bring the product down quickly, or never let it start.
Being able to vibe code POCs and the like is a great tool if done in a controlled, limited, well-defined way.
Just like borrowing cash on your credit card: it is not always bad, it just usually is.
Maybe vibe coding gets so good that we completely trash what was written and build from zero.
I've seen that with badly written code as well; it's often easier to rewrite than to fix the ungodly mess.
Yes, if you know exactly what you want, it is not difficult to rewrite. Writing is the easiest part of coding.
The challenge is knowing exactly what is needed. No matter how bad the code, it is never easy to justify a rewrite.
In a large and old enough code base, documentation is always incomplete or incorrect; the code becomes the spec.
Tens or hundreds of thousands of hours would have been expended in making it "work". A refactor inevitably breaks things, because no single person can fully understand everything.
There is a reason "don't fix what isn't broken" is a well-known principle. It is the same reason we still have banking and aviation systems running mainframes and COBOL from the 70s.
A rewrite requires the same or likely more hours of testing and adoption, compressed into a typically much shorter span of time, to iron out the issues [1]. Few organizations, private or public, have the appetite to go through that pain, and if it is a money-facing or public-facing component, it is even harder to get buy-in from leadership, or even from the app's users.
---
[1] During the original deployment, the issues (bugs or feature gaps) would have been incrementally solved over many years, or even decades. During the rewrite you don't have 10-20 years, so you not only have to expend the same or more hours, you have to do it much more quickly as well.
Workloads emerge with higher capacity, not the other way around. Everything from lossless media to virtual reality applications scales better with more available bandwidth.
An average AAA game is 100-200GB today. That is not by accident. On the best residential internet, a dedicated 1Gbps line, that is still a 30-minute download; for the average buyer it is easily a few hours.
A 2TB game today would be a 5-hour download on a 1Gbps connection and days for the median buyer. Game developers cannot contemplate a 2TB game unless storage capacity, I/O performance, and bandwidth all support it.
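To make the arithmetic concrete, here is a quick back-of-the-envelope sketch (idealized line rates, ignoring protocol overhead and throttling):

```python
def download_hours(size_gb: float, mbps: float) -> float:
    """Idealized download time: size in GB, link rate in megabits/s."""
    return size_gb * 8_000 / mbps / 3600  # 1 GB = 8,000 megabits

for size_gb in (200, 2_000):       # today's AAA game vs. a hypothetical 2TB game
    for mbps in (1_000, 100):      # dedicated 1 Gbps vs. a ~100 Mbps median buyer
        print(f"{size_gb} GB @ {mbps} Mbps: {download_hours(size_gb, mbps):.1f} h")
# 200 GB: ~0.4 h at 1 Gbps, ~4.4 h at 100 Mbps
# 2 TB:   ~4.4 h at 1 Gbps, ~44 h at 100 Mbps
```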
Hypothetically, if I could ship a 200TB game, I would probably pre-render most of the graphics at much higher resolutions/frame-rates rather than compute them poorly on the GPU on the fly.
More fundamentally, we would lean towards less compute on the client and a more pre-computed, asset-driven approach for applications. A good example from the tech world in the last decade is how we switched to distributing docker/container layers instead of just source files or built packages. Typical docker images in the corporate world exceed 1GB, while the source files actually being shipped are probably less than 10MB of that. We are trading size for better control; pre-built packages instead of source was the same trade-off in the 90s.
You optimize for whatever is more scarce. Single-threaded and even multi-threaded compute growth has been slowing down. Consumer internet bandwidth has no physics limit the way processors do, so it is not a bad idea to optimize for delivery of pre-computed assets rather than rely on client-side compute.
I'll assume by "game servers" you mean "video game binary and asset distribution servers that support game stores like Steam and Epic and others".
When I paid Comcast for 1.5Gbit/s down, Steam would saturate that downlink with most games. I now pay for service that's no less than 100mbit symmetric, but is almost always something like 300->600mbit. Steam can -obviously- saturate that. Amusingly, the Epic Games Store (EGS) client cannot. Why?
Well, as far as I can tell, the problem is that -unlike the Steam client- the EGS client single-threads its downloads and does a lot of CPU-heavy work as part of those downloads. Back when I was running Windows, EGS game downloads absolutely pegged one of my 32 logical CPUs and left a ton of download bandwidth unused. In contrast, Steam sets like eight or sixteen of my logical CPUs at roughly half utilization and absolutely saturates my download bandwidth. So, yeah... if you're talking about downloads from video games stores it might be that whatever client your video game store uses sucks shit.
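For what it's worth, here is a minimal sketch of that difference, assuming a hypothetical CDN endpoint and chunk size (neither client is open source, so this only shows the shape of the problem):

```python
import concurrent.futures
import requests

URL = "https://cdn.example.com/depot/chunk"  # hypothetical endpoint
CHUNK = 8 * 1024 * 1024                      # 8 MiB ranges, an assumption

def fetch_range(start: int) -> bytes:
    # Each worker pulls one byte range; decompression/verification of a
    # finished chunk can overlap with other workers' network transfers.
    headers = {"Range": f"bytes={start}-{start + CHUNK - 1}"}
    return requests.get(URL, headers=headers, timeout=30).content

# Eight workers keep the link saturated; a single-threaded client instead
# leaves the link idle during every CPU-bound decompress/verify step.
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    parts = list(pool.map(fetch_range, range(0, 64 * CHUNK, CHUNK)))
```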
OTOH, if you're talking about video game servers where people play games they've already installed with each other, unless those servers are squirting mods and other such custom resources at clients on initial connect, game servers usually need like hundreds of kbps at most. They're also often provisioned to trickle those distributed-on-initial-connect custom resources in an often-misguided attempt to not disturb the gameplay of currently-connected clients.
Game downloads, whether on a console or a PC, come from a CDN. The difference is that Steam has a lot of capacity. They can have millions of players all downloading the same game on the same day at gigabit speeds. Console makers invariably cheap out and cannot reach the same level of service.
Hell, it might be the case that console manufacturers are doing the same stupid shit that EGS is doing. Perhaps they wrote their download code back when 50mbit/s was a dreadfully fast download speed for the average USian to have and they haven't updated it since. (And why would they? What's a consumer's alternative other than "Pay 1k or more for a gaming machine that can run games delivered through Steam" or "Don't play video games"?)
You can still survive without using generative tools. Just not writing CRUD apps.
There is plenty of code that requires proofs of correctness and solid guarantees, as in aviation or space and so on. Torvalds mentioned in a recent interview how little of the code he gets is generated, despite kernel code being readily available to train on.
My code base is two monorepos with 10M+ lines. I have the same experience as you: I run 3-6 agents with remote devcontainers and tmux, rarely break 75% usage, and have never had the weekly limit stop me.
My observation is that these things impact both quality and token consumption a lot:
- Architecture really matters: how messy the code is and how poorly things are organized makes a big difference.
- How the context window is managed, especially now with the default 1M window.
- How many MCP servers are used. MCP burns a lot of tokens; CLI tools are easier and quicker, and good ones don't even need any additional harness like skills etc., just a prompt suggesting their use.
- Using the right tool matters a lot.
- What can be done with traditional deterministic tools has to be done that way, with the agent controlling (or even building) the tool, not doing the tool's work with tokens.
- For large refactors, codemod tools, AST parsers, etc. are better than having the agent parse and modify every module/file or inefficiently navigate the codebase with grep and sed (see the sketch after this list).
- How much prep work/planning is put in before the agent starts writing code. Earlier corrections are cheaper than corrections after the code is generated.
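As a concrete sketch of the codemod point above: a syntax-aware rename with Python's stdlib ast module (the names are hypothetical; a real refactor would likely use libcst or a similar tool that preserves formatting):

```python
import ast
from pathlib import Path

class RenameFunc(ast.NodeTransformer):
    """Rename old_name -> new_name at both definition and call sites."""

    def visit_FunctionDef(self, node: ast.FunctionDef) -> ast.FunctionDef:
        if node.name == "old_name":
            node.name = "new_name"
        self.generic_visit(node)  # keep walking into the function body
        return node

    def visit_Name(self, node: ast.Name) -> ast.Name:
        if node.id == "old_name":
            node.id = "new_name"
        return node

source = Path("module.py").read_text()   # hypothetical input file
tree = RenameFunc().visit(ast.parse(source))
print(ast.unparse(tree))                 # ast.unparse requires Python 3.9+
```

Unlike grep/sed, this cannot rename a string literal or a comment by accident, and the agent only spends tokens driving the tool, not re-reading every file.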
Typically my starting prompts for the plan phase are 1-2 pages long and take 30-60 minutes just to put into the CLI text box. With that, I first use the agent to generate detailed ADRs and documentation and to break down issues in the ticketing system, then review and correct the plan several times before attempting to have a single line written.
---
It is no different from what we would do with a human; typing the lines of code was always the easy part once you know exactly what you want.
It feels faster to bypass the thinking phase and offload entirely to the agent, either letting it stumble around on low-res feedback or, worse, just winging it. Either way you are adding a lot of debt, and things slow down quickly.
SpaceX is widely reported to be planning to raise $75 billion in new capital. That may seem like a small percentage of the valuation target. However, it is about 3 times the previous highest raise, the $29B from when Saudi Aramco went public a few years back. The market simply may not be that deep [1].
There is a good chance this one becomes the WeWork of this decade. The valuation, the amount being raised, cooling interest in AI, Middle Eastern capital changing priorities, the interest rate outlook for the rest of the decade: these are all strong headwinds to overcome even when not raising the largest amount ever in an IPO.
That is not to say it is destined to fail; Elon is an excellent salesman of vision when fundamentals are weak. There is no better proof than Tesla's P/E.
It is by no means clear whether this will be successful. The valuation, the funds being raised, and the future growth potential all rest on more than just SpaceX's core businesses, which would have been an easy sell.
---
[1] i.e. it could still be under-subscribed even if everyone, including retail, buys into the vision and growth projections, is comfortable with the valuation, and gets fully on board.
Even in this best-case scenario SpaceX would have to sell at the lower end of the target range, or go even lower, and could still end up short no matter what, because there may simply not be enough money in the market.
I think you have to temper the skepticism a bit though.
SpaceX has dramatically lowered the cost of launching things into space. They are still the leader here. They can put a kg into orbit cheaper than anyone, even heavily subsidized state operations (EU and China).
Their order book continues to be full. Every single launch vehicle they roll off the line was pre-sold years ago, including its re-use flights.
I agree that Elon is their biggest potential problem and a big risk but their launch business is sound and wildly successful. If you believe access to space will be a growing segment of the economy in the future it isn't exactly a bad investment.
I remember all the people putting Tesla down when they IPO'd. I bought $4k of stock (all I could afford at that time). Sold $100k of it a few years ago, still have the other half worth near $220k. Their numbers at IPO time were garbage and it wasn't clear they would even survive. Then they started shipping hundreds of thousands then a million cars.
YMMV, consider all sides and make your own judgement. Just be careful about trusting the anti-SpaceX case. Even if everyone is technically correct about them it can still be a huge miss not to invest! The future is not static and if they can put the raised capital to productive use the IPO could end up being a fantastic deal. And FWIW I also agree the largest immediate risk is they are over-valued. Only time will tell on that front.
In 2024, 66% of their launches were for Starlink. So it's not quite correct to suggest there's a vibrant external market for their product; a lot of it is a sort of self-dealing.
> it’s not quite correct to suggest there’s a vibrant external market for their product
There is very large demand for launch services. SpaceX balances launching customers and launching Starlink. It's not like they give every launch slot to customers and then launch Starlink whenever there's an opening they couldn't fill.
This is missing the point of their valuation. SpaceX will internally use their launch capabilities to build industries that no one else can. Starlink is already their main revenue stream. Starship will open up new realms of possibilities.
Tesla IPO'd at $1.7B and is worth $1.4T today. Giving you the benefit of the doubt that you didn't buy right at IPO, since that would have made it closer to a 1000x gain rather than the ~100x you describe, I will point out that there's a world of difference buying in at $1.7B, when there's still room for the stock to go 1000x, versus buying in at a $1.7T valuation, where there's not much gain to be had.
Even the most highly valued company in the world, Nvidia, is less than 3x that valuation, so it's not a good comparison with Tesla's IPO.
At current launch numbers it may not be worth $1.5+ trillion, but valuations aren't about the present; they're about discounted future cash flows.
It seems logical that there could/will be far more demand for launch if the price were lower. Prices are quite extreme currently; a standard 3U cubesat (the size of a loaf of bread) is $300k, and that's just to orbit.
There could be lots of startups that want to try robotic space mining but launch costs just make that mostly impossible currently so there are only a select few. It's like valuing the Dutch East India company based on the trade volumes in 1603. Of course people are not going to be buying much pepper or nutmeg if it costs them weeks of labor, but build lots of reusable ships, and with each voyage, more people can afford your pepper and nutmeg until it's a common household item.
Discounted future cash flow is discounted by risk. The point is that there is a lot of risk around growing that future revenue.
>seems logical that there could/will be far more demand for launch if the price were lower.
This thesis hasn't played out much in the 10 years since Falcon first landed in 2015.
The non-Starlink component of revenue has not grown massively beyond the size of the market in 2015. SpaceX isn't lowering launch prices to induce demand beyond being the cheapest by just enough; they would be going lower if cost were the only barrier to more revenue.
It is not that businesses aren't possible at lower launch prices; Starlink is testament that they are.
The problem is that the rest of the world has not been able to innovate fast enough to take advantage of it, even after 10 years. The industry struggles with things like manufacturing satellites at scale, raising money for it, executing on innovation, etc.
What that means for SpaceX is that even if launch costs get cheaper than they are now, the launch market simply may not grow quickly enough for the valuation number to make sense. They would need to enter a lot of new markets directly and be their own launch customer beyond Starlink. That comes with its own set of execution, regulatory, and other risks. The data-center-in-space play [1] is an attempt to do this.
Whether it is the DC play or something else, they will need to find and sustain another large business to grow. Maybe they will, maybe not.
It is not very clear now, and that is a lot of risk, so any future cash flow projection has to be discounted heavily.
---
[1] I am not qualified to comment on the technical feasibility; however, that is not needed to analyze the company's finances. It is just one more risk factor; depending on how you feel, you can assign it 0 or 1 or anything in between.
> This thesis hasn't played out much in the 10 years since Falcon first landed in 2015.
It did play out: there are many more launches today, 5x in 20 years. The ~75% of SpaceX launches that are for Starlink (which account for nearly 50% of all launches) were quietly financed by their other launch customers, exactly because the real cost to launch dropped so much.
That doesn't mean you're wrong, but you do seem to forget that SpaceX, as its own customer, knows the number of launches is going to rise exponentially. They obviously choose to manufacture for where the market _will be_, while you don't see the market before it's there. Which is good for them.
I think you have to temper the glazing a bit though.
These people and their endeavors are thoroughly, irredeemably corrupt. It's nice you got a taste, but their impact on society has been calamitous and will take decades to recover from (if at all).
>There is a good chance this one becomes the Wework of this decade.
It's very different from WeWork, which was basically just subletting office space with beer taps. At least SpaceX has done significant stuff with rockets and Starlink.
The comparison was not about the strength of the business; it was about how the attempt to IPO and the original S-1 was the trigger for more realistic price discovery for WeWork.
My comment was that it is possible that, by trying to go public or becoming public, SpaceX will also go through that same process once their numbers become available.
They aren't reporting anything yet. What we are hearing comes from news media who get their leaks/info from investors, who in turn get some form of IR reports/presentations.
Both will do public reporting only when they IPO [4] and have a regulatory requirement to do so every quarter.
For private companies [1] reporting to investors, there are really no fixed rules [3].
Even for public companies, there is a fair amount of leeway in how GAAP [2] expects revenue to be recognized. The two ways you highlight are how you account for GMV (Gross Merchandise Value).
The operating margin becomes much lower, so multiples on absolute revenue are impacted when you count GMV as revenue.
For example, counting GMV in revenue, AMZN trades at only ~3x ($2.25T / ~$800B) compared to, say, MSFT ($2.75T / $300B) and GOOG ($3.4T / $400B), which both trade at ~9x their revenue.
While the three are roughly similar in maturity, size, and growth potential, with a large overlap of directly competing businesses, there is a huge difference (3x vs 9x) because AMZN's number includes retail GMV that GOOG and MSFT do not have at the same scale in theirs.
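A quick sanity check of those multiples (rough figures from the comment above, not live market data):

```python
# Price-to-revenue multiples implied by the figures cited above.
caps = {"AMZN": 2.25e12, "MSFT": 2.75e12, "GOOG": 3.4e12}   # market caps
revs = {"AMZN": 800e9, "MSFT": 300e9, "GOOG": 400e9}        # AMZN incl. GMV
for ticker in caps:
    print(f"{ticker}: {caps[ticker] / revs[ticker]:.1f}x")
# AMZN: 2.8x, MSFT: 9.2x, GOOG: 8.5x -- the ~3x vs ~9x gap in the text
```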
---
[1] There are still a lot of rules for reporting to the IRS and other government entities, but the information we (and the news media) get comes from investors, not from leaks of government reporting, which would typically be private and illegal to disclose to the public.
[2] And how the Big 4, who sign off on company audits, prefer to account for it.
[3] As long as it is not explicit fraud or cooking the books, i.e. they are transparent about their methods.
[4] Strictly speaking, this would be covered in the prospectus (S-1) a few weeks before going public, and that is the first real look we get into the details.
Does the GAAP accounting matter if everyone passively buys shares due to the new fast entry rules, which corruptly will force us all to buy into these companies? The fundamentals and true value seem less relevant than ever:
For other readers, I want to add some context here. NASDAQ is pondering whether or not to change their NASDAQ 100 index membership rules for IPOs. Currently, there is a three-month waiting rule for IPOs. They are proposing (not sure if it has been passed/agreed/completed yet) to remove this waiting rule.
Real question: What is the real impact of this rule change? To me, it seems so minor. Three months is just a blip in time for any long term investor.
> which corruptly will force us all to buy into these companies
Why is this "corrupt"? That term makes no sense here.
Also, if you don't like the NASDAQ 100 rules, then you don't have to invest in securities that track it. You can trade the basket yourself minus the names that you don't like.
Finally, I would say that the S&P 500 index is far more important than the NASDAQ 100. To join the S&P 500 index, the name must be profitable for the most recent year (four quarters). Recall that Uber IPO'd in 2019 but was not profitable until 2023. OpenAI probably will not be profitable when it goes public; thus, it will not join the S&P 500 immediately.
I think the bigger story is SpaceX. It will likely IPO very close to a 1T USD market cap (with a small float: ~10%). And, thanks to StarLink, I assume that SpaceX is now wildly profitable.
The "corruption" allegation is that for, yes, SpaceX, index funds will effectively be "forced" to buy in right away at their IPO price, rather than seeing where they settle before getting the money in. Given that most people have most of their money in index funds, it's sort-of an automatic buy and raises some hackles about a fixed game.
Saying "you can trade the basket yourself minus the names you don't like" is not a real counterargument. Most of us are not going to do that, I'm not going to do that and I'm writing this post right now. John Doe is certainly not doing that.
> Also, if you don't like the NASDAQ 100 rules, then you don't have to invest in securities that track it.
Isn't the idea with the indexes that they allow you to intentionally not take an activist position in the market? The exposure is not tied to any underlying market hypothesis. In other words, if we make people form a market hypothesis in order to decide whether or not to hold this index, it has failed in its purpose.
Diluting the index entry rules only devalues the index's utility. When it becomes a bigger problem, other indices with higher quality controls will outcompete the current ones and be used by asset managers seeking safety.
More likely than not, most of us are already holding stock in these companies one way or another. All of the Mag 7 hold a major chunk of OAI and Anthropic stock anyway; slower entry does not make it less risky for us.
Even if the big tech companies did not hold any stock, they are still the biggest vendors, and their own order books are hugely impacted by AI demand from these two (and others in this space). Either way, we are all in this together.
> When it becomes a bigger problem, other indices with higher quality controls will out compete the current ones and be used by asset managers seeking safety
I personally think this is the correct solution. Since indexes are over-inflated either way, this brings much-needed sanity to the index. Your index is now worth much more or much less based on how you view the AI bubble, and you are forced to understand and correct your forward-looking investments accordingly.
Passive investments are good, but taken too far, as they clearly have been in the last decade, they become a scam. Everyone is SIPing into it, and there is infinite liquidity, until one big whale finally decides to book it, and then all hell breaks loose on the same damn day.
You can just choose not to play the accounting game and pick only the ones that are actually GAAP-viable as investment opportunities. For example, the Mag 7 minus Tesla are all relatively cheap when they dip.
Sometimes the best play is just not to play. If you think they are too risky, walk away. There are enough good opportunities.
> the Mag 7 minus Tesla are all relatively cheap when they dip
I asked ChatGPT for a list of Magnificent 7 stocks and their most recent price to earnings (PE) ratios.
Company                Ticker   P/E Ratio
Apple Inc.             AAPL     ~33
Microsoft Corporation  MSFT     ~25
Alphabet Inc.          GOOGL    ~29
Amazon.com Inc.        AMZN     ~30
NVIDIA Corporation     NVDA     ~38
Meta Platforms Inc.    META     ~28
Tesla Inc.             TSLA     ~378
In the last 50 years, I think the median PE ratio for S&P 500 index is about 15. Seven and below is considered rock bottom, and 30 and above is very high. These PE ratios look pretty damn high to me.
How much do these names need to "dip" for you to consider them cheap?
There are a few things to consider if you are in the investment space:
- Growth rate: you can't compare them to average single-digit-growth companies or dividend-focused companies. Most of these tech companies' revenues are still growing at double digits, with good moats. P/E is a good measure but it's not absolute. If you believe they can sustain their growth, then it's a good bet; you can also choose not to buy into their growth stories. At the end of the day, investment is a judgement call.
- Historical benchmark: some of their P/Es are at historical lows, so they are actually cheaper than before.
- P/E TTM and forward P/E: what P/E TTM are they at? What forward P/E are they projecting? If the forward P/E is significantly lower, the current analyst consensus is that they will grow in the future.
- P/E is a number, but it's not everything. You need to consider multiple factors to decide whether something is undervalued for you. It's highly subjective, as different interpretations are common.
- This post is about whether you want to play the GAAP game with private tech companies. My point is that there are still many public companies that are cheap at certain points. You just need to be patient and willing to research and wait. For example, Meta at around 500 was a buy for me; since then it has rebounded, so it's still good but not as undervalued as a few days ago.
> They aren't reporting anything yet. What we are hearing comes from news media who get their leaks/info from investors, who in turn get some form of IR reports/presentations.
The $24b figure is literally in OpenAI's announcement.
The $19b ARR and the $6b added in February came directly from Anthropic's CEO recently.
> The $24b figure is literally in OpenAI's announcement.
And? That's not a legislated report; they can use whatever mechanism they want to, without disclosure, to produce numbers.
Let's wait until they are regulated as a public company; then their mechanism has to be both aligned with what legislation requires and clearly documented in their reports.
> they can use whatever mechanism they want to, without disclosure, to produce numbers.
That would be fraud against whoever participated in this round, so no. Just because they aren't regulated doesn't mean they are literally free to do whatever they want to close the round.
In fact, in all the rounds I have been involved in, all public announcements related to the round go through the legal team to check for possible material misstatements that could cause exactly this kind of problem.
I am reminded of the "I declare bankruptcy" meme from the 2000s TV series The Office.
When we say reporting, it means statutory submissions with an auditor signing off, with legal liability. As the other reply referenced, the consequences of doing this incorrectly can be severe; Arthur Andersen is no more, after all, because of Enron.
A Press Release (of a private entity) does not have to satisfy this high bar.
A press release does not mean no constraints: for public companies, disclosures of important information by officers and other insiders have strong controls, even if it's just a rocket/poop emoji on a casual social media platform; lawyers have to refile it with the SEC in the expected format. Even private companies have restrictions against making fraudulent claims to investors, but those investors are accredited, with lesser controls than for retail.
Not all employees are hired directly though; a fair number are added through acquisitions. Reducing staff could just be streamlining redundancies.
Not the case here [1], but it is not always bad planning.
Even without acquisitions, business conditions can change rapidly (tariffs, interest rates, war, competition from newer tools, etc.); as an investor you would want leadership to move fast and course-correct, rather than be held back by the sunk cost fallacy.
All else being equal, the way investors would see a change like this is that the company will no longer be wasting $7.5B/yr in the future, while its current costs were already priced in.
However, all else is rarely equal; there could be other factors, like slowing sales growth projections, which can bring down the multiples.
Oracle is still trading at a 28x P/E; historically they typically traded at 15x, a more realistic number given their growth and risk profile.
Since 2022 (ignoring the 2020 spikes) the number has been going up on the expectation that their cloud business will benefit significantly from AI.
If the market no longer has that confidence (it has already cooled a bit since October), then the stock will keep dropping; layoffs will only slow that down a bit.
The timing is critical, because leveraging/selling Oracle stock is how the Warner Bros. Discovery acquisition is being funded.
Increasing doubts about that financial viability are why the risk premium on Warner stock is rising: it is currently trading at 27, although the acquisition price is 31 and it was trading at 29 a month back. Also, senior executives like Zaslav are selling now at 27, which they would be less likely to do if they believed the deal would close at 31 soon.
TL;DR: this 30k layoff is an attempt to strengthen/save the other acquisition Oracle is indirectly financing.
[1] Although the Cerner acquisition added 30k employees to Oracle 3 years back, this doesn't seem related to that. Oracle did not have a strongly overlapping BU; there were/are some redundancies, as in any acquisition, but certainly not 30k.
In the context of knowledge workers, it is really about Claude Cowork versus the Microsoft Copilot suite across all their applications, which is what the OP is referencing.
Github Copilot can use Claude APIs and has its own problems and challenges.
Microsoft's AI performance is primarily not affected by GitHub, which, while significant, is a much, much smaller part of their enterprise revenue stream and DAU compared to their Office suite apps.
Same for their PR exposure: you are a lot more likely to hear about Copilot in the Office context than about GitHub, outside of small niches like this forum.
At first I thought they were AWS Lambda functions; if those were over-provisioned for very high concurrency or something similar, $25k/month would be in the realm of possibility.
But no, the post is talking about just RPC calls on k8s pods running docker images; for that to save $300k/year, their compute bill should be well above $100M/year.
Perhaps if it were Google-scale event volume for billions of users daily, paired with the poorest, most inefficient processing engine, zero caching layer, and very badly written rules, maybe it would be possible.
Feels like it is just an SEO article designed to catch readers' attention.