A Junior programmer is a total waste of time if they don't learn. I don't help Juniors because it is an effective use of my time, but because there is hope that they'll learn and become Seniors. It is a long term investment. LLMs are not.
It’s a metaphor. With enough oversight, a qualified engineer can get good results out of an underperforming (or extremely junior) engineer. With a junior engineer, you give the oversight to help them grow. With an underperforming engineer, you hope they grow quickly or you eventually terminate their employment because it’s a poor time trade-off.
The trade-off with an LLM is different. It’s not actually a junior or underperforming engineer. It’s far faster at churning out code than even the best engineers. It can read code far faster. It writes tests more consistently than most engineers (in my experience). It is surprisingly good at catching edge cases. With a junior engineer, you drag down your own performance to improve theirs, and you’re often trading off short-term benefits vs long-term. With an LLM, your net performance goes up because it’s augmenting you with its own strengths.
As an engineer, it will never reach senior level (though future models might). But as a tool, it can enable you to do more.
> It writes tests more consistently than most engineers (in my experience)
I'm going to nit on this specifically. I firmly believe anyone who genuinely believes this either never writes tests that actually matter, or doesn't review the tests that an LLM throws out there. I've seen so many cases of people saying 'look at all these valid tests our LLM of choice wrote', only for half of them to do nothing and the other half to be misleading as to what they actually test.
It’s like anything else, you’ve got to check the results and potentially push it to fix stuff.
I recently had AI code up a feature that was essentially text manipulation. There were existing tests to show it how to write effective tests and it did a great job of covering the new functionality. My feedback to the AI was mostly around some inaccurate comments it made in the code but the coverage was solid. Would have actually been faster for me to fix but I’m experimenting with how much I can make the AI do.
On the other hand I had AI code up another feature in a different code base and it produced a bunch of tests with little actual validation. It basically invoked the new functionality with a good spectrum of arguments but then just validated that the code didn’t throw. And in one case it tested something that diverged slightly from how the code would actually be invoked. In that case I told it how to validate what the functionality was actually doing and how to make the one test more representative. In the end it was good coverage with a small amount of work.
For people who don’t usually test or care much about testing, yeah, they probably let the AI create garbage tests.
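The failure mode described in these comments - tests that exercise code but assert nothing - is easy to illustrate. A minimal sketch with a hypothetical text-manipulation function (all names here are illustrative, not from the thread):

```python
def slugify(title: str) -> str:
    """Hypothetical feature under test: turn a title into a URL slug."""
    return "-".join(title.lower().split())

# Weak test: invokes the function with a good spectrum of arguments,
# but only validates that the call doesn't throw.
def test_slugify_does_not_crash():
    for title in ["Hello World", "", "  spaced  ", "MiXeD CaSe"]:
        slugify(title)  # no assertion on the result at all

# Meaningful test: validates what the functionality actually does.
def test_slugify_output():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  spaced  ") == "spaced"
    assert slugify("") == ""
```

The first test would pass even if `slugify` returned the empty string for every input, which is exactly the kind of coverage-without-validation the comments above describe.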
I don't see anything here that corroborates your claim that it outputs more consistent test code than most engineers. In fact your second case would indicate otherwise.
And this also goes back to my first point about writing tests that matter. Coverage can matter, but coverage is not codifying business logic in your test suite. I've seen many engineers focus only on coverage, only for their code to blow up in production because they didn't bother to test the actual real-world scenarios it would be used in, which requires a deep understanding of the full system.
I still feel like in most of these discussions the criticism of LLMs is that they are poor replacements for great engineers. Yeah. They are. LLMs are great tools for great engineers. They won’t replace good engineers and they won’t make shitty engineers good.
You can’t ask an LLM to autonomously write complex test suites. You have to guide it. But when AI creates a solid test suite with 20 minutes of prodding instead of 4 hours of hand coding, that’s a win. It doesn’t need to do everything alone to be useful.
> writing tests that matter
Yeah. So make sure it writes them. My experience so far is that it writes a decent set of tests with little prompting, honestly exceeding what I see a lot of engineers put together (lots of engineers suck at writing tests). With additional prompting it can make them great.
That seems like the kind of feature where the LLM would already have the domain knowledge needed to write reasonable tests, though. Similar to how it can vibe code a surprisingly complicated website or video game without much help, but probably not create a single component of a complex distributed system that will fit into an existing architecture, with exactly the correct behaviour based on some obscure domain knowledge that pretty much exists only in your company.
> probably not create a single component of a complex distributed system that will fit into an existing architecture, with exactly the correct behaviour based on some obscure domain knowledge that pretty much exists only in your company.
An LLM is not a principal engineer. It is a tool. If you try to use it to autonomously create complex systems, you are going to have a bad time. All of the respectable people hyping AI for coding are pretty clear that they have to direct it to get good results in custom domains or complex projects.
A principal engineer would also fail if you asked them to develop a component for your proprietary system with no information, but a principal engineer would be able to do their own deep discovery and design if they have the time and resources to do so. An AI needs you to do some of that.
I also find it hard to agree with that part. Perhaps it depends on what type of software you write, but in my experience finding good test cases is one of those things that often requires a deep level of domain knowledge. I haven’t had much luck making LLMs write interesting, non-trivial tests.
This has been my experience as well. So far, whenever I’ve been initially satisfied with the one shotted tests, when I had to go back to them I realized they needed to be reworked.
I guess everyone dealing with legacy software sees code as a cost factor. Being able to delete code is harder, but often more important than writing code.
Owning code requires you to maintain it. Finding out which parts of the code actually implement features and which parts are not needed anymore (or were never needed in the first place) is really hard, since most of the time the requirements have never been documented and the authors have left or cannot remember. But not understanding what the code does removes all possibility of improving or modifying it. This is how software dies.
Churning out code fast is a huge future liability. Management wants solutions fast and doesn't understand these long term costs. It is the same with all code generators: Short term gains, but long term maintainability issues.
Do you not write code? Is your code base frozen, or do you write code for new features and bug fixes?
The fact that AI can churn out code 1000x faster does not mean you should have it churn out 1000x more code. You might have a list of 20 critical features and only have time to implement 10. AI could let you get all 20, but that shouldn’t mean you check in code for 1000 features you don’t even need.
I write code. On a good day perhaps 800-1000 "hand written" lines.
I have never actually thought about how much typing time this actually is. Perhaps an hour? In that case 7/8th of my day are filled with other stuff. Like analysis, planning, gathering requirements, talking to people.
So even if an AI removed almost all the time I spend typing away: This is only a 10% improvement in speed. Even if you ignore that I still have to review the code, understand everything and correct possible problems.
A bigger speedup is only possible if you decide not to understand everything the AI does and just trust it to do the right thing.
Maybe you code so fast that the thought-to-code transition is not a bottleneck for you. In which case, awesome for you. I suspect this makes you a significant outlier since respected and productive engineers like Antirez seem to find benefits.
Sure if you just leave all the code there. But if it's churning out iterations, incrementally improving stuff, it seems ok? That's pretty much what we do as humans, at least IME.
I feel like this is a forest for the trees kind of thing.
It is implied that the code being created is for “capabilities”. If your AI is churning out needless code, then sure, that’s a bad thing. Why would you be asking the AI for code you don’t need, though? You should be asking it for critical features, bug fixes, the things you would be coding up regardless.
You can use a hammer to break your own toes or you can use it to put a roof on your house. Using a tool poorly reflects on the craftsman, not the tool.
Just like LLMs are a total waste of time if you never update the system/developer prompts with additional information as you learn what's important to communicate vs not.
That is a completely different level. I expect a Junior Developer to be able to completely replace me long term and to be able to decide when existing rules are outdated and when they should be replaced. To challenge my decisions without me asking for it. To be able to adapt what they have learned to new types of projects or new programming languages. Being Senior is setting the rules.
An LLM only follows rules/prompts. They can never become Senior.
The issue with this type of motor is that it is part of the unsprung weight since it is inside the wheel. This is probably why savings here matter a lot more (or at least in a very different way) than the battery weight.
Ok, now I understand why this motor is only used in supercars - installing four (or even only two - according to https://www.mercedes-benz.de/passengercars/technology/concep..., even the AMG GT-XX has "only" three of them) hub motors with twice the power of a Tesla Model 3 in any other car would be ridiculous. So, the actual challenge is to make this motor even smaller while keeping the same power to weight ratio, so it can also be used for regular cars? That is, if they want to build something for the mass market, not only for an exclusive clientele?
I don't think their motors are axial flux, they're just large and narrow to fit inside wheels. Or at least all the images on their website depict radial flux designs.
Do e-bikes really need significantly more power than they have? They already run arguably dangerously fast for their application. Is efficiency not the primary target there?
e-bikes don't necessarily need more power but they could benefit from a smaller and lighter motor. If it becomes small enough to "disappear" in the pedal assembly for example, it would allow more design/parts commonality with normal bikes and fit more people's aesthetic criteria.
The lower weight would be definitely welcome, my ebike is comically heavy compared to a normal one and sometimes I have to carry it up flights of stairs (some German railway overpasses, grr).
Also in scooters it could fit in the wheel (since the wheel is tiny and has to spin quite quickly - no reduction gear needed vs a bike with 26-28" rims) allowing a simpler design and cost savings. But maybe in scooters they're already using in-wheel motors, I'm a bit ignorant there.
There are some advantages to hub motors in an e-bike, and if the motor and an appropriate gearing system could be made light enough the disadvantages would be reduced.
Oddly, a very large majority of current fully suspended e-bikes with rear cargo racks have those racks unsprung, which suggests that most e-bike manufacturers don’t actually care about the handling of anything other than their pure e-MTBs.
While more power may not make sense, less weight is an easy way to get more efficiency. And if you can keep the same power at a lower weight, that's a win.
Hmm. I am NOT an expert (though I ride and have owned 3 traditional motorcycles). IIUC, reducing unsprung weight is really crucial for handling -- which is why so-called "inverted" forks / front shock absorbers became basically the standard.
They don’t need this motor, but if it can be scaled down… at over 10kW/kg sustained, one could wish/hope to get 200W at 50g (disclaimer: I have no idea how this scales with size). Combine that with 1kg of a 600Wh/kg battery (https://news.ycombinator.com/item?id=45797452. Again, I have no idea how realistic that is), and you have a bicycle that’s only a little heavier than a non-electric one, but gives you a boost for 3 hours (more if you use it sparingly. If you’re cycling at leisure, 100W already is a lot of power)
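As a sanity check on the back-of-envelope numbers above (all inputs are the comment's own speculative assumptions, not measured values):

```python
# All figures are speculative assumptions taken from the comment above.
specific_power_w_per_kg = 10_000  # claimed sustained specific power of the motor
assist_power_w = 200              # desired e-bike assist level
battery_energy_wh = 1.0 * 600     # 1 kg of a hypothetical 600 Wh/kg cell

# Naive constant-specific-power scaling (real motors won't scale this cleanly).
motor_mass_g = assist_power_w / specific_power_w_per_kg * 1000
runtime_h = battery_energy_wh / assist_power_w

print(f"motor mass: {motor_mass_g:.0f} g, full-assist runtime: {runtime_h:.1f} h")
# prints: motor mass: 20 g, full-assist runtime: 3.0 h
```

Naive scaling actually gives 20 g, so the 50 g wished for above already leaves a wide margin for the scaling losses the commenter disclaims; the 3-hour figure follows directly from 600 Wh drained at a constant 200 W.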
Yeah, you kind of shouldn't use a Raspberry Pi to blink an LED, though. Great "Hello World" project. But there are so many ways that are cheaper, lighter, smaller and more reliable (and don't require a lengthy boot-up).
Ah not to worry, we can make it a web service and host it on the cloud, and of course you wouldn't want to run without authentication so you'd need that, and also a database, and what if you want to blink the LED securely, so you'll need to use a homomorphic database which is very computationally expensive so you just need a couple of VMs, and anyway you should start with https://www.npmjs.com/package/blinking and go from there.
From Wikipedia on Axial Flux Motors:
>"Mercedes-Benz subsidiary YASA (Yokeless and Segmented Armature) makes AFMs that have powered various concept (Jaguar C-X75), prototype, and racing vehicles. It was also used in the Koenigsegg Regera, the Ferrari SF90 Stradale and 296 GTB, Lamborghini Revuelto hybrid and the Lola-Drayson.[9] The company is investigating the potential for placing motors inside wheels, given that AFM's low mass does not excessively increase a vehicle's unsprung mass.[10]"
I think they misspoke when they said "in" the wheel, but supercars can have a separate motor for each wheel, and the closer they are to the wheel the better the torque as it's not also driving a longer shaft. The smaller the motor, the closer you can get.
I guess if you can make the motor and a suitable reduction box lighter than the equivalent bearing and driveshaft combination you could make the suspension arms mechanically simpler.
By using motors at each wheel you'd eliminate the need for a differential, saving a good 40-50kg or so. Of course, if you kept the drive shafts and put the motor and reduction box in the middle, you'd be able to use inboard brakes and save a lot of unsprung weight!
There are cars with inboard brakes, although not recently. From a packaging point of view putting them out at the wheel makes sense, since there's a lot of space you're not using otherwise.
It's hard to fit inboard brakes to front wheel drive cars because there's so little space but Citroën managed it with the 2CV and various derivatives, and the GS/GSA/Birotor family. They had an inline engine with a very compact gearbox behind, with the brake discs (drums, on very early 2CVs) right on the side of the gearbox.
You got lower unsprung weight and possibly more usefully the kingpin was aligned with the centre of the tyre, so when you steered the tyre turned "on the spot" rather than rotating through a curve.
Some old Jags and Alfas had inboard discs on the rear axle, which was of course rear wheel drive. They were a bit of a pain to get at.
I’ve generally assumed that brakes are in the wheel because they’re not all that massive, they get decent cooling airflow in the wheel, and they can produce enormous amounts of torque.
It would be really interesting if it became possible to do electronic-only brakes. I'm sure the regulatory system isn't there yet, but it would let you shave off a whole bunch more parts and complexity.
YASA doesn't call it a hub motor specifically but that's one place where it helps to save as much weight as possible. And for the cars most likely to have 1000+HP weight matters too. A Tesla motor weighs 100-200lbs, so saving that much weight down to 28lbs on a supercar is highly desirable.
I think large drones will be another place where a downsized version of this motor will make a huge difference, assuming the power scales nicely with size.
I might be wrong, but I don’t think these motors are intended to be used inside the wheel. That would add a ton of additional requirements in terms of physical durability as well as constrain optimal torque and RPM of the motor design.
I believe the Aptera was originally going to have motors in the wheels... My understanding is that the first version will forego that, as there were challenges I guess, but I think they still plan to eventually do that.
> This is probably why savings here matter a lot more (or at least in a very different way) than the battery weight.
Wouldn't that make it worse, or just ... different? Before this, the unsprung weight wouldn't have had a motor in there and now it does. Increasing the unsprung weight doesn't seem like a good thing.
What current mass production EVs use hub motors? It seems a lot more sensible to have the motors inboard, mounted to the chassis, and drive the wheel(s) with axle shafts. It seems in my searching this is how nearly all EVs are currently designed and produced.
See also the Saab Emily GT project. Even with an older, heavier gen of these axial flux motors they found significant performance gains by controlling each wheel via its own motor.
I didn't want to put the usability of the motor into question or go into a complete evaluation of advantages/disadvantages :) This was just an explanation that weight trimming the motor might be very much worth the effort - even if it is somewhat "insignificant" compared with the savings that are possible in battery weight.
In-wheel application is possible, but it's important to understand that the pancake shape is only a consequence of the axial flux design and Yasa doesn't make motors in other "formats". Yasa motors shaped like this have been used in several supercars and all of them have been in-board on the axles, not in-wheel.
"In the era of AI and data-driven enterprises, reducing architecture debt will no longer be a technical choice. It will be a strategic differentiator that separates the companies that can transform from those that will fall behind."
Yeah. Whatever.
How is AI even related to architecture debt in software? By vibe coders forgetting to specify "decent architecture" in their prompt?
Same for data driven. For most companies, data driven just means focusing on one more or less relevant metric while ignoring all rational arguments on topics that cannot be measured in an easy way. Leading to short-term thinking and optimizing for a metric instead of for business success. Not great, but still not a lot to do with software architecture.
And as much as I'd love software architecture to be a strategic differentiator: it really is not. Companies need software that is good enough. Good and consistent architecture just lessens the developers' pain in dealing with it (and technical/architecture improvements are much more fun to do, since there is no customer with strange requirements). There are many companies out there with horrible software quality that still succeed.
"Several commentators, business leaders and academics have identified that 1970 deal as a significant fork in the road of the Cold War, as it established a mutual basis for economic cooperation between Russia and western Europe."
There are certainly different opinions on that.
Gas imports started long ago, and during the Cold War that approach was working to some extent.
> Germany didn't avoid nuclear by switching to renewables. It does so by burning coal and building gas-fired power plants.
That statement is plain wrong:
https://energy-charts.info/charts/energy/chart.htm?l=en&c=DE...
In 2013, about 300TWh of electricity came from fossil fuels and 92TWh from nuclear. In 2024, 153TWh came from fossil fuels and 0 from nuclear. So fossil fuels declined by 147TWh while nuclear declined by only 92TWh. Claiming that fossil fuels replaced nuclear is ridiculous, even after it has been repeated hundreds of times.
You can claim that keeping nuclear could have sped up the transition, but the inflexible nuclear plants could also have prevented people from investing in renewables, since the economics are worse if there is energy that is supplied permanently regardless of the price. Nuclear and renewables don't mix well.
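The arithmetic behind those figures, using only the numbers quoted from energy-charts.info in the comment above:

```python
# German electricity generation in TWh, as quoted in the comment above.
fossil_2013, nuclear_2013 = 300, 92
fossil_2024, nuclear_2024 = 153, 0

fossil_decline = fossil_2013 - fossil_2024     # how much fossil generation fell
nuclear_decline = nuclear_2013 - nuclear_2024  # how much nuclear generation fell

# Fossil generation fell by more than the entire 2013 nuclear output,
# which is the basis for the claim that fossil fuels did not replace nuclear.
print(fossil_decline, nuclear_decline)  # prints: 147 92
```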
You are entirely missing the point. The issue is what you do when you have no renewables because it’s winter and there is no sun and no wind. The German answer to that - like it or not - is building gas-fired power plants and using coal in the meanwhile. That, and buying a ton of nuclear energy from France, a fact you are conveniently forgetting.
The ratios you quote are meaningless. The issue is that it can’t scale so as to fully decarbonise the grid. Thankfully the current German government seems to finally have seen the light.
'no sun and no wind' is not actually a thing that happens. What happens is less sun during the day and more or less wind in different places in Europe. This is a problem that can be solved through a combination of excess capacity, long distance transmission of energy, and storage, affordably and with existing technology. It's been obvious for a long time that a fully renewable grid can work, and Europe is rapidly moving towards that. Gas turbines are a reasonable stop-gap which will slowly get pushed out of generation as the proportion of renewables and storage grows.
> It's been obvious for a long time that a fully renewable grid can work
It’s far from obvious to me.
There are literally no examples of one ever running, and some of the technological challenges are still open questions at the moment.
I generally think proponents of renewables are overselling the idea and significantly minimising the challenges they pose at scale. They definitely have a place in the energy mix but I don’t personally believe they are the solution.
Mostly renewable has already been achieved, 100% renewable is on track and economically feasible.
In December 2021, South Australia set a new record for renewable energy generation and resilience, after running entirely on renewable energy for 6.5 consecutive days.
In 2022, it was stated that South Australia could soon be powered by only renewable energy.
70 per cent of South Australia's electricity is generated from renewable sources.
This is projected to be 85 per cent by 2026, with a target of 100 per cent by 2027.
And gas plants are what is closest to H2 and can be switched over easiest. But H2 is only viable once renewable production exceeds demand during long stretches of time. Otherwise it is always better to use the energy directly or use short-term storage (batteries), which is also growing exponentially: https://battery-charts.de/battery-charts/
Sorry, you are all emotion and provide wrong statements. What I wrote directly contradicted your statements and proved them wrong, but now you say they are missing the point? Reducing fossil fuel consumption by 50% within 10 years is an achievement. There are always things that could be done in a better way. But let's be real here.
That is kind of the point of having an integrated grid.
But 19TWh, while producing 470TWh. 4%. That is not ... a lot. And in 2022 Germany exported 5.5TWh and had to restart coal plants when the French nuclear plants were in trouble. So what? That's what a grid is for.
H2 has been the alleged long term solution for decades while barely progressing at all. Even in aviation where it’s seemingly the only solution we have, it’s stagnating.
If you look at who is pushing H2, you will see that it’s mostly fossil fuel companies who want to prop up gas because, as you rightfully pointed out, "gas plants are what is closest to H2 and can be switched over easiest."
> So what? That's what a grid is for.
It’s going to be hard to reach net zero while burning coal and if the actual solution is importing nuclear energy from somewhere else while pretending it doesn’t happen, it would be simpler to just straight up go for nuclear.
Smartphones are notorious for not delivering - heck, my Fitbit won't even give me an actual wrist temperature value.
There are optical ones that can be built in a lab and require profiling to an individual. The most accurate ones are still invasive - just not consumable fungus-extract based. Maybe some biomedical company makes a reference design.
As a diabetic, having alarms is the most important thing. Measurements are not that accurate (neither is the finger-prick method: I sometimes get a difference of 20% comparing two measurements from both hands). But the "ok" range of 3.8 mmol/L to 10 mmol/L is also quite large, and levels can rise/drop 20% in minutes. So it is still quite helpful.
With the CGM there is also an additional delay of about 15 minutes in the measurements. Mostly you want to be triggered when something strange happens and then you do a manual measurement to confirm.
A false alarm of low blood sugar is annoying, but it is a lot better than collapsing. You can relax a lot more if you know you will get an alarm.
As written above: this might be different for people with dementia or other issues, who are not in a mental state to manage their diabetes.
From what I understand from the study, the aim is to show that mass-producing islet cells from stem cells is possible. Previously those were extracted from the pancreases of dead people.
Having cells extracted from your own body has the advantage that they are not rejected by the immune system.
The reason the immune suppression is still needed is the cause of type 1 diabetes: it is an autoimmune disease where the body attacks its own islet cells.
But this is a specific immune reaction, which could be easier to prevent than the generic rejection of cells from a different body. That is not what this approach is trying to do for now, though.
This study just wants to show: this approach of creating islet cells works, and it is worth trying to do a bigger, more expensive study that can produce statistically relevant results.
"Curing" type 1 diabetes is still years off, and that requires the immune issue to be solved as well.
> This study just wants to show: this approach of creating islet cells works, and it is worth trying to do a bigger, more expensive study that can produce statistically relevant results.
It is incredible to me what a hard time people are having understanding this extremely obvious fact.
Knowing this is a possible path to a cure, even if it’s currently undesirable for the majority of patients, is an important step.
Diabetes type 1 is quite well manageable if you have a CGM sensor and inject insulin regularly.
But if a person with dementia tends to peel off sensors, gets aggressive when getting injections, etc., this might not work. And unmanaged diabetes can be deadly.
Not sure how these approvals work in that case, but this group of people might be the first that can benefit from a treatment like this.
Everything before the introduction of the Gregorian calendar is moot:
"In 1582, the pope suggested that the whole of Europe skip ten days to be in sync with the new calendar. Several religious European kingdoms obeyed and jumped from October 4 to October 15."
So you cannot use any date recorded before that time for calculations.
And before that it gets even more random:
"The priests’ observations of the lunar cycles were not accurate. They also deliberately avoided leap years over superstitions. Things got worse when they started receiving bribes to declare a year longer or shorter than necessary. Some years were so long that an extra month called Intercalaris or Mercedonius was added."
Before 1582 the rule is just simpler: if it is divisible by 4, it's a leap year. So the difference is relevant for years 300, 500, 600, 700, 900, etc. For ranges spanning those years, the Gregorian algorithm would produce results not matching reality.
When the Julian calendar was really adopted I don't know. Certainly not 0001-01-01. And of course it varies by country like Gregorian.
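The difference between the two rules is small enough to state in code. A proleptic sketch, ignoring the adoption-date caveats the comments rightly raise:

```python
def is_leap_julian(year: int) -> bool:
    # Julian rule: every year divisible by 4 is a leap year.
    return year % 4 == 0

def is_leap_gregorian(year: int) -> bool:
    # Gregorian rule: divisible by 4, except century years
    # that are not divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# The rules diverge exactly at century years not divisible by 400:
diverging = [y for y in range(100, 1600, 100)
             if is_leap_julian(y) != is_leap_gregorian(y)]
print(diverging)
# prints: [100, 200, 300, 500, 600, 700, 900, 1000, 1100, 1300, 1400, 1500]
```

Note that Python's own `calendar.isleap` implements the proleptic Gregorian rule, so it also "mispredicts" pre-1582 leap years that were actually observed under the Julian rule.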
>The Julian calendar was proposed in 46 BC by (and takes its name from) Julius Caesar, as a reform of the earlier Roman calendar, which was largely a lunisolar one.[2] It took effect on 1 January 45 BC, by his edict.
It was already known to scholars that the length of a (tropical) year is close to 365-and-a-quarter days since at least 238 BC (when Ptolemy III tried to fix the length of the year in the Egyptian calendar to 365-and-a-quarter days in the Canopus Decree).
However, due to a mistranslation the Roman pontifices got it wrong at the introduction of the Julian calendar. The Romans counted inclusively, which means: counting with both the start and end included. (That is why Christians say in a literal translation from Latin that Jesus has risen on the third day, even though he died on a Friday and is said to have risen two days later, on the next Sunday.)
In the first years of the Julian calendar, the Roman pontifices inserted a leap day “every fourth year”, which in their way of counting means: every 3 years. Authors differ on exactly which years were leap years. The error got corrected under Augustus by skipping a few leap years and then following the “every 4 years” rule since either AD 4 or AD 8. See the explanation and the table in https://en.wikipedia.org/wiki/Julian_calendar#Leap_year_erro...
Also note that at the time, years were mostly identified by the names of the consuls rather than by a number. Historians might use numbers, counting from when they thought Rome was founded (Ab urbe condita), but of course they differed among each other on when that was. The chronology by Atticus and Varro, which placed the founding of the city on 21 April 753 BC in the proleptic Julian calendar, was not the only one.
> Besides that: find small (2-3 day) project ideas that require you to learn max. 1 new technology or idea, and build lots of such projects.
That's how you learn programming. It's not a bad idea, but at least for me software development is more about long term issues coming up, team communication, features that create short term value but long term problems. How to organize big piles of code.
A lot of abstractions don't make any sense in a 2-3 day project, and you are better off hacking away at a script than looking into "properly" modelling things.
My impression is always that as a junior you learn how to do stuff. Then you learn to do complicated stuff. And becoming a senior you learn how not to do complicated stuff.
This takes some of the fun out of it as well. Deploying a feature that is simple and that just works without any issues does not create nearly as much excitement as "saving the company" with a big hack and high risk deployments, although it is much better to not have to "save the company" in the first place.
> Deploying a feature that is simple and that just works without any issues does not create nearly as much excitement as "saving the company" with a big hack and high risk deployments, although it is much better to not have to "save the company" in the first place.
Some people are more risk-affine, and some are more risk-averse.
The fact that such things work rather kindles in risk-affine people a desire to do new, interesting, experimental stuff - something that a corporate environment often prohibits, which frustrates them.