My understanding of intermittent fasting is that it can encourage "garbage collection" in the body, pruning the dead/sickly cells. Weight loss/gain is still driven by calories in/out.
It's always calories in and calories out. The idea is that intermittent fasting makes you less hungry over time and thus you take in fewer calories.
If they had their test subjects eat the same amount to see whether intermittent fasting metabolizes food better, then it seems obvious that there would be little to no difference.
My SO did IF and strict calorie counting for around two weeks to a month, and it drastically reduced their appetite to something more akin to a normal level. Now they can barely finish a large meal at McDonald's without leftovers.
They've cut quite a bit of weight since then and have mostly just focused on keeping their appetite low and eating healthier, more fibrous meals in general.
It's worth keeping in mind just how little we understand about LLM capability scaling. With a chess engine, you could ask any practitioner in the 90's what it would take to achieve "Stage 4" and they could estimate it quite accurately as a function of FLOPs and memory bandwidth. Ask 10 different AI researchers when we will get to Stage 4 for something like programming and you'll get wild guesses or an honest "we don't know".
That is not what happened with chess engines. We didn't just throw better hardware at it; we found new algorithms, improved the accuracy and performance of our position evaluation functions, discovered more efficient data structures, etc.
People have been downplaying LLMs since the first AI-generated buzzword garbage scientific paper made its way past peer review and into publication. And yet they keep getting better and better to the point where people are quite literally building projects with shockingly little human supervision.
Chess grandmasters are living proof that it's possible to reach grandmaster level in chess on 20 W of compute. We've got orders of magnitude of optimizations left to discover in LLMs and/or future architectures, both software and hardware, and with the amount of progress we see basically every month, those ten people will answer 'we don't know, but it won't be too long'. Of course they may be wrong, but the trend line is clear; Moore's law faced similar issues, and they were successively overcome for half a century.
> With a chess engine, you could ask any practitioner in the 90's what it would take to achieve "Stage 4" and they could estimate it quite accurately as a function of FLOPs and memory bandwidth.
And the same practitioners said right after Deep Blue that Go was NEVER gonna happen. Too large. The search space is just not computable. We'll never do it. And yeeeet...
Sure, except this is the first time in my life I've seen the term "pulse" used for a vegetable. And, honestly, only in the last 10 years have I been hearing the term legume in common conversation. Grain is definitely the more common term.
"elected for no obvious reason" isn't quite right, as a test image for computer graphics it has regions of very high frequency detail and regions of very low frequency detail which make it easier to spot various compression artifacts, and it makes a good study for edge detection, with both very clear edges along the outline, but more subjective edges in the feathering.
It's reddish. OK, it has blur and detail in the foreground, but it could have been any image with a blurred background and a face.
"Very low frequency detail"? We are talking about a 512x512 picture here; it has low and high frequency detail (FFT-wise), like most photos.
"Good for edge detection" doesn't mean anything. Like, is the image good for edge detection, or is the algorithm good at detecting edges? What does "subjective edges" even mean? Does it mean hard to spot?
Those look like technical reasons, but it's just noise. They literally grabbed a Playboy magazine and decided it was good enough (and indeed, it wasn't that bad). Still not professional. The message is "We have Playboy magazines at work and we are proud of it".
Try running different edge detection algorithms on that image and you will see that there is a lot of disagreement amongst them in the feathering region. Exploring what the differences are, and how the algorithms lead to those differences, helps build intuition about the range of things we might call an "edge", and which algorithm is appropriate for a particular task at hand.
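For example, here is a rough sketch of that comparison in Python, assuming scikit-image and matplotlib are installed; "lenna.png" is a placeholder path for a local copy of the 512x512 test image, not a file shipped with any library.

```python
# Sketch: run a few edge detectors on the same image and eyeball the disagreement.
# Assumes scikit-image and matplotlib; "lenna.png" is a placeholder local path.
import matplotlib.pyplot as plt
from skimage import feature, filters, io

gray = io.imread("lenna.png", as_gray=True)   # float grayscale in [0, 1]

results = {
    "Sobel (gradient magnitude)": filters.sobel(gray),
    "Canny, sigma=1 (fine edges)": feature.canny(gray, sigma=1),
    "Canny, sigma=3 (coarse edges)": feature.canny(gray, sigma=3),
}

fig, axes = plt.subplots(1, len(results) + 1, figsize=(16, 4))
axes[0].imshow(gray, cmap="gray")
axes[0].set_title("original")
for ax, (name, edges) in zip(axes[1:], results.items()):
    ax.imshow(edges, cmap="gray")
    ax.set_title(name)
for ax in axes:
    ax.set_axis_off()
plt.tight_layout()
plt.show()
```

The sharp outlines tend to show up the same way under every detector and parameter choice; the feathering region is where the outputs start to disagree, which is the "subjective edges" point above.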
I wonder if the author was making similar arguments against solar power 20 years ago. The case for both isn't one of immediate economic advantage, though that may come with sufficient development, as it has for solar. If you take it as a given that compute demand continues scaling, at some point we will need to shift power generation off Earth, and it's a lot easier to move computed data streams than terawatts of power.
The author repeatedly states that they stayed within the scope of the VDP, but publishing this clearly breaks this clause: "You agree not to disclose to any third party any information related to your report, the vulnerabilities and/or errors reported, nor the fact that a vulnerabilities and/or errors has been reported to Eurostar."
Seconded. The pitch is "oh, I'll always have a charged battery," but the real experience is spending much more time swapping batteries than I used to spend plugging in. More frequent and longer-duration interruptions.
I did a quick analysis and it actually matched the ~1.5 degree Celsius rise pretty accurately. It required a bunch of incorrect simplifying assumptions, but it was still interesting how close it comes.
Estimated energy production from all combustion and nuclear from the industrial revolution onwards, assumed that heat was dumped evenly into the atmosphere all at once, and calculated the temperature rise based on the atmosphere's makeup. This ignores some of that heat being sunk into the ground and ocean, and the increased thermal radiation out to space over that period. In general, heat flows from the ground and ocean into the atmosphere rather than the other way around, and the rise in thermal radiation isn't that large.
On the other hand, this isn't something that the smart professionals ever talk about when discussing climate change, so I'm sure that the napkin math coming this close to explaining the whole effect has to be missing something.
We use ~20 TW, while incident solar radiation is ~170 PW, and the heating from global warming alone is ~460 TW (that is, the rate at which excess heat is accumulating in the Earth system).
Well, the math is correct; the methodology has obvious flaws, some of which I pointed out. If you took all the energy that humanity has released by burning things since the industrial revolution and dumped it into a closed system consisting of just the atmosphere, it would rise by about 1.5 C.
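For concreteness, a rough sketch of that closed-atmosphere arithmetic in Python. The atmospheric mass and specific heat of air are standard reference values; the cumulative-energy figure is a placeholder assumption you would replace with your own estimate of heat released since the industrial revolution, not a sourced number.

```python
# Napkin math: dump humanity's cumulative heat output into a closed atmosphere.
# M_ATM and CP_AIR are standard reference values; E_TOTAL is a placeholder
# estimate, not a sourced figure.
M_ATM = 5.15e18     # mass of Earth's atmosphere, kg
CP_AIR = 1005.0     # specific heat of air at constant pressure, J/(kg*K)
E_TOTAL = 7.8e21    # assumed cumulative heat release since ~1800, J (placeholder)

delta_t = E_TOTAL / (M_ATM * CP_AIR)   # delta T = E / (m * c_p)
print(f"Closed-atmosphere temperature rise: {delta_t:.2f} K")
# ~1.5 K with these inputs; ignores heat taken up by the ocean and ground
# and the extra thermal radiation to space, as noted above.
```

Since the ocean's heat capacity is roughly a thousand times the atmosphere's, the closed-atmosphere assumption is the biggest of the flaws mentioned above.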
The discussion thread (and original question) you are participating in is about heat being rejected to the atmosphere through vapor-compression refrigeration or evaporative cooling, not CO2 or emissions from combustion. Reread the top level comment.
The amount of heat rejected to the atmosphere from electronic devices is negligible.