Trivially, the answer is yes, by the infinite monkey theorem. If we allow the sampler to pick any token, then any stream of arbitrary tokens can be generated. Therefore, if an original idea can be represented in written words, then an LLM can generate it. That is perhaps not the most satisfying answer, but if you want a better one you'll need to provide a function that determines whether an idea is original.
Why are "premium" laptop vendors still putting vents on the bottom of their machines? Did they never try actually putting their laptop on their laps and realise how much that design sucks?
These are not laptop computers, they are notebooks. The only laptop I've personally seen had an 80286 processor... We call them laptops, but historically that name is wrong.
While it is possible to use a notebook on your lap, you are not supposed to. It is a terribly unergonomic place for one. You are supposed to put them on a table of some sort. If you are using it for more than something quick, you should have a separate keyboard and mouse (i.e. attached to a docking station on your desk). The portable form factor is useful for meetings, presentations, and the like - but for real work they are terrible.
Since the lap isn't what it was designed for, and a lap is a bad idea anyway, putting vents on the bottom isn't bad.
I frequently sit on my balcony, with my MacBook Pro in my lap. I'm still alive.
I also frequently use that computer, which doesn't have vents on the bottom, for watching movies in bed. And I don't need to think about vents. My last non-Apple computer was some ThinkPad with vents on the bottom, and I remember always chasing some book to put under the computer to keep the vents clear. Boy, how I hated that.
I doubt any computer you can lift will kill you if you put it on your lap. It might be uncomfortable, but it won't kill you. (I'm not about to put some of the VAXen I've seen on my lap, even though I can technically lift them, but...)
People do all kinds of terrible things to their body. That you do something and seem to be getting away with it doesn't mean you should. Talk to a real doctor trained in this (your regular doctor probably is not) for details.
When I'm working, I'm at my desk, with a keyboard, mouse, and monitors. But the reason why I have a light laptop instead of some beefy desktop computer is the convenience of having a computer when I'm not at my desk. At a café. Or to check emails when I'm at relatives'. Or to work on my hobby project in a car, waiting for my daughter to finish her training session. Or to be next to me, on a carpet, while I'm checking why the network under the TV is behaving strangely.
I've been living the laptop life since 2002, IIRC. In all that time, the most stupid design decision I've seen was on some HP EliteBook, where designers, in their infinite wisdom, put tiny legs on a laptop. Those four stupid pieces of plastic and rubber bite my legs every time I try to use the convenience of having a mobile computer.
Fans on the bottom of the laptop case are firmly in second place on my list. And no amount of "ackchyually, you're holding it wrong" will change that.
How does gravity make vents at the bottom useful? A normal non-passive laptop uses forced convection with fans; natural convection should be completely negligible in that case.
Vents in the bottom just make sense; does it have to be explained? The fans aren't going to be running full blast during idle times or when portable, for example. It saves power to use gravity.
Macs don't have to worry about that since they made a huge efficiency jump with M1. Before that they overheated due to poor thermals.
Arguably it helps the air go through the top holes in the keyboard better; you probably want openings on top and bottom for that.
But I still think this effect is negligible compared to the fan. I don’t think you would notice a temperature difference if you rotated your laptop 90 degrees with the keyboard vertical.
2×? Try 5× for the Noctua NF-A12x25 compared to the Arctic P12 Pro, which matches or beats it in most metrics. Which isn't to say the Noctua fan is bad, it's just a luxury product for reasons other than performance.
2x more than other premium offerings that often perform noticeably better, which I'd say are usually from BeQuiet, LianLi, and Phanteks.
But yes, sometimes up to 5x more than the comparative Arctic in common size categories where it basically trades blows for most metrics that matter. Arctic is seriously unbeatable in value:performance if you just need a basic fan without other QoL or aesthetic features.
120mm is the most competitive category, and it's the category where it's most obvious that Noctua can't keep up with the faster-iterating, more innovative competition.
Disclaimer: I read HWCooling like everyone serious about the subject. These reviews aren't everything; the appalling QC lottery that results in resonances or coil whine isn't mentioned.
In general, yes, Noctua is overpriced and Arctic is an incredible value, but when you want to optimize your silence/performance ratio, it's still Noctua, BeQuiet or (sometimes) Thermalright.
This was a fun revelation when I got into watercooling. You might not hear coil whine over a GPU's fans. But remove the fans and put it under load, and whoo boy.
So this confuses social media discussions on the topic by mixing together everyone's reports, regardless of their level of acoustic masking. "My card has no whine!" says the guy with three 2000 rpm fans going etc.
GPU waterblocks seem to be shifting towards fully enclosed "tomb" style, and I can't help but wonder if coil whine contributed to that decision.
But on topic: I had seven A12x25s in my last build, and two A12s and four A20s in my current build. They are exceptional. A computer is only as quiet as its loudest part. If you care about noise, why would you ever skimp on the moving parts?
I think GPU waterblocks are becoming fully enclosed because there are so many hot components on the back of the GPU now. They were designed to rely on random case air turbulence to passively cool, but there typically isn't much airflow over the back of the card when the stock cooler is replaced with a waterblock.
The problem becomes worse when the cards are driven harder, because there's more cooling capacity from the watercooling in the front, but the passive cooling capacity on the back is still the same.
I used to stick a giant fin block on the back of the card to keep temps there reasonable. I'd love it if actively cooled backplates became the norm for watercooling.
The Arctic fans are known to hum at certain speeds. This may or may not matter to you, and it certainly depends on how low the "noise floor" in your workspace is.
Last I checked they weren't really any quieter than their competitors at the same airflow and pressure (which is a little subjective because your curve will never match perfectly). They do have a really low number on their specs because they have a really low max RPM, but that's not really relevant when you can just lower the speed of other fans.
They're still really good fans, but a lot of this is just marketing.
At max power the Noctua NF-A12x25 has 56 CFM and 2.3 mmAq for 31 dBA [1]. At 70% the Arctic A12 Pro is 56 CFM, 4.3 mmAq, and 31 dBA [2]. At 60% the Asus ProArt PF120 is 61 CFM, 2.6 mmAq, and 30 dBA [3].
Note that the ProArt is a bit thicker (25 vs 30 mm) and all these dBA numbers are almost certainly unobstructed airflow. The Noctua is certainly good, but it's literally over 5× the price of the Arctic.
Noctua is working on the last five percent of performance AND lifespan. They want their fans to perform (and sound) the same ten years later, with daily use.
Most people change fans far earlier than that.
Indeed, the main reason why I choose Noctua fans among those that are silent enough and efficient enough is that I trust their reliability.
I still have computers from 2017, with Kaby Lake CPUs, which have been used as servers and in which the Noctua fans work as well as on the first day. Prior to that I had some computers with Noctua fans that were used for more than a decade without fan problems, and which were upgraded or replaced for reasons unrelated to fans.
Thus the good experience that I have had with the reliability of Noctua fans, coupled with some bad experiences with cheaper fans that had to be replaced prematurely, makes me reluctant to experiment now with other brands, which might have the same performance when new, but whose reliability I could learn about only after a few years.
On the other hand, if I recall right, the internet is rife with customer reports of the Arctic fans having noise spikes / unpleasant hums or resonances at certain RPMs. Lots of people use config tuning to avoid it.
I ended up buying Pure Wings, as mentioned. They're also much cheaper than Noctua and seemingly don't have those issues.
It's funny because I replaced my NF-A14 and NF-F12 because they had hums at certain rpms when used on radiators, and neither the Arctics before them, nor the BeQuiets that replaced them, had that issue.
It's par for the course in the premium PC parts industry. It's overkill in a way that does not impact performance at all because gamers will pay for that.
Noctua fans are still the #1 performers in the world. You can argue that it's diminishing returns and that you can get a fan with 90% of the performance for 50% of the money, but that doesn't change Noctua's position at the top.
I do not play often on my PCs. I just like well-engineered devices and have more than enough money to buy a more expensive fan every five years or so. I like the item, it works well, it is silent, I'm satisfied ¯\_(ツ)_/¯
The very simplified answer is that the models are first trained on everything and then are later trained more heavily on golden samples with perfect grammar, spelling, etc.
This has come up multiple times before [1], and more generally it's come up hundreds of times with Unix style tools in general. It's always been a stupid idea for every tool to have its own barely documented file format.
This wouldn't be an issue if patches were XML or JSON with a well defined schema, but everything must be a boutique undocumented format in the world of Unix tools.
Maybe the worst part about this is that it can come entirely from a patch being exported by git and then imported straight back into git. If you can't even handle your own undocumented format, then what hope do other tools have that want to work with it?
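A sketch of the round trip I mean, in a scratch repo that already has at least one commit; I'm assuming the message-truncation rule quoted downthread applies to git-am as well, and the filenames are made up:

  # Commit with a log message containing a line that starts with
  # "diff -" (perfectly legal in a commit message).
  echo hi > file.txt && git add file.txt
  printf 'Fix the frobnicator\n\ndiff -u used to render this wrong.\n' | git commit -F -

  # Export and re-import the very same commit.
  git format-patch -1 --stdout > roundtrip.patch
  git reset --hard HEAD~1
  git am roundtrip.patch

  # If I read the rule right, the message now ends before the "diff -u ..." line.
  git log -1 --format=%B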
While patch[0] has problems, the issue here is not that it is undocumented.
Git recently added this doc on roundtripping, and the problem is with git.
Any line that is of the form:
* three-dashes and end-of-line, or
* a line that begins with "diff -", or
* a line that begins with "Index: "
is taken as the beginning of a patch, and the commit log message is terminated before the first occurrence of such a line.
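That rule is mechanical enough to check yourself; a rough sketch (fix.patch is a made-up filename):

  # First line that patch(1) takes as the start of the actual patch,
  # per the rule above: anything beginning with "diff -" or "Index: ",
  # or exactly "---". Everything before it is the log message.
  grep -nE '^(diff -|Index: )|^---$' fix.patch | head -n 1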
The patch here isn't even in one of the complicated forms with RCS, ClearCase, Perforce, or SCCS support; patch is just doing what the pre-POSIX spec says.
The argument is whether git should do input sanitization, etc.
But `patch -p1` is doing exactly what was documented, even in Larry Wall's original Usenet post of the program.
> This wouldn't be an issue if patches were XML or JSON with a well defined schema, but everything must be a boutique undocumented format in the world of Unix tools.
Patch files are readable by humans. Replacing them with XML or JSON would fix this problem, but at the expense of removing a core feature.
If, by "readable by humans", you mean "it would reliably fool humans as well", I'd say it's an ambiguity bug regardless of whether it's "a core feature" or not. A patch format, human-readable or not, should clearly indicate which part is the commit message and which part is an actual diff; it's not the case here.
Alright, allow me to disambiguate in your preferred format.
<?xml version="1.0" encoding="UTF-8"?> <claims> <claims_I_did_not_make description='Claims that I did not make or defend.'> <claim>Patch is perfect.</claim> <claim>Ambiguity is good.</claim> <claim>There are no better formats for conveying patches.</claim> </claims_I_did_not_make> <claims_I_did_make description='What I actually said.'> <claim>Patch files are readable by humans.</claim> <claim>Being readable by humans is useful.</claim> <claim>XML is painful for humans to read and write.</claim> <claim>JSON is painful for humans to read and write.</claim> <claim caveat='Actually this would require all parties to handle JSON or XML correctly which on further reflection I am not sure about. Still, it is a claim I initially made.'>JSON or XML would actually fix this problem in the format.</claim> </claims_I_did_make> <claims_I_did_not_make_but_am_open_to description='Things that were never specified but that I do not actually disagree with.'> <claim>The patch format could be improved.</claim> <claim>Formats should be unambiguous.</claim> <claim>Separating sections is good.</claim> </claims_I_did_not_make_but_am_open_to> </claims>
that's not the preferred format for writing XML, this is:
<?xml version="1.0" encoding="UTF-8"?>
<claims>
  <claims_I_did_not_make description='Claims that I did not make or defend.'>
    <claim>Patch is perfect.</claim>
    <claim>Ambiguity is good.</claim>
    <claim>There are no better formats for conveying patches.</claim>
  </claims_I_did_not_make>
  <claims_I_did_make description='What I actually said.'>
    <claim>Patch files are readable by humans.</claim>
    <claim>Being readable by humans is useful.</claim>
    <claim>XML is painful for humans to read and write.</claim>
    <claim>JSON is painful for humans to read and write.</claim>
    <claim caveat='Actually this would require all parties to handle JSON or XML correctly which on further reflection I am not sure about. Still, it is a claim I initially made.'>JSON or XML would actually fix this problem in the format.</claim>
  </claims_I_did_make>
  <claims_I_did_not_make_but_am_open_to description='Things that were never specified but that I do not actually disagree with.'>
    <claim>The patch format could be improved.</claim>
    <claim>Formats should be unambiguous.</claim>
    <claim>Separating sections is good.</claim>
  </claims_I_did_not_make_but_am_open_to>
</claims>
it was valid but not the way XML is written or read by humans, which is what we are discussing. how much of a pain it is to read is a matter of taste. i won't deny that. but XML can be made more readable without fail because it is a structured format. i would not have been able to reformat a patch text the way i reformatted this XML example. XML is also more powerful. it could handle word-based changes, as opposed to patch, which can only do line-based changes. same goes for JSON. patch could potentially be improved, but i don't see how it could handle word-based changes without extra syntax to mark line breaks.
If the only thing we're concerned about is human readability, we can do better than patch files with their pesky @@ lines and plusses and minuses. But we're talking about a compromise between readability and parseability/schemas.
Haha, good one. Much like Makefiles, the patch format precedes a lot of more modern things (by decades!) and is good enough to stick around. Unlike Makefiles, I've never seen a tool gain any acceptance at all as a replacement for patch.
And a lot of these older tools are not meant to be fed untrusted, unvetted input. The patch shown there confused me for quite a while.
Or, more snarky: tee is also a huge security problem if you pipe untrusted input into `tee -a /etc/passwd`, such as `curl | tee -a /etc/passwd`. Not many things are safe with a `curl |` in front of them. I think `yes` might be?
> Maybe the worst part about this is that it can entirely come from a patch being exported by git and then imported straight back in to git.
No one wants to apply diffs in commit messages. But some people use this technique via email:
Finally fix it
---
Changes in v2:
- Proper formatting
- Remove irrelevant typo fix
They’ve used the `---` commit message delimiter in the commit message itself so that everything after it won’t be applied by git-am(1). So that’s intentional loss of round tripping.
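Roughly, the workflow looks like this (v2.patch is a made-up filename):

  # Export the commit as an email-formatted patch.
  git format-patch -1 --stdout > v2.patch

  # Hand-edit v2.patch: put the "Changes in v2: ..." notes right after
  # the first "---", the line separating the log message from the diffstat.

  # The maintainer applies it; git-am keeps the message above "---"
  # and silently drops the reviewer notes below it.
  git am v2.patch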
Tons of bugs in scripting in Unix come from the fact that data and metadata are interspersed in the same stream (you can mitigate somewhat with stderr vs stdout but hardly anyone does). Examples include things like trying to handle random filenames from * expansions.
It’s a bit more annoying to deal with sometimes, but for actual scripts it’s much more foolproof.
xargs is one of the programs that is designed to work around the original issue.
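Right. The classic sketch of the difference, for anyone who hasn't hit it (GNU and BSD both support the NUL-delimited flags):

  # Fragile: word splitting mangles names with spaces or newlines.
  rm $(find . -name '*.tmp')

  # Robust: NUL cannot appear in a path, so the stream is unambiguous.
  find . -name '*.tmp' -print0 | xargs -0 rm --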
Yes, structured data between scripts and programs. No xargs, tee, awk, sed, grep mangling. No "argument list too long" errors.
So many problems are avoided, but at the same time the Windows ecosystem is just so far from providing a properly usable terminal experience. Things are still really not designed to be used from PowerShell.
what are you suggesting? XML is a simplified form of SGML. an SGML parser can parse XML, so it was already possible to write an XML-like document before XML was defined.
I cannot figure out what on Earth they've done with these graphs; it almost seems like they are an artist's impression of a graph.
Looking at the commit graph: Why do commits have big steps followed by slow rolloffs? Why do the steps not happen at uniform points? Why do larger steps sometimes have less of a slope than smaller steps, but not all the time?
Then looking at the other graphs, there are completely different effects going on.
They seem to me to be the output of an image-gen model.
If this is the unvetted and baseless information they are putting out in public-facing blogs, only the stars would know what data is being "presented" in their boardrooms.
> Does Palantir collect data or just analyze aggregated purchased data?
Neither. Palantir makes data management software, they've never been in the business of collecting or analysing data themselves at all. There's generally a fundamental misunderstanding online of what Palantir actually does.
Any time you see an article or comment saying something along the lines of "Palantir is stealing your data", consider whether it makes sense when you replace Palantir with MySQL; if it doesn't, then it's generally safe to assume that article is garbage.
There are plenty of legitimate reasons to have grievances with Palantir, but they're completely drowned out by nonsense.
> Neither. Palantir makes data management software, they've never been in the business of collecting or analysing data themselves at all. There's generally a fundamental misunderstanding online of what Palantir actually does.
This is rather naive. Palantir plays politics by creating and funding a super PAC to discredit a former employee who happens to support the RAISE Act.
Leading the Future, a super PAC whose funders include the founders of companies like Palantir and OpenAI, is spending millions of dollars this election cycle, and a considerable amount of that money is going toward attack ads against Alex Bores – even though Bores himself used to work for Palantir.
Those are legitimate grievances, as mentioned; what they are not is Palantir themselves collecting massive amounts of data, which is often how they're portrayed and what the GP asked about.