This is the voice model, which doesn’t have any «thinking» or «reasoning» phase. It’s a useful model for questions that aren’t intended to trick it.
I’ve used it for live translation with great success. It tends to start ignoring the original instructions after about 20 minutes, so you have to start a new conversation if you don’t want it to meddle in the conversation instead of just translating.
The text-only models with reasoning (both Opus 4.6 and GPT 5.2) can be tricked with this question. Note: you might have to try it multiple times, as they are not deterministic. But I managed to get a failing result right away on both.
Also note, some models may decide to do a web search, in which case they’ll likely just find this "bug".
You’re not the only one to mention that in this thread. I’ve had a nanotextured MacBook as a daily driver for about six months, and I have no clue what you are talking about. Maybe the issue is iPad only?
Rainbow isn't really the right term. It's more of a sparkling effect. Apple actually uses the term "sparkle" for this characteristic in their patent for the display treatment (see para 0073). They also mention that different diffractive layers can be used to minimize the effect, so it is possible that the issue is worse on some devices than others.
Well whatever it is, my MBP screen looks perfectly fine. It's like a matte display, but not with the colors all washed out like the matte displays of yore. No visible artifacts of any kind that I can see.
I personally haven't noticed that. I'm also primarily coding and writing rather than image editing, so I'm less sensitive to things like that, fwiw.
I tried the iPad with Nano Texture and didn’t really like the rainbow effect that shows up on white backgrounds. So I ended up returning it.
A while later I had an idea to mount an iPad to my fridge so that I could check the weather, add things to my shopping list, play music, etc. I bought the rather expensive iPad with Nano Texture screen and it has been amazing to use. There is a big window opposite the fridge, and without the nano texture the glare from behind makes it hard to read what’s on the screen.
Not sure I would enjoy nano texture on my MacBook. For outdoor use I found that Vivid is great for turning up the brightness using the extended HDR range that Apple doesn’t otherwise let me use.
Mine is mounted to the fridge, so it's not seeing as much use as it otherwise would. Screen does get smudges and they are more noticeable compared to the regular iPad screen. Not so much that it's a problem to be honest. I wipe it down every few weeks, and that's fine with me.
They would have to leave the sensors out, but could cover the rest of the bezel. They don’t do it because then the sensors would be more visible. But I’d prefer them to put function over aesthetics.
... on the iPad. I have a nanotexture MacBook and double-checked: it's textured all the way across. But you're right, the bezel of the iPad is glossy. Why would they do that? Is it masked off, or a separate piece of glass?
When I took my first database course, one topic was IO performance as measured by seek time on regular old hard drives.
You can’t really optimise your code for faster sequential reads than the IO is capable of, so the only thing really worth focusing on is how to optimise everything that isn’t already sequential.
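For anyone who never got to benchmark this themselves, here's a minimal sketch of the effect (the file path and sizes are placeholders, and you'd want a cold page cache for honest numbers; on a spinning disk the random pass is dramatically slower, on an SSD much less so):

    # Times sequential vs. random reads of the same blocks.
    # PATH and BLOCK are assumptions; use a file bigger than RAM.
    import os
    import random
    import time

    PATH = "testfile.bin"
    BLOCK = 4096  # 4 KiB blocks
    N = min(os.path.getsize(PATH) // BLOCK, 100_000)  # cap the run time

    def read_blocks(order):
        with open(PATH, "rb") as f:
            start = time.perf_counter()
            for i in order:
                f.seek(i * BLOCK)  # in sequential order this is a no-op
                f.read(BLOCK)
            return time.perf_counter() - start

    seq = read_blocks(range(N))
    rnd = read_blocks(random.sample(range(N), N))  # one real seek per block
    print(f"sequential: {seq:.2f}s  random: {rnd:.2f}s")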
I’m curious whether the model has gotten more consistent throughout the full context window. It’s something that OpenAI touted in the release, and I’m curious if it will make a difference for long-running tasks or big code reviews.
One positive is that 5.2 is very good at finding bugs. Not sure about throughput; I'd imagine it might be improved, but I haven't seen a real task to benchmark it on.
What I am curious about is 5.2-codex, but many of us complained about 5.1-codex (it seemed to get tunnel vision), so I have been using vanilla 5.1.
It's just getting very tiring to deal with 5 different permutations of 3 completely separate models, but perhaps this is the intent: to keep you chasing.
I hate the new electric buses from China. Their acceleration is much better, and their braking is also stronger.
Bus drivers in Norway are binary people: they either press the accelerator or they press the brake. Most drivers call it leg day, because you spend the entire day pushing on these pedals as hard as you can.
Our existing buses have terrible acceleration, which helps a lot with the comfort of the ride. But for some unknown reason, someone decided that the bendy buses should have their only powered wheels at the rear, after the bend. So any slight hill that's slippery from a bit of snow or frost becomes a comedic video of buses trying to drive up, flopping in the middle of the bend, and sliding back down again.
I don't think anyone is pretending that a MacBook Pro can compare to 8 H100 cards from Nvidia for LLM training or for serving LLMs. But you can buy an awful lot of MacBooks for the price of 8 H100 GPUs.
But if your workload belongs on 8 H100 GPUs, then there isn't much point in trying to run it on a MacBook. You'd be better served by renting them by the hour, or if you have a quarter million dollars you can always just purchase them outright.
The H100 is just an example, this is true for any workload that doesn't fit on a laptop.
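To put rough numbers on it (both prices below are ballpark assumptions, not quotes):

    # Back-of-envelope only; both figures are assumed for illustration.
    H100_PRICE = 30_000      # assumed per-card street price, USD
    MACBOOK_PRICE = 4_000    # assumed well-specced MacBook Pro, USD

    cluster = 8 * H100_PRICE           # ~$240k, the "quarter million" above
    print(cluster // MACBOOK_PRICE)    # -> 60 MacBooks per 8x H100 box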