Matches the name of episode 152[1], which the Wikipedia article cites for the info. It seems the classification of seasons, and even the episode order within a season, differs between Wikipedia and the YouTube title.
It's already possible (with domain join, customized bootable installs or esoteric workarounds) but non-obvious due to deliberate UI changes/regressions.
I suppose this just hints at the possibility that someone may be advocating for it to be made a clear choice during install again, but it's a vague response.
I doubt Mullvad would be doing this if they weren't getting compensated given they've always said (even right now[1]) they don't offer a free tier since they don't believe it makes sense.
The other aspect is I expect it would stain the IP pool further. VPN IPs often end up on various blacklists due to abuse and introducing a wave of free users would only make it worse for paying customers.
> Why no free plan? "Free" services nearly always come at some cost, whether that be the time you spend watching an intro ad, the collection of your data, or by limiting the functionality of the service. We don't operate that way – at all.
I get that without any VPN in 2026. In fact, there've been times I've been locked out, shown my user agent and IP, and asked to email a webmaster to prove I'm human. I guess because I use Firefox.
It's mainly because nearly all VPN providers use the same shady hosting providers - M247, xtom, fdc, datapacket, etc. Most CDN setups will "challenge" those ASNs.
People think Mullvad is special, but it's the same shit as all the others in most cities/markets. I wish they would use some of their big ad spend to move off these typical dodgy scam-hosting ASNs.
In an earlier video they made a couple of years back about Disney's sodium vapor technique, Paul Debevec suggested he was considering creating a dataset using a similar premise: filming enough perfectly masked reference footage to train models to achieve better keying. So it was interesting to see Corridor tackle this by instead using synthetic data.
With regards to the sodium vapor process, an idea has been percolating in the back of my head ever since I saw that video. But I don't really have the budget to try it out.
Theory: make the mask out of non-visible light.
Illuminate the backing screen with near-infrared light. (After a bit of thought I chose near-IR over near-UV, for hopefully obvious reasons.)
Point two cameras at a beam-splitting prism with a near-IR pass filter (I have confirmed that such a thing exists and is commercially available).
Leave the 90-degree (unaltered-path) camera untouched; this is the visible camera.
Remove the IR-cut filter from the 180-degree (filtered-path) camera; this is the mask camera.
Now you get a perfect, non-color-shifting mask (in theory). The splitting prism would hurt light intake, so it might be worth trying the cameras really close together, pointed in the same direction with no prism, and seeing if that is close enough.
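As a rough sketch of what the mask camera buys you downstream (function name, array layout, and the soft threshold are my own assumptions, not anything from a real pipeline): the IR camera sees the backing screen as bright and the subject as dark, so the matte is just a threshold on that channel, with no color pulled out of the visible frame.

```python
import numpy as np

def composite_with_ir_matte(visible, ir, new_bg, threshold=0.5):
    """Key the visible frame using the IR camera's view of the backing screen.

    visible: (H, W, 3) float array from the 90-degree (visible) camera
    ir:      (H, W) float array from the 180-degree (mask) camera, where
             the IR-lit backing screen reads bright and the subject dark
    new_bg:  (H, W, 3) float array to composite behind the subject
    """
    # Soft matte: 1.0 over the subject, 0.0 over the IR-lit screen,
    # with a linear ramp in between for partially covered pixels.
    alpha = np.clip((threshold - ir) / threshold, 0.0, 1.0)[..., None]
    return alpha * visible + (1.0 - alpha) * new_bg
```

Since the matte never touches the visible channels, nothing in the foreground's color can accidentally key out, which is the whole point of moving the mask wavelength outside the visible band.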
I'm familiar with this work, and they specifically tried replicating the sodium-vapor-style approach, but what worked at Mary Poppins level isn't actually good enough for today. Specifically, you still end up with light spill that contaminates the foreground, especially for things like the Fresnel reflections on the side of a face. The magenta idea was to still do what is basically a color-difference key, but increase the color separation between foreground and background by lighting the two with opposite-colored lights, then use an ML model to recover the original foreground color.
This approach was used in the 1950s/60s with ultraviolet light (rather than IR) to create a traveling matte. I'm not sure why visible-light techniques won out. Easier to make sure that the illumination is set up correctly, maybe?
I'll do you one better, which requires no special cameras (most have IR filters) nor double cameras or prisms.
Shoot the scene in 48 or 96 fps. Sync the set lighting to odd frames. Every odd frame, the set lights are on. Every even frame, set lights are off.
For the backing screen, do the reverse. Even frames, the backing screen is on. Odd frames, backing screen is off.
There you go: mask / normal shot / mask / normal shot / mask ... you get the idea.
Of course, motion will cause the normal image and mask to go out of sync, but I bet that can be remedied by interpolating a new frame between every pair of mask frames. Plus, when you mix it down to 24 fps you can introduce as much motion blur and shutter-angle "emulation" as you want.
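The demux step above can be sketched in a few lines (the function name, the mask-first frame ordering, and the simple-average interpolation are my own assumptions, not a real pipeline):

```python
def demux_strobed_frames(frames):
    """Split a 48 fps strobed capture into 24 fps (image, mask) pairs.

    Assumes frames alternate mask, image, mask, image, ...: on 'mask'
    frames only the backing screen is lit; on 'image' frames only the
    set lights are on.  Each image is paired with a mask interpolated
    (here: a simple per-pixel average) from the masks on either side,
    to approximate the mask at the image's point in time.
    """
    pairs = []
    for i in range(1, len(frames) - 1, 2):            # image frames
        before, after = frames[i - 1], frames[i + 1]  # surrounding masks
        mask = [(a + b) / 2 for a, b in zip(before, after)]
        pairs.append((frames[i], mask))
    return pairs
```

A real version would use proper motion-compensated interpolation (optical flow) rather than averaging, but the bookkeeping is the same.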
You'd need to basically timecode/genlock the greenscreen illumination LEDs to the camera so the greenscreen lights up exactly every other frame. Not sure if there's any off-the-shelf solution that can do that, but if not, it can't be super hard to cobble together.
Somebody recently used a variation of this to get good video of welding - basically a camera synced with a very bright (strobe-ish) light, brighter than the weld itself, so you adjust the camera to the ludicrous-but-consistent brightness level and get details of the weld and the surroundings. https://www.youtube.com/watch?v=wSUxK8q4D0Q (Chronos "Helios", from early 2025)
Surely this makes your actors feel sick? And wouldn’t it make your motion blur look dashed and also cause artifacts at the edge of the mask if there’s a lot of motion?
You could strobe at some multiple of the sensor frame rate as long as your strobes are continuous through the integration period of the sensor and the lighting fades very quickly. This probably wouldn't work with incandescents but people strobe LEDs a lot to boost the instantaneous illumination without going past the continuous power rating in the datasheet.
You mean do strobe, strobe, strobe, strobe, pause, pause, pause, pause? I bet that's at least as bad as holding the source on for the first four intervals and then off for the latter four intervals.
In any case, if you actually have a scene bright for 1/24th of a second and then dark for 1/24th of a second, repeating, you're well within photosensitive epilepsy range. Don't do that to your actors unless you've discussed it with them and with your insurance company first.
Feel sick? Possibly. People are more or less sensitive to imperceptible flicker.
Artifacts?
> I bet that can be remedied by interpolating a new frame between every mask frame. Plus, when you mix it down to 24fps you can introduce as much motion blur and shutter angle "emulation" as you want.
Motion blur can also be very forgiving. You are more likely to notice artifacts in still or slow moving scenes and then the problem goes away.
Incandescent lights flicker at twice your AC power frequency -- to a decent approximation, their power is proportional to V^2. But this is input power -- the cooling of the filament is slowish and the modulation depth is low. Most people aren't bothered by this.
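The twice-mains frequency drops straight out of the P ∝ V² approximation: sin²(ωt) = (1 − cos 2ωt)/2, so the power waveform oscillates at 2ω. A quick numerical sanity check (mains frequency and sample rate here are arbitrary choices of mine):

```python
import numpy as np

# 60 Hz mains voltage, sampled over one second.
fs = 6000                       # samples per second
t = np.arange(fs) / fs
v = np.sin(2 * np.pi * 60 * t)

# Instantaneous input power is proportional to V^2.
p = v ** 2

# Find the dominant non-DC frequency in the power waveform.
spectrum = np.abs(np.fft.rfft(p - p.mean()))
freqs = np.fft.rfftfreq(len(p), d=1 / fs)
peak = freqs[np.argmax(spectrum)]
print(peak)   # 120.0 -- the flicker sits at twice the mains frequency
```

For incandescents the thermal inertia of the filament then low-pass filters this 120 Hz input power, which is why the modulation depth of the emitted light is low.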
Fluorescent lights with old or very crappy "magnetic" ballasts flicker at twice the mains frequency, with deep modulation. The effect on people varies from moderate to extremely unpleasant, and it's extra bad if anything is moving quickly (gyms, etc). There are even studies showing that office workers perform worse under such lighting even if they don't experience personally perceptible symptoms. The effect is so severe that people invented the "electronic ballast", which flickers at much, much higher frequency and avoids low-frequency components. Phew. (The light might still be a nasty color, but the temporal output is okay.)
"Driverless LEDs" are deeply modulated at twice the mains frequency. These are very nasty.
If you actually have a light that flickers at the AC power frequency (certain LED sources in a two-brightness diode-dimmed kitchen appliance fixture will do this, as will driverless LEDs with certain types of failures), then it's extra nasty.
There are plenty of people around who find (depending on the actual waveform) 60Hz flicker intolerable and 120Hz flicker extremely unpleasant. And there are plenty of people who can often perceive flicker under appropriate circumstances up to at least several hundred Hz and even into the low kHz with certain shapes of light sources. You can read up on IEEE 1789 to find a standard based on actual research on what lighting waveforms should look like.
The effect of 120 Hz flicker is bad enough that energy codes in some places (e.g. California) have started to require that LED sources minimize this flicker, but, sadly, it's poorly enforced.
Also, the human eye sees flicker much better at the periphery than in the central area. Rod receptor cells respond more rapidly than the color-sensitive cone cells, and peripheral vision is also more tuned to quick motion (there's much advantage in faster detection of peripheral motion, hence positive evolutionary selection pressure).
I think the total light output of each bulb in the pair is the same at all points in time, but the orange-blue gradient is reversed. So when one is orange at one end, the bulb beside it is blue at that end.
IIRC, the end that's negative looks orange, because the electrons emitted from the filament haven't gotten up to speed yet and can't ionize the mercury atoms at that end to the highest states.
If you didn't do this, you'd see 60 Hz strobing when you looked at one end.
- It'll bleed on fast motion. Hair in the wind would just not work.
- Incandescent lights are out.
You could solve both by shooting two ghost frames very close to the real frame (no need to evenly space the frames, after all) and strobing a high-powered laser.
You'd need a very fast sensor, or a second one optically in the same position.
The calculation isn't too hard though. The width of a pixel divided by the velocity of the subject on the sensor is the maximum delta(T) between real and ghost frames.
But, again, you don't have to shoot faster. You just have to change the 180-180 degree phase between a real frame and a ghost frame to something like 10-350 degrees. Then your 24 fps is capturing the background as if it were ~864 fps.
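The timing bound from the comment above (pixel width divided by the subject's velocity on the sensor) is a one-liner; the function name and all the example numbers are made up for illustration:

```python
def max_ghost_offset(pixel_pitch_m, subject_speed_mps,
                     focal_length_m, distance_m):
    """Maximum time between a real frame and its ghost frame such that
    the subject moves less than one pixel on the sensor.

    The subject's speed on the sensor is its real-world speed scaled by
    the magnification (focal_length / distance), so:
        dt_max = pixel_pitch / (speed * focal_length / distance)
    """
    sensor_speed = subject_speed_mps * focal_length_m / distance_m
    return pixel_pitch_m / sensor_speed

# e.g. 4 um pixels, subject moving 1 m/s, 50 mm lens, 3 m away:
dt = max_ghost_offset(4e-6, 1.0, 50e-3, 3.0)
print(dt)   # ~2.4e-4 s, i.e. roughly 1/4000 s between real and ghost frame
```

So for ordinary human motion the ghost frame has to sit within a fraction of a millisecond of the real frame, which is why you shrink the phase offset rather than raise the overall frame rate.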
Corridor Crew cover this in one of their VFX breakdowns. I can't remember the film, but a scene was supposed to be filmed on a rapidly rotating platform.
There were a large number of lights around it and each one was blinked on for an instant while the camera shot at an insanely high frame rate - something like 288 frames per second with twelve lights.
This meant that after the fact you could pick any one of the twelve frames for that 1/24th of a second, to choose the angle the light was hitting at.
This is the approach that stop motion uses, except they get to keep the camera in the same place. It's still not perfect because of spill from the background onto the foreground, and it requires additional masking and cleanup.
That is exactly what it is. Move the mask light out of the visible spectrum so that the masking operation does not interfere with any colors.
The sodium vapor light process was the best tech available in the 1950s; sodium vapor lights were used because they deliver a very pure single-wavelength light. But we can do better now. LEDs natively emit at a single wavelength (and we have to put a lot of engineering into making them not do this), and we have cameras that can see frequencies the eye cannot. Put this together and, in theory, you can do the single-frequency-illuminated backing-sheet mask (green screen) with a frequency that is not visible to the human eye and therefore does not interfere with any of the colors in the final shot.
That's far-IR, the thermal stuff. Near-IR, around 700 nanometers, is just below red in human vision.
Camera sensors can pick up a little near-IR, so they have a filter to block it. If that filter was removed and a filter to block visible light was used in its place, you would have a camera that can only see non-visible light. Poorly, the camera was not engineered to operate in this bandwidth, but it might be good enough for a mask. A mask that does not interfere with any visible colors.
> Poorly, the camera was not engineered to operate in this bandwidth
At least for cheap sensors in phones and security cameras that engineering consists of installing an IR filter. They pick it up just fine but we often don't want them to.
Keep in mind that sensors are inherently monochrome. They use multiple input pixels per output pixel with various filters in order to determine information about color.
The sensitivity to red light decreases quickly at wavelengths greater than 650 nm, but light can still be perceived if it is strong enough, up to around 780 nm.
Many so-called near-IR LEDs may actually be somewhere around 750 nm, so they are still visible on a dark background, even if they are perceived as extremely dim.
On the other hand, there are many near infrared LEDs around 900 nm and those are really invisible. Near-infrared LEDs around 1300 nm or around 1550 nm are also completely invisible.
An invisible near-infrared laser beam could become visible due to double-photon absorption, but if a beam of such intensity as to cause double-photon absorption hits your retina, there are more serious things to worry about.
I remember reading some people can perceive some near IR, but mostly that near-IR LEDs actually leak some red themselves due to imperfections in manufacturing or something?
I've seen skepticism about the veracity of the claims, in part because various sources cited in the git repo pointed to TODO files rather than actual data[1] (in that example, a source file was only added hours ago, while the project still claims part of its conclusions are based on data said to be contained there).
This has led some to suspect much of it is LLM-generated and not properly human-reviewed, in addition to the very short timeframe from the self-disclosed start of the research to publishing it online (a mere 2-3 days), despite the confident tone the author uses.
There's a forum (HardForum) where they've taken a kind of opposite approach: people pay to access private forums where they can talk about politics and random things while the public-facing boards remain tech focused.
Basically, it incentivizes those who feel strongly about things to pay up to talk about them in an exclusive area, which also keeps the site ad-free. It's apparently been working for 25 years.
> I don't want someone to come up to me and say, "Your code is wrong; you should have done ABC". I want them to say "Hey, I ran into a problem with your code. I think here on line 123 you meant to do ABC but you did XYZ by accident. What do you think?"
It's also a sign of good faith and tentative analysis, since while that example sounds cut and dried, context/details can sometimes be overlooked even by those familiar with something.
Even in the article, while the headline and main text suggest they'd be happy with bluntness to the point of rudeness, the actual full examples at the end show the language used is merely succinct while still being helpful, with the second example also substituting the diffused 'we' for the more direct 'you'.
The article explains it's being partially driven by people following celebrities, who they believe are using it as countersignaling: demonstrating economic class via things that could be perceived as the opposite.
"Wearing wireless 24/7 tells me you don't own any land."
I wonder how much is being driven by such lead following.
Also not really a fan of the 'what they're hiding from you' tone it takes (even if that's the subject), like saying that because a website was made less than 100 days before a bill was signed, it was a '77-day pipeline' to the bill (which jumped out as a dramatized rephrasing not present in the original Reddit post).
It also doesn't inline link sources, like the Bloomberg article it mentions (this[1]). A more impartial voice and linked citations to allow quick reference would raise fewer red flags, even if the goal is worthwhile.
I think the relevant use case for this is places like Russia (one such user is even quoted in the testimonials), where I've seen concern about the country isolating itself from the outside internet, given the various regional tests actually trialling this.
I've seen such users ask about ways to prepare storing outside data in the event it becomes permanent. Some have suggested mesh networks, others downloading Wikipedia and torrenting things.
So it seems this is useful where the internet is still available but is restricted at, say, the ISP level. It seems to be a browser that, when a page is unavailable, checks for Ceno torrents of the page from other users and serves those instead.
[1] Text-based summary: https://en.wikipedia.org/wiki/MythBusters_(2010_season)#Epis...