The engineer on the truck seemed most annoyed by the PTP aspect of 2110, but nobody seemed to question the move to 2110, and at least as far as broadcast equipment goes, they're all in on it. As a small(ish) YouTuber, NDI is more exciting to me, but I'm not mixing dozens or hundreds of sources for a real-time production, and can just re-record if I get a sync issue over the network.
Perfect is the enemy of the good, as always. Reading through that site, it seems like no solution is perfect, and the main tradeoff from that author's perspective is the bandwidth requirements for UHD.
It looks like most places are still only at 1080p, however. And the truck I was looking at could do 1080, but runs the NHL games at 720p.
> it seems like no solution is perfect, and the main tradeoff from that author's perspective is the bandwidth requirements for UHD.
The “no standalone switch can give enough bandwidth” issue has generally been solved since that page was written. You can buy 1U switches now off-the-shelf with 160x100G (breaking out from 32x800G). One of the main drivers of IP in this space is that you can just, like, get an Ethernet switch (and scale up in normal Ethernet ways) instead of having to buy super-expensive 12G-SDI routers that have hard upper limits on number of ins/outs.
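To put numbers on that, here's a back-of-the-envelope sketch in Python. It assumes 10-bit 4:2:2 sampling (20 bits/pixel) and counts only active video, ignoring RTP/IP overhead and ancillary data, so a real ST 2110-20 flow runs a bit higher than this:

```python
# Rough uncompressed-UHD flow rate vs. Ethernet port capacity.
# Assumption: 4:2:2 at 10 bits = 20 bits/pixel (Y plus half-rate Cb/Cr);
# real ST 2110-20 rates are somewhat higher due to packet overhead.
width, height, fps = 3840, 2160, 60
bits_per_pixel = 20

video_gbps = width * height * fps * bits_per_pixel / 1e9
flows_per_port = int(100 // video_gbps)

print(f"one UHD flow: ~{video_gbps:.1f} Gb/s")            # ~10.0 Gb/s
print(f"flows per 100G port: {flows_per_port}")
print(f"flows on a 160x100G switch: {160 * flows_per_port}")
```

Even with overhead shaving a flow or two off each port, a single 1U box carries on the order of a thousand UHD flows, which a 12G-SDI router simply can't match.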
Of course, most random YouTubers are not going to need this. But they also are not in the market for broadcast trucks.
Yes, it's a huge benefit. Of course, without an NMOS SDN solution, actually reliably routing that much data over a network (especially one designed incrementally) is a huge pain in the ass. But thankfully we have those systems now.
We sort of traded the big expensive SDI switchers for big expensive SDNs.
Also, I guess we traded a ton of coax cable for somewhat more manageable single-mode fiber. :-)
I never fully understood why SDI over fiber remains so niche, e.g. for UHD, people would rather run four chunky 3G-SDI cables than a much cheaper and easier-to-handle fiber cable (even though the standards very much exist). But once your signal is IP, fiber is everywhere and readily available, so there seems to be no real blocker.
I don't know, but is there a maximum crush load on fiber? Because in some of these broadcast centers they've got cable trays of SDI that are so heavy and tightly packed that removing a dead line is a fire hazard (the friction of pulling the line could cause a fire).
They'd obviously need a lot less of it, and the lines are a lot lighter, but maybe folks figured that if they could avoid repeating that scenario in their design, it might be a good idea :-P
You can build fiber basically as rugged as you like. A normal patch cable won't be that sturdy, but the more rugged trunk cables are something like this (pulled from a data sheet for something I used a while back):
* Outer diameter: 6mm
* Max tensile load: 900 N
* Crush resistance: 750 N / 10 cm
* Max proof stress: >= 0.69 GPa
To be clear, this is not a specially ruggedized cable by any means; it's just a normal G12 cable for general use. You can get stuff that's much tougher. It's certainly much lighter than the equivalent SDI copper cable.
2110 is certainly popular in the industry. There's no one way to get video out of a sports venue and across the network to takers, though. Where I work, different workflows use SDI, NDI, SRT, and RIST, and our own internal stuff uses MPEG TS over UDP, routed by a distributed system that determines next-hop routing through our network at each hop. The encoding might be H.264, HEVC, or even JPEG 2000.
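For anyone curious what "MPEG TS over UDP" looks like on the wire: the common convention is to pack seven fixed-size 188-byte TS packets into each UDP datagram (7 × 188 = 1316 bytes, which fits a 1500-byte MTU with headroom). A minimal sketch, not the distributed routing system described above; the multicast address is a made-up placeholder:

```python
import socket

TS_PACKET = 188           # MPEG-TS packets are a fixed 188 bytes
PACKETS_PER_DATAGRAM = 7  # 7 * 188 = 1316 bytes, fits a 1500-byte MTU

def ts_datagrams(ts_stream: bytes):
    """Group a raw TS byte stream into UDP-sized payloads."""
    step = TS_PACKET * PACKETS_PER_DATAGRAM
    for off in range(0, len(ts_stream), step):
        yield ts_stream[off:off + step]

def send(ts_stream: bytes, addr=("239.0.0.1", 1234)):
    # Hypothetical single-destination sender; a real deployment would
    # pace the packets and pick next hops dynamically.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for payload in ts_datagrams(ts_stream):
        sock.sendto(payload, addr)
```

The nice property of this framing is that any receiver can resynchronize by scanning for the 0x47 sync byte at 188-byte intervals, which is why it survives lossy hop-by-hop relaying so well.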
NDI is indeed quite good for prosumer cases. As a Newtek (now Vizrt) shop, our Tricasters speak it natively and that's a great reason we've made use of it.
That being said, if you aren't already in the Newtek/Vizrt ecosystem, might I recommend exploring Teleport, which is a free and open source NDI alternative built into OBS which has also served us very well.
Which is completely wrong, by the way: JPEG XL quantizes its coefficients after the DCT transform like every other lossy codec. Most codecs also have at least some range expansion in their DCT, so the quantized values may have a greater bit depth than the input data.
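For anyone unfamiliar with what "quantizes its coefficients" means in practice, here's a toy sketch (illustrative step sizes, not JPEG XL's actual quantization tables): transform coefficients are divided by a quantizer step and rounded, and the decoder multiplies back, losing only the rounding remainder.

```python
# Toy coefficient quantization, the lossy step shared by DCT-based codecs.
# The q_step and coefficient values are made up for illustration.
def quantize(coeffs, q_step):
    return [round(c / q_step) for c in coeffs]

def dequantize(levels, q_step):
    return [l * q_step for l in levels]

coeffs = [1024.0, -300.0, 17.0, -3.0, 0.4]  # hypothetical DCT output
levels = quantize(coeffs, q_step=8)
print(levels)                # [128, -38, 2, 0, 0]
print(dequantize(levels, 8)) # [1024, -304, 16, 0, 0]
```

Note how small coefficients collapse to zero, which is where most of the compression comes from; the bit depth of the levels depends on the coefficient range after the transform, not on the 8-bit input samples.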
I do not understand the connection between the patent concerns in the article and open-source 3D printing. In particular, the patent issues seem to apply to all non-Chinese 3D printer companies, whether open source or not. I am not sure how sharing your designs makes this worse (I suppose with the original drawings it's a bit easier to write a patent in bad faith, but it's certainly not necessary). Something like a defensive patent grant might make a lot of sense (see Opus, AV1, etc.), but that's also independent of whether the implementation is open source or not.
The AR coefficients described in the paper are what allow basic modeling of the scale of the noise.
> In this case, L = 0 corresponds to the case of modeling Gaussian noise whereas higher values of L may correspond to film grain with larger size of grains.
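To make that concrete, here's a toy sketch of autoregressive grain synthesis (my own simplified parameterization, not the paper's exact model): each sample is Gaussian noise plus weighted contributions from already-generated neighbors, so an empty coefficient set reproduces the L = 0 white-Gaussian case, while nonzero lags correlate nearby samples into larger grains.

```python
import random

def synthesize_grain(height, width, coeffs, sigma=1.0, seed=0):
    """Toy AR grain synthesis in raster order.

    `coeffs` maps causal (dy, dx) offsets (above or to the left) to AR
    weights. An empty dict degenerates to plain Gaussian noise (L = 0);
    adding neighbor terms produces coarser, more film-like grain.
    """
    rng = random.Random(seed)
    g = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            v = rng.gauss(0.0, sigma)
            for (dy, dx), a in coeffs.items():
                ny, nx = y + dy, x + dx
                if 0 <= ny < height and 0 <= nx < width:
                    v += a * g[ny][nx]
            g[y][x] = v
    return g

white = synthesize_grain(4, 4, coeffs={})                        # L = 0
grainy = synthesize_grain(4, 4, coeffs={(0, -1): 0.5, (-1, 0): 0.5})
```

With the neighbor weights in place, a positive sample drags its right and lower neighbors positive too, which is exactly the "larger grain size" effect the quoted passage describes.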
Almost none of the games pictured are actually "doujin" games; they're from commercial publishers.
Also, the reason we don't remember the PC-98 is that it was never sold in the US (except for the very unpopular APC III). It was the most popular computer in Japan from the late '80s to the early '90s and is well remembered there. Being the most popular PC, it has a huge library of software, including plenty of office and productivity titles, many genres of games, and lots of Western ports.
I agree. I posted a documentary on actual doujin game dev in Japan, but it looks like it was removed from YouTube. You can still find it on archive.org, for those interested in the scene.
And there was similarly a market for relatively low-budget and/or pornographic and/or copyright-infringing computer games in Western markets; it's just that people today find weird old ecchi VNs with anime art more interesting than weird old strip-poker games with digitized photos.
I agree. While it's great to see a mention of the PC-98, the article views it through a very odd lens and gets a lot of things confused or just plain wrong.
You don't need to store two whole independent images. The high-quality image can be predicted from the low-quality image, and the low-quality image can be at a lower resolution, too. It is less efficient than storing one image, but more efficient than storing two independent images.
Digital video tape formats (e.g. DV, HDV) are an example. Other containers that operate in this mode are TS and Ogg (and optionally, MKV). Any sort of live streaming format generally is, too.
https://stop2110.org/