
Space (SpaceX showed that reusable rockets are feasible), programmable health (the COVID vaccine, and remember that mRNA treatment curing that dog?), etc.

Sadly, I think there's a risk we might also be heading towards a dark age with few advances, since fundamental research has been squeezed out for being unprofitable or hobbled by an industrialized publishing/review system for a while now, and we've been coasting along on profitable applications rather than (expensive) breakthroughs in the basics.


If they can get Valve/Steam onboard for an OS that handles most games well, that could in fact be huge if the price point is a bit lower initially but with plenty of unified RAM (both for AI and for games).

That said, gaming laptops' cooling issues are so often around the GPU, so it'd also require a seasoned manufacturer to get it right.


NVidia already has the Shield, and GeForce NOW.

Isn't that a streaming device+service? I'm thinking more along the lines that the DGX devices can run fairly modern games like Cyberpunk at good framerates; the Windows foothold has been games and enterprise, and an ARM device with a mainstream GPU, together with the work Valve has done on Steam devices, could challenge that.

There are the Steam Deck and the Steam Machine (which can double as a desktop and is still x86-based); the only real thing missing in that lineup is a laptop form factor machine, and if NVidia can provide a somewhat cooler and fairly power-efficient alternative (gaming laptops are still damn loud) it could very well be dang enticing for many.


And actual native games instead of relying on Windows developers.

As I noted in my other comment (1), in 1985 the Amiga OCS bitplane graphics (each bit of a pixel's index stored in a separate memory area) were a huge boon for 2D capability since they lowered bandwidth needs to 6/8ths, but they made 3D rendering a major pain in the ass.
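
To make the pain concrete, here's a minimal sketch (Java, purely illustrative, nothing like real Amiga register-level code): a chunky framebuffer takes one write per pixel, while a 5-bitplane planar screen takes a read-modify-write in every plane.

    // Illustrative only: chunky = one write per pixel,
    // planar = a read-modify-write in every bitplane.
    class PixelWrite {
        static final int W = 320, ROW_BYTES = W / 8;

        // Chunky: the whole color index lives in one byte.
        static void putChunky(byte[] fb, int x, int y, int color) {
            fb[y * W + x] = (byte) color;
        }

        // Planar: bit p of the color index lives in bitplane p, so a
        // 5-bitplane screen needs 5 separate memory operations per pixel.
        static void putPlanar(byte[][] planes, int x, int y, int color) {
            int idx = y * ROW_BYTES + x / 8;
            int mask = 0x80 >> (x & 7); // leftmost pixel is the MSB
            for (int p = 0; p < planes.length; p++) {
                if ((color & (1 << p)) != 0) planes[p][idx] |= mask;
                else                         planes[p][idx] &= ~mask;
            }
        }
    }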

The AGA chipset of the 1200/4000 stupidly only added 2 more bitplanes. The CD32 chip actually had byte-per-pixel (chunky) graphics modes but the omission from the 1200 was fatal.

Reading in hindsight, there were probably too many structural issues for Commodore to remain competitive anyhow, but an alt-history where they had seen the need for 3D rendering is tantalizing.

1: https://news.ycombinator.com/item?id=47717334


> The AGA chipset of the 1200/4000 stupidly only added 2 more bitplanes. The CD32 chip actually had byte-per-pixel (chunky) graphics modes but the omission from the 1200 was fatal.

The intention was good, but the Akiko chip was functionally almost useless. It was soon surpassed by CPU chunky-to-planar algorithms. I don't think it was ever even used in any serious way by any released games (though it might have been used to help with FMV).


Ah, I was under the impression that it had a native chunky mode, but it was a built-in C2P routine? Anyhow, it seems it was useful (1) when running on stock CD32s but not in conjunction with faster machines.

1: https://forum.amiga.org/index.php?topic=51616.msg544232#msg5...
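
For the curious, a naive chunky-to-planar pass looks roughly like the sketch below (Java, illustrative only; real 68k C2P routines used clever word-merging tricks, and Akiko did this bit distribution in hardware):

    // Naive chunky-to-planar conversion: one bit test per pixel per plane
    // (assumes width is a multiple of 8).
    class C2P {
        static byte[][] chunkyToPlanar(byte[] chunky, int width, int height, int depth) {
            byte[][] planes = new byte[depth][width / 8 * height];
            for (int i = 0; i < width * height; i++) {
                int mask = 0x80 >> (i & 7); // pixel position within its byte
                for (int p = 0; p < depth; p++) {
                    if ((chunky[i] & (1 << p)) != 0) {
                        planes[p][i / 8] |= mask;
                    }
                }
            }
            return planes;
        }
    }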


Which brings me to my pet peeve: the already slow 68020 (68EC020) at 14MHz was crippled because, even though it had a 32-bit bus, it was only connected to a 16-bit RAM bus (chip RAM).

This 16-bit memory (2 megs) is also where the framebuffer and audio live, so the stock CPU in the A1200 has to share bandwidth with display signal generation and the graphics and audio processing.

All in all, it meant the Amiga 1200 had only about twice the memory throughput of the Amiga 500 (about 10 megabytes/s for the A1200 vs about 5 megabytes/s for the A500).

If the A1200 had shipped with at least some extra 32-bit memory (it existed as a third-party add-on), the CPU could have had its own uncontested memory with a throughput of about 20-40 megabytes/s.

Imagine the difference it would have made if the machine had just a little extra memory.

That's just a tiny detail. That the chipset wasn't 32-bit was another disappointment.

The bigger problem was that Commodore as a company was aimless.


Yeah, and it took ~7 years to make those marginal improvements over the earlier Amiga chipset! I'm ignoring ECS, since it barely added anything over OCS for the average user.

Commodore so slowly and ineffectually improving on the OCS didn't help, but the original sin of the Amiga was committed in the beginning, with planar graphics (i.e., slow and hard to work with, even setting aside HAM) and TV-oriented resolutions/refresh rates (i.e., users needing to buy a "flicker fixer"). It's like they looked at one of the most important reasons for the PC and Mac's success—a gorgeous, rock-solid monochrome display—and said "Let's do exactly the opposite!"

IIRC interlaced display and 6 bitplanes were a compromise to allow color graphics in 1985 with the memory bandwidth available at the time.

Whether it's a sin or a feature can of course be debated, but I remember playing games on an Amiga in the early 90s, and until Doom the graphics capabilities didn't look outdated.

By 1992 with AGA, however, I agree: flicker and planar graphics (with 8 bitplanes any total memory bandwidth gains were gone) were a downside/sin that should've been fixed to stay relevant.


5 sins in 1992:

- 8-bit planar instead of chunky
- progressive display (vs interlaced)
- sound was not 16-bit
- should have been a 68030 with MMU support (vs 68020ec)
- HD mandatory

If they had addressed these, the Doom experience would have run better on the Amiga.


> The CD32 chip actually had byte-per-pixel (chunky) graphics modes but the omission from the 1200 was fatal.

I agree. Unfortunately, even with chunky graphics and/or 3D foresight, 68k would still have been a dead end and Commodore would still have been mismanaged into death. It’s fun to dream though…


Was it necessarily a dead end? Consider the way Intel and later AMD managed to upgrade/reinvent x86, which until x64 still retained so much of the original instruction encoding/heritage (heck, even x64 retains some of the encoding characteristics).

Had the Amiga retained relevance for longer, and without a push for PowerPC, I don't see a reason why 68k wouldn't have been extended. Heck, the FPGA-based Apollo 68080 would've matched late-1990s Pentium IIs, and FPGAs aren't speed monsters to begin with.


The 68060 is pretty good to be fair, but it never ended up being widely used and Motorola definitely saw PPC as the future.

Maybe if these theoretical new 68k Amigas had become a huge market hit they could have taken the arch further and it could have remained competitive, but all the other 68k shops had already pretty much given up or moved on (Apple was already going PPC, Sun went SPARC, NeXT gave up on their 68k hardware, Atari was exiting the computer business entirely, etc.), so I don't know that the market would have been there to support development against the vast amount of competition from both the huge x86 bastion on one hand and the multitude of RISC newcomers on the other.


Right, and I think that was the junction point. Had Motorola, as a chip company, not been enamoured with the new shiny and instead realized that they already had a huge market that just wanted improved performance for their existing software, and pushed 68k improvements instead of a new PPC architecture, both Apple and (a better-managed) Commodore could've been competitive with improved 68k designs.

Remember, Intel also barked up the wrong tree with Itanium for 64-bit and didn't really let go until AMD forced their hand with x64.


The argument is that 68k is "CISCier" than x86, the addressing modes in particular, so making a performant modern out-of-order superscalar core for it would be harder than for x86.

I believe that argument. But Commodore could have plunked a cheap 68020 into their machines for backwards compatibility (like how the MSX2 had an MSX1 SoC inside, the PS2 had a PS1 SoC, the PS3 had a PS2 SoC, and so on) and put in another "real" socketed CPU as a co-processor. Or made big-box machines with CPUs on PCI cards, for infinite expansion options. "True" multitasking, perfect for CAD, 3D rendering and non-linear video editing. It would have been very cool with an architecture where the UI could be rendered with almost hard realtime guarantees and heavy processing happened elsewhere.

This is almost exactly what the plan was, until C= went out of business:

https://en.wikipedia.org/wiki/Amiga_Hombre_chipset

It was going to be HP PA-RISC based and have an AGA Amiga SoC, including a 68k core.


How much of Hombre is myth and legend? Given how little progress was made with OCS->ECS->AGA, it seems unlikely they could even have built an Amiga SoC, never mind designed a new 64-bit chipset.


I don't agree there, considering x86 has ModRM, size prefixes (16/32 and later 64-bit operand sizes), SIB (with prefix for 32-bit), segment/selector prefixes, etc.

The biggest difference where the 68000 is perhaps more complicated is post-increment addressing, but considering all the cruft 32-bit x86 had already inherited from the 8086, compared to the "clean" 32-bit variations of the 68000, I'd call it a toss-up at best, leaning towards the 68000 being easier (stuff like IP-relative addressing also exists on the RISC-y ARM arch).

Apart from addressing, the sheer number of weird x86 instructions and prefixes has always been the bane of low-power x86.


There were no tech problems IMHO, it was all management problems. They could have chosen any of a handful of completely different (edit: even mutually exclusive!) tech paths and still have won, but instead they chose to do almost nothing except bleed the company dry.

Edit: I don't mean that their success was certain if they had executed better. I mean they did almost nothing and got the guaranteed outcome: failure. (And their engineers were brilliant but had very few resources to work with.)


I think the turning point was that flat framebuffers and plenty of CPU power for the first time eclipsed specialized 2D hardware (Amiga, Megadrive, SNES, etc).

Flat framebuffers and "powerful" CPUs also enabled easier software rendering (Doom/Duke) of 3D, compared to the Amiga, where writing a textured renderer was a PITA due to the video memory layout, with separate bitplanes spreading the bits of each pixel into different memory locations (the total memory bandwidth reduction gained in 1985 by using 5 or 6 bitplanes became a fatal bottleneck at this point).

It wasn't really always full framerate though, and the 2D chipsets did help in "classic" action games that were still all the rage.

The Pentium further widened the gap, but at the same time consoles gained hardware 3D acceleration (PSX/Saturn/Jaguar); yet the Pentium could still do graphics better in some respects (as shown with Quake).

Once 3D accelerators landed, PCs have more or less constantly been ahead, apart from when it comes to price (and comfort/ease).


Historically I think the people that first ran into these issues and started talking about them were game developers, even if it flew under the radar for many others since it was an era with fewer third-party engines and less code sharing.

The motivation was partially to solve processing on the PS3: the 7 usable Cell SPUs have small local memories of 256 KB each, so processing had to be moved to a compact/streamable memory format. But developers were also concurrently fighting growing cache latencies caused by entity/component hierarchies that had arisen as a solution to problems with inheritance, built out of dislocated object hierarchies (i.e. separately allocated linked objects).

Out of this grew the Data-Oriented Design paradigm (which is intrinsically built on mechanical sympathy), which suited both Cell-like streaming and minimizing cache effects for main-memory code, where Entity Component Systems were reorganized to use cache-efficient array processing instead of linked objects.

https://gamesfromwithin.com/data-oriented-design
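
As a rough sketch of what that reorganization means in practice (hypothetical names, Java for brevity), the same per-frame update goes from chasing separately allocated component objects to walking flat parallel arrays:

    // Hypothetical ECS movement update, before/after Data-Oriented Design.
    class MovementSystem {
        // "Linked objects": every component is its own heap allocation,
        // so iteration hops around memory and stalls on cache misses.
        static class Transform { float x, y; }
        static class Velocity  { float dx, dy; }
        static class Entity    { Transform t; Velocity v; }

        static void updateLinked(Entity[] entities, float dt) {
            for (Entity e : entities) {
                e.t.x += e.v.dx * dt;
                e.t.y += e.v.dy * dt;
            }
        }

        // Data-oriented: one flat array per field, walked sequentially,
        // which is prefetcher-friendly and trivially streamable SPU-style.
        static void updatePacked(float[] x, float[] y,
                                 float[] dx, float[] dy, float dt) {
            for (int i = 0; i < x.length; i++) {
                x[i] += dx[i] * dt;
                y[i] += dy[i] * dt;
            }
        }
    }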

Was the outside world oblivious? I'd say so to a large extent; CPUs had gotten faster and gained more cores, and memory latencies were masked to a certain degree for other kinds of code!

The Java people even pushed out their biggest mistake just as these effects started to become visible: erased generics, which locked in a design that requires generic list types to have disaggregated storage due to separate objects. In hindsight this is IMHO the single biggest benefit that C# got over Java as their paths diverged, since a generic struct-based List<> in C# has radically more efficient storage compared to a Java ArrayList<>.
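
A tiny illustrative sketch of that storage difference (the C# counterpart, a List<int>, keeps all values in one contiguous internal buffer):

    import java.util.ArrayList;

    // With erased generics, an ArrayList<Integer> holds references to
    // individually heap-allocated Integer boxes, scattering the values;
    // only a primitive array gets contiguous storage in Java.
    class BoxingDemo {
        public static void main(String[] args) {
            ArrayList<Integer> boxed = new ArrayList<>();
            for (int i = 0; i < 1_000_000; i++) {
                boxed.add(i); // autoboxing: ~1M separate objects plus a reference array
            }
            int[] flat = new int[1_000_000]; // one contiguous allocation
            for (int i = 0; i < flat.length; i++) flat[i] = i;
        }
    }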

Afaik Project Valhalla still hasn't become mainstream even after almost _12_ years at this point; had they gone with proper generics from day one it would've been a trivial compiler upgrade.


Not entirely sure if it fits the criteria, but retro-themed compos usually pop up for most retro platforms, meaning there are natural hardware restrictions (like demos for retro platforms).

8-bit like the NES (NESjam, late May/June), Game Boy (GBJam was last year, bi-annual), Atari, etc., but also MS-DOS, Amiga and more "mid-school" platforms, together with semi-modern ones like the PS1.

Now, even with modern tools it's plenty of work to get impressive things working on older platforms (I made a Game Boy tech demo last time there was a compo that's due to grow ridiculously much).


Far cleaner; how is testability though?

Very easy: mock useNotifications and you can easily see all the behaviour by changing three properties.

30 megs is like 2005-era size :D. Obviously it's realtime stuff (and music is probably a significant fraction?).

> The Razor1911 zip[1] is 30MB, which actually is very much on the small side for a current-day demo.

> and music is probably a significant fraction?

The Razor1911.exe in the ZIP ends up being 31MB on disk and is almost entirely made out of a compressed 145MB executable, whose size is mostly 48 PNG files (11MB), 69MB of zeros (nice?), 329 compiled DirectX shader blobs (DXBC) totaling 6MB, one large MP3 of about 17MB, and finally around 34MB of what seems to be other types of runtime data like asset tables, font and UI data.


Seems about right then; my guess was about a third for the music. Classic 128 kbps MP3s are about a meg a minute (128 kbps ÷ 8 × 60 s ≈ 960 KB/min), so this one is at a slightly higher bitrate. Not sure how compressible those parts are, but compressing down to between a half and a third in the end, depending on the final compressor, checks out.

From the NFO:

> Sorry for the not very optimized file size for this party version, we'll make sure to push a PROPER once Revision 2026 is finished

Love the 69MB of zeros.

I bet that neatly compresses during packaging too :)

Razor 1911 and FC are different in that FC was one of those team/friend-groups that depended more on a constellation of people working together and producing until life took them away to other things.

Razor, Fairlight and some others became more like continuous groups with evolving memberships (I was briefly a member of the demo team back in 1999 and did one production in association with the people that moved over to Fairlight).


Damn, that’s awesome. Any memories to share?

I wasn't a member for too long. I vaguely remember some anti-piracy raids around that time, where some of the fallout for whatever reason was the other guys going over to Fairlight, but I was already involved enough with other groups (and our high-school equivalent, or perhaps work by that time?).

The funniest thing perhaps is that Smash was a musician back then on 2 things where I did the code (one musicdisc and one joke intro). Smash then went on to become a damn accomplished coder of quite a few famous Fairlight demos and Sony tools, and made the commercial Notch visual toolset/editor/player that has roots in the Fairlight demo editor codebase (the Notch startup logo often pops up in democompos, for those that haven't followed the scene).


Oh yeah Smash is god tier. Saw his name on so many amazing FLT demos

Most full-demo (no tech or size limit) soundtracks since the early 00s are just MP3 streams or similar; size-coded categories that use soft-synths, or retro categories, where singing is an issue due to data size or hardware power, often don't have it (sometimes they do, as a technical demonstration).

But I did notice some 64k's and small synth executables had singing this year; I've added small (compressed) voice samples myself, but that's just seconds, whilst these entries had longer sequences, so I'm a tad curious as well.

