I'm guessing they mean some type of procedural generation or other randomization approach that creates a bunch of new maps, bosses, even vehicles!
Advance Wars, if I recall correctly, has a hand-crafted set of maps and battles structured in a campaign story. So once you're done, you're done. Replayability is a bit limited.
I absolutely loved Advance Wars and my brother and I played the heck out of it growing up!
Advance Wars has a map creator and the AI can play on it. In theory one could get an AI to generate maps given the rules of the map creator, and users could then enter them by hand. As I recall there's no way to share maps, so it's a bit harder to set up, and you can't do things like give the enemy units whose locations you don't know, but it gets you near-infinite replay value. With some more work one could have a new AI interface with an emulator and play the game instead of using the built-in AI, if one finds the built-in AI is no longer challenging. Though that is probably reaching the scale where remaking the game from scratch would give one more freedom and control (and the ability to monetize).
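For what it's worth, here's a toy sketch of the kind of generator I mean. The terrain names, dimensions, and the mirror-symmetry fairness trick are all made up for illustration; a real generator would have to respect the actual map creator's tile set and rules.

```python
import random

# Hypothetical terrain set; a real generator would use whatever tiles
# the in-game map creator actually supports.
TERRAIN = ["plain", "forest", "mountain", "road"]

def generate_map(width, height, seed=None):
    """Generate a mirror-symmetric tile map so both sides get fair terrain."""
    rng = random.Random(seed)
    top = [[rng.choice(TERRAIN) for _ in range(width)]
           for _ in range(height // 2)]
    # Optional middle row when the height is odd.
    middle = [[rng.choice(TERRAIN) for _ in range(width)]] if height % 2 else []
    # Mirror the top half so the two players face symmetric terrain.
    bottom = [row[:] for row in reversed(top)]
    return top + middle + bottom
```

The symmetry trick is just one cheap way to keep generated maps balanced without needing an AI to judge them.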
Replayability is overrated imo. The old Advance Wars games on GBA and DS took hours to beat and had excellent map design. There's also something endearing about the no-strings-attached development model of old cartridge games, where the game had to be perfect on release.
I always thought those strategy games were missing a competitive community and better PvP experience but I could also see how more replayability would make them more popular.
I think it's hard to make those PvP modes compelling to most people; being turn-based with long turns is rough (card games get away with it because their turns are typically pretty short).
It's hard to think of turn-based video games with long turns that are popular for multiplayer.
What's an example of an idea that doesn't require you to be overly social? I would probably guess founders and sales in general requires a lot of social skills.
You can do stuff that's pure automated transactions. Stock trading, ad bidding arbitrage, that kind of thing. Of course those are pretty competitive spaces, but there are probably similar niches awaiting exploitation.
I would argue that instead of starting with a Lambda Monolith and splitting routes out when they need to scale separately, you should be starting with an actual monolith and using Lambdas _only_ when a single route needs to scale separately (in a way that fits Lambda). The Lambda Monolith is an unnecessary architecture as far as I'm concerned.
So a separate server running a monolith is not "unnecessary architecture", but a simple Lambda function is?
With a Lambda function you can have a dozen different versions of the same code running simultaneously with zero extra cost and none of them will affect each other's performance. Every one of them will be versioned and you can instantly roll back to any version or pick any version to be "production".
If you need multiple versions of something running simultaneously then yeah, Lambda might be simpler.
In my experience, running a single monolith server will be much simpler than 20+ lambda "monoliths" that call each other. I think the simplicity of lambdas vs a persistent server looks good on paper but falls apart when you have multiple times more deployments to manage.
No no, you're doing it wrong if you've got Lambdas calling Lambdas. That's not a monolith, that's a shitty microservice that'll get really expensive real fast :D
You can't if you have even moderately complex storage (like an SQL database). There is only one version of that, and while you can arrange to run one other schema version in parallel (say, during a migration), it's just the one extra version and a lot of extra complexity.
Gleam looks like a new language built to compile down to the BEAM. While you could call elixir or erlang libraries from Gleam code, using Phoenix probably wouldn't be useful. I imagine a new framework would be written for Gleam.
Elixir macros don't work well from Gleam. You can 'call' them, but you need to wrap all the macro code in Elixir functions and then call those from Gleam.
Gleam looks really interesting. I'm surprised; for some reason I was under the impression it was more like Sorbet or Mypy, with type annotations bolted onto existing code, and I had no desire to check it out at all. I'm now intrigued and will probably pick it up soon!
Having worked for one of those companies, I think the productivity of Rails is overstated. Yes, it's quick to get a rough demo or POC working (which is already plenty quick in many other frameworks), but once you get into the weeds of actually building and iterating, the difference is not so great. If you look at these established companies in particular, the tech stack they run is often just a matter of what the founders knew at the time. The business wasn't successful because it chose Rails; it was a successful business that happened to pick Rails to start with.
Cool! I was looking into monetizing my OSS idea just the other day and it seems much more complicated than it needs to be. Is your idea to build a marketplace and all the behind-the-scenes license / billing tools, or just the marketplace?
Thanks! It starts as a marketplace that provides payments and invoicing infrastructure as well as licensing, though I'm sure it can be developed further from there.
In that case you are deploying a whole monolithic app to the FaaS platform. This needs to be initialized on every cold start, which takes longer the larger your code is. The main selling point of FaaS (that I have experienced) is that you can have small, individually scalable functions that are quick to initialize and can be torn down after executing. If you have the whole app under one handler you would be much better off with a different architecture where the app is persistent and always running.
If you manage to lazy load your routes it may help with cold starts, but I think you effectively lose the benefit of warm handlers, since any request could hit a route whose code still needs to be loaded fresh.
I've never attempted lazy loading inside a FaaS platform, but I think generally you would be better off creating small deployable chunks and letting the platform handle scaling and reusing them as necessary. There's less fighting against the platform, and if you have to maintain strict boundaries so each route lazy loads only the code it needs, it's a short step to deploying those chunks as their own functions anyway.
The article is about 15k functions. You know what is a lot worse than lazy loading? Getting 15k functions to deploy reliably. I've seen functions fail to deploy. Are you going to fail the whole build if one of the 15k fails? What if functions depend on other functions? How about CI/CD where you're updating multiple times a day? How about when you update a single dependency across those 15k and you need to deploy them all again?
I don't know what FaaS you've used, but at least with Google Cloud Functions you can set a minimum number of instances. Set it to 1 and you never have cold start issues.
I agree 15k functions is overkill and I would never recommend something like that. I'm trying to explain why combining all 15k functions into a single FaaS deployment isn't going to be the best option either.
There is a middle ground that will be much faster and cheaper all around.
One of the things that bothers me the most about the corporate software development rat race is how many problems are being solved over and over again. Every company is staffing its own devops team to build its own abstractions over these technologies so app developers don't have to worry about them. I personally know multiple devs who basically move from company to company reimplementing the same devops tools at each one.
It all just feels like a colossal waste of energy and collective resources.