Slightly nitpicky, but is something a PaaS if you run it yourself? Anything as a Service isn't a service if you're running it yourself. It's just ... infrastructure.
Not nitpicky at all - this is an important distinction to highlight for pointy-haired decision makers.
This product is undoubtedly the P in PaaS, but there is no service behind it. If your company uses this as an alternative to a real Heroku/AWS/xyz PaaS, you must have engineers at hand for 24/7 ops, scaling servers and fixing bugs. In my opinion, this is quite risky for anything running in production and should not survive a cost-benefit analysis.
I completely disagree; the price difference between dedicated servers and even EC2 instances is staggering.
This is what you get for less than $200/month with a dedicated server:
1× AMD EPYC 7281 CPU (16C/32T @ 2.1 GHz), 2 × 1 TB NVMe, 96 GB DDR4 ECC, unmetered 750 Mbps
In one of my companies the AWS bill is completely insane: we have roughly half that hardware, with far less bandwidth (and it's metered), for more than $800/month. That's fine only while we're on free credits.
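Taking the two bills above at face value (these are the commenter's figures, not current quotes, and "half that hardware" is taken literally as a rough normalization), the gap works out to roughly an order of magnitude:

```python
# Crude normalization of the two monthly bills mentioned above.
# "hw_units" is an assumed relative measure: the EPYC box = 1.0,
# the AWS setup = 0.5 ("about half that hardware").
dedicated = {"price": 200.0, "hw_units": 1.0}
aws = {"price": 800.0, "hw_units": 0.5}

# Price per unit of hardware, then the ratio between the two.
ratio = (aws["price"] / aws["hw_units"]) / (dedicated["price"] / dedicated["hw_units"])
print(f"AWS costs roughly {ratio:.0f}x more per unit of hardware")  # roughly 8x
```

Obviously this ignores bandwidth, managed services, and everything else AWS bundles in; it's just the raw hardware-for-money comparison the comment is making.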
I love working for cloud companies, it's a lot of fun, but when it comes to my money then I never go for anything but a dedicated server.
When you have applications that don't require high availability but do need a very low cost per CPU, dedicated servers just make sense. We run a cluster of a few high-CPU dedicated servers for our data-science team, and it just works: we don't need 99.99%+ availability, and the servers we rent cost less than the equivalent AWS storage alone ... The operational cost of managing them is exactly the same as managing equivalent EC2 instances. We don't need backups either.
On the other hand, we have some low-CPU web services that require high availability, redundancy, and reliable backups. For these I just use Heroku. It's extremely reliable and easy to operate, while costing only about $100/month (a few hobby dynos plus a fully managed PostgreSQL database). Sure, it's probably 5x more expensive than a dedicated server with 10x the performance, but I don't have to worry about backups, availability, or scalability, and those apps don't need a 10x faster CPU anyway.
Do you really mean that you've never had a server with 60 days of uptime? That is trivial to achieve. Even a reboot per month is nothing: 99.9% uptime is enough for 99.9% of projects, and a monthly reboot doesn't come anywhere close to blowing that budget.
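The downtime arithmetic behind that claim is quick to check (the 3-minute reboot duration is an assumed illustrative figure):

```python
# Monthly downtime budget for a given availability target,
# using a 30-day month for round numbers.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200

def allowed_downtime_minutes(availability: float) -> float:
    """Minutes of downtime per month permitted by an availability target."""
    return MINUTES_PER_MONTH * (1 - availability)

budget = allowed_downtime_minutes(0.999)  # 99.9% target
reboot = 3  # assumed: one reboot costs ~3 minutes of downtime
print(f"99.9% allows {budget:.1f} min/month; one reboot uses ~{reboot} min")
```

So a single monthly reboot consumes well under a tenth of a 99.9% budget, which is the point being made.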
I am not affiliated with CapRover and haven't tried it (for the particular reason that I'm building my own to suit my tastes), but I would bet any competent system administrator could hit 99.9% uptime after the second month of production without particular effort (unless they don't know the underlying technologies, i.e. "what's a container", "what's HTTP", "what's a namespace", "what's iptables" ...).
Yes, hardware as a service will always be much more expensive than hardware you own. But it may be less expensive than the team you will require to run that hardware at an acceptable service level. It very likely will be less expensive than the opportunity cost of running your own hardware.
As an example of the latter bit, if you are running your own hardware and need to add another host and you do not have a spare lying around, then you need to order one. It has to be shipped. Someone has to unpack it. Someone has to make sure that the data centre has sufficient power. Someone has to install it, its power and its network cables. Each of these steps takes time, but also each step is an opportunity for friction.
By contrast, with a service, you would just add a new host. Five minutes later you are up and running. That gives you an operational nimbleness that you wouldn't otherwise have had.
I love how there's this myth that servers and services just blow up every 10 minutes 24/7 and unless you have a legion of ops personnel you're going to get hours of downtime each year.
Servers, for the most part, just work. In climate-controlled DC environments, hardware failures are exceedingly rare. Apart from hard drives, most hardware will happily tick along for a decade, if not longer.
Sane production-grade OSes (read: not Ubuntu) will also happily run for literal years with zero human intervention. For obvious reasons, it's a bad idea to not patch your systems, but things will continue to "just work" pretty much forever unless you're running really shitty code.
For renting vs buying servers, there are upsides and downsides. Buying gear is far cheaper if you plan to be around for more than a year, but renting dedicated servers gives you a lot more flexibility -- to provision a new server, you hit a button in the provider's online panel, wait 15 minutes, then let your deployment strategy take care of the rest.
I find it almost mind-boggling that AWS and friends have convinced people that it's normal to spend ridiculous amounts of money for fairly "meh" service specs in what's essentially VMs.
The points you make are fine, but I think the pain scales linearly with the number of servers you manage: with N servers you're N times more likely to see something happen that takes one down, so it just happens more frequently. At some point that becomes often enough that you don't want to deal with it anymore.
I don't think you understand the sheer scale you need to be experiencing a failure more often than once a month. By my anecdotal experience you'd need at least 1k servers for that to happen... and if your company is big enough for $2MM of capex on servers alone, you can handle $100 of remote hands and 30 minutes of engineer time.
Not to mention that at that scale you have plenty of redundancy and, if your ops team knows what they're doing, automagic failover / HA. Anything that happens can easily "wait till Monday", no need for 24/7 anything.
That example is not realistic. You rent dedicated servers from a provider that always has extra hardware at hand and handles all of those steps; you aren't buying hardware yourself and running it in your basement :)
Or you rent managed servers or colo space from one of the many hosting providers that also offers cloud services, and pick and choose. That lets you migrate your base load to colo or managed servers over time, while you still have the nimbleness of being able to scale up and down dynamically if you want or need to.
And my experience from providing devops services to clients on a contract basis is that the clients who use cloud services tend to need more, not less, devops assistance.
Certainly hardly anyone should be physically managing their servers. The relevant comparison is between getting 1GB RAM in the form of a $50/month Heroku dyno and getting it with a $2/month VPS (actually with Hetzner that will get you 2GB, they don't go below 1GB).
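Put as a price-per-GB-of-RAM comparison (using the comment's own example prices, which may be out of date):

```python
# $/GB-RAM/month for the two offers mentioned above.
# Prices and RAM sizes are the commenter's figures, not current quotes.
offers = {
    "Heroku dyno": {"price": 50.0, "ram_gb": 1},
    "Hetzner VPS": {"price": 2.0, "ram_gb": 2},
}

for name, o in offers.items():
    per_gb = o["price"] / o["ram_gb"]
    print(f"{name}: ${per_gb:.2f} per GB RAM per month")
```

That's a 50x difference per GB, which is the gap the "managed vs unmanaged" premium has to justify.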
Neither you nor the parent is wrong, but I'd argue that you don't really see "IaaS" or "PaaS" used all that much for on-prem platforms these days. (And the definition of PaaS was always a bit fuzzy -- something like an abstraction sitting between IaaS and SaaS.)
You're probably more likely to see OpenStack called a private cloud or on-prem cloud than "IaaS" these days. And OpenShift is usually called a Container Platform rather than a PaaS.
The definitions have always been pretty clear to me, but all right then, thanks for the heads up, I guess CapRover people and I are also what we call "old school devops" these days.
"Container platform" seems pretty vague to me; "PaaS" tells me something right away.
I mean, k8s is a container platform too, isn't it? But you'll need to build what we used to call a PaaS on top of it yourself (or use something like Kelproject, OpenShift ...)
Yeah, the terminology isn't always super-clear. Yes, k8s is a container platform. OpenShift, depending upon how you use it, can span a range from being an integrated k8s distribution to something a lot more like what was commonly called a PaaS with developer tools, CI/CD pipeline, registry, etc.
PaaS isn't a verboten term or anything like that. But it turns some people off because it was most associated with services/products/projects that mostly focused on a simplified developer experience at the cost of flexibility.
Well, for me a PaaS is software built upon bricks like an image registry (also present in IaaS), an authentication registry (also present in IaaS), and developer tools, e.g. for logging into a system (also present in IaaS). But with IaaS you get an infrastructure of bare virtual systems emulating a physical world, while with PaaS you get deployments of code. A PaaS runs on an IaaS, but it can also run on bare metal; to the PaaS it generally doesn't matter. With a PaaS you don't need to define bare-system provisioning, the PaaS does it for you; many IaaS teams ended up implementing their own PaaS one way or another, back in the days you're referring to, I guess.
k8s for me is a framework; OpenShift, Rancher, and Kelproject would be "distributions" of k8s, just like the Linux kernel and the distributions that include it.
As a person who writes technical requirements and implementation documents, what "SaaS" signals to me, when I'm asked to document one, is that there will be paid accounts and billing.
Maybe CapRover will provide paid accounts on managed servers in which case they would be creating a SaaS with their PaaS solution.
But again, I'm not talking about the definitions from a "managerial" perspective, rather from a technical one. I suppose at this stage CapRover is trying to attract technical users rather than managerial ones (unless they have something to sell for cash, but I didn't see it on their site, or I just missed it).
”Service” to me is just ”delivery of something with a specific scope and a defined contract”, not so much about who delivers said ”service”.
Many IT depts would do themselves a massive favor to deliver actual services instead of “just infra and some stuff thrown on top” and call it service delivery.
Tools like the one in this link can help, but a big part of it is simply automation and delegation/self-provisioning.
You can't just decide that words mean different things to "you".
Platform as a Service, or anything "as a Service", means someone else provides it as a service (i.e., a subscription). The Platform part is all this is offering. So it is not a Platform as a Service.
The distinction you are drawing sounds to me like the difference between "managed" and "unmanaged."
My read on whether something is a service or not is, can I make a request of the thing in simple terms, and have the thing carry out all the messy details on my behalf?