The benchmarks appear to show AWS winning by a considerable margin in all tests and coming in cheaper at every quoted price point, yet AWS barely edges out Azure in the cost-performance table. This appears to be because Azure's reserved pricing was used while AWS and GCP were left at on-demand pricing. That's misleading, since both GCP and AWS also offer discounts for yearly commitments.
Also, the RAM-to-vCPU ratios differ across vendors. E.g., in the blog post the AWS machines have 2GB/vCPU vs 4GB/vCPU for GCE. Which ratio matters does depend on the workload, though.
Not sure what OP is referring to, but Microsoft has multiple Visual Studio Code extensions (e.g. Remote SSH) that are closed-source and hard-coded not to work with any self-compiled versions of Visual Studio Code.
> We do not have an adequate history with our subscription or pricing models to accurately predict the long-term rate of customer subscription renewals or adoption, or the impact these renewals and adoption will have on our revenues or operating results.
For context, GitLab recently axed their lowest-priced plan and grandfathered existing users in at cheaper rates for the next year. Their retention rate may drop once the discounts run out and the new pricing kicks in.
As to the parent's comment about "The Homer" and non-core features being bad, I'd point to their CI autoscaling solution as an example of something underdeveloped, over-marketed, and suffering from technical debt. Their autoscaler uses Docker Machine behind the scenes, which hooks into various cloud providers to abstract away the act of spinning up new VMs. It works reasonably well, but Docker has archived the repository and no longer supports the software. GitLab forked the repository and maintains it for critical fixes, but is not willing to develop or accept new features. It has been known to break against new versions of Docker, does not handle concurrency very well in new environments, and does not allow [1] executing multiple concurrent jobs within the spun-up VMs, despite marketing that it can [2].
While the autoscaler does work, its limitations and quirks significantly reduce its utility and cost savings within smaller organizations. The technical debt leaves me doubting any improvements will come within the next few years as they try to architect a new solution to replace the existing one.
I have no idea how GitLab compares in other areas, but within CI autoscaling it seems they're stuck with a cliff to climb down before they can move forward again.
CI is moving to Kubernetes everywhere I know of. The built-in Kubernetes pod autoscaler can add capacity based on a job-queue-length metric, so there's no need for Docker Machine anymore.
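A minimal sketch of that setup, assuming a runner Deployment named `ci-runner` and an external `ci_pending_jobs` metric exposed through a metrics adapter such as prometheus-adapter (all names here are illustrative):

```yaml
# HorizontalPodAutoscaler scaling CI runner pods on queue depth.
# Assumes the external metric "ci_pending_jobs" is served by a
# metrics adapter (e.g. prometheus-adapter); names are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ci-runner
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ci-runner
  minReplicas: 1
  maxReplicas: 20
  metrics:
    - type: External
      external:
        metric:
          name: ci_pending_jobs
        target:
          type: AverageValue
          averageValue: "2"   # aim for ~2 queued jobs per runner pod
```

The HPA then grows and shrinks the runner pool as the queue metric moves, instead of a Docker Machine fleet spinning whole VMs up and down.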
To some extent, yes. On the other hand, they’re currently the only service where I can set up one VM, give it my AWS credentials, and have fully automated scaling.
The hoops you have to jump through in GitHub are absolutely unfun.
> Free is for a single developer, with the purchasing decision led by that same person
> Premium is for team(s) usage, with the purchasing decision led by one or more Directors
Doesn't this conflict with the stewardship promise that "The open source codebase will have all the features that are essential to running a large 'forge' with public and private repositories"?