> a framework for applying code changes across hundreds or thousands of repositories at once
Statements like this raise fair questions. Is there code duplication across 1,000s of repos, and why respond by increasing surface area further with bespoke tooling?
Imagine you initialized 10,000 NPM repos identically and simultaneously. Then had 100 different teams each take 100 of those repos for 10 different projects, and let each repo run for 1,000 commits. How distinct would each of those repos be? How might they have evolved independently? What interesting patterns might each team adopt to improve the development experience or detect bugs? Which packages, at which versions, might be most popular?
Now imagine you had the tools to do a diff across all those repos simultaneously, and to classify, group, and review those patterns. What could you learn about NPM teams and practices?
Now imagine you could pick the best of breed and propagate those patterns back to all the other projects automatically to improve their productivity, security, etc. How fast would your productivity improve, and your engineering culture change, if everyone could automatically learn the best of what everyone else had to offer?
Companies like Spotify have sophisticated tooling for detecting repo changes and enforcing policy like that, and they run that experiment 1,000 times a day. Small evolutions in what was once an identical build script, like a version bump, are detected, and if a change passes a threshold it can be rolled out everywhere else immediately.
Having all these copies that you can periodically sync up centrally puts natural selection to work on your internal best practices.
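To make the detection half concrete, here's a rough sketch, assuming one clone per repo under a common root (the paths and layout are invented): hash a shared file per repo and group repos by digest, so you can see which variants of a "formerly identical" script exist and how popular each one is.

```typescript
// Hypothetical sketch: find drift in a once-identical build script
// across many checked-out repos by grouping repos by file hash.
import { createHash } from "node:crypto";
import { readFileSync, readdirSync } from "node:fs";
import { join } from "node:path";

const root = "/checkouts"; // one directory per cloned repo (assumed layout)
const variants = new Map<string, string[]>();

for (const repo of readdirSync(root)) {
  try {
    const script = readFileSync(join(root, repo, "scripts/build.sh"));
    const digest = createHash("sha256").update(script).digest("hex");
    const group = variants.get(digest) ?? [];
    group.push(repo);
    variants.set(digest, group);
  } catch {
    // repo has no build script; skip it
  }
}

// Most popular variants first: candidates for "best of breed".
const ranked = [...variants.entries()].sort((a, b) => b[1].length - a[1].length);
for (const [digest, repos] of ranked) {
  console.log(`${digest.slice(0, 12)}  ${repos.length} repos`);
}
```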
Basically, things work differently at scale. When the number of developers you employ approaches a meaningful percentage of the total number of developers globally, your internal diversity starts to mirror the global diversity. So you have to manage that diversity. If you freeze policy entirely, you fall behind the global average. If you let things run wild, your company fractures technologically.
So, make 1,000 copies, see what pops up, adopt and enforce the things that look good, then do it again. Evolve to the next best place you can be from where you are.
Check out ThePrimeagen's 99 prompts. The idea, as I understand it, is that you scope an agent to implementing a single function at a time with firm guardrails. So something in between YOLO agents and rudimentary tab complete.
The last time I used a visitor was probably last week, when I wrote a lint rule. Visiting every node in a tree (AST or otherwise) with a lambda is the visitor pattern, regardless of what you call it. Tools like ESLint still literally use the visitor pattern. I would point to software engineers dismissing tried-and-true ideas as the better generalization.
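An ESLint rule is literally written as a visitor: the object returned from create() maps AST node types to callbacks, and ESLint invokes each callback as it walks the tree. A minimal sketch (the rule itself is a made-up example):

```typescript
// A minimal ESLint rule demonstrating the visitor pattern:
// each key in the returned object names an AST node type, and
// ESLint calls the handler for every matching node it visits.
import type { Rule } from "eslint";

const noConsoleLog: Rule.RuleModule = {
  meta: {
    type: "suggestion",
    docs: { description: "disallow console.log calls" },
  },
  create(context) {
    return {
      // Visited once per CallExpression node in the AST.
      CallExpression(node) {
        const callee = node.callee;
        if (
          callee.type === "MemberExpression" &&
          callee.object.type === "Identifier" &&
          callee.object.name === "console" &&
          callee.property.type === "Identifier" &&
          callee.property.name === "log"
        ) {
          context.report({ node, message: "Unexpected console.log" });
        }
      },
    };
  },
};

export default noConsoleLog;
```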
You use a statically typed language for guardrails, but then you throw out the guardrails of a database schema? Seems like those two decisions are directly at odds.
Without a DB schema, you still have to worry about migrating data, at runtime or otherwise. Removing the schema just shifts the pain rather than removing it, in my experience.
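A tiny illustration of that shifted pain, with invented field names: with no schema, old and new document shapes coexist in the collection, so every read has to migrate lazily.

```typescript
// Hypothetical lazy migration: two document shapes live side by side,
// and reads normalize old documents to the current shape on the fly.
type UserV1 = { name: string };                        // original shape
type UserV2 = { firstName: string; lastName: string }; // current shape

function toV2(doc: UserV1 | UserV2): UserV2 {
  if ("firstName" in doc) return doc; // already the new shape
  const [firstName = "", ...rest] = doc.name.split(" ");
  return { firstName, lastName: rest.join(" ") };
}
```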
MongoDB actually has built-in schema validation which can be enabled; we're just not using it at the moment because we haven't yet found a good use case where the TypeScript schema itself is not enough.
The schema in Modelence is defined in your code, and at the moment the only case where that's not enough is when someone directly modifies the database data externally. We're not against having the native MongoDB schema as an extra layer of enforcement; the only reason we haven't added it yet is that it requires extra work to carefully keep the two in sync. I believe at some point we'll add it as an extra layer to prevent data corruption by direct modifications.
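For reference, here's roughly what enabling MongoDB's native validation looks like with the official Node.js driver; the collection and fields are hypothetical, not Modelence's actual schema.

```typescript
// Sketch: enable MongoDB's built-in $jsonSchema validation so that
// writes (including direct external ones) that violate the schema
// are rejected by the database itself.
import { MongoClient } from "mongodb";

async function main() {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();
  const db = client.db("app");

  await db.createCollection("users", {
    validator: {
      $jsonSchema: {
        bsonType: "object",
        required: ["email", "createdAt"],
        properties: {
          email: { bsonType: "string" },
          createdAt: { bsonType: "date" },
        },
      },
    },
  });

  await client.close();
}

main().catch(console.error);
```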
Not endorsing conspiracy theories, but one of the YC guys is an investor in Flock, which is positioned to benefit from some of the recent political policies.
> First let’s talk about my credentials and qualifications for this post. My next-door neighbor Marv has a fat squirrel that runs up to his sliding-glass door every morning, waiting to be fed.
Some of the writing here feels a little incoherent. The article presents exponential progress as a matter of fact, but we will be lucky to maintain linear progress, or even to avoid regressing.