We hear you… There is always a trade-off between analytics and user privacy. We believe open-source solutions are the way to go in this space, but no single implementation is going to suit every use case.
MCP supports authentication via OAuth2, which is what we use here. For the most part, this means that a browser window is opened and the user can sign in with their GitHub or Google account. The access token is verified by us and passed to the upstream MCP server.
We'd love to allow orgs to bring their own IdP but there is some refactoring we still have to do for this.
It is indeed amazing. Like, the death of Cleopatra in 30 BC is closer to our present day than it is to the building of the Great Pyramid of Khufu (~2600 BC).
What's even more interesting is that even the Great Pyramid itself wasn't the beginning of it all: it was built generations after the first pyramids, like Djoser's step pyramid, and Egyptian civilization was already centuries old by then.
And we know so much because of the unusual culture of carving one's achievements onto one's tomb. In fact, serious people of the Nile cared more about the afterlife than about life itself.
Oh yes, sorry, I meant to write 3 * O(n), which, though it doesn't change the order, is still three times the operations. The example I was remembering was doing filters 'inside' maps.
So... O(n)? Leaving aside the fact that "3 * O(n)" is nonsensical and not defined, recall that f(n) is O(g(n)) if there exists some real c such that f(n) is bounded above by c·g(n) for all sufficiently large n. Maybe you mean g(n) = 3n, in which case any f that is O(3n) is really just O(n): we have some c such that f(n) ≤ c(3n), and so with d = 3c we have f(n) ≤ dn.
It's not the lower-order terms or constant factors we care about, but the relative rate of growth in space or time usage between algorithms, for example linear vs. logarithmic complexity, where the difference in the highest-order term dominates any lower-order terms or constant differences.
What annoys me greatly is people imprecisely using language, terminology, and other constructs with very clearly defined meanings, without realizing the semantic implications of their sloppily arranged ideas, while still thinking they've done the "smarter" thing by throwing out some big-O notation. Asymptotic analysis and big-O notation are about comparing relative rates of growth at the extremes. If you're talking about operation counts, CPU, or wall-clock time, use those measures instead; but then you would actually need an empirical measurement of emitted instruction count or CPU usage to prove that there is indeed a threefold increase of something, since you can't easily reason a priori about compiler output, process scheduling decisions, or current CPU load.
I do understand 3 * O(n) is just O(n), thanks. I was just clarifying my initial typo. However, it's still three/four times the iterations - and that matters in performance-critical code. One is terminology; the other is a practical difference in code execution time, which matters more and thus needs to be understood better. You might not 'care about constant factors', but they do actually affect performance :).
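To make the constant-factor point concrete, here's a minimal sketch (invented data and transforms, not from any particular codebase). Both versions are O(n), but the chained one walks the sequence three times while the fused loop walks it once:

```python
# Hypothetical pipeline: three chained passes vs. one fused pass.
data = list(range(10))

# Three passes: each comprehension is its own loop over the sequence.
step1 = [x + 1 for x in data]
step2 = [x * 2 for x in step1]
chained = [x for x in step2 if x % 3 == 0]

# One pass: the same work fused into a single loop.
fused = []
for x in data:
    y = (x + 1) * 2
    if y % 3 == 0:
        fused.append(y)

assert chained == fused  # same result, different number of traversals
```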
> Sorry but this kind of theoretical reasoning wouldn't move a needle if I'm reviewing your PR.
If this were a PR review situation I would ask for a callgrind profile, timings, or some other measurement of performance. You don't know how your code will be optimized by the compiler, or where the hotspots even are, without taking a measurement. Theoretical arguments, especially ones based on handwavy applications of big-O, aren't sufficient for optimization, which is ultimately an empirical activity: it's hard to gauge the performance of a piece of code through mere inspection, so actual empirical measurements are required.
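For what it's worth, this is the kind of measurement I mean: a quick timeit sketch over a hypothetical workload (the functions and sizes are invented for illustration, not from any real PR):

```python
import timeit

data = list(range(1000))

def three_passes():
    a = [x + 1 for x in data]
    b = [x * 2 for x in a]
    return [x for x in b if x % 3 == 0]

def one_pass():
    out = []
    for x in data:
        y = (x + 1) * 2
        if y % 3 == 0:
            out.append(y)
    return out

t3 = timeit.timeit(three_passes, number=1000)
t1 = timeit.timeit(one_pass, number=1000)
print(f"three passes: {t3:.3f}s  one pass: {t1:.3f}s")
# No expected winner stated on purpose: CPython comprehensions run their
# loops in C, so the "3x iterations" version can even come out ahead.
# That's exactly why you measure instead of arguing a priori.
```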
I recall looking at New Relic reports of slow transactions that suffered from stacked n+1 query problems because the ORM was obscuring what was actually going on beneath the hood at a lower level of abstraction (SQL).
My point is it's often difficult to just visually inspect a piece of code and know exactly what is happening. In the above case it was the instrumentation and empirical measurements of performance that flagged a problem, not some a priori theoretical analysis of what one thought was happening.
Check out Apache Iceberg. It's a table format layered over Parquet data in object storage, for both reads and writes. Not sure if DuckDB does Iceberg (I know ClickHouse does), but it's a similar principle: disaggregating storage from compute.
I'm kind of in the same boat (but with VSCode). Beyond that, I found it didn't make things much easier than something like MVC with built-in templates/HTML. The context integration seems like a huge footgun, since it just panics if you access a value that doesn't exist.
Nothing at all. Some people just prefer copyleft licenses and use hyperbole like "disrespectful" and "dangerous" to attack software with permissive licenses.