Thanks for sharing the NFT-as-access-token example. It's far and away the strongest example of a relatable use for NFTs that I've heard, and it's compelling both for the issuer (as a verifiable access token) and for the buyer (for immediate use, and because the token retains resale value after the content has been consumed).
I imagine that your friend's NFT contract stipulates he gets a cut of resales, but I'm also curious if/how your friend deals with potential bad actors who, say, scrape the entire content and then resell the token for most of what they paid, having already extracted the content it unlocks. Obviously the contract's resale cut could offset part of that trade, but it seems tricky nonetheless?
Edit: I suppose part of the premise is that there are only 500 passes and content continues to be produced, so if you want access to the current library after all passes have been sold, trading a pass is actually the key mechanism by which others gain access to the content.
They’re most common on Thursdays but increasingly appear on Wednesdays, too. Sunday is typically the “big” puzzle day and has a titled theme that usually provides a hint to at least some of the clues, or to the puzzle’s gimmick.
But also, from reading the Pages docs, I’d wager you can make the build command something like `echo done` and then set the build output directory in your configuration to `.`.
I’d love to see a version of this (or Netlify, Vercel, etc) that didn’t bundle CI with the preview and deploy steps so that I can pick my own CI and then effectively use the “CDN” as the highly qualified host of my static assets.
We built this as a 'control plane' for everything above your backend APIs. You can bring your own CI/CD and get the same 'infinite staging' and other features that help bring Jamstack principles to large, dynamic websites.
Me too. I hate that Vercel and Netlify limit my builds to Node v12 because that’s what Amazon supports. Node v14 is LTS now, but there’s still no telling when it will be a supported runtime. I refuse to live without optional chaining!
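For anyone unfamiliar with the feature being missed here, a quick sketch of optional chaining (the `User` shape is just illustrative):

```typescript
// Optional chaining (ES2020, available in Node 14): property access on a
// null/undefined value short-circuits to undefined instead of throwing.
interface User {
  profile?: { name: string };
  settings?: { theme: string };
}

const user: User = { profile: { name: "Ada" } };

console.log(user.profile?.name);   // "Ada"
console.log(user.settings?.theme); // undefined (no TypeError)
```

Without it you’re stuck writing `user.settings && user.settings.theme` everywhere, which is exactly the boilerplate Node v12 build images force on you.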
Workers Sites works differently from Cloudflare Pages, at least from the developer's side: you use their wrangler tool instead of handing a git repo to their CI/CD setup.
you can definitely do something similar with workers sites.
we recently added workers kv - the storage mechanism behind workers sites - to our free workers tier, so you can host your static sites on workers for free as well.
pages is an evolution of that with better tooling/dx for people who want to get a static site up on our network and want things like deploy previews and pre-configured github integration.
if you want to just have workers do the hosting and want to do all the CI stuff yourself, you can use something like wrangler-action[1] to simplify the process on github actions, or just install wrangler[2] (our CLI) as part of your CI workflow and do `wrangler publish` at the end.
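a sketch of what that last option might look like on github actions (workflow name, branch, build steps, and secret name here are all illustrative, not prescriptive):

```yaml
# hypothetical workflow: run your own CI steps, then publish with wrangler at the end
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci && npm test   # your own CI, whatever that looks like
      - run: |
          npm install -g @cloudflare/wrangler
          wrangler publish        # reads wrangler.toml; needs an API token
        env:
          CF_API_TOKEN: ${{ secrets.CF_API_TOKEN }}
```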
i wrote our github action, so if you decide to go that route, happy to help debug or look over the project to make sure it'll do what you want - i'm @signalnerve on twitter, DMs are open :)
Maybe? It’s more that I’d like to pay for the managed service of hosting my assets, registering branches for long lived access URLs, getting preview links, and promoting my site to its latest version.
Differently: why are Cloudflare and others interested in running my build?
I think the target market here appreciates the one-stop solution, given the big players here have always offered that.
I think alternative build integrations would be good, though; I do agree with that, full stop. There should be a way to hook into the system from an outside pipeline to notify it that a build is ready to be uploaded.
I’d bet it will be an enterprise-only feature at first when it does happen, though; we shall see.
Ira Glass is the creator and host of a long-running weekly radio show on National Public Radio called This American Life, which tells stories of Americana and other things. I suspect part of what makes the show famous is that it established not just a particular kind of show but a particular kind of sound, editing, and production that, I’d say, set the standard for modern audio programs.
Several of the show’s producers (and Ira’s mentees) have gone on to create other famous shows:
- Sarah Koenig created Serial, one of the first truly mainstream podcasts (1M listeners in its first week), which in its first season told the story of the murder of Hae Min Lee and investigated whether the man convicted, Adnan Syed, was likely innocent.
- Alex Blumberg created Planet Money, a short radio show that explores economics by looking at what’s happening, culturally, around the world. Later, Blumberg founded Gimlet, a podcast company that recently sold to Spotify for $100M, ostensibly to create content with that, now famous, NPR _sound_. Some of Gimlet’s bigger/more famous works include Startup (now also a TV show in the US), Homecoming (now also a show on Amazon Prime Video), Reply All, and Conviction.
Anyway, Ira Glass has hosted This American Life since the beginning, and Blumberg recently interviewed him for a show Gimlet produces called Without Fail. If you’re looking to understand his impact in this space, I think that’s a wonderful place to start.
*edited to correct where the Homecoming TV show airs.
The bit about his mentees and the 'NPR sound' helps a lot and does create that important context.
The first time I saw an interview with him, I did some light googling, and as far as I could tell he was some kind of host/writer. I'd wager most non-Americans haven't even heard of NPR and thus can't really understand the impact it had/has, much less his mentees and their various offshoots.
The deployment infra constraints mentioned in the parent post are generally managed by tooling we also publish[1]. The encoding ends up similar to Helm’s, but happens through this tooling simply by virtue of taking a dependency on a published jar, npm package, or conda package, which removes a lot of programmer error/guesswork/maintenance. (I think we’d be open to also emitting Helm’s formats in our packaging tools; if that’s interesting to someone reading this, feel free to open an issue on our repo and reference my post.)
Two big motivations for us:
(1) we had a large footprint of JAX-RS annotated Java services and a correspondingly large footprint of Typescript frontends communicating with those services, both hand-maintained;
(2) we wanted something that felt just as native and ergonomic in browsers as it did on the backend.
Migrating from that setup required first having a declarative API format to translate through, and Conjure was our answer (starting in 2016) to generate human-quality code that would drop-in-replace our hand-maintained, language-specific client and server definitions.
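As a rough illustration of what such a declarative definition looks like, here is a sketch in the spirit of Conjure’s YAML format (consult the Conjure docs for the exact schema; the package, type, and endpoint names here are invented). A single file like this is what the Java and TypeScript generators both consume:

```yaml
# illustrative Conjure-style API definition; names and exact keys are assumptions
types:
  definitions:
    default-package: com.example.recipes
    objects:
      Recipe:
        fields:
          name: string
          steps: list<string>
services:
  RecipeService:
    name: Recipe Service
    package: com.example.recipes
    base-path: /recipes
    endpoints:
      getRecipe:
        http: GET /{recipeName}
        args:
          recipeName: string
        returns: Recipe
```

The point is that the definition, not any one language’s hand-written interface, becomes the source of truth, so the generated Java server resource and TypeScript client can’t drift apart.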
That said, we're fans of gRPC/Protobuf and heavy Cassandra users (so also have a good breadth of experience with Thrift), and gave both serious looks before getting going, and again before deciding to open source our work today.
When we started on this tooling there really weren't great gRPC options for the browser, and the balance of our developer pain was around frontend/backend rather than backend/backend RPC. We also took a long look at Swagger/OpenAPI, but ultimately moved on because it focused more on the full coverage of any kind of HTTP/JSON API and as a result was too general to end up with consistent APIs across many services.
Over the last two years of development (and conversion of all our hand-maintained clients) we found that Conjure held a lot of value as an easy-to-adapt declarative definition format, and that it applied strong enough constraints to keep API development focused on semantics and behaviors rather than syntax or specification. We thought that had sufficient value that we should open it up to others.
Beyond that, we've got some work underway to use protobufs as the wire format, and enough flexibility built into the framework that we can use that or other non-JSON wire formats alongside JSON with the same client and server interfaces and code implementations.
Thanks for your answer. I think I initially misunderstood the intended purpose of this tool (probably because of how many times the word RPC appears in the text) and assumed it was used for backend-to-backend communication, for which HTTP/JSON seemed extremely sub-optimal given the existing open-source solutions.
While it does make a lot more sense in the context of FE<->BE, the second part of your answer, where you speak about consistency of APIs across many services is still a little confusing, as it suggests BE<->BE communication again. Unless you have a huge, monolithic, internet/browser-facing service (which sounds rather undesirable) it's hard to imagine how browser<->middleware API could get out of hand to the point where it requires a dedicated unification framework.
Rephrasing my question: even though Conjure is not a commercial product (respect to Palantir for contributing to open source), you must have had a target audience in mind for it: who is it? And what exactly is the problem that Conjure solves better than its existing alternatives?
Our systems look more wide than deep, and the breadth happens both in FE and in BE, so our diverse FE apps communicate with a relatively diverse set of BE services — we’d like those interactions to look unified no matter where you hit the system. In other words: we have a large number of microservices without a consolidating middleware.
Compared to other frameworks, Conjure more easily retrofits into HTTP/JSON service boundaries (no surprise based on its origins) and, IMO, provides great ergonomics for mixed FE/BE teams, especially where FEs are big and complex with lots of BE interaction.
In terms of audience, we’d hope this helps FE/BE teams with easy to use and ergonomic API defs, and think, as above, it’d especially help others who have existing surface area to convert to declarative APIs, even if only as a stepping stone.
On the BE/BE comms point: we use Conjure for all RPCs in our systems. While JSON is obviously not as compact as Protobuf or Thrift, we’ve found serialization and transmission are rarely bottlenecks, and that, instead, a unified format and common treatment for clients is a boon for operability and stability, and oftentimes also for aggregate system performance.
(I say this with the hope that some model researchers will read this message and make the models more capable!)