Yes. That very example shows why it is hard to write a fast program in Ruby. It's not even the interpreter (which is slow); it's that Ruby is pure magic.
I was never able to understand what was happening when I looked at a particular snippet of Ruby code, unless I was the one who wrote it. With Go, understanding other people's code is a trivial task.
If we're still talking about Ruby here: List could be _anything_. It might be a list of numbers, a dataset on a remote server, a web page parsed into a list of sentences, or a list of people in Active Directory.
You literally can't know what's going on under the hood without popping it open and looking for yourself.
Ruby's infatuation with clever magic code is what turned me off it years ago. You can get up to 80% in 20% of the time compared to other languages, but then you spend the 80% of time you have left fighting against the magic to get the last 20% done properly. There's way too much stuff You Just Need To Know.
I don't think that example tells the whole story. You have to zoom out and consider what codebases look like when this trade-off is made repeatedly by devs. When "easiness" or DRYness becomes king you get ravioli code, and it becomes unintelligible. Keep it simple.
As much as I don't personally enjoy writing Go, this is a huge benefit. There's a decent amount of conformity around the "right way" of doing things. I'll admit, it has definitely made my approach to solving problems in other languages much simpler. I'm now back to writing loops in JavaScript when I use it. It really bugs the functional programming folks, but no one can deny that it's readable, and it's pretty fast.
CI systems (or the surrounding community) typically provide a mechanism for finding the "last successful commit". From that, you can diff the changes and detect which projects need to be built. If you have a lot of interconnected project dependencies in a monorepo, this logic might be more challenging to write, but it's doable.
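A rough sketch of that selection logic, with a made-up dependency map (project names and the path convention are assumptions; real CI would feed in the output of `git diff --name-only <last-good>..HEAD`):

```python
# project -> projects that depend on it (all names are made up)
DEPENDENTS = {
    "libcore": {"api", "worker"},
    "api": {"frontend"},
    "worker": set(),
    "frontend": set(),
}

def project_of(path):
    # Convention assumed here: first path component is the project name.
    return path.split("/", 1)[0]

def projects_to_build(changed_paths):
    """Return the changed projects plus everything downstream of them."""
    todo = {project_of(p) for p in changed_paths}
    result = set()
    while todo:
        proj = todo.pop()
        if proj in result:
            continue
        result.add(proj)
        todo |= DEPENDENTS.get(proj, set())
    return result

print(sorted(projects_to_build(["libcore/alloc.go", "frontend/app.ts"])))
# libcore changed, so api, worker and frontend are rebuilt transitively:
# ['api', 'frontend', 'libcore', 'worker']
```

The interconnected-dependency part is just the transitive closure over the reverse dependency graph; the hard part in practice is keeping that map accurate.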
Essence: "A single associate assigned to a specific production line working between the production dates specified (12/14/2020 – 12/21/2020) was found to be using an improper torque wrench technique".
The counterpoint is valid (rising seas don't help), but then we have Saint Petersburg ([1]), which is literally built on top of swamps and occupies multiple islands. It has a first-class subway ([2]). It's a little deeper than regular subways, but it exists and works really well.
Forget the tunnels; sealing those is commonplace. What really matters is the station entrances. St. Petersburg's station entrances were built well above sea level, and by "sea level" I mean the level the builders expected for the foreseeable future.
What are the limits on the number of layers? 176 seems like a large enough number to believe that a few thousand are a possibility. Is the tech anywhere close to a bottleneck, or will they keep growing the layer count?
The biggest limiting factor is the ability to etch deep holes with a high aspect ratio. Shrinking individual memory cells along any axis is of limited use (fewer electrons means less durability), so you can't make layers much thinner. Making holes wider can get you more layers, but isn't really a net win for total density. So far, nobody has started stacking more than two decks of layers, so I don't know if we have good data on how cost and yield scale when going all-out on string stacking to reach extreme layer counts. As an alternative, there's R&D into using wider holes, and then splitting the vertical channels in half, giving you two semicircular memory cells per layer rather than one circular cell.
There's also a need to keep shrinking the peripheral circuitry as the number of memory cells stacked above each mm^2 of buffers and charge pumps keeps growing.
I've been reporting on it for the past 5 years, including a recent interview with one of Micron's lead engineers about their 176L NAND, which I haven't written up and published yet. https://www.anandtech.com/Author/182
Technically part-time, paid as a freelancer. Only two or three of the senior editors are on salary. It's not a lucrative line of work, but most of the time it's pretty fun.
Oh wow. The quality is absolutely amazing though. Thank you. While there is less content to write purely on DRAM, NAND and storage, some of that reporting still takes a long time to investigate and write. I never thought it was freelance.
One limiting factor as the string gets too long is that resistance becomes too large. This is the same reason the 2D NAND string is at 128. Maybe certain material breakthroughs will change that.
vDSO is a good way to mitigate issues like these. It's also a more stable ABI than libc, and easier to maintain than a pure kernel ABI (like Linux has), because nothing forbids the kernel from injecting different vDSOs into different binaries if the need arises.
vDSO is also a language independent construct, so there's no special treatment for any favorite language, be it C (OpenBSD), C++ (Windows) or Oberon OS (Oberon).
vDSO functions are still allowed to use arbitrary amounts of stack, though, which means Go might still have problems with them unless it allocates much larger stacks than it normally would.
That would be up to a specific OS to decide which guarantees on stack size it would like to provide. But I agree that it's a valid concern, and not all possible vDSO implementations are reasonable.
I do not buy it. Just use gemini:// rather than https:// to refer to that specific subset of https + html. Maybe also add a header to the server response that says x-gemini: true or whatever.
> It's difficult or even impossible to deactivate support for all the unwanted features in mainstream browsers
Just don't use mainstream browsers then. Make your own, like you do for Gemini. They address it a bit later with:
> Writing a dumbed down web browser which gracefully ignores all the unwanted features is much harder than writing a Gemini client from scratch
And I will disagree on this part: HTTP 1.0 is actually easier to implement than the Gemini protocol.
> Even if you did it, you'd have a very difficult time discovering the minuscule fraction of websites it could render.
Except Gemini browsers render even fewer websites right now.
It seems to me that the gemini developers can't really think outside the box. This is further proven by their dependence on things like TCP, TLS, and DNS.
gemini isn't trying to reinvent the web, to make it P2P or serverless or whatever; it's just reusing the ideas of gopher and the web but going in another direction. It doesn't try to replace the web. From this point of view it makes total sense to reuse TCP, TLS and DNS, because they're not trying to replace them.
> Maybe also add a header to the server response that says x-gemini: true or whatever.
That would require adding headers, which means extension is possible, and that's explicitly something Gemini doesn't want. I believe it makes sense given the goals Gemini wants to achieve.
> And I will disagree on this part, http 1.0 is actually easier to implement than the gemini protocol.
HTTP 1.0 still has multiple headers and multiple verbs; Gemini is actually closer to HTTP 0.9. Yes, you can say "don't use those", but at some point it's good to refresh the spec and see what is and isn't needed. Moreover, HTTP is just HTTP, while Gemini is transfer + encryption + client identification (through client certificates); the latter is still the wild west in the HTTP world, with no clear set of "best practices" in this domain.
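To give a sense of how little there is on the wire, here's a minimal sketch of the client side of a Gemini exchange in Python. TLS socket handling is omitted; this only covers the request line and the single response header line, per my reading of the spec:

```python
def build_request(url):
    # A Gemini request is just the absolute URL followed by CRLF.
    return (url + "\r\n").encode("utf-8")

def parse_response_header(line):
    """Parse the single '<STATUS> <META>\\r\\n' response line into (status, meta)."""
    status, _, meta = line.rstrip("\r\n").partition(" ")
    return int(status), meta

print(build_request("gemini://example.org/"))
# b'gemini://example.org/\r\n'
print(parse_response_header("20 text/gemini\r\n"))
# (20, 'text/gemini')
```

That one header line is the entire response metadata; after it, the body (if any) follows until the server closes the connection. There's nothing else to negotiate.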
> 1.4 Do you really think you can replace the web?
> Not for a minute! Nor does anybody involved with Gemini want to destroy Gopherspace. Gemini is not intended to replace either Gopher or the web, but to co-exist peacefully alongside them as one more option which people can freely choose to use if it suits them. In the same way that many people currently serve the same content via gopher and the web, people will be able to "bihost" or "trihost" content on whichever combination of protocols they think offer the best match to their technical, philosophical and aesthetic requirements and those of their intended audience.
For the other point you seem to forget that gemini doesn't exist on technical grounds but on philosophical grounds: it wants to create a new space with its own rules, even though the technicalities are close to something that already exist. People have written blogs (called gemlogs in gemini), and they "hacked" the format to build an informal replacement to Atom. The constraints of the medium created the requirements and the result is a simple, human-readable and human-editable document that can replace Atom in most cases: https://proxy.flounder.online/gemini.circumlunar.space/docs/.... It follows the philosophy of making this new space more human-centered.
By "This is what they claim, yes." I meant that they claim that they are not trying to replace the web, not that they claim that they try to replace it.
Every Gemini post on HN has an inevitable post like this.
Gemini developers want to keep things inside of a box that can't be bulldozed by corporate interests. I love it. I just wish Gemini browsers would optionally inline media and video links, but I'd be surprised if that doesn't happen before too long.
What I meant I guess was displaying images and video inline on a graphical browser instead of on a new page or browser instance. Looks like LaGrange already does that at least for images.
Yeah, I really like the links being on distinct lines. I don't know why, but it's refreshing. I think it also encourages a document writer to actually provide details about the link and definitely gives the user agent an opportunity to display the destination and be clear about what is happening.
> The problem is that deciding upon a strictly limited subset of HTTP and HTML, slapping a label on it and calling it a day would do almost nothing to create a clearly demarcated space where people can go to consume only that kind of content in only that kind of way. It's impossible to know in advance whether what's on the other side of a https:// URL will be within the subset or outside it.
Not that I can't see the appeal for the Gemini devs of starting from scratch and rolling their own protocol rather than starting with a browser and subtracting the unwanted parts, but that justification is technically, practically, and generally very weak.
Technically, because you can very well limit HTML to a subset of markup elements (for example, excluding <script> elements or, likewise, onclick and similar attributes accepting script). The whole point of SGML, on which HTML is based, is to define markup languages, and also to derive restricted languages from general ones. The problem here is rather that HTML on its own isn't expressive enough for the interactive things we've come to expect (menus, table-of-contents summaries and other navs, in-page search dialogs, and other interactive features that are not quite webapps), yet is also too powerful, with JS inserted everywhere, for reader-mode apps and screen readers to non-heuristically comprehend content in the general case.
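Enforcing such a subset mechanically is not hard. A minimal sketch with Python's stdlib HTML parser, where the whitelist is made up purely for illustration (it's not any published "safe HTML" profile):

```python
from html.parser import HTMLParser

# Hypothetical whitelist of allowed elements.
ALLOWED = {"p", "a", "h1", "h2", "ul", "li", "em", "strong", "pre", "code"}

class SubsetChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag not in ALLOWED:
            self.violations.append(f"disallowed tag <{tag}>")
        for name, _ in attrs:
            # Reject event-handler attributes like onclick, onload, ...
            if name.startswith("on"):
                self.violations.append(f"script attribute {name} on <{tag}>")

def check(html):
    checker = SubsetChecker()
    checker.feed(html)
    return checker.violations

print(check('<p onclick="evil()">hi</p><script>evil()</script>'))
# ['script attribute onclick on <p>', 'disallowed tag <script>']
```

A validator like this could sit in a server, a proxy, or a "dumb" browser and refuse anything outside the subset.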
Practically, because syntax checkers (SGML or otherwise) and NoScript exist and have for a long time. It would also be cool if search engines could finally come around and penalize or at least flag content relying heavily on script and/or invasive tracking for ads. One way to make this happen is to introduce application/html as opposed to text/html media types.
Generally, because HTML and other markup vocabularies have been developed using public money for, well, publishing hypertext in academia, and it's odd to marginalize that original use case just because of the desires of ad companies.
The argument isn't technical or practical, it's psychological.
It acknowledges the fact that if you give humans a different/unfamiliar protocol they will be naturally inclined to treat its content differently, which is the entire point.
I've entertained an idea to build a set of tools for contemporary development (version control, code review, CI, etc) which would only speak Gemini. That would be both the UI and the API.
Blockers came fast:
* no way to upload anything that exceeds 1024 bytes,
* escaping is subtly broken, so it would be impossible to review code written in Python or Markdown (triple quotes),
* no text editing capabilities, pretty much.
Which is fine: Gemini is great because it's restricted. It's nearly impossible to abuse. It will find its users, even if it would not be me.
The one thing that puts me off the most is the limit of two levels of headings.
(There are three, but you need one for the page title, and it would be awkward to reuse the "title" heading level for "section" headings. Even if you got over this awkwardness, three levels is still annoyingly constraining.)
and go from there. The format is meant to be human-understandable, not machine-readable.
But your point is a good one, and it made me wonder how the documentation of Gemini itself handles the issue. And behold, the Gemini FAQ [0] itself marks its first top-level heading as
## 1. Overview
and the second one as
# 2. Protocol design
That's really disturbing once you notice, but I'd wager that hundreds of people have read the page and not noticed.
>IMHO you could just use [...] and go from there. The format is meant to be human-understandable, not machine-readable.
I disagree that it's not meant to be machine-readable. Accessibility clients rely on heading levels to infer sections and subsections. HTML has `<section>`, but a Gemini client only has heading levels to go by.
But anyway, that's why I said it'd be "awkward", not "impossible", to reuse the title heading level for section headings. The rich-text Gemini client as well as the human user reading the raw file have to special-case that the first `#` is the page title and not equivalent to other `#` in the file.
And like I said, even once you get over that awkwardness, three levels is still very limiting. I'm looking at a markdown documentation file right now that has five levels of headings. That's with one unique level for the page title, so even if it were to reuse the title heading level for the section headings it would still require four levels of headings.
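A rough sketch of what an accessibility-minded client has to work with when it builds an outline from gemtext headings (the capping behavior is my assumption about how a client might handle over-deep headings, not something from the spec):

```python
def outline(gemtext):
    """Extract (level, title) pairs from gemtext heading lines.
    Gemtext only defines #, ## and ###, so deeper headings get capped."""
    items = []
    for line in gemtext.splitlines():
        if line.startswith("#"):
            hashes = len(line) - len(line.lstrip("#"))
            items.append((min(hashes, 3), line.lstrip("# ").strip()))
    return items

doc = "# Title\n## Section\n### Subsection\n#### Too deep"
print(outline(doc))
# [(1, 'Title'), (2, 'Section'), (3, 'Subsection'), (3, 'Too deep')]
```

With `#` spent on the page title, that leaves exactly the two usable section levels complained about above.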
>And behold, the Gemini FAQ [0] itself has [...]
Ha, I didn't know about that one. I had noticed in the past that the homepage [1] uses a unique level for the title, but the specs page [2] does not because it can't afford to.
I also noticed a few cases of people trying to use Markdown emphasis in the docs. If you can't even write the platform docs without having to work around your markup system, maybe don't go on about how the spec will never need updating.
I would argue that being limited to two levels is a feature.
Here's a related quote[0] from Edward Tufte.
"Dr Spock's Baby Care is a best-selling owner's manual for the most complicated 'product' imaginable -- and it only has two levels of headings. You people have 8 levels of hierarchy and I haven't even stopped counting yet. No wonder you think it's complicated."
True, text/gemini is not really suitable for UIs. I did write a wiki engine where editing is done with sed commands and all responses are text/gemini. It works, but people used to modern web apps are not going to love the UX.
You can serve other mimetypes over Gemini (the protocol). That's useful for some use cases (e.g. ansi.hrtk.in serves modem-download-emulated versions of ANSI art; it requires a streaming-capable client).
But all in all, Gemini tries hard to not be an application platform. These exercises in stretching the limits are fun and IIRC have also guided the development of the spec. But the focus of the project is on text-based content.
> Of course, it is possible to serve Markdown over Gemini. The inclusion of a text/markdown Media type in the response header will allow more advanced clients to support it.
So not as the default markup language, but available.