true, but HTML-only websites are often pretty clunky
infuriatingly, if HTML had just a bit more oomph, we could make a lot better websites with it, but they haven't been moving HTML forward as a hypermedia for decades now (see https://htmx.org for what I mean, they could implement this concept in the browser in a week, and it would change web development dramatically)
but, still, there are some really obvious and simple things that could be done to make HTML much more compelling (maybe let's start by making PUT, PATCH and DELETE available in HTML!)
If browsers natively supported HTMX features (and possibly _hyperscript) that would be a total game changer.
Well, at least it's no trouble to include these two libraries and bring your clients into the 21st century of hypertext. But doing so without JS would be amazing.
As a predominantly (hobbyist) systems developer, the pervasiveness of NPM is a major detractor for website development.
I have done a few Angular and React apps to keep the skills sharp. The workflow imposed in exchange for the power is an interesting set of tradeoffs, never mind the magic and the sheer number of decisions you need to make to use both.
Add on top of that the rest of npm, nvm, css compilers, the various css toolkits for layout, and the rest and it just feels so complicated, even for the most basic page which needs to pull in some data over an api.
Well, I think a lot of people (developers at least) associate SPA-type web applications with smoother transitions, etc.
I agree with you to an extent: I've had some very bad experiences with javascript applications as well, but the general sentiment is (reasonably, in my opinion) that a well done SPA will feel smoother than a well done MPA
W/ htmx or similar libraries like unpoly or hotwire, you can close this gap. The big difference is the ability to update partial bits of HTML, rather than needing to do a full page refresh.
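For anyone who hasn't played with it, a minimal sketch of what that looks like (the URLs and ids here are made up):

  <button hx-get="/fragments/comments" hx-target="#comments" hx-swap="innerHTML">
    Load comments
  </button>
  <div id="comments"><!-- the server returns an HTML fragment that lands here --></div>

  <!-- and it back-fills the missing verbs, e.g. -->
  <button hx-delete="/posts/42" hx-target="#post-42" hx-swap="outerHTML">Delete</button>

The server renders partial HTML instead of JSON, and only the targeted element gets replaced instead of the whole page.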
Let's say you want something as simple as a table of contents. If you're doing plain HTML, you gotta render it yourself. In WordPress or static site generators like Jekyll or Hugo, a table of contents generated from H1 / H2 headers is basically just one line.
Let's say you want to sort all your posts by date and paginate the results (say, 20 results per page or something), as per typical blogging patterns. That's a lot of HTML cruft you gotta write by hand for that to happen. Meanwhile, WordPress / Jekyll / Hugo (etc. etc.) do this all automatically for you.
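If I remember the Hugo templating right, that contrast looks roughly like this (a sketch, not copy-paste-ready; layout file names and defaults vary):

  <!-- in a single-page layout: table of contents from the headings -->
  <nav>{{ .TableOfContents }}</nav>

  <!-- in a list layout: posts by date, 20 per page -->
  {{ range (.Paginate .Pages.ByDate.Reverse 20).Pages }}
    <h2><a href="{{ .Permalink }}">{{ .Title }}</a></h2>
  {{ end }}
  {{ template "_internal/pagination.html" . }}

Hand-rolling the equivalent in static HTML means regenerating the ToC and the page links every time you add a post.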
Yes it is implemented in Javascript. You don't have to write JS yourself in your application though. Not that I can't write Javascript, it's just that some things are a lot easier with hx directives.
HTMX can do a lot; it just requires not being dogmatic about letting the server do some work every now and then and perhaps sending a little more data down the wire.
If you are treating URLs as references to resources, you want the ability to CRUD those resources.
PUT and DELETE correspond with U and D, so they make sense to include. PATCH is a little less obviously useful (partial update vs. a full update w/ a PUT.)
Regardless of our feelings about them, they are there in HTTP, the HyperText Transfer Protocol. So maybe we should make them accessible through the HyperText we are transferring.
I promised myself to never-ever answer the "CRUD vs HTTP verbs" topic. I could not resist.
It is a huge shortcut to map CRUD operations one-to-one onto the HTTP verbs POST, GET, PUT, DELETE. Neither the HTTP RFCs (from RFC 1945 to RFC 7231) nor Roy Fielding's original thesis on REST (which defines REST as an architectural style) ever describes such a one-to-one relationship between CRUD and PUT-GET-POST-DELETE.
While I understand that it might be confusing without deep-diving into long lectures, the latest RFC explains clearly the purpose of each HTTP verb, including the difference between PUT and POST:
The fundamental difference between the POST and PUT methods is
highlighted by the different intent for the enclosed representation.
The target resource in a POST request is intended to handle the
enclosed representation according to the resource's own semantics,
whereas the enclosed representation in a PUT request is defined as
replacing the state of the target resource. Hence, the intent of PUT
is idempotent and visible to intermediaries, even though the exact
effect is only known by the origin server.
Therefore it would be perfectly valid to "create" a resource with either the PUT or the POST method, and just as valid to "update" another one with either of them. The RFC actually states this clearly with a few examples. For instance, in the POST definition:
Appending data to a resource's existing representation(s).
I used to define PUT and POST requests by their characteristics: the first one is idempotent, the second is not ... but could be.
Therefore, in my understanding, it is perfectly valid to perform an "upsert" (insert or update) on a resource which doesn't exist yet but for which you already know the URL; it is indeed idempotent. For instance:
PUT /resources/xyz
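Spelled out a little more (the payload here is just an illustration), that upsert might look like:

  PUT /resources/xyz HTTP/1.1
  Host: api.example.com
  Content-Type: application/json

  {"name": "xyz", "colour": "blue"}

Replaying the exact same request leaves the resource in the same state, which is the idempotence PUT promises; a POST to /resources would instead let the server choose the URL of whatever it creates, and carries no such guarantee.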
A last piece of evidence is the first line of the PUT section in RFC 7231:
The PUT method requests that the state of the target resource be
created or replaced with the state defined by the representation
enclosed in the request message payload.
I hope it helps to clarify a little bit the topic.
Do you have any references to documented, truly REST APIs? In your experience, what (pragmatic) shortcuts were needed to veer from (some definition of) "pure" REST?
IMO the Stripe API[^1] follows the REST principles and constraints well. Btw, if I remember correctly, they don't use PUT at all, even though they obviously allow users to update some resources.
They implemented an "Idempotency-Key" header that you could maybe call a "shortcut", although it's not really deviating from HTTP standards. I guess it was easier and more pragmatic for Stripe and its users to implement an "Idempotency-Key" header instead of duplicating each POST endpoint with PUT and PATCH methods, since they also allow partial updates. I guess (again) that they would also have had to use/implement additional headers (such as ETag or If-Match) to replicate the current "Idempotency-Key" behavior.
Disclaimer: This last paragraph is full of assumptions and I most probably miss a lot of internal details from Stripe API.
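To make the mechanism concrete, a sketch of the kind of request involved (endpoint, key and body here are illustrative, not lifted from Stripe's docs):

  POST /v1/payment_intents HTTP/1.1
  Host: api.stripe.com
  Authorization: Bearer sk_test_...
  Idempotency-Key: 3f1c9a52-demo-key
  Content-Type: application/x-www-form-urlencoded

  amount=2000&currency=usd

Retrying the same POST with the same key is meant to return the result of the first attempt rather than create a second charge, which gives POST the same safety-on-retry you'd otherwise get from an idempotent PUT.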
I really doubt the world is going to standardize over that specific style of implementation and, frankly, the semantic web is dead. Long live the bazaar...
And the number of projects that I have seen follow them is fewer than the number of projects that just seem to wing it. Which is my point... perhaps this standard is superficial and unnecessary, and maybe we should discard it.
IETF/whoever can write whatever they want, I only care about what I actually encounter. That's "standardization"
edit- All those nice looking checkboxes depend on the individual implementation. It's a nice concept but how many projects have actually implemented it to the conventions about idempotency/etc shown?
edit2- who is for-GET anyway? A random github link doesn't have much more worth than a random Medium link
"Some people claimed that websites without CSS and JavaScript are "bland". Who cares? If your content is readable and accessible without the noisy bells and whistles of loading animations and a fancy-pants design, then ship it.
Someone else said HTML-only websites are "ugly as hell." I disagree. They're beautiful."
Is the www an information system for hyperlinking and sharing information over an internet or is it something else. Is the www the software program (e.g., "browser") used to access it.
Imagine if people criticised books for the fonts they are printed in, how the paragraphs are formatted, whether they have images, and so on. Imagine if readers were forced to wear special reading glasses to see the text.
Generally, book reviews focus only on the textual content not the presentation. I wonder why.
Using a text-only browser (not Lynx, mind you), I can read content faster and easier, with less distraction, than I can using a graphical browser running Javascript provided by an advertising company. YMMV.
As it happens, I can discuss www content submitted to HN with folks who are using such graphical browsers. Yet I cannot see the same fonts, images or formatting, nor do I execute any Javascript. For example, I read the text of the OP website and I am commenting on it here, but I have no idea what it looks like in a popular, "modern" graphical browser running Javascript and controlled by an advertising company. How is this possible.
It seems to me there is a functional aspect of www content, i.e., information, that is independent of the software used to view it.
Web developers like to assume there are only a handful of software programs that can be used to view www content, and thus by manipulating those programs they can control how www content appears to the reader. In practice, given the takeover of the www by "tech" companies living off advertising and VC money, that may be true. However text is text. I can view and process it with an infinite variety of software. I can make it look however I wish on the screen.
The www can be whatever someone wants it to be, as text can be extracted and manipulated in an infinite number of ways. Users of the www can, in theory, process the information found via the www in any way they choose. [1]
[1] For example, GPT-3 was created using a text-only corpus extracted from the www, namely Wikipedia and Common Crawl. Web developer Javascript is ignored.
You seem to be stuck in the era where the "web" was really about distributing text content - often blogs or articles.
I'd argue that's not really what the web is anymore. A good chunk of it still does that (for example - this discussion probably falls into that category). But there's a whole section that actually is distributing applications using html/css/js.
They are not distributing long form text. They are providing spreadsheets, collaborative word documents, rich monitoring and analytics solutions, custom CAD software (yes, really - https://www.tinkercad.com/), online chat applications, plus far more than I can list.
Basically - if there was a desktop app for something, there's probably a website version of it now too.
So - no, I don't agree that the web is just text. I also don't agree with your main focus comparing it to books.
Imagine the fucking gall it would take to walk into a comic conference and say this: "Imagine if people criticised books for the fonts they are printed in, how the paragraphs are formatted, whether they have images, and so on."
That's the thing though: if most of the web is about sending documents and media, while only a fraction of it is spent on delivering desktop-style applications, then wouldn't it be better to split the part of the runtime needed for applications off into its own thing, so we don't have to run untrusted code just to read a blog or watch YouTube?
You're more than welcome to do that if you'd like, but most users won't - because it turns out documents and applications tend to go hand in hand (just like your code is data, and your data can be code). Basically - that line is a hell of a lot more blurry than you're making it out to be.
If you don't want to run javascript - use a browser that doesn't run javascript, or turn it off in your browser of choice.
If you don't want to run js to play youtube - open the video url in VLC.
But again - I think you're glossing over the progressive nature of a lot of these applications. Ex: Youtube isn't just a video feed (despite what we'd sometimes like). It's an application with comments, streaming, voting, searching, sharing, and many more features.
Can some of those be done without JS? Yes.
Does it make sense to use JS to implement many of them? Yes.
Can some of them only exist with JS? Yes.
Go click the "Go live" button in Youtube and then come back and tell me how you're planning on implementing that application feature in plain ol' HTML?
Of course documents and applications go hand in hand. You need applications to view them after all. But do documents need to also be applications?
I submit they do not. Maybe YouTube has to have js for its features, but I don't accept that that's good. I don't accept that what YouTube, Twitch, Vimeo and Dailymotion provide warrants the need for each to have their own separate applications that have to be downloaded and allowed to run on my system.
I can understand that. In some situations I can even respect that. I think - sadly - you're tilting at windmills.
In other situations - I'd argue you're just wrong. The simplest benefit of a web app vs a local app is exactly the isolation that the browser provides.
To be blunt - the applications from youtube/dailymotion/twitch/etc are NOT running on your system. They're running on the browser. They can't touch your files by default, they can't touch your other apps, they are uninstalled when you close the tab. That's incredibly powerful. It's incredibly liberating too. Users in places with fairly tight restrictions on installed software are almost always allowed to use most web apps (the limitation is usually concerns around inappropriate content - not so much security).
Basically - The browser is the OS that is literally designed around allowing you to run unknown code downloaded from other networks, from untrusted sources, with a modicum of security and consistency.
I think it's very, very hard to surpass the browser as a distribution method, and I think the possibilities it allows are, frankly, miles beyond basically anything else we've invented in the space.
Do some folks go overboard and create bloated, crappy web apps? Absolutely. Just like some desktop apps are complete pieces of garbage.
Does that mean we should throw the baby out with the bath water? My opinion is a resounding "no".
> Do some folks go overboard and create bloated, crappy web apps?
I'd argue the problem is they create unnecessary web apps. Every damn corporation's and its aunt's (no matter how tiny a mom-and-pop organization they are) home page, basically just a fricking brochure, is an endlessly-scrolling, blinking, self-reformatting SPA nowadays, instead of just a simple page that stays the fuck put in the browser so you'll actually hit the link you thought you were clicking.
(There, ya gots any more clouds for me to shake my walking stick at?)
> the applications from youtube/dailymotion/twitch/etc are NOT running on your system. They're running on the browser.
The browser runs in your system. The applications run in your system. This is as inane as saying any other interpreted language doesn't run in your system.
> they are uninstalled when you close the tab.
Except for all of the parts that aren't
There's no reason you can't achieve the same level of isolation in other runtimes. I don't think it's likely this would take off, obviously, we're already too deep into the browser-as-shitty-os rabbit hole.
I think the point is that the modern web repeated the doom of Microsoft Office to a certain extent.
There are code libraries and horrid Microsoft libraries for extracting information from Office documents, or API calls to Office itself to do so, but it is NOT EASY. And wow does Microsoft love this, because you end up buying office ... everywhere.
The modern web may not be so centrally owned as the Microsoft Office monopoly, but it does bury all forms of information under ludicrously bloated generated javascript / css. HTML actually is the forgotten stepchild of the javascript / css / html base of the web.
While WebAssembly offers hope of undermining the javascript monopoly, it probably won't help here.
From where we are now, the data structuring and extraction is basically enabled by HTML. Javascript/CSS are obfuscators, not enablers to that. And to the point of many, that's how the tech industry likes it, because extractable / analyzable HTML pages are hackable and reformable and the tech companies lose control of "their" data (which is your data that you gave them, but that's another rant).
That is, they lose the ability to reliably get all the ad revenue.
That's what we thought in the early 2000s until Google Maps blew its competition out of the water, making use of XMLHttpRequest when no one took js seriously back then.
Blogging? Sure.
Making interactive content? A Turing-complete language helps.
An application (like maps) and an online brochure are two different animals. A brochure for a company with a hundred or fewer products should generally be able to do fine with just HTML. Maybe a generator engine can assist with formatting, but there should be little need to add JavaScript unless you really really think eye-candy is important to sales. For a child or teen, maybe such gimmicks matter, but not for a furniture store.
If you really need a fancy store with shopping carts and wish lists etc., there are plenty of online services you can rent for small and medium stores. It's a wheel you shouldn't have to reinvent. But a brochure for say 25 medical devices/services shouldn't need any JavaScript.
It's said that John McCarthy, the father of LISP, lamented the W3C's choice of SGML as the basis for HTML: « An environment where the markup, styling and scripting is all s-expression based would be nice. »
Following John McCarthy, this HTML <b><i><u>Hello World</u></i></b> could have been written {b {i {u Hello World}}} and {* 1 2 3 4 5 6} could have been evaluated to 720. For instance: http://lambdaway.free.fr/lambdawalks
> <b><i><u>Hello World</u></i></b> could have been written {b {i {u Hello World}}}
I think that would be awesome now using a modern IDE with great syntax highlighting and block editing. Back in 1997 when I was writing HTML in notepad.exe I think it would have been a bit less fun. Seeing the closing tag was incredibly useful.
HTML has its own syntax, including closing tags and quotes around attribute values. Since HTML is the benchmark here, are you suggesting semicolons at the end of statements are harder than either of these?
> Like learning the joys of debugging missing semicolons.
Any modern IDE built within the last 20 years would not only highlight where a semicolon was missing, but would also be able to tell the difference between a semicolon and a Greek question mark.
Reminds me of a funny story. I grew up with the web, and I only know writing HTML, CSS, and plain JavaScript. I don't do web development professionally, as I've spent more time with native client and backend development through the years. A couple of years ago, though, I needed to show how to call some APIs, so I wrote a simple page as a demonstration. Immediately after sharing the page, the web developers asked if I could point them to the GitHub repo. I think I might have laughed out loud as I replied, "right-click, View Source." It's amazing that that's how we used to always do it and that it's almost never done that way anymore.
> we live in 2022 with high broadband and powerful browsers/CPU.
Some of us do. I think it's important to keep in mind that especially those of us living in tech hubs are in a highly distorted bubble when it comes to tech — for us, things like gigabit internet and 1-3 year old top of the line phones and laptops are the norm.
Beyond that bubble however are a lot of slow internet connections (sub 1mbps DSL is still a reality for many North Americans) as are computers that are either pushing between 5 and 10 years of age or are of similar power to computers that old (think bargain bin x86 laptops and Chromebooks).
Occasionally I'll pull out my circa-2008 Dell laptop (which can still run modern operating systems fine) and use it for a few hours to remind myself of this. It mostly does fine until I have to use some unnecessarily heavy website.
In my area, AT&T only provides internet that runs at 18 Mbps maximum. It's infuriating when I am browsing a javascript-heavy website at peak hours and it takes forever to load. I don't think HTML-only is necessarily a good idea, but less javascript for basic things that HTML does well anyway is certainly welcome.
That's the one that kills me. When I am on a barely functional internet connection, and I need to download megabytes of js so that it can then do a fetch for three paragraphs of text.
To add to this, something occurred to me recently, during a train ride:
I don't know how many high-speed connections a moving train has, but when I cannot load a simple, mostly-text website, then some people on board apparently are doing other things than I am on that shared connection. One hunch I have is that they are downloading many megabytes of bloated JS libraries, while I am trying to just read some text on a website and have most JS blocked. Some more might even be watching movies or running big downloads or Windows updates or whatever.
Anyway, one result of bloated websites, even if we have high-speed connections in our homes, is that we struggle with shared connections, like on a train. If every Billy needs to load Facebook, Instagram, YouTube and whatnot, it surely is not going to improve the situation for other people on the train. Of course, another reason might be that the train's connection is bad in the first place.
I live in a suburb outside of a large metro area, not at all considered rural. The only internet provider we had when I moved from my old house to this new suburb was Xfinity (Comcast). The ISP I had previously did not have service in my area, and after an exhaustive search, the best company I could find that WAS NOT Xfinity offered a commercial DSL line with a dedicated 5 Mbps up and down for the same cost as an Xfinity line with 400 Mbps down and 15 Mbps up. It wasn't even close. I also had to have this company install the line, which would've been even more money.
In the end, it was pretty surprising how many areas still only have a single choice for their internet service.
> Occasionally I'll pull out my circa-2008 Dell laptop (which can still run modern operating systems fine) and use it for a few hours to remind myself of this. It mostly does fine until I have to use some unnecessarily heavy website.
But this doesn't have to limit you to html only websites. 2008 (or 2006) js was perfectly fine for most tasks.
Yeah I'm never going to make the argument that HTML alone is adequate in most cases. Light JS, like as you said was featured on most sites of that era, is perfectly fine since the utility added is significant and the drawbacks very minimal. Same goes for images… highly optimized small PNG glyphs and small JPEGs are fine, you only start getting into trouble when loading multiple megabytes of images for purely ornamental purposes. My single core G5 and P4 machines handled such sites with ease, even with the (relative to now) badly optimized web engines of the 00s.
Problem is, light JS and small/optimized images are becoming more the exception than the rule. When devs have ample bandwidth and powerful machines they're much less likely to carefully weigh every dependency and unnecessarily large image.
I have a 2017 MacBook. Simple sites that should just f**ng work (e.g. JIRA) are completely sluggish.
Yes, let's limit ourselves a lot, I shouldn't require a modern M1 to use the web. In fact, Javascript shouldn't exist, because the industry apparently can't use a super optimised runtime correctly.
> we live in 2022 with high broadband and powerful browsers/CPU
That's a bit smug. I live in one of the richest nations in the world, and still much of the country has only slow connections available. Even those have often become intermittent since waves of climate-change enhanced disasters have started regularly sweeping away much of our infrastructure. Those disasters have also further impoverished much of the population, making it hard enough for many to keep a roof over their heads (thousands living in tents and caravans), let alone 'powerful CPU's.
> we live in 2022 with high broadband and powerful browsers/CPU
It would be nice if this were true but we're far from it. High broadband is not a given, browsers are slow (yes), CPUs stopped scaling vertically and don't compensate for bad programming anymore.
It seems to me that the choice of not using HTML-only has more to do with the inability to do so, rather than the desire to not limit oneself.
A fair portion of the time people are on phones where they have no idea how fast/reliable their connection is going to be from moment to moment.
Whether that (among other cases) creates an obligation in everyone to account for semi-failure and full failure (let alone retreat to pure markup) is another thing, of course, but the industry would be better and its practitioners more deserving of the term "engineer" if we did.
I'm not necessarily sure of this analogy, but I want web browsing to be like a book. I "open" a web page and it's there in its entirety, waiting and ready for wherever I want my eyeballs to go.
I'm not averse to javascript or to images(my site is almost all images) but the slowness of so many sites these days says the high broadband and cpus can't keep up.
And maybe someone can explain this to me, but why does going back in the browser seem slower or more intensive than loading a new page? Is that because the browser itself is trying to load the previous state?
What if my high-speed internet flat rate is used up? After that I only get a slower speed (64 kbit/s).
This should be enough to render text-based sites, e.g. HN or news sites. But the reality is that basically no site besides HN is usable at that speed. So much content out there could be accessible at that speed. Sure, no videos or images, but everything text-based should still work.
There are probably a lot of tools in the webdev toolbelt that would allow even image-heavy news sites, for example, to be usable while images and scripts load.
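Some of those tools are plain HTML attributes, no framework needed (a sketch; exact browser support varies):

  <!-- let the text render first; fetch images only as they approach the viewport -->
  <img src="/img/photo.jpg" loading="lazy" width="640" height="360" alt="Photo">

  <!-- don't block parsing and first render on script download/execution -->
  <script src="/js/extras.js" defer></script>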
If I run a website, to some degree, I’m running it for my benefit. If I want to put fancy schmancy JavaScript on there to show how clever I am, or because I think it will make me a billionaire, I’m gonna do it. Tech-Puritanism is not going to stop me.
Now if you tell me I shouldn’t because it’s less likely to make me a billionaire, I might want to listen.
> While I agree somehow, we live in 2022 with high broadband and powerful browsers/CPU.
There's plenty of situations where one or both of those statements are temporarily untrue and plenty more where they're permanently untrue.
Most users don't have flagship phones or brand new MacBooks. They don't have ultra fast WiFi or 5G. Even if they have more powerful devices they might be on shitty school/Starbucks/public/office WiFi.
You don't need to build everything like it's 1996 but it's absurd to simply assume every user is on a MacBook with gigabit Ethernet. The web is full of terribly built web pages pretending they're "apps" and using megabytes of JavaScript to show some text and images.
Is there an HTML-only way of separating menus from the html page? Similar to how external style sheets only need to be edited once to apply on all pages where it's linked?
Of course we all know something like:
<?php include 'menu.php' ?>
But in order to get that working, the site can no longer be HTML-only: it will need a web server and pages must use a .php extension. This assumes most people are building multi-page sites, and not those one-page "link in bio" style pages.
This is what I was doing in 2015 when my brain was fried with HTML and CSS and I was desperate to avoid having to learn even more syntax (PHP). I remember trying something like .shtml and .dhtml.
It didn't feel "real". I had never seen web pages with those extensions. Frankly, I hadn't even seen .html pages in several years because everybody edits the .htaccess file to prettify the URLs.
I ended up biting the bullet and going with PHP. Surprised to learn that there still isn't anything out of the box that handles this.
True. Though how would one serve a website without a server in some capacity?
SSI like Apache's, running on an otherwise most primitive form of server that just uses paths and no routing, produce on the server the kind of repetitive HTML that iframes would otherwise assemble on the client.
There used to be Apache server-side includes for this. It is kind of an odd omission from HTML that you can't insert a bit of DOM from a different source without Javascript.
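For reference, the Apache SSI flavour of the shared-menu include looks something like this (requires mod_include to be enabled and, typically, an .shtml extension or Options +Includes):

  <!--#include virtual="/partials/menu.html" -->

Which is about as close as you historically got to "insert a bit of DOM from a different source" without JavaScript or iframes.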
HTML, the markup vocabulary itself, doesn't have these, which makes sense when you see HTML as an SGML application inheriting everything SGML had to offer for transclusion using entities, plus quite powerful means for content-oriented applications such as custom syntax like markdown, building tables of contents/site maps, generating search context snippets and other summaries, type-checked inclusion of user and third-party content, etc. See [1] to get a flavor of including a shared header/menu and footer on multiple pages.
Depending on how far you want to stretch the definition of "HTML", this can be done with XML pages that link to XSL stylesheets to describe common layouts.
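A rough sketch of that approach (file and element names invented): each page stays a tiny XML document, and the shared layout lives in one XSLT file the browser applies on the client.

  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="/layout.xsl"?>
  <page>
    <title>Hello</title>
    <body>Content goes here.</body>
  </page>

layout.xsl then emits the common header, menu and footer around whatever the page provides - edit it once, and every page picks up the change.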
I just wish we had something like Turbo Frames and Turbo Streams in a native browser format. Just set the endpoint, maybe some animation, and it will load it async. Add POST support and it's a pretty interactive app. https://turbo.hotwired.dev/handbook/frames
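If I'm reading the Turbo handbook right, the frame version is roughly this (id and URL are placeholders):

  <turbo-frame id="messages" src="/messages">
    Loading…
  </turbo-frame>

The frame fetches /messages and swaps in the matching <turbo-frame id="messages"> from the response; links and form submissions inside it replace only the frame, not the whole page. A native equivalent of that one element would cover a lot of ground.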
With modern browsers I barely noticed the difference between an HTML-only site (MVC backend) and using XHR to partially load / DOM-patch, just the spinner in the tab giving it away. HTML-only can be a good UX. HTMX seems like a nice toe-dip into some interactivity.
I find some irony in that some 20 years ago, while there were pure-HTML web apps, e.g. web chats using <meta refresh...>, there were also Java applets (remember the <applet> tag?), which were fast and used common inet-family sockets. I believe they were mostly booed because of security and privacy concerns. The applets were small (bytecode, so no interpreter required) and allowed using the same language (statically typed) for both frontend and backend. Now, instead of making them more secure, we have XMLHttpRequest, web sockets, Node and TypeScript or WebAssembly, with minified and obfuscated scripts. Well, evolution paths are often curly.
All you need is 18,000 vacuum tubes, 7,200 crystal diodes, 1,500 relays, 70,000 resistors, 10,000 capacitors, and approximately 5,000,000 hand-soldered joints.
>> Someone else said HTML-only websites are "ugly as hell." I disagree. They're beautiful.
Well, we can always agree to disagree.
I started learning HTML in 2001 (or 2000), and typically plain HTML websites at that time were boring, so there were some ways to enhance them, e.g. Flash.
I took a different route: DHTML, and then website development felt more fun. Perhaps the point is to use CSS/JSS libraries/frameworks sparingly.
It's technically true, in the way that hand tools are all you need to build a house.
But I'd rather not hand write content in HTML and have limited ability to change themes without editing every page, and have to update links in indexes every time I add a page to make it visible, and need my laptop or SSH client on a phone to write.... a flat file CMS does a lot.
Well, first, the author shouldn't use a "Light" font for the main body text. It makes the site unreadable/unusable: the text is very light/thin, while the underline is very thick and dark.
The whole site uses a lot of unnecessary CSS/design.
Yep, they just need state-of-the-art, high SLA servers that generate pages dynamically for the user.
It's really unfair to criticize the complexity of javascript-heavy sites by comparing them to simple html pages that "just work", because all the complexity is hidden on the server side.
Better HTML would go a long way, for that matter. It's pretty dumb that there are millions of independent implementations of payment forms for the Web, for one thing, and that most depend heavily on javascript to function well. Simple table sorting—should be built-in, probably since 15+ years ago. Proper memory-efficient list elements, like native toolkits often feature, should be available in pure HTML. So much wasted effort because the Web took over but no-one ever bothered to make it, you know, good for all these things we're now using it for.
People vastly underestimate DX, unintentionally gatekeeping technology while feeling like they're "sticking it to those pesky devs who overengineer everything!!!"
My web page for my "Free Hero Mesh" software project is purely HTML, no CSS and no JavaScripts. (The Fossil repository uses CSS, but I made a separate web page which does not have CSS. Because, I think that it is much better) I do not specify fonts, colours, etc; those must be specified by the end user's preferences instead. I do not add excessive pictures/decorations/etc.
There's a problematic "c" option: sites whose domain could be served entirely with HTML and CSS but which insist that some multi-megabyte JavaScript monstrosity is the only possible solution. So they weigh down every news article, blog post, and effectively static document with said JavaScript.
It's not a little more work, it's reimplementing things browsers and servers already do with JavaScript replacements. They deliver a naked script tag and then do everything in heavy JavaScript.
Tracking JavaScript is entirely separate from the framework-to-show-text bullshit design of many pages. They're bad before all the tracking gets loaded. The tracking makes them worse.
> Some people claimed that websites *without* CSS and JavaScript are "bland". Who cares?
The people that I want to read my content?
And yeah, lots of sites are far too bloated with CSS and JS and whoosiwhatsits. But a couple of kb of CSS will make a website much nicer to consume, and won't impact loading speed to any noticeable degree.
Assuming an average line of CSS is 30 bytes, that's 66 lines of CSS. That definitely should be enough.
I would argue that if on a diet, 20 lines of CSS should be enough to provide a very readable, aesthetically inoffensive website.
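For what it's worth, a sketch of what those twenty-ish lines usually amount to (the specific values are just personal taste):

  <style>
    body {
      max-width: 70ch;              /* comfortable line length */
      margin: 0 auto;               /* centre the text column */
      padding: 1rem;
      font-family: system-ui, sans-serif;
      line-height: 1.5;
    }
    img { max-width: 100%; }        /* don't overflow small screens */
    @media (prefers-color-scheme: dark) {
      body { background: #111; color: #ddd; }
    }
  </style>

That's a handful of lines, cacheable, and it doesn't get in the way of anyone reading with CSS turned off.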
If browsers had consistent, sane defaults designed for readability, then CSS wouldn't even be necessary for most information sites. If the semantics of HTML tags were observed uniformly, sane defaults could be assumed.
> dramatically increases site performance, accessibility and the end-user experience
Where are the metrics? Who is the target audience? What is the distribution of hardware among this target? A dramatic increase in performance on a potato might be imperceptible on the latest hardware, lessening or even eliminating the impact of your technology choices.
A dramatic increase in accessibility and end-user experience is also pretty hand-wavy. How did you come to this conclusion? Any examples? I don't see how HTML-only is tied to these things. You can accomplish or fail both with or without heavy CSS and JS.
None of the sites she links to make the case for these assertions. Am I missing something?
> Why all this backlash against HTML-only websites?
What backlash? I see the HTML-only rhetoric so often these days that it's clear this position is becoming rather fashionable.
I think the improvements to accessibility are generally accepted to be true in the industry to a degree that metrics aren't needed to back up the claim every time it's discussed. Can you get the same degree of accessibility with javascript? Sure; it just requires a lot more work.
First impression: view-source ... "<!-- CSS --><link rel="stylesheet" href="/css/main.css" />". OK then, way to defeat your own hypothesis before anything even renders to the page.
They're using Eleventy and Netlify, those are not HTML either; Eleventy implies they start with markdown too I think?
I'll forgive them the image formats and the .js file for Twitter. But they follow up with a heap of "let's use tables for design" examples; always hated that ... I'm nope-ing pretty hard there.
I started writing HTML in the 90s using Pico, my roots are purist, but no, not really ... you need styles for a11y, you want styles for user comfort and engagement, even minimal things like favicons and svg. We've come a long way, yes, bloated messes like Sharepoint spews out are abysmal, but to make a website that meets any reasonable standards ... you need more than just HTML in this dinosaur's opinion.
Are you talking about the site from the submission? To be fair, it never claims itself to be HTML only. It does say performance was improved by being much more mindful about how much other stuff, including CSS and javascript, is included. The linked blog post about the perf improvements goes into some detail about that. Doesn't smell like false advertising to me. The site then goes on to show by extreme example that you don't technically need anything other than HTML to make a website.
I do agree with your points about what is necessary these days. A dash of CSS, tables only for tabular data, other small touches. Like, that's a more reasonable standard for "what is necessary". And to me, I think your comments are in line with the spirit of the original post.
Hear, hear. Though I guess they could have gotten all their CSS into a style tag in the <head> (which might even be reasonable for reducing the number of requests for things that get hammered on, maybe, if you didn't also want them cached for other pages).
I have ublock advanced set up. I get a massive cat icon with the text below that. I have to allow a number of third party scripts/sites to get the site proper.
How are you supposed to take an article with this title seriously, on a website that makes heavy use of modern CSS (flexbox, grid,...), has a Twitch stream link and even embedded Twitter (with JavaScript)? This is 'Do as I say, not as I do' at a quite extreme level.
> How are you supposed to take an article with this title seriously
You read it and if you have anything interesting to say about its content, you write that. Picking apart the implementation details of the site itself just brings the unseriousness to HN which is why the site guidelines ask you not to do that sort of thing.
Website design is arguably not tangential to a discussion about website design, which is the only thing that the guidelines discourage.
If someone indulges in a beef steak while talking about something, that would be tangential in most circumstances, except when the talk is about the benefits of veganism. You're creating a strong dissonance between your message and your mode of delivery.
What the guidelines do discourage, however, is commenting on whether the article has been read.
It's tangential to the ideas expressed in the content, besides being a really predictable and repetitive gotcha (notice how it's in three separate bottom comments in this thread), on top of 'how can I take it seriously' itself being just a bombastic trope.
> It's tangential to the ideas expressed in the content
One of the ideas expressed in the content is to use HTML tables for styling and aligning content - using grid and flexboxes flies in the face of that suggestion.
I don't know if the author is talking to professionals who implement javascript sites at work or to hobbyists with a simple blog page. In the first case, she has no data points to quantify how javascript adoption generates revenue for the company vs simple bland html pages (with more logic on the server side). In the second case, she has no business in what people implement in their spare time to learn new technologies or just unwind. In both cases, I don't really understand this kind of article.
> they haven't been moving HTML forward as a hypermedia for decades now
the upcoming view transitions API will help:
https://github.com/WICG/view-transitions