Normal cookies are JS-accessible, but HTTP-only cookies should not be: "A cookie with the HttpOnly attribute is inaccessible to the JavaScript Document.cookie API; it is sent only to the server."
Ah, thanks! This is new to me. That is indeed a concern, but it can probably be worked around, e.g. by proxying requests to third-party domains through the same domain.
Which is why you use domain scoping and the HttpOnly and Secure cookie flags, so cookies can only be read by matching hosts (with finer granularity than the same-origin policy), are only sent over HTTPS, and can’t be read by JavaScript. The Web Storage API does not offer these protections.
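For concreteness, the flags mentioned above are set on the server's Set-Cookie response header; a hardened session cookie (cookie name, value, and domain hypothetical) might look like:

```http
Set-Cookie: session=opaque-token-value; Domain=example.com; Path=/; Secure; HttpOnly
```

Secure restricts the cookie to HTTPS, HttpOnly hides it from the Document.cookie API, and Domain/Path control which hosts and paths the browser sends it to.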
They can't be read, BUT the browser will send the cookie with every request. If you have an XSS, it is game over. The attacker can just send requests from your browser; it's only slightly less convenient. You are merely taking away the convenience of the attacker running the attack manually from his own browser, which he probably doesn't want to do anyway. If he can inject JS into your site, he will make your browser send the requests that perform the actions with your credentials, automated and fast. Your browser will send the cookie automatically. For the attacker, being able to read your tokens would merely be nice; it is absolutely not necessary.
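To make the point above concrete, here is a sketch (endpoint and payload are hypothetical) of why injected script never needs to read the cookie: the request options only have to ask the browser to attach credentials, and the HttpOnly cookie rides along.

```javascript
// Hypothetical injected XSS payload: the attacker never touches
// document.cookie. The browser attaches the HttpOnly session cookie
// automatically because of credentials: "include".
const attack = {
  method: "POST",
  credentials: "include", // tell the browser to send cookies
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ to: "attacker", amount: 1000 }),
};

// In a real attack this line would run in the victim's browser:
// fetch("https://bank.example.com/transfer", attack);
console.log(attack.credentials); // the cookie is sent without being readable
```

The cookie's value stays hidden from the script, but the actions it authorizes do not.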
Like some here, I don't understand the hate around keeping tokens in localStorage. People immediately say "but JS can read it!", but so what? If someone can put malicious JS in my site, it is GAME OVER, secure HTTP-only cookie or not. Given that, the saner option is doing away with an old and misused invention called cookies. The upside of ditching cookies is that you are an order of magnitude safer against CSRF, since your browser does not send anything automatically. You don't need to keep CSRF-token state on your server(s) either (which helps with scale, one less piece of state to worry about); it's a win.
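A sketch of the localStorage-token pattern being defended here, using a plain Map to stand in for window.localStorage so the snippet is self-contained (storage key and token value are made up). The point is that nothing is attached to a request unless this code attaches it, so cross-site request forgery has no automatic vehicle:

```javascript
// Stand-in for window.localStorage in this sketch.
const storage = new Map();
storage.set("auth_token", "opaque-bearer-token"); // saved at login

// The only place a token ever gets attached to a request.
function authHeaders() {
  const token = storage.get("auth_token");
  return token ? { Authorization: `Bearer ${token}` } : {};
}

// A cross-site form post from evil.example would carry no Authorization
// header, because only this explicit code path adds it.
console.log(authHeaders());
```

Contrast with cookies, where the browser itself decides to attach credentials to any matching request, regardless of who initiated it.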
http-only secure cookies do not give you any additional security. Ditching cookies does.
I think some people don't realize that CSRF tokens are basically the same thing as bearer tokens (which is also what JWT is), except that they usually get regenerated every time you open a new page. So it's a bit ironic when everyone screams that tokens are bad while they're all using them to protect against confused-deputy attacks.
I am talking about a much more general class of security than just XSS. You’re making perfect the enemy of good here - yes, of course XSS is not completely mitigated by httpOnly. That was not my point.
My actual point stands: the Web Storage API doesn’t offer the same protections as cookies. Don’t store sensitive data in localStorage; that is emphatically not its intended use.
>I am talking about a much more general class of security than just XSS.
And what would those be that are relevant to this discussion? The way we (ab)use cookies is arguably not their intended use either.
I can't think of a scenario in this context where an attacker says "damn he is using http-only cookies, I won't be able to do what I want to do"
The only pragmatic difference between the two is JS accessibility. That only matters when someone can inject scripts into your site, and my point is that when that happens, cookies are also bust.
Store a security token in localStorage and additionally store a secure signature for it in a secure, HTTP-only cookie. On your backend, verify validity of both the token and its additional signature contained in the cookie.
I don't believe it adds any meaningful security that justifies the cost (development, testing, hardening, scaling the state across servers if necessary, etc.). With security, "more complicated" does not necessarily mean "more secure". Doing it without multiplying the number of ways things can go wrong is deceptively hard.
Yes; it's just that with regard to security I've seen too many people burned by "it can't hurt" reasoning. With your suggestion, assuming a perfect implementation, I personally can't see where it would help. If an attacker can run JS on your site, they can just set the cookie as needed before making requests (if the cookie does not already exist), since that is something they can already do. If the cookie exists (the most likely scenario), the browser will send it with each request anyway, so no added security there either.
Overview from the about section (note: it's not my project):
> Unravel is my project to reengineer the internet (DNS and up). It will replace messaging, chat, social networking, search, media and file sharing, and a whole lot more. It will be open source and allow anyone to build anything they want on top of it. It will be built to be secure, and to provide privacy and verifiability.
> At its core, Unravel will be a mesh-distributed database with an API to access the data. It makes heavy use of checksums and ECC encryption for encryption and verification. It is written in C for maximum performance, and is built to run on anything from an embedded device to a phone, a PC, or a supercomputer.
> I don't like what the internet has become. Especially, I don't like the cloud. Today most communication online happens through walled-garden intermediaries who store and inspect and triage everything. There shouldn't need to be any intermediaries to do any of the things we want to do, but right now we have to use them. I think that who controls information matters. I think that privacy matters. I think the user should be in charge of what they see, who they communicate with, the software they run, and what information they store and share.
> Maybe the rest of the world doesn't care about any of this. Maybe everyone else is happy with the internet we have. I'm fine with that; I'm just not fine with there not being any other options. That's what I'm doing: I'm building another option, because I can, and because someone should.
Overview of the talk from a comment[0] of that video:
> The goal of Node was event driven HTTP servers.
>
> 5:04
> 1 Regret: Not sticking with Promises.
> * I added promises to Node in June 2009 but foolishly removed them in February 2010.
> * Promises are the necessary abstraction for async/await.
> * It's possible unified usage of promises in Node would have sped the delivery of the eventual standardization of async/await.
> * Today Node's many async APIs are aging badly due to this.
>
> 6:02
> 2 Regret: Security
> * V8 by itself is a very good security sandbox
> * Had I put more thought into how that could be maintained for certain applications, Node could have had some nice security guarantees not available in any other language.
> * Example: Your linter shouldn't get complete access to your computer and network.
>
> 7:01
> 3 Regret: The Build System (GYP)
> * Build systems are very difficult and very important.
> * V8 (via Chrome) started using GYP and I switched Node over in tow.
> * Later, Chrome dropped GYP for GN, leaving Node the sole GYP user.
> * GYP is not an ugly internal interface either - it is exposed to anyone who's trying to bind to V8.
> * It's an awful experience for users. It's this non-JSON, Python adaptation of JSON.
> * The continued usage of GYP is probably the largest failure of Node core.
> * Instead of guiding users to write C++ bindings to V8, I should have provided a core foreign function interface (FFI).
> * Many people, early on, suggested moving to an FFI (namely Cantrill) and regrettably I ignored them.
> * (And I am extremely displeased that libuv adopted autotools.)
>
> 9:52
> 4 Regret: package.json
> * Isaac, in NPM, invented package.json (for the most part)
> * But I sanctioned it by allowing Node's require() to inspect package.json files for "main".
> * Ultimately I included NPM in the Node distribution, which pretty much made it the de facto standard.
> * It's unfortunate that there is a centralized (even privately controlled) repository for modules.
> * Allowing package.json gave rise to the concept of a "module" as a directory of files.
> * This is not a strictly necessary abstraction - and one that doesn't exist on the web.
> * package.json now includes all sorts of unnecessary information. License? Repository? Description? It's boilerplate noise.
> * If only relative files and URLs were used when importing, the path defines the version. There is no need to list dependencies.
>
> 12:35
> 5 Regret: node_modules
> * It massively complicates the module resolution algorithm.
> * vendored-by-default has good intentions, but in practice just using $NODE_PATH wouldn't have precluded that.
> * Deviates greatly from browser semantics
> * It's my fault and I'm very sorry.
> * Unfortunately it's impossible to undo now.
>
> 14:00
> 6 Regret: require("module") without the extension ".js"
> * Needlessly less explicit.
> * Not how browser JavaScript works. You cannot omit the ".js" in a script tag's src attribute.
> * The module loader has to query the file system at multiple locations trying to guess what the user intended.
>
> 14:40
> 7 Regret: index.js
> * I thought it was cute, because there was index.html
> * It needlessly complicated the module loading system.
> * It became especially unnecessary after require supported package.json
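The resolution guessing criticized in regrets 5 through 7 can be sketched as the list of filesystem locations the loader has to probe for a single extensionless specifier (simplified; the real algorithm also reads package.json "main" and walks node_modules directories upward):

```javascript
// Simplified sketch of why require("mod") without an extension is costly:
// the loader must probe several candidate paths per specifier instead of one.
function candidatePaths(specifier) {
  return [
    `${specifier}`, // exact file as written
    `${specifier}.js`, // guessed extension (regret 6)
    `${specifier}.json`,
    `${specifier}/package.json`, // directory-as-module, read "main" (regret 4)
    `${specifier}/index.js`, // the index.js fallback (regret 7)
  ];
}

console.log(candidatePaths("./utils"));
```

With explicit extensions and paths, as on the web, the list collapses to a single entry and the filesystem guessing disappears.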
>> * Many people, early on, suggested moving to an FFI (namely Cantrill) and regrettably I ignored them.
A bit off-topic, but Dahl referenced Cantrill here, whom I took to be Bryan Cantrill, one of the authors of DTrace. From his Twitter (https://twitter.com/bcantrill) I found that just last month he started a new "computer company", which sounds super interesting, especially with his past experience and the passion he seems to have for attempting to solve a tough, bold problem.
For more context, Cantrill held a senior role at Joyent, which lists Node.js as one of its products on Wikipedia and has been the corporate sponsor of Node.js for a long time.
Thanks. This largely made me realize most decisions behind node.js were made on-the-fly rather arbitrarily without putting too much thought into it, and helped me compare and contrast it with Go. :)
> most decisions behind node.js were made on-the-fly rather arbitrarily without putting too much thought into it
That seemed rather apparent even at the time. What's been more interesting is watching others defend some of these decisions as if a lot of thought had been put into them and they were some example of great architecture.
Not specifically ragging on Node.js; I see this a lot in various projects. Small or minor decisions compound over time, and even if they were not originally planned or intended to have significance, at some point they do. Often, people who weren't involved in the original decisions think there's a lot more 'there' there behind them, when usually there isn't.
I've been using the Xiaomi Mi Notebook Pro [1] for a little over a year now and there is not a single visible sign of wear. People have had various problems with this laptop, but mine is still in great condition. I'm running Arch Linux and so far everything has worked without hiccups.
Build quality, in general, is very good. Some positive highlights:
- It has an aluminum body and feels very sturdy.
- The lid can be opened with one hand, and even after one year it feels just like it did when I first got it.
- The keyboard feels very nice: there's a good amount of travel and the keys have a satisfying clicky feedback.
- The touchpad is superb: very smooth and responsive. It's definitely better than the XPS's, though obviously not as good as a Mac's.
Some negative aspects:
- Some keys started to squeak rather quickly.
- One of the speakers rattles at higher volumes.
- The screen on my unit had several dead pixels on arrival. Fortunately, they are only visible in pitch-black darkness, on a black background, with the screen at near-max brightness. Not once have I noticed them during daily usage.
Overall I find it an excellent laptop with great specs. I got it for 920 EUR, and where I live in Europe it's impossible to find such a laptop at a comparable price.
That's a great example of the extra magic and complexity an infinitely scrolled view requires. As you scroll down a huge Discord thread, they are removing posts from the DOM for performance reasons.
You could use the browser's built-in search by choosing Edit -> Find in This Page from the browser's toolbar, but in this case it would be pointless: the results would be inconsistent, since most of the content you've scrolled past is no longer in the DOM.
I'm curious how much worse the performance would be if they did not apply this specific optimization.
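The optimization in question is usually called list virtualization or windowing: compute which slice of posts can possibly be on screen and keep only those in the DOM. A minimal sketch of the math (the fixed item height, viewport size, and overscan buffer are illustrative simplifications):

```javascript
// Minimal windowing math: given the scroll position, keep only the visible
// slice of items (plus a small overscan buffer) mounted in the DOM.
function visibleRange(scrollTop, viewportHeight, itemHeight, total, overscan = 3) {
  const first = Math.max(0, Math.floor(scrollTop / itemHeight) - overscan);
  const last = Math.min(
    total - 1,
    Math.ceil((scrollTop + viewportHeight) / itemHeight) + overscan
  );
  return { first, last }; // render only posts[first..last]
}

// 10,000 posts, 80px tall, 600px viewport, scrolled to 40,000px:
console.log(visibleRange(40000, 600, 80, 10000)); // { first: 497, last: 511 }
```

Only about 15 of the 10,000 posts exist in the DOM at any moment, which is exactly why find-in-page can't see the rest.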
I remember reading a thread about this issue on Discourse's own board, and it was clear the continuous scrolling was non-negotiable "because modish", regardless of user preferences or experience.
> I'm curious how much worse the performance would be if they did not apply this specific optimization.
Pages without continuous scrolling that can have large numbers of posts or entries don't typically show all by default. There are all the usual options - paging, allowing the user to choose the number per page, etc. They're all imperfect workarounds, and all better (for me) than continuous scrolling.
For me, the parent's design definitely feels more premium than the original, which seems a bit amateurish to me.
Here are my observations:
* The layout is too wide. The parent has made it narrower, which feels more natural to me.
* The form inputs are too wide. Same as above.
* Weird color palette. I like the parent's color simplicity; notice how he has used different shades of the same color.
* Random images. IMO the images used for the hero and pricing sections don't make any connection with the product. The parent has left these images out, and it is much nicer.
Takeaway: prefer simplicity if you're not good at design.