It's because, although a delivery cyclist might sometimes be annoying, the reality is that there are almost zero KSIs (killed or seriously injured) caused by cyclists in any country worldwide.
Rules designed for SUVs don't actually make sense for human-scale transport.
I'd definitely agree for normal bikes and e-bikes capped to speeds under 30 km/h, but at least in NYC these delivery bikes often go ridiculously fast. I don't think they belong in the same regulatory category.
Their speed makes them extremely unpredictable, even if the overall kinetic energy is still relatively low, and being overtaken in a relatively narrow bike lane by a vehicle going almost twice my own speed seems dangerous even without a collision.
(I only glanced at it, so I could be wrong.) They're talking about a public key that can be used to validate the JWT's authenticity. AFAIK there is no need to keep these secret, and it's not possible (without breaking public-key crypto) to forge tokens with only the public key, so it should be safe to store them wherever.
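To see why secrecy isn't needed here: a JWT's header and payload are just base64; only the signature, checked against the published public key, provides authenticity. A minimal stdlib sketch with a toy token (all names invented, signature is a placeholder rather than a real RSA signature):

```python
import base64, json

def b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64 for each segment
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Toy token: header.payload.signature (hypothetical kid, fake signature)
header  = {"alg": "RS256", "typ": "JWT", "kid": "key-2024-06"}
payload = {"sub": "alice"}
token = ".".join([b64url(json.dumps(header).encode()),
                  b64url(json.dumps(payload).encode()),
                  "sig-placeholder"])

# Anyone can read the header and payload -- they're plain base64.
# A verifier reads the kid, looks up the matching *public* key, and
# checks the signature with it; the public key itself needs no secrecy.
raw = token.split(".")[0]
decoded = json.loads(base64.urlsafe_b64decode(raw + "=" * (-len(raw) % 4)))
print(decoded["kid"])  # -> key-2024-06
```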
- 90 days is a very long time to keep keys, I'd expect rotation maybe between 10 minutes and a day? I don't see any justification for this in the article.
- There's no need to keep any private keys except the current signing key and maybe an upcoming key. Old keys should be deleted on rotation, not just left to eventually expire.
- https://github.com/aaroncpina/Aaron.Pina.Blog.Article.08/blob/776e3b365d177ed3b779242181f0045cd6387b3f/Aaron.Pina.Blog.Article.08.Server/Program.cs#L70-L77 - You're not allowed to get a new token if you already have one? That's unworkable - what if you want to log in on a new device? Or what if the client fails to receive the token after the server sends it, the classic snag with use-only-once tokens?
- A fun side effect of setting an expiry on the keys is that it makes them eligible for eviction under Redis' standard volatile-lru policy. You can configure a different eviction policy, but it would make me nervous.
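The "current plus upcoming key only" rotation policy above can be sketched as follows (names and kid scheme are invented; key generation is stubbed out, since only the bookkeeping matters here - a real setup would keep the private material in an HSM or key vault, not a dict):

```python
def new_key(kid: str) -> dict:
    # Stub: stands in for real asymmetric key generation
    return {"kid": kid, "private": f"<private material for {kid}>"}

class KeyRing:
    """Holds at most two private keys: the current signer and the
    pre-announced next one. Rotation deletes the old signer outright;
    already-issued tokens stay verifiable via the published public half."""

    def __init__(self):
        self.next = new_key("k1")
        self.current = None
        self.rotate()  # promote k1 to current, pre-generate k2

    def rotate(self):
        # Old current key is dropped here, not left to expire later
        self.current = self.next
        n = int(self.current["kid"][1:]) + 1
        self.next = new_key(f"k{n}")

ring = KeyRing()
ring.rotate()
print(ring.current["kid"], ring.next["kid"])  # -> k2 k3
```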
How could the key be stolen easily? That really depends on the security of the Redis setup. Redis is typically not internet-accessible, so you'd need some sort of server exploit.
It would have been good if the article's example had shown a Redis server with TLS and password auth.
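For illustration, a redis.conf fragment along those lines (the directive names are real Redis 6+ options; the paths and values here are made up, not from the article):

```
# Disable the plaintext listener entirely and serve TLS only
port 0
tls-port 6379
tls-cert-file    /etc/redis/tls/redis.crt
tls-key-file     /etc/redis/tls/redis.key
tls-ca-cert-file /etc/redis/tls/ca.crt

# Require auth; for finer control, prefer ACL users over a shared password
requirepass <long-random-password>
```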
Private key material should not be kept in the clear anywhere, ideally.
This includes on your dev machine, serialised in a store, in the heap of your process, anywhere.
Of course, it depends on your threat environment, but the article did mention PCI DSS.
If you put it in Redis, then anyone with access (internal bad actors exist too!) can steal the key and sign something, and that's hard to repudiate.
The most typical end-game is an HSM-backed cloud product: generate the private key inside the HSM (it never leaves), and make calls across the network to the key-vault service for signing requests.
This is a hard tradeoff between availability and compliance. If the cloud service goes down or you have an internet issue, you lose the ability to sign any new tokens. Signing is a fairly fundamental piece of infrastructure, so it's worth considering whether you absolutely must put it across the wire.
The spectrum runs from "everyone has the keys", as in this example, to centralising a signing service in software, to using something like KMS, CloudHSM, or a YubiHSM, to going big with an HA Luna (or similar) HSM setup.
Copying production data to dev is widely regarded as a bad idea if the data contains any information relating to a person or real-life entity.
Uncontrolled access, inability to comply with "right to be forgotten" legislation, visibility of personal information including purchases, physical locations, and so on.
Of course, sales, trading, inventory, and similar data is still valuable even with no customer info.
Attempts to anonymise are often incomplete, with various techniques to de-anonymise available.
Database separation, designed to make sure that certain things stay in different domains and can't be combined, also falls apart if you have both databases on your laptop.
Of course, any threat actor will be happy that prod data is available in dev environments, as security is often much lower in dev environments.
The point is that this is not processed left to right.
First the | pipe is connected as fd 1; then 2>&1 duplicates that pipe onto fd 2. The pipe is established before the redirections, which are then processed left to right.
When you need to capture both standard error and standard output to a file, you must have them in this order:
bob > file 2>&1
It cannot be:
bob 2>&1 > file
Because then the 2>&1 redirection is performed first (and usually does nothing, because stderr and stdout already point to the same place: your terminal). Then > file redirects only stdout.
But if you change > file to | process, then it's fine! process gets the combined error and regular output.
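All three cases above can be demonstrated directly (here `bob` is a stand-in function that writes one line to each stream):

```shell
bob() { echo "regular"; echo "oops" >&2; }

# Correct order: stdout is pointed at the file first, then stderr is
# duplicated onto the same target.
bob > both.log 2>&1
wc -l < both.log   # 2 -- both streams landed in the file

# Wrong order: 2>&1 points stderr at the terminal (stdout's current
# target), then > file moves only stdout; "oops" stays on the terminal.
bob 2>&1 > one.log
wc -l < one.log    # 1 -- just "regular"

# With a pipe it works, because the pipe is wired to fd 1 before the
# 2>&1 redirection is processed: both streams reach the reader.
bob 2>&1 | wc -l   # 2
```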
TBD (trunk-based development) - it's pretty great... and it aligns well with continuous deployment:
It allows you to get feedback from customers very fast.
It allows you to improve the software very fast.
It allows you to react to the feedback you just got very fast.
Yes, it's tricky! You need fast builds that give you actionable feedback on whether you did a whoopsie.
Yes, it works for all sorts of things: regulated industries including finance, embedded systems, apps, websites, ...
Yes, you do need to rethink how changes happen and look for ways to break that big change into multiple, or even many, smaller changes; this often has lots of unanticipated benefits.
Yes, it scales to very large deployments and quite large teams.
The diff tells the 'what' - no point in writing 'added method bob()'
The message tells the why.
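For example, a message shaped like this (an invented example) carries the "why" that the diff can't:

```
Limit retry backoff to 60s

Customers behind flaky proxies were seeing multi-hour gaps between
retries because the exponential backoff was unbounded. Cap it so a
recovered connection is picked up within a minute.
```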
You can bet that over time the Jira tickets, the issues, and the Confluence pages, Slack threads, and O365 documents will all have been deleted, "upgraded", or whatever, and all you have left is what's in the repo.
In-repo ADRs and in-repo "what's missing, what's next" files are also useful, because they co-evolve with the code.