In my experience, software developers nowadays barely know about transactions, let alone about the different transaction models. I have even encountered "senior developers" (in reality so-called "CRUD developers") who are clueless about database transactions.
In reality, transactions and transaction models matter a lot for performance and error-free code (at least once you have real traffic volumes and your software solves something non-trivial).
For example: after a lot of analysis, I switched a large project on SQL Server from the default Read Committed to Read Committed Snapshot Isolation - the users could not be happier, since a lot of locking contention disappeared. No software engineer on that project had any clue about transaction models or locks before I taught them some basics (even though they had used transactions extensively in that project).
This isn't confined to senior developers. I have even encountered system architects who were clueless about isolation levels. Some even confused "Consistency" in ACID with the "Consistency" in CAP.
Makes me sad, since I work mostly in retail and encounter systems that are infested with race conditions and similar errors: exactly the kind of thing where these isolation levels would be of great help.
However, it's mostly engineers at startups. I have a very high opinion of the typical Oracle/MSSQL developers at BigCos, who at least have their fundamentals right.
Some time ago, this knowledge was important for deciphering the marketing behind MongoDB. Their benchmarks ran with a loose isolation level (read_uncommitted, iirc) that didn't guarantee a durable flush, and they'd benchmark against the defaults from Postgres etc., which didn't use that isolation.
Clearly it worked for them, but I spent a few different stints cleaning up after developers who didn't know this sort of thing.
In 25+ years at various companies, I only recall one interview where isolation levels were even discussed. Almost nobody cares until it's a problem.
We must have had entirely different careers: same in years, 180 degrees opposite. These were absolute core (and disqualifying) questions at every interview, no exceptions.
One "enterprise" HR product I had to interact with stored all its data in a single MS SQL Server table with hundreds of columns. It was basically a spreadsheet-based system with an SQL interface. This was more than a decade ago, but still.
About 20 years ago, I worked at a startup where one of the guys had built his own ORM. It was never clear why. Internally, it didn't use prepared statements, and instead used some custom escaping logic that was full of bugs. We'd regularly get SQL injection issues in production.
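The classic failure mode here is easy to reproduce. A hypothetical sketch with Python's stdlib sqlite3, contrasting string interpolation (what a homegrown escaping layer effectively does when it has bugs) with a parameterized query:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
db.execute("INSERT INTO users VALUES ('alice', 0)")

# Attacker-controlled input.
malicious = "alice' OR '1'='1"

# Broken: string interpolation lets the input rewrite the query,
# so the WHERE clause becomes: name = 'alice' OR '1'='1'
unsafe = db.execute(
    "SELECT count(*) FROM users WHERE name = '%s'" % malicious
).fetchone()[0]

# Correct: a parameterized query treats the input as a plain value,
# never as SQL, so nothing matches.
safe = db.execute(
    "SELECT count(*) FROM users WHERE name = ?", (malicious,)
).fetchone()[0]
```

The interpolated version matches every row; the parameterized one matches none. Prepared statements give you this for free, which is why hand-rolled escaping in an ORM is such a red flag.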
I’ve noticed the lack of transaction awareness mostly in serverless/edge contexts where the backend architecture (if you can even call it that) is driven exclusively by the needs of the client. For instance, database queries are modelled as react hooks or sequential API calls.
I’ve seen this work out terribly at certain points in my career.
Soon most software devs will just be transcribing LLM trash into code with no concept of what's actually happening (it's actually required at Shopify now, and MS is bragging that 1/3rd of their software is written this way), and no new engineers are coming up, because why invest the time to learn if there won't be any engineering jobs left?
I think this is really the duality of LLMs. I can ask one to explain the different database transaction models and it will perfectly explain how each works, which one to pick, and how to apply it.
But code generated by an LLM will likely also have bugs that could have been fixed with transactions.
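The typical bug in question is a multi-statement update where one statement can fail and leave the data half-changed. A hedged sketch with Python's stdlib sqlite3 (table and function names are illustrative) of fixing it with a transaction:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
db.executemany("INSERT INTO account VALUES (?, ?)", [(1, 100), (2, 0)])
db.commit()

def transfer(db, src, dst, amount):
    # The context manager wraps both updates in one transaction:
    # it commits on success and rolls back on any exception,
    # so the two balances can never get out of sync.
    with db:
        db.execute("UPDATE account SET balance = balance - ? WHERE id = ?",
                   (amount, src))
        new_src = db.execute("SELECT balance FROM account WHERE id = ?",
                             (src,)).fetchone()[0]
        if new_src < 0:
            raise ValueError("insufficient funds")
        db.execute("UPDATE account SET balance = balance + ? WHERE id = ?",
                   (amount, dst))

transfer(db, 1, 2, 30)       # succeeds, both updates commit together
try:
    transfer(db, 1, 2, 500)  # fails mid-way: the debit is rolled back too
except ValueError:
    pass

balances = dict(db.execute("SELECT id, balance FROM account"))
```

Without the transaction, the failed second call would leave account 1 debited by 500 with nothing credited anywhere.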
That's because it's glorified search. The Postgres docs tell you the same thing without risk of hallucination. You are correct that it won't produce code that does the right thing in that context, though.
My recommendation for juniors stands unchanged for a decade now: read a book about SQL databases over a weekend and a book about the database your current work project is using over the next weekend. Chances are you are now the database expert on the project.
Had a similar situation a few years ago - switched a (now) billion-revenue product from Read Committed to Read Committed Snapshot with huge improvements in performance.
One thing to be aware of when doing this: it will break all code that relies on blocking reads (e.g. select with exists). That code needs to be rewritten using explicit locks or some other method.
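The fragile pattern is check-then-act: under snapshot reads, two sessions can both see "not exists" and both proceed. The fix is to take the write lock before the check (in SQL Server this is typically an UPDLOCK/HOLDLOCK hint; here is the rough sqlite3 analogue using an immediate transaction, with hypothetical table and function names):

```python
import sqlite3

db = sqlite3.connect(":memory:", isolation_level=None)  # explicit transactions
db.execute("CREATE TABLE reservation (seat TEXT PRIMARY KEY, holder TEXT)")

def reserve(db, seat, holder):
    # BEGIN IMMEDIATE takes the write lock *before* the existence check,
    # so no other writer can sneak in between the SELECT and the INSERT.
    db.execute("BEGIN IMMEDIATE")
    try:
        taken = db.execute("SELECT 1 FROM reservation WHERE seat = ?",
                           (seat,)).fetchone()
        if taken:
            db.execute("ROLLBACK")
            return False
        db.execute("INSERT INTO reservation VALUES (?, ?)", (seat, holder))
        db.execute("COMMIT")
        return True
    except Exception:
        db.execute("ROLLBACK")
        raise

first = reserve(db, "12A", "alice")   # seat is free, reservation succeeds
second = reserve(db, "12A", "bob")    # seat already held, caller is told no
```

With a plain snapshot read instead of the locked one, both callers could pass the existence check concurrently and the second insert would only fail (or worse, succeed) by accident.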
Besides the shocking revelation that people can be gainfully employed in this industry without knowing about database transactions... I'll take a guess: they've been using web scale MongoDB?