My recommendation: you should probably start from _Working Effectively with Legacy Code_ by Michael Feathers.
The best "let's do something not trivial" TDD book is probably still _Growing Object-Oriented Software, Guided by Tests_ by Steve Freeman and Nat Pryce.
Most of the "more than the basics" topics are the same kinds of practices that were considered "good design" whether you were using TDD or not. For example, Parnas 1971, Berard 1993, John Carmack 1998 ("Time is an input...."), and so on.
If you are interested in more than the basics of TDD, the right starting point is _Test Driven Development by Example_ by Kent Beck, which, while a bit thin on examples, actually covers a nice variety of more advanced topics (although not in great depth). If you are going this route, you should pair it with Beck's 2023 essay "Canon TDD".
I would give Beck's book a bit of leniency, as it was the first book on the subject, written at a time when testing frameworks weren't as ubiquitous as they are today.
It's still a good foundational book to start with though.
https://tidyfirst.substack.com/p/canon-tdd isn't particularly new; Beck has been consistent about "write tests before the code, one test at a time" for about 25 years now.
Same idea, different spelling: do you really think TDD should get credit for your good results, when you aren't actually shackling yourself to the practices that the thought leaders in that community promote?
I want to credit "writing automated tests" with the good results that I get from that practice. The problem is I need terminology that's widely used by other developers.
Bob Martin helped popularize the terms covariant/contravariant in the context of everyday software design, as far as I'm aware. Using this precise language, borrowed from mathematics, at once clarifies both the problem and the solution that folks like Kent Beck and Dan North had been talking about for decades. Now we can discuss these issues with a whole lot less hand-waving.
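To make the vocabulary concrete, here's a toy sketch (my own invented example, not taken from any of those authors): a substitute function may return a *more specific* type (covariant return) and accept a *more general* type (contravariant parameter) and still be safe to drop in.

```python
class Animal:
    def noise(self):
        return "..."

class Dog(Animal):
    def noise(self):
        return "woof"

def make_animal() -> Animal:
    return Animal()

def make_dog() -> Dog:  # covariant return: every Dog is-an Animal
    return Dog()

def describe(factory):
    # This caller only assumes factory() returns *some* Animal,
    # so a Dog factory is a safe substitute.
    return factory().noise()

def greet_any_animal(a: Animal):  # contravariant parameter: accepts more than Dogs
    return "hello, " + a.noise()

def handle(handler, dog: Dog):
    # This caller only ever passes a Dog, so a handler that
    # accepts any Animal is a safe substitute.
    return handler(dog)

print(describe(make_dog))               # Dog factory where an Animal factory was expected
print(handle(greet_any_animal, Dog()))  # Animal handler where a Dog handler was expected
```

Both substitutions preserve correctness, which is exactly the property the variance terms let you name without hand-waving.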
A small community of programmers, with a disproportionately large audience, foretold that practicing test-driven development would produce great benefits; over twenty-five years, the audience has found that not to be the case.
Compare with "continuous integration" - here, the immediate returns of trying the proposed discipline were so good that pretty much everybody who tried the experiment got positive returns, and leaned into it, and now CI (and later CD) are _everywhere_.
As for what is gained, try this spelling: test driven development adds load to your interfaces at a time when you know the least about the problem you are trying to solve, which is to say the period where having your interfaces be flexible is valuable.
And thus, the technique gets criticism from both ends -- that design work that should have been done up front is deferred (making the design more difficult to change, thereby introducing costs/delays), and that the investment in testing is made before you have a clear understanding of which tests are going to be sensitive to the actual errors you introduce while creating the code (thereby both increasing the amount of "waste" in the test suite and increasing the risk of needing test rewrites).
The situation is further not improved by (a) the fact that most TDD demonstrations are small, stable problems that you can solve in about an hour with any technique at all, and (b) the designs produced in support of the TDD practice aren't clearly an improvement on "just doing it", and in some notable cases have been much, much worse.
So if it is working for you: GREAT, keep it up; no reason for you not to reap the benefits if your local conditions are such that TDD gives you the best positive return on your investment.
>As for what is gained, try this spelling: test driven development adds load to your interfaces at a time when you know the least about the problem you are trying to solve
If I'm writing a single line of production code, I should know as much as possible about the requirements problem I'm actually trying to solve with it first, no?
This actually dovetails into a benefit of writing the test first. If you flesh out a user story scenario in the form of an executable test, it can provoke new questions ("hm, actually I'd need the user ID on this new endpoint to satisfy this requirement...") and you can more quickly return to stakeholders ("can you send me a user ID in this API call?") and "fix" your "requirements bugs" before making more expensive lower-level changes to the code.
This outside-in "flipping between one layer and the layer directly beneath it" is very effective at properly refining requirements, tests and architecture.
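As a minimal sketch of that flow (handler, endpoint shape, and field names all invented for illustration): writing the scenario as an executable test first is what surfaces the missing input.

```python
def create_comment(payload):
    # Hypothetical handler grown test-first.
    if "user_id" not in payload:
        # Writing the test below is what surfaced this requirement:
        # the endpoint can't attribute a comment without a user ID.
        return {"status": 400, "error": "user_id required"}
    return {"status": 201,
            "comment": {"author": payload["user_id"], "text": payload["text"]}}

def test_comment_is_attributed_to_its_author():
    response = create_comment({"user_id": "u-42", "text": "nice post"})
    assert response["status"] == 201
    assert response["comment"]["author"] == "u-42"

def test_anonymous_comment_is_rejected():
    # The "requirements bug" found before any lower-level code was written:
    response = create_comment({"text": "nice post"})
    assert response["status"] == 400

test_comment_is_attributed_to_its_author()
test_anonymous_comment_is_rejected()
```

The first test can't even be written without deciding where the user ID comes from, which is exactly the question you take back to the stakeholders.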
>And thus, the technique gets criticism from both ends -- that design work that should have been done up front is deferred
I don't think "design work" should be done up front if you can help it. I've always felt that the very best architecture emerges as a result of aggressive refactoring done within the confines of a complete set of tests that made as few architectural assumptions as possible. Why? Because we're all bad at predicting the future, and it's better if we don't try.
"I call them 'unit tests' but they don't match the accepted definition of unit tests very well." -- Kent Beck, _Test Driven Development By Example_
The short version is that "unit test" did actually mean something (see Glenford Myers, _The Art of Software Testing_, or Boris Beizer, _Software Testing Techniques_), although it wasn't necessarily clear how those definitions applied to object-oriented programming (see Robert Binder, _Testing Object-Oriented Systems_).
The Test-First/TDD/XP community later made an effort to pivot to the language of "programmer test", but by the time that effort began it was already too late.
So I think you should continue to call your tests "tests" (or "checks", if you prefer the framing of James Bach and Michael Bolton).
As best I can tell, there's no historicity to the idea that "unit test" was a reference to the isolation of a test from its peers; it's just a ret-con.
"REST is just pure bullshit. Avoid it like a plague."
No it isn't. Evidence: I'm reading this in a web browser.
"...REST is intended for long-lived network-based applications that span multiple organizations. If you don’t see a need for the constraints, then don’t use them."
Bikeshedding the spelling of resource identifiers? Or what "verb" should be used to express specialized domain semantics? Yeah, _that_ is certainly plague bullshit.
> No it isn't. Evidence: I'm reading this in a web browser.
And you might note that this site is _not_ RESTful. It's certainly HTTP, but not REST.
> Bikeshedding the spelling of resource identifiers? Or what "verb" should be used to express specialized domain semantics?
Or whether we want to use the If-Modified-Since header or explicitly specify the condition in the JSON body. And then, six months later, some people asking for the latter because their homegrown REST client doesn't support easy header customization on a per-request basis.
Or people trying (and failing) to use multipart uploads because the generated Ruby client is not actually correct.
There is _way_ too much flexibility in REST (and HTTP in general). And REST in particular adds to this nonsense by abusing the verbs and the path.
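For illustration, here are the two "spellings" of the same condition from the header-vs-body debate above (the field name in the JSON variant is invented):

```python
import json
from datetime import datetime, timezone
from email.utils import format_datetime

last_seen = datetime(2024, 1, 1, tzinfo=timezone.utc)

# Spelling 1: the standard HTTP conditional-request header.
headers = {"If-Modified-Since": format_datetime(last_seen, usegmt=True)}

# Spelling 2: the same condition smuggled into the request body,
# for clients that can't easily set per-request headers.
body = json.dumps({"if_modified_since": last_seen.isoformat()})

print(headers["If-Modified-Since"])  # Mon, 01 Jan 2024 00:00:00 GMT
print(body)
```

Same semantics, two incompatible wire formats -- which is the flexibility complaint in a nutshell.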
How isn't it RESTful? It's a single entrypoint using content types to tell the client how to interpret it, and with exploratory clues to other content in the website.
The "R" letter means "Representational". It requires a certain style of API. E.g. instead of "/item?id=23984792834" you have "/items/comments/23984792834".
Representational is to do with being able to deal with different representations of data via a media type[0]. There is stuff about resource identification in REST, but it's just about being able to address resources directly and permanently, rather than the style of the resource identifier:
> Traditional hypertext systems [61], which typically operate in a closed or local environment, use unique node or document identifiers that change every time the information changes, relying on link servers to maintain references separately from the content [135]. Since centralized link servers are an anathema to the immense scale and multi-organizational domain requirements of the Web, REST relies instead on the author choosing a resource identifier that best fits the nature of the concept being identified.
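A toy sketch of the distinction (resource shape and field names invented): the identifier names one resource; the media type selects which representation of it the client receives.

```python
import json

# One resource, permanently identified...
ITEM = {"id": 23984792834, "text": "hello"}

def represent(resource, media_type):
    # ...with multiple representations, negotiated by media type.
    if media_type == "application/json":
        return json.dumps(resource)
    if media_type == "text/html":
        return "<p>{}</p>".format(resource["text"])
    raise LookupError("406 Not Acceptable")

print(represent(ITEM, "application/json"))
print(represent(ITEM, "text/html"))  # <p>hello</p>
```

Nothing about the *spelling* of the identifier changes here; only the representation does.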