Am I the only one having a very, very hard time parsing much on threat modeling out of this article?
Wikipedia seems to confirm my understanding as to what threat modeling is: "Threat modeling is a process by which potential threats, such as structural vulnerabilities or the absence of appropriate safeguards, can be identified, enumerated, and mitigations can be prioritized."
There's a little bit in the article about mapping your system, and maybe a look at interfaces and interaction points? I don't see anything about attack vectors, modes of detection and response, etc.
Also, why look only at digital security? Being aware of how your systems interact with the physical environment can have a strong influence on how you approach architecture, even if the answer boils down to "our infrastructure team handles that".
Threat modeling is all I get from the article. It is a vague conceptual guide. If you were hoping for code, software tools, or commands to type from your keyboard I understand your disappointment.
Security is often difficult to explain to software developers, because (contrary to what many developers believe) security is so far outside the training and experience of writing software. It's like trying to explain relational data theory to school children with crayons. It's not that the children are stupid, they are probably smarter than you and me, but data theory is not a part of their world view, and in practice they really, really don't care.
Yet, many developers falsely believe they have a solid understanding of security despite a lack of experience, training, certification, or interest in the subject. This, more than anything else, explains why opinions and articles from security researchers are frustrating to software developers and why security researchers prefer to not engage software development communities aside from some highly focused security oriented groups.
After working in security for a few years, the subject really boils down to two boring words: risk analysis, in the most abstract way possible. As developers we like to think of security in terms of 0day exploits, patches, audits, memory safety, and so forth, but those are just symptoms of, and remediations for, the underlying risk. It's like comparing pills and needles to medicine.
What percentage of digital threats could evade your digital protections by having physical access to the hardware? Almost all of them. What percentage are likely to ever do it? It seems like it would be much smaller, but maybe I'm missing something?
From my experience, risk analysis or threat modelling is done for most software by approximately... 1 of the customers I've dealt with. And they had a big bankroll, with bigger compliance/regulation/security requirements than most (including typical F&I and healthcare orgs).
n = 1 and I don't live in NA or EU, but security isn't even an afterthought.
Our org has instituted threat modeling as part of our SDL and it has been instrumental in building a culture of security focus. We found that it both allowed teams to build relationships with us while also setting aside time for them to think specifically about the security of their feature. If you’re thinking about starting this, do it! The benefits for us have been enormous.
I disagree with his "start from the technology" standpoint, since that view is what enables the hall-of-mirrors effect in security, where vulnerabilities and risks exist in a vacuum and you get drawn into the rabbit hole of compliance.
If you have developers who, by design, have no idea about the value or meaning of the data they are processing, or the business they are supporting, then empty compliance rituals are fine, GIGO applies. But, if you can use threat modelling to bring your developers into the fold of why they are developing a given feature and what makes it valuable, it makes better developers while ensuring they build security features into their work. The direction of this is business side inward, not technology outward as the author seems to recommend.
The threat model is the counter-case of your business model, that is, when a factor fails. Having developers absorb the latest in vulnerability research and threat intelligence is not useful when you can just tell them, "I need you to make sure that a competitor's sales engineer can't pull this apart and humiliate us in a bake off," or, "given we're in payments, we've got both compliance hurdles and significant criminal threats."
There is more value in educating product owners and product managers about how to identify and mitigate the counter-cases for their products' business models than in externalizing risk by foisting it on developers who are neither equipped to handle it nor cognizant of it. In a product I developed, I defined threat scenarios as feature-level Epics that developers then used their skills to build for, because people tend to know their own business needs, and how they fail, better than a corporate security group ever will.
Startups typically don't need security outside of the bare minimum of compliance rituals because growth really does solve everything, see Zoom as an excellent example of this. I'm even having trouble recalling a startup that was killed by a security breach.
Security is a product-level problem. Start with the thing people want, ask what facilitates them getting it, and then ask what happens when parts of that fail. IMO, this will solve 95% of security issues out of the gate.
I wholeheartedly agree. I was very surprised to see the technology-first perspective. This is a path to band-aids and incongruities.
When I read the worked example, I see something quite a bit different. The worked example seems to suggest integrating basic threat modeling practices into product/feature development user stories/use cases. This is a bit better, and I'd suggest a bit more aligned to your "value in educating product owners" comment.
For me, it's quite straightforward ... what is the revenue generating aspect of your product/service? Let's assume mal intent, ignorance, user error, etc. are all likely, but not equally so, and come up with ways the service can be broken.
Then take a top-down look at the options to protect against those (often a combination of people, process, and technology), and weigh up the additional complexity that introducing them adds.
You basically end up with a threat/risk register, with mitigations, etc.
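As a minimal sketch of what such a register might look like in code (the threats, scores, and mitigations below are all hypothetical examples, and the likelihood-times-impact scoring is just one common convention):

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One row of a threat/risk register (all entries hypothetical)."""
    threat: str
    likelihood: int                  # 1 (rare) .. 5 (frequent), a rough guess
    impact: int                      # 1 (minor) .. 5 (business-ending)
    mitigations: list = field(default_factory=list)

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring; real registers often use
        # a matrix or monetary estimates instead.
        return self.likelihood * self.impact

register = [
    RiskEntry("SQL injection on checkout form", 3, 5,
              ["parameterized queries", "WAF rules"]),
    RiskEntry("Careless employee leaks report", 4, 3,
              ["least-privilege access", "DLP tooling"]),
    RiskEntry("DDoS on public API", 2, 3,
              ["rate limiting", "CDN"]),
]

# Weigh up: work the highest-scoring items first.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"{entry.score:>2}  {entry.threat} -> {entry.mitigations}")
```

The point isn't the tooling (a spreadsheet works fine); it's that the register forces you to write down likelihood, impact, and mitigations side by side so the prioritization is explicit.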
My feeling after working in computer security on the risk modelling side is that we put too much focus on threat actors. I don't have anything resembling a precise definition of the problem, just a rough impression. But IME, you start writing these threat scenarios in the form [Actor] does [Action] to [Asset] and then [Consequences] happen, and you start noticing that:
- The sets of [Actor] and [Action] are fuzzy and nearly unbounded, and you have to shoehorn them into weird taxonomies to have something you can work with. And then you're still left with lots of overlapping scenarios; it's hard to make them independent in the statistical sense, so that you can treat them as separate contributions to the risk calculation.
- Everything pivots around [Assets]. Not around [Consequences], because these are affected by a particular [Actor] and [Action]. I.e. it's a different kind of PR disaster if your DB leaked because of a careless employee, vs. NSA tapping your internal data center links. But [Assets] are ultimately what attackers are after, and what costs you money if something happens to them.
So you end up with a huge list of scenarios like "Russian hacktivists zero-day Windows machines and steal customer PII, leading to EU dropping a hammer", or "Norwegian state-sponsored script kiddies SQL-inject company webpage and steal customer PII of North Americans, leading to bad press", etc., and you can see how the only thing you can reasonably enumerate here is what was damaged (stolen PII). It's also the assets that motivate most attacks and that are the primary source of consequences (from outages to fines and bad press).
This leads me to the conclusion that it may be more productive to focus on what assets you have, what can go wrong with any of them, and how much it would cost (without taking into account who the attacker was). You could go from there and expand into likely threat actors, but I'm not really convinced that's productive. I haven't tried to verify it, but it might be more useful to just stick to assets and baseline losses, fortify any kind of access anyone has to those assets, and mostly ignore the various kinds of threat actors that might be involved. Instead of trying to distinguish between Russian hacktivists and Norwegian state-sponsored script kiddies, just lump it all into "attacker".
(I've seen scenario modelling going recursive on threat actors; you can distinguish between external and internal attackers, but then you can also distinguish between self-motivated and coerced internal attackers, etc. ad infinitum. You'll likely never have the data to make anything resembling an informed estimate on probabilities here, and the deeper your taxonomy, the more the errors in guesstimates start to add up. Might as well just treat all of this as one aggregate "attacker".)
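The asset-first idea above can be sketched as a toy expected-loss calculation, collapsing all actors into a single aggregate "attacker" (the assets, probabilities, and costs below are invented purely for illustration):

```python
# Toy asset-centric loss estimate: ignore who the attacker is and just
# estimate, per asset, the annual chance of *any* compromise and its cost.
# All assets, probabilities, and costs here are made up for illustration.
assets = {
    "customer_pii_db": {"p_compromise": 0.05, "cost": 2_000_000},
    "public_website":  {"p_compromise": 0.20, "cost":   100_000},
    "internal_wiki":   {"p_compromise": 0.10, "cost":    20_000},
}

def annual_expected_loss(assets: dict) -> float:
    # Expected loss = sum over assets of P(compromise) * cost.
    return sum(a["p_compromise"] * a["cost"] for a in assets.values())

# Rank assets by expected loss to decide where to fortify access first.
ranked = sorted(assets.items(),
                key=lambda kv: kv[1]["p_compromise"] * kv[1]["cost"],
                reverse=True)

print(f"total expected annual loss: ${annual_expected_loss(assets):,.0f}")
for name, a in ranked:
    print(f"{name}: ${a['p_compromise'] * a['cost']:,.0f}")
```

Note how the [Actor] taxonomy never appears: the per-asset compromise probability absorbs every kind of attacker at once, which is exactly the simplification being argued for.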
Meh, I think the title of the article is misplaced.
Developers don't need to care about threat models. Developers need to care about the attack surface their applications expose.
This article feels a lot more like it's directed at engineering managers than it does at individual developers.
It sort of pretends that there aren't best practices and that we all have to just invent shit from scratch every time we do anything. We don't have to do that.
If we all just considered OWASP top 10 and mitigated those, almost everything Fowler is suggesting here would be irrelevant.
Most people are going to get threat modeling wrong. Just follow the fucking rules. They aren't that hard. Just inconvenient if you are in a hurry.
>Developers don't need to care about threat models.
I disagree. A lot of time is spent worrying[1] about theoretical attacks that require non-invasive physical access. In reality, it's very unlikely you require, or are able to implement, a threat model where non-invasive physical access by bad actors is protected against. That requires hardware that doesn't expose what it's doing through power consumption, heat output, timing, unencrypted data, etc.
That's not even the top level. There's a need for developers to follow best practice about cryptography, etc., not for them to follow best practice to prevent attacks that involve photographing bare silicon.
[1] Particularly on online forums. A poster has heard that an attack is possible, therefore they accuse anyone who doesn't have a mitigation against it of being incompetent.
I really do not like Martin Fowler's educational content. He is a great guy, but people who read and understand his content claim to be smart, and I think he forgot to teach them that "skills can become obsolete, so be careful when you brag about what you learn".
Years back I went for an interview at the company he works for. The interview panel proved my image of Martin Fowler wrong.
It happened years back, and it's a good failure. I judge a person by the questions they ask. I was interviewed by two experienced people, and judging by the questions they asked, it wasn't difficult to conclude that they wanted to sound smart. They made sure that experience is something I was going to take away, with real good motivation in my head. They also muted me and passed comments to each other; ha ha, one of them smirked. Gotcha.
Wrote their names down, looked them up on LinkedIn: bullseye, target set right. A guy experienced in UX and a guy sparked with booming JavaScript knowledge. I was at a point in my life where I had learnt and practiced my skills, perfect for my next goals, and the office was in the perfect place I wanted to work, near my home. Two professionals taught me a good lesson.
The root cause of why web development, and mostly front end, is messed up is the UX. That's how I changed the story around. I started working on a tool to make that skill obsolete.
Have a look: https://github.com/imvetri/ui-editor. It's slow progress, but I'm consistent. Know this for sure: I'm not looking for stars or feedback. I am chasing something; I lose track and somehow pick myself up. I didn't want to get into mass marketing until I have a good motivation. Gonna write this up, let it stir things up; perhaps I'll get locked into this hobby while not pursuing the rest of my interests in the world. It's okay; if you see a snake, kill it by its head.
Something just added up right on this perfect day.
Solve world problems, support each other, don't let greed get in the way; intellect is a sin for humanity. Be good, do good, think good.
Whom am I addressing this to? Nobody. Just a letter I wanted to nail somewhere.
I don't want to be negative but having looked at your project it seems to be about putting visual stuff on the screen. UI design is not GUI design - this looks good but UIs are about supporting workflow, the visual stuff is only there as sugar.