First, I “locked” my phone number, disabling transfers (although I suspect this is vulnerable to social engineering attacks).
I have also frozen my credit with the three credit bureaus (the attacker also opened a new line of credit in my name).
I am also closing the bank account that was compromised. They aren’t giving me any info, but I suspect the attacker got my debit card via social engineering. It was a new account, and I hadn’t even received my debit card yet.
I have a subscription to a credit monitoring service as well which has proven its worth in this situation.
Otherwise honestly I am not sure what to do. It sucks to know this person has my name, social, phone, and other info. I basically plan to keep my credit frozen indefinitely. I am also disabling text based 2FA for me and my wife wherever possible.
You need to give your SSN to so many people over your lifetime, and you’re essentially trusting that every one of them will be trustworthy and secure with it.
This could be easily solved with public key cryptography, but it would also confuse so many people it would be hard to implement.
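To make the idea concrete, here is a toy sketch of how a challenge-response identity proof could replace handing out a reusable secret like an SSN. This uses textbook RSA with tiny primes purely for intuition; the key sizes, the registry, and the whole scheme are hypothetical, nothing here is production cryptography.

```python
# Toy challenge-response identity proof with public-key crypto.
# Textbook RSA with tiny primes -- for intuition only.
import secrets

# A hypothetical "SSN registry" would store only the PUBLIC key (e, n).
p, q = 61, 53
n = p * q                      # modulus (public)
e = 17                         # public exponent (public)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent, kept by the citizen

def sign(msg: int) -> int:
    # Only the private-key holder can produce this.
    return pow(msg, d, n)

def verify(msg: int, sig: int) -> bool:
    # Anyone with the public key can check it.
    return pow(sig, e, n) == msg

# An employer verifies identity by sending a fresh random challenge.
# They never learn a secret they could reuse or leak.
challenge = secrets.randbelow(n)
assert verify(challenge, sign(challenge))
```

The point of the design: the verifier only ever sees the public key and a one-time signature, so a breached employer database leaks nothing an attacker can replay.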
If there’s an upside to the crypto craze, maybe it’s teaching people about cryptography basics.
If this were being used for SSNs, you'd have a central authority to restore access. If you lose your passport, they can issue you another one and mark the old one as lost/stolen. You can do the same thing for key pairs.
The main problem it solves: you could prove your identity to a sketchy client on your I-9 without handing them an SSN they can use themselves, or leak to scam groups who spin up a credit card in your name.
The main problem with the “key escrow” scenario is that the government can access your private key, so this solution is still not meaningfully secure. Which do you think is more likely: that this new institution will be magically invulnerable, or that you will have just created an irresistibly valuable target for social engineers and hackers, one that will inevitably fall?
Good news: a good chunk of the world already uses cryptography for identification. My eID card is exactly that; I can authenticate with a chip and PIN. This is normal in much of the EU.
First, I don’t want my identity to be linked to many of those accounts, so I’ll take a YubiKey. Second, I’m glad I don’t live in a country where I’m required to carry ID.
You don’t have to, but in situations where you are unable or unwilling to show your ID and a peace officer wants to check your identity, they’re entitled to take you to the precinct.
I’ve “locked” my number, and it now requires a transfer PIN. I hope Verizon’s systems won’t allow a transfer without that PIN even with a malicious employee involved, but I wouldn’t be surprised if they can override it.
Apparently my attacker had a fake ID with my name and their photo. It’s possible a store employee could override the transfer lock if they are sufficiently convinced it’s really me.
I've heard many cases of transfer locks being broken. From what I understand, it is even possible to simjack at a higher level than the individual telco.
Thus, I don't even bother with stuff like this; the only solution in my eyes is to not rely on SMS 2FA, and if you absolutely have to, at least use a Google Voice (GV) number. While GV isn't totally secure either, at least it is slightly disconnected from my cell number and doesn't have humans backing it (we all know that Google never answers the support phone).
An actual inline style would suffice in this case. Before v2.0, if you wanted anything other than `top: 0`, it was recommended you set it in the style attribute: `style="top: 20px;"`
Imagine you have a couple of algorithms that scramble a solved Rubik's cube into a configuration that takes at least 20 twists to unscramble [0]. From there, any attempt to make it ‘even more scrambled’ would be pointless — and actually likely make solving the resulting puzzle easier.
Now imagine there's a programmer who wants to make the ultimate cube scrambler despite not knowing any of the above. Their brilliant idea is to take the aforementioned algorithms and chain them together. (Result: snafu.)
In essence, the moral of the story is that one shouldn't try stacking encryption algorithms without first acquiring a pretty good understanding of how they all work.
I think it depends. Imagine a future where quantum computers may be within reach of intelligence agencies, but a quantum-resistant public-key encryption algorithm has been proposed and not yet rigorously vetted. You wouldn't want to trust either algorithm alone, so you use both: encrypt the data with the quantum-resistant algorithm first, then with the classical one. Decrypting would require breaking both; there are no shortcuts.
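A sketch of that cascade idea: encrypt under two independent keys, so an attacker has to peel both layers. The "ciphers" below are simple SHA-256 keystream XORs standing in for a hypothetical post-quantum algorithm and a classical one; they illustrate the layering, not either real algorithm.

```python
# Cascading two ciphers with INDEPENDENT keys. keystream_xor is a
# placeholder cipher (hash-derived keystream XOR), not real crypto.
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Derive a keystream from the key + a counter, XOR it into the data.
    # XOR is its own inverse, so the same call encrypts and decrypts.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

k_pq = secrets.token_bytes(32)         # hypothetical post-quantum key
k_classical = secrets.token_bytes(32)  # independent classical key
plaintext = b"launch codes"

# Quantum-resistant layer first, classical layer second:
ct = keystream_xor(k_classical, keystream_xor(k_pq, plaintext))

# Peeling only the outer layer still leaves ciphertext:
inner = keystream_xor(k_classical, ct)
assert inner != plaintext
assert keystream_xor(k_pq, inner) == plaintext  # both keys recover it
```

The crucial assumption is that the two keys are generated independently; if one key were derived from the other, breaking one layer could expose both.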
That’s not how it works unless you’re sharing a key between them somehow and one of them reveals the key. Otherwise an attacker could take something encrypted with a good algorithm and encrypt the ciphertext with a bad algorithm themselves to make it easier to crack.
I gave an intuition for how it can happen that combining algorithms (in a bad way) results in weaker encryption — without claiming that it must always happen.
If we move the goalposts to where the combined algorithm receives a much larger key than any of the individual parts we're comparing to in terms of crackability, then the likely failure mode isn't ‘weaker’ any more, but ‘stronger, though maybe not as much stronger as was intended’.
The history of triple DES provides a nice practical example: ‘double DES’ isn't a thing because encrypting already-DES-encrypted data with DES again, with a completely separate key (thus effectively doubling the size of the key), does almost nothing to improve security — a meet-in-the-middle attack recovers both keys for roughly twice the cost of breaking single DES.
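The meet-in-the-middle attack is easy to demonstrate on a toy cipher with 8-bit keys (a made-up invertible permutation standing in for DES). Double encryption looks like a 16-bit key, i.e. 65,536 guesses, but the attack needs only about 2 × 256 cipher operations plus a lookup table:

```python
# Toy stand-in for DES: any invertible keyed byte permutation works here.
def toy_encrypt(key: int, block: int) -> int:
    return ((block ^ key) * 167 + key) % 256   # 167 is odd, so invertible mod 256

def toy_decrypt(key: int, block: int) -> int:
    return (((block - key) * pow(167, -1, 256)) % 256) ^ key

k1, k2 = 0x3A, 0xC5                     # the "secret" double-encryption keys
plain = 0x42
cipher = toy_encrypt(k2, toy_encrypt(k1, plain))

# Meet in the middle: tabulate every forward half-encryption of the known
# plaintext, then decrypt the ciphertext backward under every outer key
# and intersect on the middle value. ~512 operations, not 65,536.
forward = {}
for k in range(256):
    forward.setdefault(toy_encrypt(k, plain), []).append(k)

candidates = [(kf, kb) for kb in range(256)
              for kf in forward.get(toy_decrypt(kb, cipher), [])]
assert (k1, k2) in candidates           # true key pair is among the survivors
```

A second known plaintext/ciphertext pair would filter the surviving candidates down to the real key pair; the same idea scaled to 56-bit keys is why double DES gives only ~57 bits of effective security instead of 112.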
To support your point: I've used these weaknesses to break crypto algorithms in the past.
A typical example is the Crypto-1 cipher used to encrypt Mifare Classic NFC cards. The way it reads from the shift register and combines the bits was ill-conceived, and its added complexity actually weakened the algorithm.
Another I've seen is XORing two sequential keys against one another to produce an "encryption" key. It turns out that reading from low-entropy systems very quickly yields keys similar enough that XORing them partially cancels the first one.
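That failure mode is easy to show concretely. This is a hypothetical reconstruction, not the actual system described above: two keys drawn sequentially from a low-entropy source agree in most byte positions, so their XOR is mostly zero bytes, and the resulting "key" barely masks the plaintext.

```python
# Hypothetical reconstruction: sequential reads from a low-entropy source
# produce nearly identical keys, so key1 XOR key2 is mostly zeros.
key1 = bytes([0x41] * 14 + [0x07, 0x13])   # first read
key2 = bytes([0x41] * 14 + [0x2A, 0x99])   # next read differs in 2 bytes

combined = bytes(a ^ b for a, b in zip(key1, key2))  # the "encryption" key

plaintext = b"attack at dawn!!"
ciphertext = bytes(p ^ k for p, k in zip(plaintext, combined))

# Wherever the keys agreed, the combined key byte is 0x00 and the
# plaintext passes through the "encryption" untouched:
leaked = sum(c == p for c, p in zip(ciphertext, plaintext))
assert leaked == 14   # 14 of 16 plaintext bytes are exposed verbatim
```

The underlying issue is that XOR only adds security when its inputs are independent; correlated inputs cancel.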
Can you give an example (or at least a sketch) of how, e.g., AES-128 on top of, or below, DES is weaker than either?
Claims of "weakened by combining" are often aired, but all the examples I've found so far basically boil down to "remaining within a group structure" (as in your Rubik's cube example, whose God's number is 20). Staying within a group might not be stronger than each individual algorithm, but it is unlikely to be weaker either — and algorithm combinations usually DON'T remain within a group structure (e.g. DES is not a group, so 3DES is strictly stronger than DES, even if it is not three times as strong in bits).
If I’m following this logic correctly - running a few more algorithms on something before trying to decrypt will make it easier rather than harder to decrypt?
Is the ceiling for “max encryption” that low, or is it just that one algorithm combined with another hits a local maximum?
I just cherry-picked a simple example to make a clear illustration, but …
> running a few more algorithms on something before trying to decrypt will make it easier rather than harder to decrypt?
No, it could make it easier, harder, or about the same. The ‘harder’ case is just unlikely when the algorithms one started with were already state-of-the-art and the programmer didn't know what they were doing. It might seem tempting to think that a cryptanalyst now has to do twice the work, but what they're really doing isn't cracking multiple encryptions — they're just attacking a different encryption.
Very basic example: ROT13 is a form of encryption. Applying ROT13 twice gives you plaintext.
It's of course not that trivial with better encryption algorithms. But before stacking encryption algorithms, try to first answer what you are trying to achieve (that application of a single algorithm does not).
There are ways it can backfire, but stacking encryption is usually more secure, despite what people on HN tell you. The NSA stacks two layers of encryption for the secure version of the mobile phone it gives to high-level diplomats and the POTUS, and there are cases like Cloudbleed where the only sites that remained fully secure turned out to be the ones using client-side encryption in addition to HTTPS. I'm not saying that this would necessarily be more secure, just that it tends to be, based on all of the research I've done, personal experience on projects, and conversations with people who actually break encryption for a living. The devil is in the details, though, and it also depends on the nature of your adversary.
I, for one, do not mind developers adhering to common patterns and best practices. Whenever I open a codebase foreign to me, I bank on those patterns to understand the app well enough to contribute, learn, etc. Seeing new patterns would be educational, of course, but there's a good chance I would bail out if I didn't have a strong need to learn that codebase.