Security notes / Glossary

<!--
Red Team is trying to imitate adversaries, blue team is defending.


They're originally military terms, for simulated conflicts,
basically adopted to test the robustness of strategies in the real world.


The idea was adopted by the security field, where it has a much more varied meaning.


'''In generic descriptions''',
https://www.abuseipdb.com/statistics

'''In pen testing''',
"Red teaming" is usually meant as 'think as an attacker, not as a defender',


<!--
Forward secrecy is a specific mitigation that can be designed into exchanges.


Ask yourself what happens when someone discovers a shared secret - how screwed are things now?
(And long-term secrets are ''practical'', for humans to remember, but also technically for stable identities.)


For example, if an exchange uses a private key or password directly,
then recording it and attacking it offline may eventually find that key or password.


Now consider that in most uses of encryption, parties only care about the ability to understand each other during the short-term exchange.
If you could, say, agree on a unique key for each session (and somehow do that secretly, probably using your shared secret), and forget it immediately afterwards, this gives you some extra protection.


If there is something like [[key derivation]] involved, then
* even if you find that private key/password, it doesn't necessarily break sessions based on it
* breaking a session doesn't necessarily reveal that private key/password


Designs ''can'' aim for both of those, because that's a good idea if you can,
but forward secrecy is about the first of those.





If you're thinking '''the names are not very intuitive''', you are right,
because both really talk about both directions in time.
: Also, backwards secrecy is sometimes called "future secrecy" which... yeah.
: Also, there is a difference between forward secrecy and ''perfect forward secrecy''


Forward secrecy also has implications for how interesting it is to keep recordings for later.
Say someone has recorded a secured exchange (e.g. an SSL/TLS session).
In the moment, this is noise.
But if your encrypted transfers always use the same long-term secret key,
then if someone finds that key twenty years later, ''all'' past exchanges are now immediately readable, and any future ones too.
If you know that, fundamentally, the only thing an exchange can reveal is the contents of that exchange,
and it doesn't help towards any other exchanges, before or after, it becomes almost pointless to store it.
This also implies a certain peace of mind -- if each message is its own island to break,
then it is too expensive for someone to break all messages,
and they probably won't store them, or won't get much value out of doing so.
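
As a sketch of why ephemeral per-session keys give forward secrecy, here is a toy Diffie-Hellman exchange in Python. The prime, generator, and function names are made up for illustration only; real systems use standardized groups and vetted libraries, and sign the ephemeral values with a long-term key.

```python
import hashlib
import secrets

# Toy ephemeral Diffie-Hellman -- ILLUSTRATION ONLY, not secure parameters.
P = 2**127 - 1   # a Mersenne prime; real deployments use standardized groups
G = 5

def new_session():
    # Each session draws a fresh ephemeral private value,
    # to be forgotten as soon as the session ends.
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

def session_key(own_priv, their_pub):
    shared = pow(their_pub, own_priv, P)
    return hashlib.sha256(shared.to_bytes(16, "big")).hexdigest()

a_priv, a_pub = new_session()
b_priv, b_pub = new_session()

# Both ends derive the same per-session key from the exchanged public values.
key_ab = session_key(a_priv, b_pub)
assert key_ab == session_key(b_priv, a_pub)

# A second session gets an unrelated key; once a_priv/b_priv are discarded,
# a later compromise of a long-term (signing) key reveals nothing about key_ab.
```

The point of the sketch: the long-term secret would only authenticate the exchanged public values, so finding it later does not help recover any individual session key.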






===Man in the middle===
<!--
The idea that if person A thinks they are talking to C,
it may instead be that both are actually talking to an intermediate, without realizing it.

When you add things like asymmetric encryption,
this becomes harder to do,
in that it must now happen at a specific point in time: when the key is exchanged.
This is also why we are told to verify keys via another channel.
-->
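
A toy sketch (again with made-up, insecure parameters) of why ''unauthenticated'' key exchange is vulnerable: Mallory substitutes her own public value in each direction, so she shares a key with each side while A and C believe they share one with each other.

```python
import hashlib
import secrets

# ILLUSTRATION ONLY -- toy parameters, not a secure key exchange.
P, G = 2**127 - 1, 5

def keypair():
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

def derive(priv, their_pub):
    return hashlib.sha256(pow(their_pub, priv, P).to_bytes(16, "big")).digest()

a_priv, a_pub = keypair()   # A
c_priv, c_pub = keypair()   # C
m_priv, m_pub = keypair()   # Mallory, sitting in the middle

# Mallory replaces each side's public value with her own:
key_a = derive(a_priv, m_pub)   # A thinks m_pub came from C
key_c = derive(c_priv, m_pub)   # C thinks m_pub came from A

# Mallory can compute both keys, so she can decrypt, read,
# re-encrypt, and relay every message without either side noticing.
assert key_a == derive(m_priv, a_pub)
assert key_c == derive(m_priv, c_pub)
```

Comparing key fingerprints over another channel would reveal that A and C do not actually hold the same key.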


===Two generals problem===
'''The two generals problem''' helps illustrate what you can and can't do over unreliable channels - which also has implications for security.


Basically, two bits of army need to communicate, but their messenger may be intercepted.

Assume that we can send precise messages.
Assume that the messages cannot be imitated (we have something that serves as authentication).
Whether they can be made unreadable (effective encryption) is relatively secondary.


Is it possible for those two generals to coordinate usefully, even if the messenger might get intercepted?

Even without reading the contents (if you ''do'' have encryption), just making a message ''not arrive'' can have serious implications.


This turns out to be unsolvable, and you can even prove that - though it takes more than a little work to do so.

Intuitively, the issue is that the possibility that the last message was intercepted leads to various issues,
''some'' of which you can absolutely solve, but others that you cannot.
In particular, it means you may always be one step behind agreement or acknowledgment, even if most of your messages are acknowledgments.


You can force the issue ''in abstraction'' by baking in a "I agree to never change my mind",
but then it's no useful coordination anymore - particularly in the example's practice, where the intercepting army may find out about the actual message,
but even without that being an issue (e.g. using encryption).
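
A tiny model of the "one step behind" intuition: however many confirmation messages a fixed schedule adds, dropping the final one always leaves the sender of that message knowing strictly more than the receiver. The message numbering and commit rule below are invented purely for illustration.

```python
# Messages alternate: A sends message 1, B sends message 2, and so on.
# In this toy commit rule, a general commits iff they saw the final message
# of the agreed schedule (sending a message counts as seeing it).

def outcome(n_messages, lost):
    """(A_commits, B_commits) when message number `lost` never arrives."""
    final_sender_is_a = (n_messages % 2 == 1)
    if lost == n_messages:
        # The sender of the final message saw it; the receiver never did.
        return (final_sender_is_a, not final_sender_is_a)
    # An earlier loss just cuts the schedule short even sooner (simplified).
    return outcome(lost, lost)

# However long you make the schedule, losing the last message splits them:
for n in range(1, 20):
    a_commits, b_commits = outcome(n, lost=n)
    assert a_commits != b_commits
```

Adding another acknowledgment just moves the problem to the new last message, which is the shape of the impossibility argument.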




https://en.wikipedia.org/wiki/Two_Generals%27_Problem
-->
===two time pad attack===
If you know two different things have been encrypted with the same secret,
then putting those outputs together may reveal things about the key and the messages,
which can defeat the entire point of a [[one time pad]].
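
A minimal demonstration in Python: XORing two ciphertexts that reused the same pad cancels the key entirely, leaving the XOR of the two plaintexts (the messages here are chosen arbitrarily for illustration).

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = os.urandom(16)                      # the "one time" pad
c1 = xor(b"attack at dawn!!", key)
c2 = xor(b"retreat at once!", key)        # ...reused: a two-time pad

# Individually, c1 and c2 are information-theoretically secure.
# Together, the key cancels out:
assert xor(c1, c2) == xor(b"attack at dawn!!", b"retreat at once!")
# xor(p1, p2) is far from random: guessing likely words in one message
# ("crib dragging") directly reveals the corresponding part of the other.
```
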
<!--
One time pads are interesting because, under certain conditions, a secret ''used just once''
can make it matter much less how complex your cypher is.
Say, you can use XOR with a large one-time key, and an attacker will have a ''terrible'' time trying to figure out the original.

However, if you have ''two'' messages you know are encoded with the same secret (you could call this a two-time pad),
then XOR is suddenly a terrible idea (and more complex cyphers do not always help).

The image example in the first link below is a ''great'' intuitive example:
https://crypto.stackexchange.com/questions/59/taking-advantage-of-one-time-pad-key-reuse
https://crypto.stackexchange.com/questions/2249/how-does-one-attack-a-two-time-pad-i-e-one-time-pad-with-key-reuse


-->


===Performative security, LARP security, pretend security, cargo cult security, security theater===
<!--
Different names that point out we often do things that look and sound a lot like security,
but when we don't understand what we are doing, we are probably less safe than we think.






More importantly, even if an email was delivered perfectly securely, it's one forward or reply away from becoming plaintext.
This is also about explaining things properly.
If IT explains to your CEO "two-factor means that when that prompt pops up, you must click 'accept'",
don't be surprised that they now always click it.
It's literally what we told them to do, and it fundamentally defeats the point of two-factor.
What you should have explained is ''why'': "the idea is making sure it's you - so you click accept only when you initiated it".





Revision as of 16:37, 26 March 2024



Attacks

Access control

Least privilege (principle)

The principle of least privilege means each actor in a system should be able to access no more than it needs.


For example, you can say
* your web server should be denied to read all of the filesystem, except its own documents
* your backup program should
** be able to read most everything, but...
** not be able to run anything (beyond its own components)
** not be able to write anything except to the backup disk and its own logs


This is part of why functional accounts are often created for each such part: It's easier to handle this in broad terms even in simpler DAC setups, with just filesystem permissions.
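The example rules above can be sketched as a default-deny allow-list. The actor names and paths here are hypothetical, and a real enforcement point would sit in the OS (permissions, MAC), not in application code:

```python
# Default-deny policy: (actor, action) -> path prefixes that are allowed.
# Anything not listed is denied -- including backup trying to execute things.
POLICY = {
    ("webserver", "read"):  ["/var/www/"],
    ("backup",    "read"):  ["/"],
    ("backup",    "write"): ["/mnt/backup/", "/var/log/backup/"],
}

def allowed(actor: str, action: str, path: str) -> bool:
    prefixes = POLICY.get((actor, action), [])
    return any(path.startswith(p) for p in prefixes)

assert allowed("webserver", "read", "/var/www/index.html")
assert not allowed("webserver", "read", "/etc/shadow")      # least privilege
assert allowed("backup", "read", "/etc/shadow")             # backup may read...
assert not allowed("backup", "write", "/etc/shadow")        # ...but not write
assert not allowed("backup", "execute", "/bin/sh")          # and not execute
```
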

When you want to crack down on this more thoroughly and more visibly, look at things like SELinux (because it adds MAC).


Note that the isolation in VMs and OS containers is implicitly least-privilege as well: no connection to the outside unless permitted.


See also:



Discretionary Access Control (model)

Discretionary access control (DAC) means access to an object is at the discretion of the object's owner.


...more precisely, based on the identity of subjects.

Usually that's owner-based for practical reasons, though e.g. capability systems often allow transfer to other parties.


DAC is mainly contrasted with MAC

Examples:

permissions in most filesystems

Mandatory Access Control (model)

Mandatory access control (MAC) means that the system decides whether to allow access between subjects and objects, instead of the object owners (though some systems apply MAC on top of DAC).


Often this means a model applied on top: labeling all objects with particular categories, and having rules based on those categories.

Such (often broad) labeling is done for practical reasons: it tends to make the rules simpler, which makes it clearer that they express what you intended.


MAC usually means design up front. And each large change should probably re-evaluate all implications.


While MAC is good at partitioning off parts of a system in broad terms (e.g. "web server may only read under /var/www regardless of permissions"), it is much less flexible at anything you can't describe fully at the design stage (e.g. people wanting to share files somewhat securely).

This is one reason MAC often assists DAC, using MAC to restrict the broad problems, and DAC for the more everyday things. (While you can sort of implement something much like DAC using MAC, this is often so messy that it may be harder to verify as being correct, and harder to administer in an everyday way)
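A toy model of that layering, with invented labels and objects: the MAC label check runs first and cannot be overridden by the owner's (DAC) choices, while DAC still handles the everyday owner/world decisions.

```python
# MAC: which labels each domain may read (set by the admin, not by owners).
MAC_RULES = {"webserver": {"web_content"}}

def mac_allows(domain: str, label: str) -> bool:
    return label in MAC_RULES.get(domain, set())

def dac_allows(subject: str, obj: dict) -> bool:
    # Ordinary discretionary check: owner, or world-readable.
    return obj["owner"] == subject or obj["world_readable"]

def can_read(subject: str, domain: str, obj: dict) -> bool:
    # Both layers must agree; the owner cannot loosen the MAC layer.
    return mac_allows(domain, obj["label"]) and dac_allows(subject, obj)

site   = {"owner": "root", "label": "web_content", "world_readable": True}
secret = {"owner": "www",  "label": "private",     "world_readable": True}

assert can_read("www", "webserver", site)
# World-readable under DAC -- but the MAC label still denies the web server:
assert not can_read("www", "webserver", secret)
```
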


Role-Based Access Control

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

Role-based access control (RBAC) is focused on roles and role hierarchy, subjects and sessions.


It's a moderately detailed abstraction, not yet a system in itself, but may be useful to architect into larger systems where you want to keep overview of what is allowed to happen and what is happening, particularly in larger organisations.


This makes it a potential piece of DAC, MAC and others - probably replacing a simpler implementation of one of its parts (e.g. the role/account stuff).
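A minimal RBAC sketch in Python (role and permission names are made up): permissions attach to roles, and users only get permissions through role membership, which is what keeps the overview manageable in larger organisations.

```python
# Permissions attach to roles, never directly to users.
ROLE_PERMS = {
    "auditor":  {"read_logs"},
    "operator": {"read_logs", "restart_service"},
}
USER_ROLES = {"alice": {"operator"}, "bob": {"auditor"}}

def permitted(user: str, permission: str) -> bool:
    return any(permission in ROLE_PERMS.get(role, set())
               for role in USER_ROLES.get(user, set()))

assert permitted("alice", "restart_service")
assert permitted("bob", "read_logs")
assert not permitted("bob", "restart_service")
```

Revoking a user's access is then a single role-membership change, rather than hunting down scattered per-object grants.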


https://en.wikipedia.org/wiki/Role-based_access_control

Attribute-based access control

Related notes

If you squint, there is sometimes very little difference between a MAC category and a DAC group (whatever the preferred terms are).

That is, very similar in use - the remaining difference may lie largely in who may change the permissions: the admin, or the relevant user.


This is also roughly why there is value in combining MAC and DAC.

Hacking terminology

Passive recon

Attack vector

Attack surface, Attack factor

Attack tree

Red Team

Network stuff

DMZ network

DMZ host

Bastion host

Unsorted

Forward and backward secrecy

Man in the middle

Two generals problem

two time pad attack



The bitter ex test

Phishing

Worm, virus, trojan, etc.

Performative security, LARP security, pretend security, cargo cult security, security theater

End to end encryption

Zero knowledge