Security notes / Unsorted

{{stub}}


==TPM==
{{stub}}
===What is it?===
It's a hardware module that assists with a few security needs.
Part of its job is just to be ''separate'', because that allows it to do some things with more secrecy than is easy to guarantee otherwise -- if used well, that is. And with footnotes (separation is also potential [[attack surface]]).
For some other needs it's more of a coprocessor thing, which doesn't always make much of a difference.
Physically, TPM started as a clearly separate extra chip/module that might be built into laptops (fairly common in business laptops), and for PC motherboards started as a plug-in module that was originally very optional -- and now, years later, Microsoft is pushing the concept very hard, in that Windows 11 refuses to run if a TPM is not present ''or'' is an older version[https://support.microsoft.com/en-us/windows/enable-tpm-2-0-on-your-pc-1fd5a332-360d-4f46-a1e7-ae6b0c90645c].
'''Integrated?'''
These days, you also find TPM integrated into CPUs (Intel calls it PTT, AMD calls it fTPM).
This is functionally similar to a separate TPM - it has its own storage, it can't be altered, and it can only be talked to via the same specific protocol.
Upsides:
* saves having to deal with one more component
* and helps against some physical attacks
Arguables:
* some attacks might become possible ''because'' it is integrated.
:: Yes, a discrete TPM is isolated behind a communication channel - that was part of the point - and TPM (1.x) communication is more easily sniffed when it runs over an exposed trace, because that traffic is ''itself'' unencrypted
:: still, I wouldn't be surprised if there are side-channel attacks that come from sharing the same silicon.
Downsides:
* it is now harder to upgrade the CPU (rarely a thing in laptops, but certainly a thing in desktops)
<!--
It can be confusing that motherboards may allow ''either'' a physical module ''or'' such an integrated TPM.
{{comment|(You ''could'' implement the spec in code, but doing so means you lose the isolated environment, which defeats half the point)}}
-->
===What does it do?===
Being part of a [http://en.wikipedia.org/wiki/Trusted_Computing larger idea],
the TPM means different things to different people and needs.
It should also be noted that it only makes attack surface smaller when used well.
Used poorly it changes very little,
and there is also the concept of blind trust leading to riskier behaviour.
{{comment|(I'm still waiting for the day that more than a few percent of people start using GPG in an actually secure way, and that's twenty years old)}}
More technically, it e.g.
* helps store some keys,
* can create derived keys without revealing the original
* can do certain encryption/decryption for you
* may let you mark keys as "never allow these keys to be copied out" - which effectively ties certain keys to specific TPM hardware ''permanently''
That
: may make it harder to steal certain keys,
: may force us to use mechanisms that use derived keys rather than the master key (see the sketch after this list),
: may mean there are some cases where you can ''use'' a key it stores without ever transporting it out
:: you can e.g. do [[message signing]] (to prove you have a key) without ever having that key in RAM
: may let you tie certain uses to specific hardware (for better and worse)
:: great for a select few uses, and alleviates certain physical attacks, because now only a single piece of hardware can do a thing
:: a complete show stopper for others
:: and a potentially risky tradeoff for some others yet - for example, you can force an encrypted drive to only work on the computer you encrypted it on. And if they are ever separated, you can basically never read that data again.
: you can prevent running boot code that wasn't previously approved
:: as protection against malware that alters the boot
:: again, with footnotes. Like - who does the approving?
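
To make the derived-key idea a bit more concrete, here is a rough Python sketch, using HKDF from the <code>cryptography</code> package. This is ''not'' the TPM API - with a real TPM the master key lives inside the chip and you only ever ask it for derived material or for operations - it only illustrates why handing out derived keys is safer than handing out the master key. The key names and purposes are made up for illustration.

<syntaxhighlight lang="python">
# Sketch of "hand out derived keys, never the master key".
# Not a TPM API: a real TPM keeps master_key inside the hardware and
# only returns derived material / the results of operations on it.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

master_key = os.urandom(32)   # in a TPM, this would never leave the chip


def derive_key(purpose: bytes) -> bytes:
    """Return a purpose-specific key; knowing it does not reveal master_key."""
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=purpose,          # e.g. b"disk-encryption" or b"session-2024-03"
    ).derive(master_key)


disk_key = derive_key(b"disk-encryption")
session_key = derive_key(b"session-2024-03")
# Leaking disk_key or session_key does not hand an attacker master_key,
# and each derived key can be limited to one purpose.
</syntaxhighlight>
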
<!--
https://docs.microsoft.com/en-us/windows/security/information-protection/tpm/how-windows-uses-the-tpm
'''Why is it useful at all?'''
'''Key storage''' is one good reason.
In practical terms, consider that essentially all cryptography needs a key.
While being used, that key needs to sit somewhere for as long as we ''might'' need it.
For your most central, important keys (say, hard drive encryption keys), that means 'as long as the computer is on'. For a completely temporary session-specific one it's shorter -- but still basically as long as adversaries will be interested in it.
Also, aside from being in RAM for most of that time, it needs a place to be stored permanently. And because it's machine-readable, probably in an easily found place.
Put it on your hard drive, though, and someone just needs to steal your hard drive.
Hard drive encryption may reduce that problem, sure, but cannot solve it, because at best you reduce it from many keys with this problem to just one key with this problem - the key used to encrypt that hard drive.
-->
====On secure boot====
<!--
'''Secure boot''' is a funny one.
Secure boot assisted by a TPM is roughly the idea that you can sign early parts of boot code,
in a way that
: is hard to imitate,
: is validated by hardware before it is run
Basically, the TPM allows you to be certain that it is booting the same code that it did before, and/or code that was signed.
Without a TPM, there is always a stage where you have to trust the code does what it ''says'' it does.
'''However''', with TPM assisted secure boot, you still have a point at which you hand over to code you don't verify. It merely comes later.
Secure boot ''only'' covers the first stages of the bootloader.
Which is absolutely ''great'' against a specific category of early-boot attack (where malware might hide at such a low level that it can hide from the OS that loads on top of it).
It's also potentially a good protection against certain physical attacks, such as some that might try to defeat full-drive encryption.
And yes, OSes ''could'' use secure boot to carry that a ''little'' further,
basically carrying on the same code-signing concept.
But for very practical reasons, not very far.
Which makes it easily overestimated.
By the time that OS logo is animating, we're usually ''way'' past any of this protection.
Put more practically, you can still infect the OS itself; it will be loaded far before you ever see a desktop, so if that happens you are still as screwed as before.
If you installed malware, that malware is probably now loaded.
Once you're on your way to showing a desktop, it ''certainly'' is.
(Not least because any code that is updateable is hard to validate with certainty)
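
To show the shape of "verify before handing over", here is a loose Python sketch using Ed25519 signatures from the <code>cryptography</code> package. Real secure boot uses X.509/Authenticode signatures and a platform key hierarchy, and the verification happens in firmware rather than Python - this only illustrates the "only run what verifies, then trust it completely" step. The names here are made up.

<syntaxhighlight lang="python">
# Loose sketch of the verify-then-hand-over step in secure boot.
# Real firmware uses X.509 / Authenticode and a key hierarchy; this only
# shows the shape: refuse to run a next stage that does not verify.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At enrollment time: the public half of a signing key is stored in firmware.
vendor_key = Ed25519PrivateKey.generate()
trusted_pubkey = vendor_key.public_key()

bootloader_image = b"(pretend this is the next boot stage)"
bootloader_signature = vendor_key.sign(bootloader_image)


def load_next_stage(image: bytes, signature: bytes) -> None:
    """Verify the next stage against the trusted key, then 'run' it."""
    try:
        trusted_pubkey.verify(signature, image)
    except InvalidSignature:
        raise SystemExit("refusing to boot: next stage not signed by a trusted key")
    print("signature OK - handing over control (and all further trust) to it")


load_next_stage(bootloader_image, bootloader_signature)
</syntaxhighlight>
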
-->
==="Can't detect TPM device"===
Means the BIOS knows that you ''can'' plug in a TPM module, is looking for it (either because it's configured to, or because it always does), and doesn't find one.
So tell it not to look for one (if you can), plug one in, or ignore this message.
It's often under a header named something like 'Trusted Computing'.
Some BIOSes will always look for it{{verify}}, in which case you can just ignore the message.
===More acronyms===
<!--
TC - Trusted Computing (mostly as a concept)
TCPA - Trusted Computing Platform Alliance (tends to refer to systems compliant with this variant)
ESS  - Embedded Security Subsystem (IBM product), basically a chip supporting public key systems. Sort of a cheaper version of smart card readers, without pluggability.
Palladium (Pd for short) -  Microsoft's implementation of trusted computing. Distinct from TCPA (and seems to be more closed and patent-encumbered than it).
DRM
TPM - Largely a (non-persistent) storage of your encryption keys (plus some tools)
Generally, the idea of trusted computing is building security from the ground up, starting at a hardware module.
There's only so much that can do, and there is criticism about the claims and incidental power to exclude you.
Functionality of TC we have right now includes:
* public key support - generating keys, checking keys. Things like TPM, ESS and such are useful in support of your encrypted filesystems.
* Trusted boot - check whether post-boot state of things seem to be okay (protection against boot-time exploits)
One important point is that TPM != TCPA != DRM.  In many ways.
What TPM does right now (~2011) is for the most part just handy, but absolutely not a full trusted-computing setup, and the criticism is primarily of what such a platform ''might'' eventually do.
A trusted-computing setup may include all of:
* A chip (e.g. TPM, SCP, ESS), with one of various functions -- although some or all functionality may eventually become inseparable, e.g. part of the CPU.
* a CPU which allows memory curtaining: application memory is only accessible by itself - not even the OS (in theory)
* OS component
* application component for those that support it
* online security servers (for whitelists, revocation, etc.)
Systems that so far exist mostly in specifications could easily exert draconian control over your computer.
What many critics fear is the movement, some of the people behind it, and the interests they have and, much more importantly, do not have. Smallish decisions now can have large long-term impact. You can argue that complaining now is crying wolf, but at the same time it is strange to deny the wolf that is corporate interests.
Also, some of the most interesting upsides only work when things are tightly controlled
-->
===What does TPM not (necessarily) protect?===
<!--
The only thing TPM protects while your computer is running is some of the keys contained in the TPM.
Any exploit aimed at a running system - which is most exploits in general - can get at just as much of your data as before, because TPM plays no role in access to RAM or IO.
For example
: It does not protect against viruses
: It does not protect against hardware misdesign (see e.g. the {{search|USB-C DMA attack}}).
: It does not protect against preinstalled backdoors
: It does not protect against manufacturer backdoors
: It does not protect against a hardware keylogger, or OS-level keylogger (malware)
: It does not necessarily protect the OS part of boot (the amount varies, but is less than you may think)
: It does not necessarily protect against malware started at boot
Yes, the TPM means that fewer keys need to be in RAM because they are usable via the TPM,
''but'' that only helps so much, because any key ''you'' can use without handling it directly, malware on the same system can typically use the same way.
'''What does and doesn't secure boot mean?'''
TPM is an (optional{{verify}} but preferred) part of secure boot.
Because you can protect boot, it can also make the boot part of full drive encryption less exploitable, which is also great.
But it basically ends there.
Secure boot can say that you are loading the same OS as you did yesterday.
Also, when you need certain code to be part of boot -- like full-disk ([[data-at-rest]] style) encryption -- you can have some strong guarantees that that code is the same code as it was yesterday.
This is important because, well, think like a malware designer.
You want your code not only to run, but also to be hidden.
If, earlier, you can be part of the "please enter key to decrypt disk" prompt, it's not hard to steal that key.
If, later, you can become part of the kernel (which can see and do everything),
you can basically get it to tell everyone else you don't even exist.
The best way to do this is to load yourself before the OS, or alter the OS.
This is why "Yes, I am loading the thing you told me was the genuine OS before" is part of the puzzle.
But as mentioned, note that 'secure boot' refers only to the first stages of boot:
it does little more than
check that the hardware option ROMs are valid (plug-in cards contain code, to initialize)
and that its own code and the OS bootloader are unchanged.
And that's roughly the last thing secure boot does before handing over all control.
Secure boot in itself doesn't even mean that the OS's boot sequence isn't wildly infested.
Nor can it prevent that infestation - early boot is just one of a few ways to corrupt that.
OSes protecting their own boot is probably a next piece of a more secure puzzle.
And yes, having secure boot under you makes that a lot more meaningful.
...but even this ends somewhere. And it's far before you ever get to log in.
---
For example, on one side it's ''great'' that TPM can store part of the key used in hard drive encryption, as it means stealing/duplicating just the hard drive has little value.
At the same time, the disk encryption software basically needs a key in RAM to work, which TPM fundamentally can do nothing about.
: Not that attacks on that are simple.
: Also, various software is designed so that only derived keys are in memory, never the master key, but how much difference that makes varies.
: Apple solves this a bit differently, putting certain controllers in a separate processor - you could think of it as a beefier TPM handling accesses for you.  The key is still in ''its'' RAM, but that being separate from the general processor reduces the effectiveness of certain attacks looking for that key.
---
'''Secure boot''' means it (typically meaning UEFI) won't boot unless everything is signed correctly.
'''Authenticated/measured boot''' means it will boot, but will be recorded and reported (by the TPM{{verify}}).
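
The "measured" part is mostly the PCR extend operation, which is simple enough to sketch in a few lines of Python (hashlib only). A real TPM does this in hardware, with fixed PCR banks that software can extend but never set directly - the code below only mimics the arithmetic, and the stage names are made up.

<syntaxhighlight lang="python">
# Sketch of the PCR-extend operation behind measured boot.
# A real TPM lets software extend a PCR but never set it directly,
# so the final value depends on every measurement and their order.
import hashlib


def extend(pcr: bytes, measurement: bytes) -> bytes:
    """new_pcr = SHA-256(old_pcr || SHA-256(measurement))"""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()


pcr = b"\x00" * 32                 # PCRs start out zeroed on reset
for stage in (b"firmware", b"option-roms", b"bootloader", b"kernel"):
    pcr = extend(pcr, stage)

print(pcr.hex())
# If any stage changes, the final value changes, and anything sealed to the
# old value (e.g. a disk encryption key) will no longer be released.
</syntaxhighlight>
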
https://en.wikipedia.org/wiki/Trusted_Platform_Module#Password_protection
https://docs.microsoft.com/en-us/windows/security/information-protection/tpm/tpm-fundamentals
-->
===TPM versus TPM2===
<!--
TPM2 is a revised implementation that is not backwards compatible with TPM 1 (1.2).
-->
===Use and criticism, strengths and weaknesses===
{{stub}}
<!--
As with many security issues, there is [[FUD]] on '''both''' sides of this argument.
Which is probably fair, simply because there are good and bad uses of the TPM and TC in general.
Apprehension is fair in that in the long term, we may not have a choice ''not'' to use TPM and the things that depend on it -- secure boot has been a confusing example for a while.
There are some major parties interested in a relatively draconian implementation -- that will probably not happen, but which is something to give a lot of attention to so that it doesn't.
Some interesting points
* The good things it can do should not be dismissed because of the bad things it can do
** (see fearful reactions to printing press, phones, encryption, etc.)
* Trusted Computing is a misleading and fuzzy term
** Trust what? Secure from whom? What parts of the system?
** Doesn't make the computer trustworthy
** You have to trust this system, because you have no choice.
** In the meantime, the things the system trusts are out of your control.
** It generally does not trust you (which is sensible, because)
* Users can be shielded from each other
** a good thing for security, and to a degree for privacy
* TC applications can shield data from each other and from non-TC applications
** A good thing for security (exploits are limited to the application; privilege elevation is harder)
** A potential bad thing for other reasons (some mentioned below)
** Can be bad for interoperability. TC applications won't like others interfacing with them.
* It makes hardware-enforced licenses (DRM for media, software, and more) a lot easier.
** It's one of the things systems like these are best at, if not by design then by implication
** which makes a lot of people uneasy, not because it makes music playing and pirating harder, but because it's a bad precedent - TC can in theory police everything you do with your PC
** This is also one reason PR hype and best-case theory are unrealistic. Media companies love the concept and are some of the largest forces.
* Many specific uses of TPM must be draconian to not be circumventable.
** consider administrative privileges
* Not secure against physical access
** It can make it a lot harder, particularly if you need a secret contained within an on-CPU module
* Most of TPM's uses are not about user security.
** it does not help authentication (and therefore not anything based on ''that'') because it does not address identity
** at best it can equate hardware with users -- which breaks some of the most central ideas in most security models. If my bank trusted anyone on my computer implicitly, that would be worse, not better. It doesn't add anything that two-factor security can't do better.
* excluding the user means most trust is automatic
** ...which isn't really trust. Trust is fundamentally two-sided -- and double-edged, as many stories and movies point out.
* Cannot itself easily protect your data
** that is, it can in part control data that goes elsewhere, but it can't protect against malware or hacking so easily
* can support secure key storage {{verify}}
** That can be good. For example, Windows could make sure its updates are real.
** can also be used for hardware-enforced DRM. Which is good or bad depending on your views and interests.
* security this tight cannot be married with a general purpose computer
** Anything that isn't feature-limited (think iPads/phones with their feature management) will see security holes and/or exceptions
* Can be used against viruses and malware
** ...but probably only in a very full and tightly controlled TC setup
* Makes commercial and economic bullying easier.
** It ''can'' be used for this, which isn't to say that it will - but what's to keep it from happening? The worst-cases people have come up with won't happen, at least not in big steps.
** Depending on application, it can have huge, strategic, international implications
** could be used for many different anti-competitive practices  (under the excuse of security)
** can decrease market competition in many ways, some of them subtler than others
* Can be turned off -- until it can't.
** ...because of something practical. For example, you need an application that requires TC (which may be something as simple as MS Office), your admin requires it for something like login, your company implicitly tells your admin to enforce it.
** Once we reach the point where non-TC PCs become less compatible, they will disappear in many contexts.
http://www.cl.cam.ac.uk/~rja14/tcpa-faq.html
-->
===TPM related errors===
<!--
====Trusted Platform Module malfunctioned====
My work computer tells me this every time I log into Office:
[[File:SomethingWentWrong.png]]
with the helpful error message "Something went wrong",
and error number 80090016
My guess is this is a Single-Sign-On system trying to store some token in the TPM to make everything more transparent.
This is, apparently, not an error per se, depending on what you're doing.
It might just mean you need to log into a bunch more things yourself rather than that happening transparently.
But if it's an app that failed to do this, it may prevent it from working at all (e.g. Teams),
or make it just a viewer rather than an editor (e.g. Office), because this implies a failure to apply the subscription license.
-->
===See also===
* http://en.wikipedia.org/wiki/Trusted_Platform_Module
==SGX, SEV==
<!--
SGX is a trusted execution environment that allows runtime definition of private regions of RAM.
These enclaves can be encrypted transparently by the CPU,
meaning that they should be opaque to other processes.
SEV (and SME) is the closest comparable AMD thing, and it seems a little more thorough.
(TrustZone is the closest ARM thing?)
SGX is now considered broken, and SEV also has a good number of attacks on it,
though it's not an apples-to-apples comparison, as the threat model is a little different.
Neither is immune to [[side-channel attacks]],
but most people would care more about the regular runtime protections (e.g. in a cloud setting)
than about side channels.
https://en.wikipedia.org/wiki/Software_Guard_Extensions
https://www.amd.com/en/developer/sev.html
-->


==Pre-boot authentication==
<!--
Even the ''types'' of guarantees vary.


Proceeding this far ''requires'' that the lower layer is secure.
And, arguably, TXT
-->

https://en.wikipedia.org/wiki/Pre-boot_authentication


==Nonce==

==Challenge/response==

==JSON Web Signature, Encryption, Tokens==

{{stub}}

==GSSAPI notes==

GSSAPI is an IETF standard that makes it easier for various software to do various kinds of strong auth, e.g. Kerberos.

It also allows various other auth schemes to be plugged into it.


Which also makes it potentially interesting for SSO setups within an organisation.


(not unlike SASL, which can include GSSAPI)


It's used by things like OpenSSH.

==NaCl==

There are two security-related things called NaCl, which are completely unrelated to each other.

(There are also other things called Salt, like the automation software Salt, a.k.a. SaltStack.)


===NaCl as in libsodium===
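A tiny taste of the libsodium-style API, via the PyNaCl binding (<code>pip install pynacl</code>) - just sign-and-verify, since much of the appeal is that the library picks the primitives for you (Ed25519 here). A loose sketch, not a tutorial.

<syntaxhighlight lang="python">
# Sign-and-verify with PyNaCl (the libsodium binding for Python).
from nacl.exceptions import BadSignatureError
from nacl.signing import SigningKey

signing_key = SigningKey.generate()
verify_key = signing_key.verify_key         # this half is safe to hand out

signed = signing_key.sign(b"hello")         # signature + message in one blob
print(verify_key.verify(signed))            # returns b"hello", or raises

try:
    verify_key.verify(signed.message + b"!", signed.signature)
except BadSignatureError:
    print("tampered message rejected")
</syntaxhighlight>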

===Google NaCl===

==Side note: Asymmetric vs. symmetric keys==

Simpler systems had symmetric keys, meaning that the encryption and decryption key was the same.


This allows encryption in both directions -- and means that both parties have to trust each other mutually.

You have to trust that neither will accidentally or purposefully leak the key, because that key grants all possible abilities, including:
* reading encrypted data received from the other side, current or past
* imitating the other side





That's usually fine between two parties, but sharing the same key between more than two is as weak as the weakest link. Again:
* reading all parties' encrypted data, current or past
* imitating all parties involved
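
As a minimal sketch of the symmetric case (Fernet from the Python <code>cryptography</code> package): there is one key, and anyone who holds it can both read messages and produce messages indistinguishable from the other side's. The party names are just for illustration.

<syntaxhighlight lang="python">
# Symmetric: one shared key does everything.
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()    # must somehow be given to both sides

alice = Fernet(shared_key)
bob = Fernet(shared_key)

token = alice.encrypt(b"meet at noon")
print(bob.decrypt(token))             # bob can read it...

forged = bob.encrypt(b"meet at midnight")
print(alice.decrypt(forged))          # ...and can also write as if he were alice
</syntaxhighlight>
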


This is arguably the largest problem that public-private key systems target (there are other upsides):

* given the public key of someone's (public, private) keypair, it is nearly impossible to calculate the private one
* ideally even with any number of encrypted messages

...which is why it isn't a problem to hand the public ones out.
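
As a contrasting sketch of the asymmetric case (RSA-OAEP via the Python <code>cryptography</code> package): the public key can be handed out freely, because it only lets people ''encrypt to'' you (or verify your signatures) - decrypting still requires the private half.

<syntaxhighlight lang="python">
# Asymmetric: the public half can be handed out freely.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # publish this one as widely as you like

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

ciphertext = public_key.encrypt(b"only the private key holder can read this", oaep)
print(private_key.decrypt(ciphertext, oaep))
# Knowing public_key (and any number of ciphertexts) does not, in practice,
# let anyone compute private_key.
</syntaxhighlight>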