Subject: Re: [pkcs11] Re: Updates to CKA_GLOBAL, CKM_CERTIFY_KEY and CKM_SEAL_KEY
On 6/6/2013 2:53 PM, Chris Zimman wrote:
> No. Keys can already be tied to specific mechanisms. That's the way to do it. Are you referring to using CKK_?
Nope - CKA_ALLOWED_MECHANISMS
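For concreteness, here's a minimal sketch of what that looks like in a key template. The types and constants are excerpted from the PKCS #11 headers so the fragment stands alone (real code would just #include "pkcs11.h"), and the choice of CKM_AES_CBC_PAD as the single allowed mechanism is only an example:

```c
/* Minimal sketch: restricting a key to specific mechanisms via
 * CKA_ALLOWED_MECHANISMS.  Definitions excerpted from the PKCS #11
 * headers so this fragment is self-contained. */
#include <stddef.h>

typedef unsigned long CK_ULONG;
typedef CK_ULONG CK_MECHANISM_TYPE;
typedef CK_ULONG CK_ATTRIBUTE_TYPE;

typedef struct CK_ATTRIBUTE {
    CK_ATTRIBUTE_TYPE type;
    void *pValue;
    CK_ULONG ulValueLen;
} CK_ATTRIBUTE;

#define CKF_ARRAY_ATTRIBUTE     0x40000000UL
#define CKA_ALLOWED_MECHANISMS  (CKF_ARRAY_ATTRIBUTE | 0x00000600UL)
#define CKM_AES_CBC_PAD         0x00000885UL

/* The key may only ever be used with CKM_AES_CBC_PAD. */
static CK_MECHANISM_TYPE allowed[] = { CKM_AES_CBC_PAD };

/* Include this attribute in the template passed to C_GenerateKey /
 * C_CreateObject; ulValueLen is the array length in bytes. */
static CK_ATTRIBUTE key_template[] = {
    { CKA_ALLOWED_MECHANISMS, allowed, sizeof(allowed) },
};
```

A token that honors CKA_ALLOWED_MECHANISMS will then reject any attempt to use the key with a mechanism outside that list.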
> And the "policy exception" stuff for a specifically designated key, will of course be enforced by C_Wrap/Unwrap -- where else would you do that? It's up to other functions to reject the use of this key then too (e.g. C_Encrypt())
Actually, the key gets tagged with only CKA_WRAP/CKA_UNWRAP = TRUE; all the others (CKA_SIGN, CKA_VERIFY, CKA_ENCRYPT, CKA_DECRYPT, etc.) get set to FALSE.
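Spelled out as a template, that tagging looks something like the sketch below (again with types and constants excerpted from the PKCS #11 headers so it stands alone):

```c
/* Sketch: a template marking a key as usable only for wrap/unwrap.
 * Definitions excerpted from the PKCS #11 headers so this fragment is
 * self-contained; real code would #include "pkcs11.h". */
#include <stddef.h>

typedef unsigned long CK_ULONG;
typedef CK_ULONG CK_ATTRIBUTE_TYPE;
typedef unsigned char CK_BBOOL;

typedef struct CK_ATTRIBUTE {
    CK_ATTRIBUTE_TYPE type;
    void *pValue;
    CK_ULONG ulValueLen;
} CK_ATTRIBUTE;

#define CK_TRUE  1
#define CK_FALSE 0
#define CKA_ENCRYPT 0x00000104UL
#define CKA_DECRYPT 0x00000105UL
#define CKA_WRAP    0x00000106UL
#define CKA_UNWRAP  0x00000107UL
#define CKA_SIGN    0x00000108UL
#define CKA_VERIFY  0x0000010AUL

static CK_BBOOL bTrue  = CK_TRUE;
static CK_BBOOL bFalse = CK_FALSE;

/* Wrapping/unwrapping allowed; every other usage explicitly off, so
 * C_Encrypt, C_Sign, etc. must reject the key. */
static CK_ATTRIBUTE wrap_only_template[] = {
    { CKA_WRAP,    &bTrue,  sizeof(bTrue)  },
    { CKA_UNWRAP,  &bTrue,  sizeof(bTrue)  },
    { CKA_ENCRYPT, &bFalse, sizeof(bFalse) },
    { CKA_DECRYPT, &bFalse, sizeof(bFalse) },
    { CKA_SIGN,    &bFalse, sizeof(bFalse) },
    { CKA_VERIFY,  &bFalse, sizeof(bFalse) },
};
```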
I'll add both of these items to the CKA_GLOBAL document for the respective keys.
> No. But the application that is actually talking to the token may choose to do exactly that. What's an example use case where an application wants to create a key, move it outside of a token, and then deal with where it lives by itself? I believe you have one; I'd just like to understand it in order to get a clearer picture of the whole thing.
Think about something like the Java keystore abstraction. I could build a keystore implementation that talks PKCS11 but does its own database management for the "swapped out" keys. As far as the keystore user is concerned, they may be seeing millions of keys, while only a few keys are actually stored in the token at any given time. If the seal key mechanism becomes a standard, ANY PKCS11 module can be plugged in to provide an effectively unlimited number of keys.
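The swap-out logic can be sketched roughly as below. All of the names here (keystore_get, seal_to_db, TOKEN_CAPACITY, ...) are hypothetical, and seal_to_db/unseal_from_db just copy bytes so the sketch runs anywhere; a real implementation would call C_WrapKey/C_UnwrapKey with the proposed CKM_SEAL_KEY mechanism and store the resulting sealed blobs in its database:

```c
/* Sketch of a keystore that pages keys between a small token and an
 * external database of sealed blobs.  All names are hypothetical;
 * seal_to_db/unseal_from_db stand in for C_WrapKey/C_UnwrapKey with
 * the proposed CKM_SEAL_KEY mechanism and here just copy bytes. */
#include <string.h>

#define TOKEN_CAPACITY 2   /* keys the token holds at once (tiny for the sketch) */
#define KEY_BYTES      16
#define MAX_KEYS       8   /* "millions" in real life */

/* External database of sealed blobs, managed by the keystore, not the token. */
static unsigned char db[MAX_KEYS][KEY_BYTES];
static int db_present[MAX_KEYS];

/* Keys currently resident in the token. */
static unsigned char token_keys[TOKEN_CAPACITY][KEY_BYTES];
static int token_id[TOKEN_CAPACITY];        /* key id per slot, -1 = free */

static void seal_to_db(int slot) {          /* "C_WrapKey" stand-in */
    int id = token_id[slot];
    memcpy(db[id], token_keys[slot], KEY_BYTES);
    db_present[id] = 1;
    token_id[slot] = -1;
}

static void unseal_from_db(int slot, int id) {  /* "C_UnwrapKey" stand-in */
    memcpy(token_keys[slot], db[id], KEY_BYTES);
    token_id[slot] = id;
}

void keystore_init(void) {
    int s;
    for (s = 0; s < TOKEN_CAPACITY; s++)
        token_id[s] = -1;
    memset(db_present, 0, sizeof(db_present));
}

/* Return key `id`, faulting it in from the database if it isn't resident. */
unsigned char *keystore_get(int id) {
    int s;
    for (s = 0; s < TOKEN_CAPACITY; s++)
        if (token_id[s] == id)
            return token_keys[s];           /* already in the token */
    for (s = 0; s < TOKEN_CAPACITY; s++)
        if (token_id[s] == -1)
            break;                          /* found a free slot */
    if (s == TOKEN_CAPACITY) {              /* token full: evict slot 0 */
        s = 0;
        seal_to_db(0);
    }
    unseal_from_db(s, id);
    return token_keys[s];
}

/* Create a new key in the token (stand-in for C_GenerateKey). */
void keystore_create(int id, const unsigned char *bytes) {
    unsigned char *k = keystore_get(id);    /* claims a slot, sealing the evictee */
    memcpy(k, bytes, KEY_BYTES);
}
```

The keystore user only ever sees keystore_get/keystore_create; how few keys are actually resident, and which module does the sealing, is invisible to them.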
> For this set of points, I'm going to defer. Right now, the tokens only support role logins and not user logins. And having a single key for the token and session to do sealing is probably sufficient. The user is the token owner, regardless of who is actually using the user login. That means that the "user" is responsible for content consistency. If we get to the point where there are user based logins, then we will need to readdress a bunch of things, this included. Not that it would have to be solved for this to be potentially applicable, but I think it's useful to consider something like this, because if it's pushed into a release and then people have to rework it for a near major release, there's probably going to be some frustration.
The whole concept of who owns which keys/objects is going to need to be rehashed to implement user based logins - and there are a lot of different ways we could settle on. Trying to predict what solution is chosen is probably not a useful allocation of effort at this time. OTOH, figuring out how to provide at least the base mechanism for sealing keys and objects now given how long it takes between releases seems to be a good allocation of effort.
> Fair. But again, you're assuming the HSM is actually proof against hardware based attacks (DPA for example) and has better security than a blob at rest encrypted with appropriately strong keys. Given all I know about the general HSM marketplace, I'd rate them similar in resistance to attack. I think there's a pretty clear cut reason that there are levels 1 through 4 for FIPS 140. I would not say by any means that all tokens are equal, especially after having pulled apart a handful of them. What I'm saying is that this isn't actually an implementation guide per se. Saying PKCS11 does not imply "FIPS 140-2 level 4" guarantees of protection or other external guidance. I'm not exactly sure we should go there. (Mainly because that opens up a food fight where we become way too prescriptive and fight over each and every MUST....) I think it's important to consider stuff like this, even in an abstract form. My point is not that PKCS11 should imply FIPS 140 level 4, but it definitely should not preclude it.
Let me hit this from a slightly different direction. Hardware protections aren't a panacea. Cryptographic protections at least have the virtue that it's generally possible to make a mathematical estimate of their strength. Given that, and given that in many cases the hardware protections actually boil down to cryptographic protections (e.g. the keys are encrypted in flash inside the security perimeter, but the key used to encrypt those keys is kept in a tamper memory which erases if the security module is tampered with), I'm not all that worried about exporting the keys under a seal key.
But this really is an implementation decision best made by the module implementer and made with an eye to whatever regulatory or certification environment the module implementer cares about.