

Subject: Re: [pkcs11] CK_ULONG considered harmful?


On 05/16/2013 10:07 AM, Michael StJohns wrote:
In the current version of the spec, CK_ULONG and CK_LONG are defined as "integers at least 32 bits wide".

Given this spec is at least in part an interface spec, there can be an issue where the client uses a CK_ULONG of 32 bits and the driver uses a CK_ULONG of 64 bits (for example) or vice versa. The text in the section says:

So in practice this has meant that CK_ULONG is the length of the unsigned long on the native platform, but needs to be at least 32 bits wide (that is, some theoretical 16-bit platform would need CK_ULONG to be able to hold a 32-bit value). Since the client and the driver are on the same platform (PKCS #11 is a linking protocol, not a wire protocol), they should always agree on the actual length.

It follows that many of the data and pointer types will vary somewhat from one environment to another (e.g., a CK_ULONG will sometimes be 32 bits, and sometimes perhaps 64 bits). However, these details should not affect an application, assuming it is compiled with Cryptoki header files consistent with the Cryptoki library to which the
application is linked.
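
For reference, the typedefs in question look roughly like this in the commonly distributed Cryptoki headers (a paraphrase of pkcs11t.h, not the normative text):

/* an unsigned value, at least 32 bits long -- in practice the platform's unsigned long */
typedef unsigned long int CK_ULONG;

/* a signed value, the same size as a CK_ULONG */
typedef long int CK_LONG;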

But that means that it is difficult to build adaptation-layer client code (e.g., something like a Java bridge to the JCE, or NSS for Mozilla) that is library agnostic.
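
To make that concrete, here is a minimal sketch (names are illustrative) of what such adaptation code ends up looking like: it has to build attribute templates in terms of whatever CK_ULONG happens to be in the headers it was compiled against, rather than assuming 32 bits:

#include "pkcs11.h"   /* assumes the Cryptoki headers and platform macros are set up */

/* Sketch only: fill in a CKA_KEY_TYPE attribute in a width-agnostic way. */
static void set_key_type(CK_ATTRIBUTE *attr, CK_ULONG *storage, CK_KEY_TYPE keyType)
{
    *storage = keyType;                   /* stored at the platform's CK_ULONG width */
    attr->type       = CKA_KEY_TYPE;
    attr->pValue     = storage;
    attr->ulValueLen = sizeof(*storage);  /* 4 or 8 bytes, depending on the platform */
}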


Proposal:

1) Option 1 - Leave it as it is

2) "Clarify" the language and force things to 32 bits.
Change the base definitions  (page 11, section 2 of the 2.40 draft) to:

typedef uint32_t CK_ULONG;
typedef int32_t CK_LONG;
typedef uint8_t CK_BYTE;

and update the text accordingly.

This will break every 64 bit driver and application out there, so it's really a non-starter.

3) Fix things for new items:

Add the following definitions:

typedef uint32_t CK_ULONG32;
typedef int32_t CK_LONG32;
typedef uint8_t CK_BYTE8;

And use these types for all future definitions and mechanisms. (I mostly don't like this idea because of having both CK_ULONG32 and CK_ULONG; it's provided for completeness.)

Adding something like this is fine for creating new types, but we can't convert existing types without breaking things.

4) Option 2 or 3 above, but we provide a migration path:

Add a C_GetAlternateFunctionList:

CK_DEFINE_FUNCTION (CK_RV, C_GetAlternateFunctionList) (
   CK_FUNCTION_LIST_PTR_PTR ppFunctionList,
   CK_UTF8CHAR_PTR pListName
 );

Some drivers already do something like this to let you get the FIPS vs. the non-FIPS version of the routines.

If you did C_GetAlternateFunctionList (&pk, "ULONG32"), you'd get a set of routines where the internals for CK_ULONG were all guaranteed to use 32 bits, similarly for "ULONG64".
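
As a sketch (the entry point and the "ULONG32" list name are part of this proposal, not the current spec), the caller side might look like:

#include "pkcs11.h"   /* assumes the Cryptoki headers and platform macros are set up */

/* Prototype as proposed above; not in the current headers. */
CK_RV C_GetAlternateFunctionList(CK_FUNCTION_LIST_PTR_PTR ppFunctionList,
                                 CK_UTF8CHAR_PTR pListName);

static CK_RV use_ulong32_routines(void)
{
    CK_FUNCTION_LIST_PTR pk = NULL_PTR;
    CK_RV rv;

    rv = C_GetAlternateFunctionList(&pk, (CK_UTF8CHAR_PTR)"ULONG32");
    if (rv != CKR_OK)
        return rv;

    /* Every routine reached through pk now treats CK_ULONG as 32 bits wide. */
    return pk->C_Initialize(NULL_PTR);
}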

I'm not seeing the full benefit here. Even if you regularize CK_ULONG, we still have pointers in PKCS #11. Those pointers vary in length depending on the platform. In fact, on many platforms (though not all), sizeof(void *) == sizeof(unsigned long).

There would be some additional #ifdefs in the header file so that the user could do the right thing.
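
Something along these lines, say (the selection macro is made up for illustration):

#include <stdint.h>

#ifdef CK_ULONG_IS_32BIT          /* hypothetical feature-test macro */
typedef uint32_t CK_ULONG;
typedef int32_t  CK_LONG;
#else
typedef unsigned long int CK_ULONG;
typedef long int          CK_LONG;
#endif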


I think the issue we need to deal with is what actual problem we are trying to solve. It's not trying to load a 32-bit PKCS #11 module into a 64-bit process; if it were, we would have a much bigger problem than the length of CK_ULONG. I can foresee the following separate scenarios where we may have an actual mismatch in real life:

1) persistent storage of object attributes:
In this case we only care about CKA_XXXX values that have the type CK_ULONG. Many of these values are PKCS #11 defined (like CKA_KEY_TYPE or CKA_CLASS). The PKCS #11 spec does not define any of these values to be greater than 32 bits. We're familiar with this issue in NSS because we actually do store these values. As of PKCS #11 2.20 we identified the following attributes that are CK_ULONG:
    CKA_CERTIFICATE_CATEGORY
    CKA_CERTIFICATE_TYPE
    CKA_CLASS
    CKA_JAVA_MIDP_SECURITY_DOMAIN
    CKA_KEY_GEN_MECHANISM
    CKA_KEY_TYPE
    CKA_MECHANISM_TYPE
    CKA_MODULUS_BITS
    CKA_PRIME_BITS
    CKA_SUBPRIME_BITS
    CKA_VALUE_BITS
    CKA_VALUE_LEN

The length values aren't actually stored (they are triggers to various generate calls to set key lengths), but even if they were, our key lengths (even in bits) are still well under 32 bits. The type fields I've already covered above. NSS doesn't use CKA_JAVA_MIDP_SECURITY_DOMAIN or CKA_CERTIFICATE_CATEGORY, so I don't know the status of those values. The NSS solution is to notice that all persistent CK_ULONG usage is actually 32 bits or less. Since these values need to be stored in a standard endian order anyway (our databases need to be shared across various platforms with different endianness), we need to process CK_ULONGs anyway to store them as consistent-endian values. In theory these attributes could have 64-bit values, but PKCS #11 currently does not define any of these values to be bigger than 32 bits. It might make sense to codify this restriction.
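
As an illustration of that approach (a sketch, not the actual NSS code), storing a persistent CK_ULONG attribute as four big-endian bytes fixes both the width and the byte order at once:

#include "pkcs11.h"   /* assumes the Cryptoki headers and platform macros are set up */

/* Sketch: spec-defined persistent CK_ULONG attributes all fit in 32 bits,
   so store them as 4 big-endian bytes regardless of the platform's width. */
static CK_RV encode_ulong_attr(CK_ULONG value, unsigned char out[4])
{
    if (value > 0xffffffffUL)
        return CKR_ATTRIBUTE_VALUE_INVALID;   /* shouldn't happen for spec-defined attributes */
    out[0] = (unsigned char)(value >> 24);
    out[1] = (unsigned char)(value >> 16);
    out[2] = (unsigned char)(value >> 8);
    out[3] = (unsigned char)(value);
    return CKR_OK;
}

static CK_ULONG decode_ulong_attr(const unsigned char in[4])
{
    return ((CK_ULONG)in[0] << 24) | ((CK_ULONG)in[1] << 16) |
           ((CK_ULONG)in[2] << 8)  |  (CK_ULONG)in[3];
}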

2) RPC access to a PKCS #11 module.

Not only can one envision a system in which a PKCS #11 module is running in another process, accessed by RPC, I'm pretty sure such systems have already been built. In this case it's not just the CKA_XXXX attributes that can take on 32- or 64-bit values; it's all usages: handles, structures, parameters to functions, etc.

Clearly in the general case of an RPC, there needs to be some glue code that marshals all these parameters. Even if CK_ULONG were a consistent length, we would still have endianness issues. In addition, function parameters and structures include data pointers which need to be unpacked (clearly sending a CK_ULONG_PTR across an IPC is not very useful; you need to send the data associated with it). In looking at the PKCS #11 use of CK_ULONG, we can identify 3 basic usages*:

a) Enumerations: things like CK_MECHANISM_TYPE, CK_KEY_TYPE, CK_OBJECT_TYPE, etc.

b) Handles: Handles are ephemeral values which are completely in the control of the PKCS #11 module itself. They only have meaning to the PKCS #11 module. It is currently perfectly permissible for a 64-bit module to pass back a 64-bit handle.

c) Lengths: This is one of the more common usages of CK_ULONG, to specify the length of some supplied data.

My cursory examination of the spec did not turn up any other usage (like some actual numeric data; PKCS #11 usually uses byte strings for this).

I've already dealt with enumerations in 1) above. By definition in the PKCS #11 spec, these fit in 32-bit values, so even though a 64-bit platform will use a 64-bit value to store them, they present no problems for an RPC to convert.

Handles are a different matter. Some PKCS #11 modules cast pointers under the covers to handles, so that a handle == the underlying pointer. A 64-bit module is going to have 64-bit pointers, so the handles will use the full 64 bits. So in the RPC, if we have a 64-bit client talking to a 32-bit provider, we don't have a problem: the 32-bit provider will only supply 32-bit handles, so the RPC can simply return the 32-bit handle. If the RPC gets a handle from the client that doesn't fit in 32 bits, the client has goofed, and you can return CKR_INVALID_HANDLE. If a 32-bit client is talking to a 64-bit provider, then you need to map the handles. The RPC would need to keep a table mapping the 32-bit handle returned to the client to the 64-bit one expected by the provider. The mapping isn't hard; hash tables are your friend.
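
A toy sketch of the kind of mapping the RPC glue would keep (a real implementation would use a hash table as noted; names and sizes here are made up):

#include <stdint.h>
#include <stddef.h>

#define MAX_HANDLES 1024

static uint64_t providerHandles[MAX_HANDLES];   /* index + 1 is the 32-bit handle */
static size_t   handleCount;

/* Return the 32-bit handle the client sees for a 64-bit provider handle. */
static uint32_t map_to_client(uint64_t providerHandle)
{
    size_t i;
    for (i = 0; i < handleCount; i++)
        if (providerHandles[i] == providerHandle)
            return (uint32_t)(i + 1);
    if (handleCount == MAX_HANDLES)
        return 0;                               /* table full; 0 means invalid */
    providerHandles[handleCount++] = providerHandle;
    return (uint32_t)handleCount;
}

/* Translate a client handle back to the provider's 64-bit handle. */
static uint64_t map_to_provider(uint32_t clientHandle)
{
    if (clientHandle == 0 || clientHandle > handleCount)
        return 0;                               /* invalid; the glue would return an error */
    return providerHandles[clientHandle - 1];
}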

Finally we come to lengths, which is where I think we run into our biggest challenge. Most lengths will be fine. In a large fraction of the usage, we can assume the length fits in 32 bits (4 GB). The one area this becomes problematic is when a 64-bit client is talking to a 32-bit provider over the RPC. The client is running on a big server with tons of memory, and it wants to process big chunks of data at a time (say, decrypt an entire disk). I don't think we want to prevent that from happening in the ordinary case of a 64-bit client talking to a 64-bit provider (and particularly the ordinary case where there is no RPC, just a 64-bit module loaded into a 64-bit application), but clearly in an RPC case this will fail if the other end is a 32-bit provider. I think this is the case we need to discuss the most.
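
For the easy part of that, the RPC glue at least needs a range check when it forwards a length from the 64-bit client toward the 32-bit provider; a sketch (the function name is illustrative):

#include <stdint.h>
#include "pkcs11.h"   /* assumes the Cryptoki headers and platform macros are set up */

/* Sketch: refuse a length the 32-bit side cannot represent. */
static CK_RV forward_length(CK_ULONG clientLen, uint32_t *wireLen)
{
    if (clientLen > 0xffffffffUL)
        return CKR_DATA_LEN_RANGE;   /* the request exceeds what the 32-bit provider can take */
    *wireLen = (uint32_t)clientLen;
    return CKR_OK;
}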

bob



Mike




