

Subject: Re: [pki-tc] Re: [ekmi] SKSML Specification Question


Yes, indeed. The current XML namespace for EKMI is proposed to be:

http://docs.oasis-open.org/ekmi/2008/01

This also allows us to come up with multiple versions in the same year,
if necessary.
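
(For illustration only - not part of the specification - here is a minimal
sketch of how a server might dispatch on that namespace URI so that several
protocol versions can be supported in parallel. The handler name is
hypothetical; only the 2008/01 URI above is real.)

    import xml.etree.ElementTree as ET

    def handle_2008_01(root):
        # version-specific processing of the request would go here
        return "processed with 2008/01 rules"

    # Map each supported SKSML namespace URI to its handler; a later revision
    # in the same year would simply add another entry, e.g. .../2008/07.
    HANDLERS = {
        "http://docs.oasis-open.org/ekmi/2008/01": handle_2008_01,
    }

    def dispatch(request_xml):
        root = ET.fromstring(request_xml)
        ns_uri = root.tag.partition("}")[0].lstrip("{")   # "{uri}local" -> uri
        if ns_uri not in HANDLERS:
            raise ValueError("unsupported SKSML namespace: " + ns_uri)
        return HANDLERS[ns_uri](root)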

Arshad

Tomas Gustavsson wrote:
> 
> Regarding future versions. I guess the xmlns can be used by the SKS to 
> support different versions of the protocol in parallel right?
> 
> /Tomas
> 
> 
> Anil Saldhana wrote:
>> Excellent questions, Tim.
>>
>> EKMI/SKSML is viewing the world of SymKey Management way up in the 
>> stack (Application space). So I am sure there are tons of things that 
>> need to be looked at in the long run. One common pitfall of specs is 
>> that a lot is attempted for v1.0 such that it gives rise to 
>> schedule slippage, complications in comprehension, etc. The approach 
>> suggested by Arshad is valuable because we need to shed some light in 
>> this dark tunnel called symmetric key management.
>>
>> http://dsonline.computer.org/portal/site/dsonline/menuitem.9ed3d9924aeb0dcd82ccc6716bbe36ec/index 
>>
>>
>> Some of the concerns that you have raised (primarily the keyclass 
>> classification) do aptly fall under the category of profiles/bindings 
>> etc with the specification. The specification as I see it needs to be 
>> just about structures (no details).  All the details (use cases) etc 
>> come under the auxiliary artifacts of the specification. Just like the 
>> Non-normative stuff was separated from the normative specification, we 
>> certainly can have profiles concerning browser cookies, tape drives 
>> etc, as real-world use cases of SKSML.
>>
>> There are a few sceptics of EKMI:
>> http://tinyurl.com/5ostxm
>>
>>
>> Bruce, Timothy R wrote:
>>> Thank you, Sir.  That answers a lot of questions but I am not sure yet
>>> whether or not there is a need for an additional subcommittee to
>>> consider further extending the standard.
>>>
>>> I am concerned about the implication of assumption 5 and your statement
>>> (which I agree with) that "It is my personal belief that within 10
>>> years, ALL data will get encrypted..."
>>>
>>> In that world 10 years from now, the Security Officer might be a very
>>> busy person if that person has to be involved in identifying,
>>> classifying and assigning key classes to every application, appliance
>>> and device in an enterprise (every hardware device, software
>>> application, smart card, memory stick, PDA, cell phone, card key, car
>>> keys, etc).  Tools can help an SO with this classification into key
>>> classes and policies, but only if standardized classification
>>> information is embedded within the protocol packet.  The protocol
>>> absolutely should be open to vendor exploitation and innovation, but
>>> that does not preclude a protocol that also facilitates automatic
>>> discovery, registration and classification into policies set up by the
>>> SO.
>>>
>>> So it seems to me that some additional standards work is needed
>>> somewhere so that the EKM infrastructure can automatically identify and
>>> classify key consumers based upon policies established by the SO.  Those
>>> additional standards may be needed in the SKSML, the PKI or in some
>>> umbrella standard.
>>>
>>> Tim Bruce
>>> Principal Software Architect, Development
>>> 5465 Legacy Drive
>>> Plano, Tx.  75024
>>> tel:  214-473-1917
>>> fax:  214-473-1069
>>>  
>>> -----Original Message-----
>>> From: Arshad Noor [mailto:arshad.noor@strongauth.com]
>>> Sent: Wednesday, June 18, 2008 6:42 PM
>>> To: Bruce, Timothy R
>>> Cc: ekmi@lists.oasis-open.org; PKI TC
>>> Subject: Re: [ekmi] SKSML Specification Question
>>>
>>> You haven't gone wrong anywhere, Tim; you just have an incomplete
>>> picture of how SKSML can be implemented to address the needs you
>>> describe.
>>>
>>> Let's establish some assumptions before we delve into the details
>>> of how SKSML will be used in an enterprise:
>>>
>>> Assumptions
>>> -----------
>>>
>>> 1) An enterprise will establish an EKMI which consists of two sub-
>>>     systems: a PKI to issue digital certificates to all client and
>>>     server devices, and an SKMS to issue and manage symmetric keys;
>>>
>>> 2) Through a process - not in scope for this TC, but within the
>>>     scope of the PKI Adoption TC which is part of the same IDtrust
>>>     Member Section we belong to - the clients and servers are issued
>>>     digital certificates to establish their identity;
>>>
>>> 3) Each of these clients and servers is known to the SKMS through
>>>     the trusted hierarchy of their digital certificates.  How the
>>>     SKMS gets that information is an implementation detail left up to
>>>     the vendors;
>>>
>>>     (StrongKey 1.0 - which implements an SKMS using the DRAFT 1 version
>>>     of SKSML - requires that each client/server be entered manually into
>>>     a webform of the SKS server; StrongKey 2.0 intends to change that by
>>>     integrating into an LDAP store of published certificates, or front-
>>>     ending a CA as a Registration Authority to ease that process);
>>>
>>> 4) The Security Officer at the enterprise defines KeyUsePolicies and
>>>     corresponding KeyClasses for some standard encryption policies.
>>>     So, to use your example, one KeyClass might be "TapeLibraryClass"
>>>     and its corresponding KeyUsePolicy might be "AES 256-bit key valid
>>>     for 10 years".  Another KeyClass might be "Web Server Cookie Class"
>>>     and its KeyUsePolicy might be "AES 128-bit and valid for 30-days".
>>>
>>>     (In StrongKey, the SO also defines a "Default" KeyClass and a
>>>     "Default" KeyUsePolicy, which might be "3-DES and 1-year" validity,
>>>     or whatever is decided by that enterprise to be their default.)
>>>
>>> 5) The SO now assigns all computers, LTOs and Management Consoles
>>>     of tape-devices (which were previously issued certificates in
>>>     step #2) to the "TapeLibraryClass" KeyUsePolicy.  The SO will
>>>     also assign all *web-servers* and *web-applications* to the
>>>     "Web Server Cookie Class" KeyUsePolicy.
>>>
>>>     All these definitions are done BEFORE a client has requested even a
>>>     single key.
>>>
>>>     The reason the TC probably would not want to standardize on KeyClass
>>>     definitions is because they imply a KeyUsePolicy - what type of
>>>     algorithm to use, for how long, with what applications, at what time
>>>     of day, in which location, etc.  These decisions are unique for every
>>>     enterprise, and really not within the scope of the TC's work product.
>>>     Every enterprise must decide these for themselves as part of their
>>>     SKMS deployment.  (But, if there is a plug-and-play way of doing it,
>>>     you should propose it for a sub-committee deliverable - see below).
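>>>
>>> (A purely illustrative sketch - not from the spec, and not StrongKey code:
>>> the KeyClass/KeyUsePolicy definitions and client assignments described in
>>> assumptions 4 and 5 could be pictured as simple records like these. All
>>> names, DNs and validity periods are just the examples quoted above.)
>>>
>>>     from dataclasses import dataclass
>>>
>>>     @dataclass
>>>     class KeyUsePolicy:
>>>         algorithm: str       # e.g. "AES"
>>>         key_size_bits: int   # e.g. 256
>>>         validity_days: int   # e.g. 3650 for "valid for 10 years"
>>>
>>>     # KeyClass name -> its KeyUsePolicy, as defined by the Security Officer
>>>     KEY_CLASSES = {
>>>         "TapeLibraryClass":        KeyUsePolicy("AES", 256, 3650),
>>>         "Web Server Cookie Class": KeyUsePolicy("AES", 128, 30),
>>>         "Default":                 KeyUsePolicy("3-DES", 168, 365),
>>>     }
>>>
>>>     # Client identity (certificate DN) -> KeyClasses the SO assigned to it
>>>     CLIENT_ASSIGNMENTS = {
>>>         "cn=tape-console-01,o=Example": ["TapeLibraryClass"],
>>>         "cn=webserver-17,o=Example":    ["Web Server Cookie Class"],
>>>     }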
>>>
>>> Now let's look at the mechanics:
>>>
>>> A) When the SKMS is turned on, client applications - which have
>>>     integrated an SKCL and the keystore/certificate-store that has their
>>>     digital certificate credential (this can be a smartcard, TPM, HSM,
>>>     or - yikes - a file-based software module) - will just request a
>>>     key from the pre-configured SKS servers list.
>>>
>>>     (In StrongKey, this list is a .properties file, and can be configured
>>>      in advance and distributed through SMS for Windows machines, NFS for
>>>      Linux/UNIX machines, or equivalent mechanisms for MVS/OS400 - much
>>>      like the resolv.conf file for DNS server information);
>>>
>>> B)  The client application can choose to either specify a KeyClass, or
>>>      not designate a KeyClass, in its SymkeyRequest to the SKS server.
>>>      If it does request a specific KeyClass, it makes sense for it to
>>>      only ask for classes that it knows it is authorized to request - the
>>>      developers of the application can learn that through the .property
>>>      files, if necessary.  But, I believe, it is best to leave the
>>>      KeyClass out unless the application absolutely needs to specify it;
>>>
>>> C) The SKS server, upon receiving the request and verifying the identity
>>>     and authorization of the client (through the certificate validation
>>>     process and a lookup of the DN in its own database), will determine
>>>     ALL the authorized KeyClasses for that client.
>>>
>>>     If the client did NOT request a specific KeyClass, the SKS server
>>>     will choose the most rigorous KeyClass from its list of authorized
>>>     key-classes for *that* client to generate the key.  If there is not
>>>     a single specific key-class that applies to that client, the Default
>>>     KeyClass will be used;
>>>
>>> D) SKS Server now generates the key, escrows it, and returns the object
>>>     in a SymkeyResponse using the SKSML protocol.
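>>>
>>> (Again for illustration only - this is not the SKSML protocol itself, just
>>> a minimal sketch of the server-side choice in steps B and C: honor an
>>> authorized requested KeyClass, otherwise pick the most rigorous authorized
>>> class, otherwise fall back to "Default". The "rigor" metric and all names
>>> here are assumptions.)
>>>
>>>     # KeyClass name -> (key_size_bits, validity_days)
>>>     KEY_CLASSES = {
>>>         "TapeLibraryClass":        (256, 3650),
>>>         "Web Server Cookie Class": (128, 30),
>>>         "Default":                 (168, 365),
>>>     }
>>>
>>>     def select_key_class(authorized_classes, requested=None):
>>>         """Decide which KeyClass to use for a client's SymkeyRequest."""
>>>         if requested is not None:
>>>             if requested not in authorized_classes:
>>>                 raise PermissionError("client not authorized for " + requested)
>>>             return requested
>>>         if not authorized_classes:
>>>             return "Default"
>>>         # "Most rigorous" is not defined by the spec; the largest key size
>>>         # is used here purely as a stand-in.
>>>         return max(authorized_classes, key=lambda c: KEY_CLASSES[c][0])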
>>>
>>> Some important things to note:
>>>
>>> i) The SKS server NEVER deletes a key on its own regardless of when it
>>>     expires.  The SO can choose to deactivate keys they don't use, or
>>>     delete them either through the Administration Console or through a
>>>     scheduled job.  In any case, a client cannot just automatically get
>>>     an existing symmetric key because it asked for it.  It must be
>>>     explicitly authorized to get a specific key, through an individual
>>>     KeyGrant or a group KeyGrant (where the client is part of a group of
>>>     clients that are authorized to get a specific key; groups can also
>>>     be granted access to all keys within a KeyGroup, thereby avoiding
>>>     the need to keep providing explicit grants to every key in the SKMS).
>>>
>>> ii) An SKMS is designed to manage not just millions of keys, but up to
>>>     2^64 of them (subject to hardware limitations).  It is my personal
>>>     belief that within 10 years, ALL data will get encrypted, since it
>>>     will be less expensive to encrypt all data than to make a decision
>>>     of what to encrypt and what not to encrypt.  As a result, there will
>>>     be an EKMI in every enterprise serving up billions of keys for every
>>>     application in their infrastructure.  So, the design is deliberate
>>>     and conscious.
>>>
>>> iii) A client is NEVER expected to generate a key on its own.  Why?
>>>     Here is an excerpt from the ACM paper I published some months ago,
>>>     which answers the question:
>>>
>>> -----
>>> An alternative architecture is to define policies centrally and push 
>>> them down to the clients, and similarly have the clients generate 
>>> keys locally and push them up to the server.  However, the SKMS 
>>> architecture avoided this design for one reason: to avoid the 
>>> possibility of catastrophic data-loss.
>>>
>>> If a client were to generate a symmetric key locally, encrypt the
>>> plaintext, delete the plaintext (to eliminate the vulnerability), but
>>> cannot persist or send the generated symmetric key to the server for
>>> any reason, the plaintext might be lost forever.
>>>
>>> While it is possible to design around such conditions, the complexity
>>> of the SKCL increases significantly because it is difficult to predict
>>> potential catastrophic conditions on a client machine - especially
>>> mobile devices.  With centralized policy-definition and
>>> key-generation, this loss is avoided altogether by escrowing the
>>> symmetric key first, and then sending it to the client for use.
>>> -----
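>>>
>>> (One more illustrative sketch, not from the spec or from StrongKey: the
>>> "escrow first, then release" ordering described in step D and in the
>>> excerpt above. os.urandom stands in for real key generation, a dict for
>>> the escrow database, and the urn:example key-id format is made up.)
>>>
>>>     import os
>>>
>>>     ESCROW = {}   # key-id -> key bytes; a real SKS would use a protected DB
>>>
>>>     def issue_symkey(client_dn, key_class, key_size_bits=256):
>>>         key = os.urandom(key_size_bits // 8)        # generated centrally
>>>         key_id = "urn:example:symkey:" + os.urandom(8).hex()
>>>         ESCROW[key_id] = key                        # escrowed BEFORE release
>>>         # Only now is the key returned - in SKSML, inside a SymkeyResponse,
>>>         # encrypted for the client's certificate (omitted here).
>>>         return key_id, key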
>>>
>>> In conclusion, there are a number of things that are not explicit in the
>>> SKSML protocol.  This is deliberate because vendors need the flexibility
>>> to innovate above the protocol.  All the capabilities I've described
>>> above are in StrongKey 1.0 (whose source is available openly, as you
>>> know).  But, if another vendor wants to go above and beyond this, they
>>> should feel free to
>>> do so, while conforming to the SKSML protocol (if they want to be OASIS
>>> standards compliant).
>>>
>>> If the TC believes that more of this capability needs to be in an OASIS
>>> standard, this can be done in a sub-committee of the EKMI TC.  As a
>>> member of the TC, you are entitled - and even encouraged - to start
>>> new sub-committees that focus on new aspects of EKMI if you wish.  The
>>> only thing you need to be cognizant of is the OASIS rules for how it
>>> needs to get done; that's all.
>>>
>>> For the initial goal, this TC decided that standardizing SKSML 1.0 was a
>>> priority.  It also decided that it will create Implementation and Audit
>>> Guidelines as its next steps.  But, there's nothing to prevent a new
>>> sub-committee from spinning up and starting a new activity at any time.
>>> (That's exactly what we did with the Flash Demo subcommittee a couple of
>>> months ago).
>>>
>>> I apologize for this long e-mail response, Tim, but I hope it gave you
>>> the answers you were looking for.  If not, keep asking even if it
>>> appears you are sinking the boat - you won't be, I assure you. :-)
>>>
>>> Arshad Noor
>>> StrongAuth, Inc.
>>>
>>>
>>> Bruce, Timothy R wrote:
>>>> Forgive me if I am seen as "rocking the boat," but I do need some
>>>> clarification on the current SKSML specification.  I do see how the
>>>> current specification could be employed to implement a network-enabled
>>>> symmetric key management architecture, but I am concerned about the lack
>>>> of information about the client application or device as seen from the
>>>> SKS.
>>>>  
>>>>
>>>> The vision as described by the OASIS TC assumes the existence of a 
>>>> variety of traditional and non-traditional applications and devices 
>>>> scattered about within an enterprise all capable of cryptography and 
>>>> therefore all needing an integrated key management solution.  The 
>>>> proposed architectural solution from the TC involves the client-server
>>>> model enabled by a standardized protocol for requesting and delivering
>>>> key materials.
>>>>  
>>>>
>>>> Clearly the proposed SKSML specification delivers such a protocol, but
>>>> what I am concerned about is how well the server will scale based upon
>>>> the proposed specification.  To get right to the point, what I see
>>>> as missing from an SKS server standpoint is the ability to
>>>> intelligently create and maintain policy-based key pools at the
>>>> server, where different pools or types of pools have different
>>>> characteristics based upon how the key is used by the application
>>>> requesting the key.
>>>>
>>>>  
>>>>
>>>> Take a simple scenario where I have a need to encrypt at-rest data
>>>> on backup tape devices and ensure that the keys are retained for
>>>> 10 years from their last use in encrypting data, and I have a second
>>>> need like the one Arshad mentioned during the June meeting where I
>>>> want to encrypt browser cookies across every desktop and laptop in my
>>>> enterprise (say 60,000).  Clearly from a server perspective I probably
>>>> do not need to keep the key used to encrypt cookies for 10 years, and
>>>> with 60,000 desktops and laptops running browsers and requesting new
>>>> keys on a regular basis this could result in millions of keys being
>>>> retained at the server way beyond their usefulness.
>>>>
>>>>  
>>>>
>>>> Now I could define key classes on the server and establish policies for
>>>> how to manage the keys based upon the key class, but that means that for
>>>> every application or device I want to bring on-line and manage in unique
>>>> ways, I will need to define a new key class at the server, and every
>>>> instance of that application or device would need to be configured
>>>> to request keys from that key class.
>>>>
>>>>  
>>>>
>>>> What I was hoping to see in terms of an EKMI specification was one which
>>>> supported plug-and-play as well as customized configurations.  To be
>>>> plug-and-play, I believe the standard needs to also contemplate
>>>> standardized protocol elements and values to place client applications
>>>> and devices into a finite series of classifications.  And so backup
>>>> applications and/or devices could be easily identified as such by
>>>> the SKS server, and a policy at the server could associate all backup
>>>> applications and devices with a specific key class (a key pool with
>>>> common management characteristics).  A web browser would also have an
>>>> assigned value that was part of the SKSML specification, which again
>>>> would allow the SKS server to properly classify the client application
>>>> and assign the proper key class without having to configure anything
>>>> outside of the SKS server (plug-and-play).  The SKSML already allows an
>>>> application to request a key from a specific key class, so the
>>>> flexibility is there to customize applications, and SKS policies could
>>>> choose to honor keys from specific classes or could decide to ignore
>>>> the key class requested by the application; based upon other
>>>> controlling SKS server policies and the application's "classification",
>>>> the SKS server would decide upon the proper key class.  SKS server
>>>> policies could also be written to combine information based upon the
>>>> application's or device's classification with information from the CN
>>>> of its X.509 certificate and/or its requested key class to provide a
>>>> more granular set of managed key classes.  But if the SKS server is not
>>>> provided with some sort of information that would allow the server to
>>>> classify the type of application or device, then plug-and-play becomes
>>>> very difficult without treating all keys as equals in terms of
>>>> retention, access and caching policies.
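>>>>
>>>> (To make the idea concrete - and only as a hypothetical sketch, since no
>>>> such standardized classification values exist in SKSML today - the
>>>> server-side mapping being proposed might look like this; every name and
>>>> value below is invented.)
>>>>
>>>>     # Hypothetical standardized client-classification value -> key class
>>>>     CLASSIFICATION_POLICY = {
>>>>         "backup-device": "TapeLibraryClass",
>>>>         "web-browser":   "BrowserCookieClass",
>>>>     }
>>>>
>>>>     def classify(client_classification, requested_class=None):
>>>>         """SKS-side policy: derive the key class from the client's declared
>>>>         type, falling back to whatever class the client asked for."""
>>>>         return CLASSIFICATION_POLICY.get(client_classification,
>>>>                                          requested_class or "Default")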
>>>>
>>>>  
>>>>
>>>> In the interest of plug-and-play, I also believe it is necessary for the
>>>> SKSML to allow the client to request a key for a specific algorithm in a
>>>> strength that is supported by the client application.  Of course,
>>>> one implementation would be to have key classes associated with
>>>> specific algorithms and key strengths, but if we are to enable
>>>> industry-wide plug-and-play then the SKSML should establish
>>>> standardized key classes that are part of the specification and must
>>>> therefore be supported by any SKS server implementation.
>>>>
>>>>  
>>>>
>>>> To achieve some of this application and device awareness at the SKS
>>>> server, the key class is one possibility, or this information could
>>>> be passed via the CN information of the X.509 certificate; but either
>>>> way, if the values to classify the generic types of applications and
>>>> devices, as well as the algorithms supported by the client, are not
>>>> industry-standardized values, then I do not see how this could foster
>>>> a plug-and-play key management capability.
>>>>
>>>>  
>>>>
>>>> Where have I gone wrong?
>>>>
>>>>  
>>>>
>>>> Tim Bruce
>>>> Principal Software Architect, Development
>>>> 5465 Legacy Drive
>>>> Plano, Tx.  75024
>>>> tel:  214-473-1917
>>>> fax:  214-473-1069
>>>>
>>>> <http://www.ca.com/>
>>>>
>>
>> --------------------------------------
>> Anil Saldhana
>> Leader, JBoss Security & Identity Management
>> Red Hat Inc
>> URL: http://jboss.org/jbosssecurity
>> BLOG: http://anil-identity.blogspot.com
>> ---------------------------------------
>>

