
Subject: Minutes from March 21 Trust-el call


Minutes for the meeting of the Electronic Identity Credential Trust Elevation Methods (Trust Elevation) Technical Committee

March 21, 2013.

1. Call to Order and Welcome.

 

2. Roll Call

Attending (please notify me if you attended the meeting but are not on the list below)

 

Abbie Barbir, Bank of America - y

Anil Saldhana, Red Hat  

Bob Sunday

Brendan Peter, CA

Carl Mattocks, Bank of America

Cathy Tilton, Daon

Charline Duccans, DHS

Duane DeCouteau

Colin Wallis, New Zealand Government  

Dale Rickards, Verizon Business 

David Brossard, Axiomatics 

Dazza Greenwood 

Debbie Bucci, NIH 

Deborah Steckroth, RouteOne LLC

Detlef Huehnlein, Federal Office for Information

Don Thibeau, Open Identity Exchange   -y

Doron Cohen, SafeNet

Doron Grinstein, BiTKOO

Gershon Janssen 

Ivonne Thomas, Hasso Plattner Institute

Jaap Kuipers, Amsterdam  

James Clark, OASIS

Jeff Broburg, CA

John Bradley 

John "Mike" Davis, Veteran's Affairs 

John Walsh, Sypris Electronics

Jonas Hogberg

Julian Hamersley, Adv Micro Devices

Kevin Mangold, NIST

Lucy Lynch, ISOC

Marcus Streets, Thales e-Security

Marty Schleiff, The Boeing Company

Mary Ruddy, Identity Commons  - y

Massimiliano Masi, Tiani "Spirit" GmbH 

Nick Pope, Thales e-Security

Peter Alterman, SAFE-BioPharma,  - y

Rainer Hoerbe

Rebecca Nielsen, Booz Allen Hamilton 

Rich Furr

Ronald Perez, Advanced Micro Devices

Scott Fitch, Lockheed Martin

Shaheen Abdul Jabbar, JPMorgan Chase Bank, N.A. - y 

Shahrokh Shahidzadeh (Intel Corp)  -y 

Suzanne Gonzales-Webb, VA  - y

Tony Rutkowski

Tony Nadalin

Thomas Hardjono, M.I.T.  

William Barnhill, Booz Allen Hamilton

Adrianne James, VA - y

Patrick, Axiomatics

 

71 percent of the voting members were present at the meeting.  We did have quorum.

 

 

3. Agenda Review and Approval

 

 

We used the following chat room for the call: http://webconf.soaphub.org/conf/room/trust-el. The chat room text is included at the end of the minutes.

 

There were no additions to the agenda.

Agenda was approved.

 

 

4. Approval of the Minutes

 

Abbie asked if there were any objections to approving the minutes from the last meeting on March 7, 2013.

None heard.

The minutes were approved.

   

 

5. Guest Speaker: Ant Allan on GAMES

 

Abbie thanked Ant, a real expert. 

 

The slides are available via the link posted in the chat room: https://www.oasis-open.org/apps/org/workgroup/trust-el/documents.php

 

Ant explained that this is an outline of something he developed a few years ago. Can one use the NIST guidelines? Yes, but strength isn't the only criterion. It is a key part of the assessment, but you also need to think about UX and business issues, especially customer retention in business services, and TCO. The other point is that guidelines lie: NIST provides some ranking of authentication methods but doesn't say why they are ranked that way, and if there is a method not on the list, how do you assign it? That might be moot if you are required to use NIST, but others are looking at, for example, biometric methods, so a general method of ranking is needed. GAMES fills that gap. GAMES was initially published in 2008. It is a toolkit: spreadsheets for documenting the evaluation. A second iteration broke things down into sections (four shorter documents), and there may be a third iteration this year. What has changed? It has gone from a general description to a full chart of accounts, using a framework from Gartner's PC TCO work. It has gone from considering ease of use to considering UX in the round. Looking at authentication strength, there were quite a few changes. The question of accountability came to the fore in conversations, for example the idea of what can happen internally when credentials are shared, as at Société Générale. That is part of the overall assurance, but we try to call it out separately. The original strength analysis was very complex, with lots of precise numeric values that suggested a level of precision in the results that couldn't be justified, so he has switched to a more qualitative analysis. The focus is on two aspects of assurance: the method's resistance to attack and its resistance to willful misuse. It isn't just about willful misconduct; if a method is susceptible to that, it is probably also susceptible to social engineering.

 

Slide 4 shows the framework. We will go through these steps. It indicates what you evaluate and how it is combined for the score. One other thing: the blue pieces are inherent to the particular method, and the green pieces are everything else that influences it. Even if you have a strong method fully implemented, a process such as password reset can make the method weaker if it doesn't follow best practice. He saw people strengthening password rules but not strengthening the password update process. In the real world, you need to think of those elements and how they can reduce security. Also, one can look at putting compensating controls in there, such as fraud detection; of course we need to be careful about double counting. The mindset is important. It is very easy to understand how an authentication method works and how it provides confidence, but we need to try to break it. We encourage that mindset.

 

Slide 5 is a general illustration of how we combine different characteristics. There is a little math behind this. 

 

Slide 6 is about how we get to accountability. The particularity of the method is about whether it is unique and uniquely mapped to the user. The latter becomes interesting when we talk about behavior patterns and context; context could be common to many users, so its value is less. The second element is binding. We changed the meaning of binding in the second version: now we talk about the ease of sharing the credential with another person. Combining these, we get a level of accountability.
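
As a purely illustrative sketch of that combination (the level ordering and the take-the-weaker rule are assumptions for illustration, not the actual GAMES combination logic), in Python:

    # Hypothetical sketch: combine qualitative "particularity" and "binding"
    # ratings into an accountability level by taking the weaker of the two.
    # The levels and the combination rule are assumptions, not GAMES values.
    LEVELS = ["low", "medium", "high"]

    def accountability(particularity, binding):
        # Take the lower (weaker) of the two qualitative ratings.
        return min(particularity, binding, key=LEVELS.index)

    # Example: a highly particular method (e.g. a biometric) whose credential
    # is nonetheless easy to share ends up with low accountability.
    print(accountability("high", "low"))  # -> low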

 

Slide 7 is the thinking about masquerading attacks. This is broken into four parts and was informed by the biometric evaluation methodology. The four avenues are: can you get possession of the item; can you capture the authentication information that is exchanged (can what goes across the wire be captured and replayed); can you modify the server side to use your own credential (get the password changed or insert your own biometrics); and can you modify the decision or how it is informed? The fourth is greyed out, as it isn't about the authentication method itself but the overall system: can you tamper with the ticket after it is generated, or make it so the response isn't tested? Some attacks are around the infrastructure itself rather than being specific to the method. The key here is which avenue is easiest for an attacker; that is what determines the strength of the method. If attack 3 is easy, that is the way the attackers will go.
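
The weakest-link principle described above can be sketched in a few lines of Python (purely illustrative; the avenue names and the 1-5 resistance scale are assumptions, not values from the GAMES scorecard):

    # Hypothetical sketch: "the easiest attack determines the strength".
    # Each avenue gets a resistance rating; higher means harder to attack.
    # The 1-5 scale and the example ratings are assumptions for illustration.
    avenue_resistance = {
        "obtain the item": 4,
        "capture or replay the exchanged information": 2,
        "subvert the server-side credential": 5,
    }

    # The method is only as strong as its weakest avenue.
    method_strength = min(avenue_resistance.values())
    weakest_avenue = min(avenue_resistance, key=avenue_resistance.get)
    print(method_strength, weakest_avenue)  # -> 2 capture or replay the exchanged information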

 

Slide 8 is about how we can approach this: how can we get something that is reliable and repeatable, and minimize subjectivity? So we tried to break things down discretely. Is it a proven risk or a theoretical one? Here it leverages a vulnerability scoring system, to provide some structure.

 

Slide 9: looking at the green bits, how do we assess resistance to attack? Blue is the raw resistance; the green is about how the method works in practice, independent of any vendor, though some of these factors stem from aspects of the authentication method itself. If you have a poor UX, users will take steps to improve the experience for themselves, which typically weakens the method (such as writing passwords down).
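
A rough sketch of how practical weaknesses can pull the effective resistance below the raw resistance (the one-level-per-factor downgrade and the level names are assumptions for illustration, not GAMES rules):

    # Hypothetical sketch: start from the method's raw resistance and downgrade
    # it one level for each weakening practice observed in deployment
    # (e.g. passwords written down, a lax reset process).
    LEVELS = ["very low", "low", "medium", "high", "very high"]

    def effective_resistance(raw, weakening_factors):
        index = LEVELS.index(raw) - len(weakening_factors)
        return LEVELS[max(index, 0)]

    # Example: a "high" raw method with two observed weaknesses drops to "low".
    print(effective_resistance("high", ["passwords written down", "lax reset process"]))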

 

Slide 10: this and the following slides are examples of the scorecard. It takes us stepwise through the flow. One of the things to call out is the attack worksheet on the next slide.

 

Slide 11 looks at the second type of attack: the different ways to get hold of the authentication information. That assessment gets plugged into the previous scorecard. The scorecard is important in itself: having documentation allows you to reassess your evaluations and understand why and how you came to those answers. It also gives you ammunition with auditors; risk and compliance colleagues like documentation.

 

Slide 12 is a summary of the framework again.

Ant asked for questions before going on to the UX and TCO parts.

 

Abbie commented that it would be best to finish the presentation first and then go to Q&A.

 

Slide 13 looks at UX considerations. In the published research, we had four bullet points, distilled from UX experts. This is a useful framework rather than an ideal framework; the goal was to provide consistency and reduce subjectivity. The other considerations have influences on other parts of the assessment. Learnability has an impact on TCO: is support and education needed? Aesthetic appeal and utility are about acceptance: can the user see the value in this method? Is it appealing to use? This is not an overriding concern in an enterprise context, but it is anywhere you have an elective user. There is potential to damage the relationship and lose customers. One bank said these devices (hardware OTP) look cheap and don't reflect well on the bank. They have seen from surveys that customers have left banks. Some methods reduce fraud but create business risk. Privacy concerns possibly fit in here: do we use biometrics and start folding in social networking material? Is this something that will affect user acceptance?

 

Slide 14 is a very high level TCO. Based on the TCO-for-a-PC model, we extrapolated various aspects and tailored them to this situation. A lot of decision makers think about license cost and the replacement cost of tokens. The logistics of sending tokens comes up fairly often. There are some things that are pain points, and they can be important. One reference customer said that when choosing between two vendors of one method, it was the hardware cost that drove the final decision: one vendor required two servers rather than one, and that was the final determinant. The reality in practice seems to be that the customer wasn't able to get a good estimate of the cost elements.

 

Abbie thanked Ant, and opened the floor for Q&A.

 

Don asked Ant if he could talk more about the survey results of lost customers.

 

Ant said that yes, he can pass that reference along via Mary. Three years ago there was a survey on banks' responses to FFIEC guidance. We found that 12% of customers considered changing their banks because they didn't like the new steps, and 3% actually did.

 

Don said that another notion Ant had talked about was compensating controls, and asked if he could speak more to that.

 

Ant replied that one of many compensating controls is another, novel authentication method. One example is the success seen with web fraud detection tools: I have a method that I know is weak compared to the potential risk, but I have a way of dynamically assessing the level of risk. The situation has become less clear now, as we are seeing some features of the web fraud tools being folded into user authentication products. So rather than it being an indicator of risk when a score falls outside the norms, it is seen as a method of assessing trust. If you have one of these evolved products and also a web fraud detector from another vendor, you have to worry about interactions between the two and potentially double counting. One of the things you might do, which falls under the category of intelligence, is to say: I don't really trust you, but I'll keep an eye on you. Even if you have an active compensation process, I'll just have more granular monitoring; use monitoring as a way of detecting things after the event, so you can do some kind of remediation. Web fraud detection is key. Outside the banking context, we see that the motivation to use those techniques has been to improve the user experience by minimizing the impact of a traditional strong authentication method. Using these techniques, what do you do when things fall outside the norms? You need step-up, and you want to minimize the number of occasions on which you have to do that. One of the key areas where we see that trade-off is in healthcare, for hospitals with affiliated physicians. The physicians end up with multiple tokens, one from each hospital. Using this approach to allow a physician to get access from a known machine in a known location smooths relationships. So when you make a different decision about basic approaches, you have a compensating mechanism.
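
The step-up behaviour Ant describes (accept the basic method while the risk assessment is within norms, challenge and monitor more closely when it is not) could be sketched as follows; the risk_score function, the signals, and the threshold are hypothetical placeholders, not any vendor's fraud-detection API:

    # Hypothetical sketch of risk-based step-up authentication.
    KNOWN_DEVICES = {"dev-123"}
    KNOWN_LOCATIONS = {"hospital-a"}

    def risk_score(device_id, location):
        # Assumed signals: an unfamiliar machine and an unfamiliar location add risk.
        score = 0
        if device_id not in KNOWN_DEVICES:
            score += 50
        if location not in KNOWN_LOCATIONS:
            score += 30
        return score

    def authenticate(device_id, location, threshold=40):
        # Within norms: allow the basic method, with more granular monitoring.
        # Outside norms: require step-up (e.g. a one-time passcode).
        if risk_score(device_id, location) <= threshold:
            return "allow with monitoring"
        return "step-up required"

    print(authenticate("dev-123", "hospital-a"))  # known context -> allow with monitoring
    print(authenticate("dev-999", "elsewhere"))   # outside norms -> step-up required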

 

Abbie said that what we are doing is a dataflow for authentication, from enrollment to registration, to delivery and use, and from the database through transmission, as we have some problems identifying internally what the authoritative source is for determining the binding. It is not easy to determine the value to put in the scorecard. Some of the information is subjective.

 

Ant suggested having some structure to the kinds of attacks that are of interest and also to assess the difficulty of the attack.

 

Abbie said that what Ant described is an evaluation of strength within one organization. If you want to operate in a federated environment, should we sit down and update the SAML profile? This may be one of the items in our first deliverable.

 

Ant replied that this was designed for use within an enterprise. One of the factors is that, although the methodology was structured to reduce subjectivity as far as possible, you can't eliminate it altogether. One may get different assessments over time; people, environments, and assumptions change. If you took two very different organizations, they might come up with different results. Another point is around the role of accountability and assurance. Some of the administrative work Kantara has done in this area concerns trust in the IdP, but how that actually influences trust in the authentication method used is not surfaced by that approach; it doesn't address risk from specific methods. The classic example was with bank payment cards: if a card was misused, the bank would say the customer wasn't careful enough, until there was a case of a bank employee who had a new card sent to himself, committed fraud, and then changed the address back. So yes, you may trust the bank, but it is the nitty-gritty that can undermine a particular transaction.

 

Mary thanked Ant for coming on short notice and providing his insights.

 

Abbie said we need to build on expert input such as Ant's GAMES work. We are still working on how we will proceed with the third deliverable.

 

Mary said we need to move forward with structuring the actual third document.

 

Abbie called for more editors.  The editors have a biweekly meeting at 9:00 AM on weeks when there is no TC call. If you would like to be an editor, let Abbie, Mary or Don know and we will add you to the list.

 

 

6. Attendance Update

We achieved quorum.

 

7. Adjournment

Abbie asked for a motion to adjourn.

Don made a motion to adjourn.

Shaheen seconded it.

The meeting was adjourned.

 


 

 

abbie barbir bofa: CHAT ROOM

 

http://webconf.soaphub.org/conf/room/trust-el

 

Passcode: 637 218 8139

 

US toll free 1-866-222-6652

 

Int'l Toll: 1-980-939-6928

 

abbie barbir bofa: 1. roll call

 

2. agenda bashing

 

3. minutes approval

 

4. presentation by Ant Allan from Gartner

 

5. Editors update

 

6. roll call

 

7. close meeting

 

abbie barbir bofa: presentation is at

 

abbie barbir bofa: https://www.oasis-open.org/apps/org/workgroup/trust-el/documents.php

 

anonymous morphed into Suzanne Gonzales-Webb

 

anonymous1 morphed into Don Thibeau

 

anonymous1 morphed into Mike Harrop

 

abbie barbir bofa: mike i did send u the slides on your gmail account

 

anonymous morphed into M. Simon

 

Don Thibeau: please comment on the application of Levels of Assurance in the US and UK for programs like FICAM and the UK IDAP WRT to government security standards and commercial risk management - two different mindsets as you note

 

anonymous11 morphed into shahrokh-Intel


