Subject: Re: [xacml] Follow up on TC call
On Mon, 16 Sep 2002, Daniel Engovatov wrote:

> I will try to list some of my specific objections to the proposed schema
> modification.
>
> 1) Target matching. I believe that since any complicated logic, like
> applying some rule to everybody whose second letter in the last name is
> "n", can be easily and effectively expressed in the condition, target
> matching logic should be limited to operations allowing for effective
> indexing. Namely, simple equality comparison that can be equivalently
> expressed as equality comparison of some canonical string representation
> of a particular data type, if not integer or string; or matching that can
> be reduced to such comparison for "hierarchical" data types, such as X500
> names, where the "parent" name can be inferred from the supplied value.
> Application of simple set operations based on equality, such as set-equal
> or member-of, can also be effectively "flattened out". I cannot say the
> same about arbitrary binary predicate operations, and I did not see any
> value in supporting such operations in target matching, as all the logic
> can be expressed in the condition.

Target matching, i.e. the MatchId function, is only allowed (by type checking) to name such algorithms as "*-equal" or some other boolean binary function, such as x500Name-match. Some may be easily indexable and some may be more difficult, depending on the amount of work you put into it. You do not specify functions that process lists or sets, because the *Match construct already carries those semantics, i.e. those of the function we have specified, "function:any-of".

> 2) Extensibility. While it is true that supporting just the few
> "higher-order" predicates is not troublesome, including the <function>
> as a legal argument to any extension function complicates the
> development of a proper extension interface.

Who said you have to create "extension" functions that take a function as an argument?
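To illustrate the point about the *Match construct, here is a sketch of a target match naming x500Name-match as the MatchId (identifiers follow the XACML 1.0 naming convention; the DN value is hypothetical):

```xml
<!-- Sketch only: the *Match element itself supplies the "any-of"
     semantics over the bag of values returned by the designator,
     so no explicit list or set function ever appears here. -->
<SubjectMatch MatchId="urn:oasis:names:tc:xacml:1.0:function:x500Name-match">
  <AttributeValue
      DataType="urn:oasis:names:tc:xacml:1.0:data-type:x500Name">O=Example,C=US</AttributeValue>
  <SubjectAttributeDesignator
      AttributeId="urn:oasis:names:tc:xacml:1.0:subject:subject-id"
      DataType="urn:oasis:names:tc:xacml:1.0:data-type:x500Name"/>
</SubjectMatch>
```

The MatchId is a simple boolean binary function; the matching-over-a-bag behavior comes from SubjectMatch, not from the function named.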
You don't have to do that to be conformant or compliant with the specification.

> For example, take an extension function "all-greater" that takes an
> integer and a sequence of integers and returns a boolean. If some policy
> requires it, this can be implemented as a library with a very simple
> interface: you pass in a canonical representation of the integer data
> type and it returns a canonical string representation of a boolean value.

That is fine for your version of XACML. What about people who want standard functionality? They can get that by applying "all-of" to "integer-greater-than", a value, and a bag of values.

> Implementing such an extension with <function> passed to it will require
> an interface with "callback" functionality, to allow the extension to
> invoke an arbitrary other operation. For multiple bindings it becomes
> troublesome. But since such an argument is allowed, it has to be
> implemented.

That is not the case. XACML does not preclude extensions, but they don't *have* to be implemented. You can very well say that the higher-order functions do not have to be implemented for any functions other than the ones listed in the spec. That is, you don't have to allow "function:any-of" to be used with any of your functions that extend XACML. In fact, you are still free to "extend" XACML with a function that performs your extension comparison for lists in your own search pattern, i.e. you are allowed to extend XACML with a function such as "all-integer-greater-than". Since it is outside the specification, you can make it do anything you want. Your customer will have to have your manual, not XACML, in her hand to know what that function does, no matter what.

> 3) Redundancy. There are easy ways to accommodate the same
> functionality in the Apply construct without extending the schema.

The extension of the schema is minimal, and it is SOUND.
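The standard route described above, applying "all-of" to "integer-greater-than", a value, and a bag, can be sketched like this (identifiers follow the XACML 1.0 naming convention; the resource AttributeId is hypothetical):

```xml
<!-- Sketch only: "all-of" applies the named binary function to the
     literal value and each member of the bag returned by the
     designator. The AttributeId "urn:example:resource:sizes" is a
     made-up placeholder. -->
<Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:all-of">
  <Function FunctionId="urn:oasis:names:tc:xacml:1.0:function:integer-greater-than"/>
  <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#integer">10</AttributeValue>
  <ResourceAttributeDesignator
      AttributeId="urn:example:resource:sizes"
      DataType="http://www.w3.org/2001/XMLSchema#integer"/>
</Apply>
```

This evaluates to true only if 10 is greater than every value in the bag, i.e. the proposed "all-greater" behavior, with no extension function at all.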
The <Function> element, as opposed to naming the function in an <AttributeValue> element, separates the name of the function into the function space, as opposed to the value space. This is important, because if the function were named in the value space, you could perhaps pull it out of some attribute designator. That would pose more of a problem, because you wouldn't be able to check for support of the named function at compile time.

> One can either add specialized sequence functions for different
> underlying binary operations, such as the aforementioned "all-greater".
> This has the disadvantage of involving a large number of functions -
> several per binary predicate, one per "*-map" function - but the
> advantage of a simple interface, and the fact that the implementation
> of the low-level functionality can be done in some general-purpose
> language, not only for the built-in operations.

I agree, but to each his own implementation.

> The other way is to mimic the <function> operation by including a new
> data type listing allowable function names for a particular operation.

This still has to be done, as you have to check whether the name refers to a function of the correct type.

> Inferring the proper data type can be done either by looking at the
> other arguments or by restricting the list of operations - leaving it
> to the implementation of a particular function to determine what
> operations are legal.

As I said before, creating a new data type (i.e. AttributeValue) for function names is problematic, as you might then be able to retrieve them from an AttributeDesignator lookup. That complicates type checking if you don't have a grasp of what function names are stored in attributes outside of the policy.

> This approach has the added flexibility that the operations can be
> specified in context as attribute values.
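The function-space versus value-space distinction argued above is visible in the markup itself. A sketch of the two alternatives (the first uses a standard XACML 1.0 identifier; the data type in the second is a hypothetical placeholder for the rejected design):

```xml
<!-- Function space: the name is fixed in the policy text, so a
     processor can verify at compile time that it supports it. -->
<Function FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-equal"/>

<!-- Value space (the alternative argued against): the name is just
     data of some invented "function-name" type, and could equally
     arrive via an AttributeDesignator lookup at evaluation time,
     defeating static checking. -->
<AttributeValue
    DataType="urn:example:data-type:function-name">urn:oasis:names:tc:xacml:1.0:function:string-equal</AttributeValue>
```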
That is just the thing that is doable, but frightening: now you cannot do assurance analysis on your policy when it looks up a function name as an attribute value, because you may not know the semantics of the function named in the attribute, or even whether it is supported by your XACML processor.

> 4) Non-sufficiency. While the proposed extensions solve some problems,
> any complicated logic, such as "true if half the values are 'foo' and
> none is 'bar'", will still require extensions.

So what? If you are going to add extensions, publish your manual and its specs, and use away.

> If such extensions can follow some easily portable API,
> interoperability of implementations will be greatly enhanced.

I see no real problem in doing exactly what you say.

> In my opinion a large portion of policies will require such extensions.

Okay.

> If there are no "operations" passed as values to such an API, one can
> easily design a schema for such interoperability APIs and implement it
> as a web service or commonly used component.

I don't see why you can't.

-Polar

> Daniel;