Hi Rich,
I need to parse your other email carefully, and I will respond to
it. :-)
But regarding this comment: the fact that the algorithm is defined
as if all children are evaluated does not mean that an
implementation has to work that way. The spec explicitly states
this, and also that no XACML function may have side effects. So
out-of-order evaluation, lazy evaluation, etc., are all permitted,
as long as the result is the same.
The spec should strive for the simplest possible explanation of the
behavior, not the most efficient implementation.
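To make the point concrete, here is a hypothetical sketch (my own
illustration, not spec text): because children are side-effect-free,
an eager combiner that evaluates everything and a lazy one that
stops at the first Deny must agree on the result.

```python
# Hypothetical sketch: eager vs. lazy deny-overrides over
# side-effect-free children (each child is a no-arg callable).

DENY, PERMIT, NOT_APPLICABLE = "Deny", "Permit", "NotApplicable"

def deny_overrides_eager(children):
    # Evaluate every child first, as the algorithm is written.
    decisions = [child() for child in children]
    if DENY in decisions:
        return DENY
    if PERMIT in decisions:
        return PERMIT
    return NOT_APPLICABLE

def deny_overrides_lazy(children):
    # Evaluate children one at a time, stopping at the first Deny.
    saw_permit = False
    for child in children:
        decision = child()
        if decision == DENY:
            return DENY
        if decision == PERMIT:
            saw_permit = True
    return PERMIT if saw_permit else NOT_APPLICABLE
```

(The Indeterminate cases are omitted here for brevity; the same
argument applies to them.)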
Best regards,
Erik
On 2011-05-16 21:12, rich levinson wrote:
Hi Erik,
One more point that may not be obvious at first look at C.2.
Even though C.2 does have the statement:
if (decision == Deny) {
return Deny;
}
the problem is that the "evaluate" would have to be done for all
the nodes before the algorithm is even called, because the
parameter is (Decision[] decisions), which to me at least implies
that all the evaluation has already been done to determine those
decisions.
The change I am recommending is passing in (Node[] nodes),
which have not been evaluated, and do not get evaluated until
they are encountered in the loop.
Thanks,
Rich
On 5/16/2011 2:45 PM, rich levinson wrote:
Hi Erik,
Responses inline:
On 5/16/2011 11:42 AM, Erik Rissanen wrote:
Hi Rich,
I am about to post a new draft, but I noticed I need to get
some clarifications on your comments before I do that.
See inline.
On 2011-05-11 16:43, rich levinson wrote:
Hi Erik,
Yes, I am referring to WD-19 when I say "leave it as it is".
That being said (sorry), I do think there is some additional
"clean-up" required on this overall issue.
There is also an unrelated typo we found in section A.13.
I will just list the following "clarifications" to WD-19 and
Erik's current email:
- "from the appendix to section 9" ?? I think you mean
section 7.13 (or earlier (see below)).
Yes.
<Rich>
ok
</>
- Since we are introducing Ind{P,D,DP} in section 7, I
think it needs to also be included
in Table 4 for Rule evaluation, and possibly other
places.
Yes, to make it clearer. Though the text in appendix C (to be
moved to Section 7) already contains the base case for the rule,
it's better to put it in the table to avoid confusion.
<Rich>
That's what I was thinking as well, except possibly even more
so, because one of the effects
of this change is that, at least for this type of processing
(analyzing the Indeterminate to
determine D, P, or DP), the Rule, Policy, and PolicySet
processing is now the same, which
I believe makes the whole thing easier to understand, although
it becomes more abstract
by no longer differentiating between these element types. But I
think the next bullet is the
more important one to discuss, so let's leave this more
"philosophical" aspect until later.
</>
- I think we also need to consider having more
explanation in the old C.1 section about
the "extended Inds", which describes the underlying
cause: which imo is that when
you have an "-overrides" type comb-alg, the relative
"weight" of the return values
suddenly has a "precedence" that would otherwise not be
there, namely:
- for example, for a "deny-overrides" Policy:
Ind{P} < P < Ind{D} < D
which means if a Rule that evaluates to D is
encountered, the processing for
the policy can end, since that is the final answer, no
matter what follows.
However, until a D is encountered processing must
continue.
When all Rules are processed, the "answer" is the
greatest value in the
precedence chain above.
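The precedence-chain view can be sketched directly (a hypothetical
illustration, mirroring the chain above and ignoring Ind{DP} for
simplicity): the combined result is just the greatest element of
the chain among the decisions seen.

```python
# Hypothetical sketch: deny-overrides as "take the greatest value
# in the precedence chain N/A < Ind{P} < P < Ind{D} < D".

PRECEDENCE = ["NotApplicable", "Indeterminate{P}", "Permit",
              "Indeterminate{D}", "Deny"]

def deny_overrides_precedence(decisions):
    # The combined decision is the highest-precedence value seen.
    return max(decisions, key=PRECEDENCE.index)
```

This also shows why a D ends processing early: nothing that follows
can outrank it in the chain.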
I am not sure about this. I don't want to fill up the
normative sections with long examples. We could say something
short like "The extended Indeterminate values allow combining
algorithms to treat Indeterminate smarter, so that in some
cases an Indeterminate can be ignored when the extended
Indeterminate value shows that the section of the policy which
produced the Indeterminate could not have influenced the final
decision even when there would not have been an
Indeterminate." And clearly mark this non-normative.
<Rich>
This is somewhat subjective in terms of how to best explain the
situation, and I was considering
writing a separate email totally focused on that, since with the
resolution of the Policy/Target concept of Indeterminate, I think
I can explain it all fairly concisely in a systematic way.
To add to what's above:
- For "biased" combining algorithms:
- deny-biased or deny-overrides:
- N/A < Ind{P} < P < Ind{D} < D
- permit-biased or permit-overrides:
- N/A < Ind{D} < D < Ind{P} < P
- non-biased (e.g. first-applicable):
- ( N/A = Ind{*} ) < ( D = P )
(i.e. (D = P) means whichever one comes first is good, and
both of them override either N/A or Ind{*})
While these statements might at first be confusing, I think that
ultimately they dictate
the processing algorithm and so from that perspective can
provide a sanity check on
both the p-code and the text.
But, again, this falls into the "explanatory" category; the
significant point, I think, is in the next bullet.
</>
- Also, considering the above bullet, I think the
current algorithms should be modified
back to look more like the original 2.0 algs. For
example compare the
"denyOverridesCombiningAlgorithm" of section C.2 (new) w
C.10 (legacy):
- In C.2 the parameter to the algorithm is "Decision[]
decisions", whereas in C.10 the parameter to the algorithm
is "Rule[] rules".
- Also, in section C.10, within the loop, the first
thing is:
decision = evaluate(rules[i]);
if (decision == Deny) return Deny;
- I think it is important to retain this logic so it
can be shown where the breakout
occurs, which cuts off unnecessary evaluation of
subsequent rules.
- Also, we can retain the criteria for choosing Ind{d}
vs Ind{p} where the
if (effect(rules[i]) ...
is evaluated.
Is this just to refactor the algorithms to look prettier, or
are there actual errors in them? If there are no errors, could
we just keep them as they are, so we don't risk breaking them?
This is not just to refactor the algorithms. When I looked over
the new algorithms in
detail, I noticed there was a quantitative difference between
them. i.e. comparing C.10 and C.2:
- In C.10 (legacy deny-overrides), line 6131, there is a loop
that begins:
- for ( i = 0; i < lengthOf(rules); i++ ) {
      decision = evaluate(rules[i]);
      if (decision == Deny)
          return Deny;
      ...
  }
- With the new algorithms, this logic also applies to
Policies as well as rules.
Furthermore, starting from the top of the PolicySet tree,
and working down,
this logic is recursive, and the processing should be
identical for PolicySet,
Policy, and Rule (although the "evaluate" function will be
different for each).
- A second important point here is that it is significant
that the first "if" stmt
causes a "return" as opposed to the subsequent "if" stmts
that set various
booleans, but allow the loop to continue processing.
The importance is that in a biased algorithm, if the decision to
which the algorithm is biased is found, then processing of the
loop can end right there, and there is no need to evaluate the
rest of the rules, policies, or policysets within the current
node. i.e. if I am in a deny-overrides node and the next node I
evaluate returns a Deny, I am done.
Clearly, in the case of Policy and PolicySet, there is no
ability to get either a Permit or Deny until the leaf Rules are
processed. However, if I am in a deny-overrides PolicySet with,
for example, 10 child Policy elements, and I "evaluate" the
first Policy and it in turn evaluates its child Rules, then if
its first, or any other, rule returns a Deny, the first Policy
will return a Deny, and there is no need to "evaluate" the
other 9 Policy elements, since the decision will be a Deny
regardless of what they return.
- Therefore, I think a better algorithm would look like this for
C.2 (note the "evaluate" operation in the first line of the for
loop, which is not currently in C.2):
- Decision denyOverridesCombiningAlgorithm(Node[] nodes) {
      ...
      for ( i = 0; i < lengthOf(nodes); i++ ) {
          Decision decision = evaluate(nodes[i]);
          if (decision == Deny) {
              return Deny;
          }
          // I believe the rest of the logic remains the same
          // as currently in C.2:
          ...
      } // end for loop
      // logic after the loop should also be the same as C.2
  } // end algorithm
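The break-out behavior being argued for can be demonstrated with a
runnable sketch (my own illustration, hypothetical names): the
combiner receives unevaluated nodes plus an evaluate function, and
we count how many nodes are actually evaluated.

```python
# Hypothetical sketch: lazy deny-overrides over unevaluated nodes.
# Returns the decision plus the number of nodes actually evaluated,
# to show that evaluation stops at the first Deny.

def deny_overrides_nodes(nodes, evaluate):
    count = 0
    saw_permit = False
    for node in nodes:
        count += 1
        decision = evaluate(node)
        if decision == "Deny":
            return "Deny", count   # break-out: later nodes untouched
        if decision == "Permit":
            saw_permit = True
        # (Indeterminate bookkeeping omitted for brevity)
    return ("Permit" if saw_permit else "NotApplicable"), count
```

For example, with a Deny first in the list, only one node is ever
evaluated, no matter how many follow.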
- Now, if we look again at C.10 and try to see how it relates
to the new algorithms, I think it would go something like
this, where Rule is a subclass of Node:
- Decision denyOverridesRuleCombiningAlgorithm(Node[] nodes) {
      Boolean atLeastOneErrorD = false;
      Boolean atLeastOneErrorP = false;
      Boolean atLeastOneErrorDP = false;
      Boolean atLeastOnePermit = false;
      for ( i = 0; i < lengthOf(nodes); i++ ) {
          Decision decision = evaluate(nodes[i]);
          if (decision == Deny) {
              return Deny;
          }
          // the next 2 "if"s are the same as C.10:
          if (decision == Permit) {
              atLeastOnePermit = true;
              continue; // i.e. skip the rest of the current
                        // iteration and start the next one
          }
          if (decision == NotApplicable) {
              continue;
          }
          if (decision == Indeterminate) { // this can only be
                                           // returned for rules
              if ( effect((Rule)nodes[i]) == Deny ) { // cast to
                                               // Rule to get effect
                  atLeastOneErrorD = true;
              }
              else {
                  atLeastOneErrorP = true;
              }
              continue;
          }
          // the following is the same as C.2 and will evaluate
          // the 3 types of Indeterminate, which can only be
          // returned for Policy and PolicySet
          ... same as lines 5762->5776 (not repeated here)
      } // end for loop
      if (atLeastOneErrorD == true &&
              (atLeastOneErrorP == true || atLeastOnePermit == true)) {
          atLeastOneErrorDP = true;
      }
      if (atLeastOneErrorDP == true) {
          return Indeterminate(DP);
      }
      if (atLeastOneErrorD == true) {
          return Indeterminate(D);
      }
      if (atLeastOnePermit == true) {
          return Permit;
      }
      if (atLeastOneErrorP == true) {
          return Indeterminate(P);
      }
      return NotApplicable;
  } // end algorithm
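The pseudocode above can be transcribed into a runnable sketch (my
own transcription with hypothetical names, not normative text),
covering the rule-only case where an Indeterminate splits into
Ind{D} or Ind{P} according to the rule's Effect:

```python
# Hypothetical sketch of the deny-overrides rule-combining
# algorithm above. Each rule is a dict with an "effect" key;
# evaluate(rule) returns its decision.

def deny_overrides_rules(rules, evaluate):
    at_least_one_error_d = False
    at_least_one_error_p = False
    at_least_one_permit = False
    for rule in rules:
        decision = evaluate(rule)
        if decision == "Deny":
            return "Deny"              # break-out, as in C.10
        if decision == "Permit":
            at_least_one_permit = True
            continue
        if decision == "NotApplicable":
            continue
        if decision == "Indeterminate":
            # Split by the rule's Effect, as in C.10.
            if rule["effect"] == "Deny":
                at_least_one_error_d = True
            else:
                at_least_one_error_p = True
    if at_least_one_error_d and (at_least_one_error_p or at_least_one_permit):
        return "Indeterminate{DP}"
    if at_least_one_error_d:
        return "Indeterminate{D}"
    if at_least_one_permit:
        return "Permit"
    if at_least_one_error_p:
        return "Indeterminate{P}"
    return "NotApplicable"
```

For instance, one erroring Deny rule plus one Permit rule yields
Indeterminate{DP}, since either outcome could have resulted.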
- The representation above clearly shows:
N/A < Ind{P} < P < Ind{D} < D
by simply following the "return" statements up the algorithm.
- I think the above algorithm also shows the origin of D, P,
and DP coming from the effect of the Rules, and then being
percolated through Policy and PolicySet.
So, basically, although the above algorithm may look a little
more complicated than the current C.2, I think it retains 2
things from C.10 that the current C.2 drops:
- It retains the early break out of the loop when the biased
Decision is returned
- It retains the logic that creates the breakout of
Indeterminate into Ind{D}, Ind{P}, and Ind{DP}
Comments?
Thanks,
Rich
</>
- Finally, rather than passing in (Rule[] rules), or
(Policy[] policies), we might want to
consider using a neutral term, such as (Node[] nodes) or
(Child[] children) where
Node or Child could refer to either a Rule or Policy.
- And, finally, the typo in section A.3.14, under
rfc822Name-match:
In cs-01 line 4992 (next to last para), the phrase:
"matches a value in the first argument"
should say "matches a value in the second argument".
I think this is just a typo, especially when compared with the
next paragraph.
Yes, this is a typo. I will fix it.
Thanks,
Rich
On 5/11/2011 5:37 AM, Erik Rissanen wrote:
Hi All,
Rich, when you say "leave it as it is", I assume you mean
the new working draft which evaluates the children of
policy sets.
If so, I think everybody is in agreement.
I will still post an updated draft which moves the
definitions of the text from the appendix to section 9, so
everything is in one place.
Best regards,
Erik
On 2011-05-09 06:27, rich levinson wrote:
Hi again, Paul, Erik, Hal, and TC:
I have spent some additional time looking at this
problem and I am now leaning
toward leaving the spec as is, at least as far as I have
analyzed it.
For anyone interested, my reassessment is based on the
following:
The intention has always been to maintain consistency with
XACML 2.0, while at the same time enabling the "D" and "P"
types of Indeterminates to propagate up the PolicySet hierarchy
in addition to the "DP", which was all that was propagated up
in 2.0. Even though D and P were determined and used on the
first hop up, they were unnecessarily cut off at that point and
information was lost.
It appears that I inadvertently lost sight of this big picture
when looking at the details from the top down. However, in
order to go from the top down, one has to allow the existing
algorithms on the bottom level to remain the same, and assuming
that the Rules do not need to be evaluated is a direct
contradiction of the existing XACML 2.0 algorithms, which first
evaluate the Rule and then look directly at the Effect if there
was an Indeterminate.
Bottom line: I withdraw this sidebar issue about not needing to
evaluate the Rules when the Policy or PolicySet Target produces
an Indeterminate. In 2.0 the spec was able to say that because
it did not propagate the D and P properties up; however, to do
the complete job of propagating all the D and P properties, we
do need to evaluate the Rules, and I believe the changes in the
spec to this effect are correct.
Thanks,
Rich
On 5/6/2011 11:06 PM, rich levinson wrote:
Hi Paul and TC,
I think the toothpaste is out of the tube on this one:
i.e. I think too much has been
invested in the analysis for one member to
unilaterally shut down the issue by
"withdrawal". In any event, that's my opinion, but,
regardless, based on
yesterday's mtg, I believe there is more to be said on
this issue, and hopefully
we can channel it to a clean resolution.
That being said, following is additional analysis I
have done and some conclusions that
I believe we can reach agreement on, and that I think
I can describe in terms that everyone
can follow (for "clarity" I will just add an "s" for
the plural of "Policy"). There are 2 arguments
I would like to make.
Argument 1:
- First, there are 3 "types" of Policys:
- Policys{P} where all Rules have
Effect="Permit" and therefore
these Policys can never return a "Deny".
- Policys{D} where all Rules have Effect="Deny"
and therefore
these Policys can never return a "Permit"
- Policys{DP} where there is a mix of Rules, some of which
are "Permit" and some of which are "Deny", and therefore
there is no a priori way to look at such a Policy and know
whether or not it can return either a Permit or a Deny.
- Therefore, the 3 types of Policys each have an
inherent property, which
can be determined simply by inspection of the
Policy w/o regard to
evaluation of any Attributes.
- In fact, 2 out of 3 of the types retain their "property"
regardless of evaluation of the attributes:
i.e. Policy{P} is always Policy{P}; it can never change its
property and become either Policy{D} or Policy{DP}.
The same can be said for Policy{D}.
I would therefore refer to these as "static properties".
- The third type, Policy{DP}, has a run-time characteristic:
if the current values of the Attributes happen to exclude
all the Rules of either D or P, then the current run-time
"property" of the Policy{DP} for a single evaluation can
effectively become either Policy{P} or Policy{D}.
On subsequent evaluations the Policy{DP} can again, by
happenstance, become any one of the 3 types.
I would therefore consider this a "runtime property" if we
allow its definition to be subject to Attribute evaluation.
Therefore, I think we can say that the problem we are
discussing reduces to only the
evaluation of Policy{DP} elements.
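The static classification described in Argument 1 can be sketched
as follows (a hypothetical illustration: a Policy is represented
simply by the list of its Rules' Effects):

```python
# Hypothetical sketch of the "static property": classify a Policy
# as {P}, {D}, or {DP} by inspecting Rule Effects only, with no
# Attribute evaluation.

def static_property(rule_effects):
    effects = set(rule_effects)
    if effects == {"Permit"}:
        return "P"    # can never return Deny
    if effects == {"Deny"}:
        return "D"    # can never return Permit
    return "DP"       # mixed: outcome depends on runtime attributes
```

Only the "DP" case is sensitive to runtime Attribute values; the
other two classifications hold for every evaluation.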
We can then ask whether we want our combining
algorithms to be subject to runtime
values of Attributes that on any given evaluation can
cause a Policy{DP} to become a
Policy{D} or a Policy{P}, thus rendering the property
of the Policy indeterminate
until runtime values are plugged in.
I would also suggest that it is this indeterminacy,
which would cause Policys not to
be comparable for "equivalence", because the Policys
themselves have a built-in
uncertainty depending on how one regards this
property.
I would also suggest that for the purpose of
"equivalence" this runtime characteristic could
be considered a "performance optimization", which
could be a property of the Policy Engine,
whereas the inherent D and P properties can be
considered a Policy language characteristic
independent of runtime, which could be included in an
equivalence algorithm.
Argument 2:
There is one additional argument I would like to add
for consideration. In XACML 2.0,
there is a statement in section 7.10 for Policy
Evaluation, which says:
'If the target value is "No-match" or "Indeterminate" then the
policy value SHALL be "NotApplicable" or "Indeterminate",
respectively, regardless of the value of the rules. For these
cases, therefore, the rules need not be evaluated.'
By comparison, in XACML 3.0, WD 19, the corresponding
statement in section 7.11 has
been modified to say:
'If the target value is "No-match" then
the policy value SHALL be
"NotApplicable", regardless of the value of the
rules.
For this case, therefore, the rules need not be
evaluated.'
The "Indeterminate" part of this statement has been
modified to say:
'If the target value is "Indeterminate",
then the policy value SHALL be
determined as specified in Table 7, in section
7.13.'
Therefore, the "meaning" of the spec has been changed, because
in order to select an entry in Table 7, the rules now do have
to be evaluated, which is not obvious unless one does a very
careful and complete reading of the proposed changes.
Additional Consideration:
One other side effect that I think is of concern is that if we
allow the Policy property (P, D, or DP) to be subject to
runtime determination, then when an Indeterminate is obtained
at the top of the tree, it would be necessary to evaluate the
complete subtree in order to determine what this property is.
By comparison, the static property can be determined at any
time by processing the tree once and recording the property for
all subsequent evaluations.
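The one-pass precomputation can be sketched like this (my own
hypothetical illustration: Rules are dicts with an "effect" key,
and Policys/PolicySets are dicts with a "children" list):

```python
# Hypothetical sketch: compute the static D/P/DP property in a
# single pass over the policy tree and cache it on each node for
# all subsequent evaluations.

def annotate(node):
    # Leaf: a Rule's property is simply its Effect ("P" or "D").
    if "effect" in node:
        node["prop"] = "P" if node["effect"] == "Permit" else "D"
    else:
        # Interior node: combine the children's properties; a mix
        # of "P" and "D" anywhere below makes this node "DP".
        props = {annotate(child) for child in node["children"]}
        node["prop"] = props.pop() if len(props) == 1 else "DP"
    return node["prop"]
```

After one traversal, every node carries its static property, so no
subtree ever needs to be re-inspected at decision time.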
My Conclusions:
Bottom line: my recommendation is that we define the D, P, DP
property in such a way that it is a static characteristic of
the Policy definition, which would presumably allow it to be
used in "equivalence" determinations. I would also recommend
that runtime optimization be a configurable option, and make it
clear that if this option is activated, any presumption of
equivalence should be disregarded as far as runtime behavior is
concerned.
Comments, suggestions welcome.
Thanks,
Rich
On 5/6/2011 12:51 PM, Tyson, Paul H wrote:
I withdraw my objection to the Section 7 changes made by Erik in the 3.0
core spec wd-19.
I'm still concerned that the policy evaluation specification (in section
7) may cause unexpected variations in the results from two seemingly
"equivalent" policies, but I need to produce some theoretical or
empirical evidence to demonstrate this (or to relieve my concern). In
any case, the wd-19 changes probably do not make this any better or
worse.
Regards,
--Paul
---------------------------------------------------------------------
To unsubscribe from this mail list, you must leave the OASIS TC that
generates this mail. Follow this link to all your TCs in OASIS at:
https://www.oasis-open.org/apps/org/workgroup/portal/my_workgroups.php