Subject: RE: [ebxml-msg] Pim's multihop use case


 
A third approach would be to allow the routing function to determine the MPC that a message can be pulled from, just as the URL of an HTTP POST is rewritten from hop to hop.
 
The default option would be the identity mapping:  to pull a <eb3:UserMessage mpc="abc">..</eb3:UserMessage>, the eb3:SignalMessage MUST have an <eb3:PullRequest mpc="abc">. 
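
For concreteness, a minimal sketch of the matching pair in the identity case (the eb3 prefix is assumed bound to the ebMS 3.0 Core namespace; the MessageInfo values are placeholders):

    <!-- PullRequest signal targeting MPC "abc" -->
    <eb3:SignalMessage>
        <eb3:MessageInfo>
            <eb3:Timestamp>2009-04-16T02:50:00Z</eb3:Timestamp>
            <eb3:MessageId>pull-123@sender.example</eb3:MessageId>
        </eb3:MessageInfo>
        <eb3:PullRequest mpc="abc"/>
    </eb3:SignalMessage>

    <!-- a UserMessage eligible to be returned for this pull -->
    <eb3:UserMessage mpc="abc">
        ...
    </eb3:UserMessage>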
 
But the intermediary could have a mapping "abc" --> "xyz", specifying that the message can only be pulled from the "xyz" MPC. This routing rule would affect only this intermediary, as the message itself is not changed and still carries "abc". So if the pulling client is itself an intermediary, it would apply its own routing rules to the UserMessage with the original MPC value, if any. This assumes a pulling ebMS 3.0 MSH does not reject a pulled user message whose mpc attribute differs from the value it specified in the PullRequest.
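
Such a mapping could live in the intermediary's routing configuration. A hypothetical sketch - this format is purely illustrative, nothing like it is defined by the spec:

    <!-- hypothetical routing table of one intermediary -->
    <MpcRouting>
        <!-- messages received with mpc="abc" may only be pulled from "xyz" -->
        <Map receivedMpc="abc" pullMpc="xyz"/>
        <!-- MPCs without a rule keep the identity mapping -->
    </MpcRouting>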
 
Pim
 
 


From: Jacques R. Durand [mailto:JDurand@us.fujitsu.com]
Sent: 16 April 2009 02:50
To: ebxml-msg@lists.oasis-open.org
Subject: [ebxml-msg] Pim's multihop use case

We need to decide how to handle this specific multi-hop situation (a use case originated by Pim), which could mean deciding which sub-case is most important:
 

Use Case:

A party is sending messages to a large number of recipients over the I-Cloud. These recipients are supposed to pull their messages from their edge intermediary (the forwarding pattern used by this intermediary is either "push-pull" or "pull-pull"). Each recipient must pull its own messages. However, the sending party does not want to be concerned with several MPCs, one for each recipient, and is sending all messages over the same MPC. The last intermediary alone is aware of which message should be forwarded to which recipient - i.e. it must be able to distinguish pulling recipients and return to each only the messages intended for it.

Two solutions are considered:

Solution (1): the "sub-channels" solution described in section 1.6.4 of the latest posted V30 draft (4/15).
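
As an illustration, assuming sub-channels are addressed by extending the parent MPC identifier as in the draft, a recipient's pull under Solution 1 would look something like:

    <!-- pull from a sub-channel of MPC "abc" reserved for recipient1 -->
    <eb3:PullRequest mpc="abc/recipient1"/>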

Solution (2): An Intermediary MAY be able to associate several "access points" with an MPC, each access point having its own authorization data. When multiple access points are supported for an MPC, the Intermediary MUST be able (a) to associate each user message received over this MPC with a particular access point, based on header information, and (b) when receiving a PullRequest for the MPC, to determine which access point is concerned based on authorization credentials, and to return to the requestor only the messages matching that access point.
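
For illustration, under Solution 2 the PullRequest itself is unchanged, and the access point is selected from the credentials carried in the WS-Security header. A sketch - the token values are placeholders and the wsse prefix binding is assumed:

    <!-- credentials identifying the requestor, hence the access point -->
    <wsse:Security>
        <wsse:UsernameToken>
            <wsse:Username>recipient1</wsse:Username>
            <wsse:Password>...</wsse:Password>
        </wsse:UsernameToken>
    </wsse:Security>
    <!-- the PullRequest still targets the shared MPC -->
    <eb3:PullRequest mpc="abc"/>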

Advantage of Solution 1: works well when there is no authorization data associated with the PullRequest (see sub-case B), since it operates the same whether authorization is used or not.

Advantage of Solution 2: the endpoint does not require knowledge of any "sub-channel": only authorization information is needed to pull messages, and the PullRequest still targets the same MPC. Seems better suited to sub-case (A).

Sub-case (A): This sub-case applies when the recipients are unrelated and yet associated with the same Intermediary (hub): each recipient must be prevented from pulling the messages of other recipients. Each pull signal is therefore authorized differently from one recipient to the other, and all PullRequest signals must be authorized.

Sub-case (B): This sub-case applies when all recipients belong to the same entity (say, various departments of the same company). Pull authorization is not considered worth the overhead, as it is not critical if a department accidentally pulls a message intended for another department. However, each pull must be selective for efficiency reasons - each department pulling only its own messages.

 

Jacques


