I cannot speak to the details of IPAWS specifically, because not all aspects have been built or even designed, but I can give some general direction. IPAWS-OPEN (yep, 3.0 will be called IPAWS-OPEN) is planned to have a single CAP input interface that takes any valid CAP message. Depending on content (does it have the appropriate CMAS structures, EAS structures, NWEM structures, etc.), user permissions, and user direction, it will be made available to multiple other networks, gateways, interoperating systems, push capabilities, etc., as applicable. Data driven, as simple as possible, as complex as necessary. That is the goal.
On Mar 9, 2010, at 3:27 PM, Paulsen,Norm [Ontario] wrote:
I both agree and disagree with your following line..."The TC’s official answer to documentation issues and referenced schemas shouldn’t be to tell developers to go off and make their own profiles…I think we are just shooting ourselves in the foot."
I agree that developers should not go off and make their own profiles. Developers of equipment and software should build to the standard. The TC should not be promoting profiles.
However, when a system is designed to solve a business need, there will be business and system issues to deal with. Some of these issues can be profiled, as long as the resulting CAP message is still valid CAP per the CAP standard. For example, if a system makes <info> or <language> required instead of optional, the resulting message is still valid CAP and any receiving equipment built to CAP will still work. On the other hand, if a system were to make a required element optional, then a message without that element would not be valid CAP, and hence the schema it is based on is not a true CAP profile.
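To illustrate the point (a minimal sketch, not anything from the actual CAP-CP or IPAWS tooling): a profile that makes the CAP-optional <language> element mandatory simply adds a presence check on top of CAP validity. A message that passes the check is still valid CAP; a message that fails it is still valid CAP too, just not profile-conformant. The namespace URI below is CAP 1.1's; the helper name is invented for this sketch.

```python
import xml.etree.ElementTree as ET

CAP_NS = "urn:oasis:names:tc:emergency:cap:1.1"

def profile_requires_language(cap_xml: str) -> bool:
    """Profile-level check: every <info> block must carry a <language>.
    The message is valid CAP either way; this rule only tightens it."""
    root = ET.fromstring(cap_xml)
    infos = root.findall(f"{{{CAP_NS}}}info")
    return bool(infos) and all(
        info.find(f"{{{CAP_NS}}}language") is not None for info in infos
    )

msg = f"""<alert xmlns="{CAP_NS}">
  <info>
    <language>en-CA</language>
    <event>Blizzard</event>
  </info>
</alert>"""
print(profile_requires_language(msg))  # True
```

A message omitting <language> would return False from this check while still parsing (and schema-validating) as CAP, which is exactly the distinction being drawn above.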
Receiving equipment should definitely be built to the CAP standard (the TC should encourage this strongly). Message-generation equipment, for its part, should build valid CAP messages, even when accommodating a profile in doing so; the output must still be valid CAP. The TC, while not necessarily promoting this, should not lobby against it either.
So what is a profile?....
The Canadian Profile starts right off by saying that any CAP-CP message must be valid CAP (it's our rule 1). Therefore, we expect our messages to be fully interoperable with everyone else's, technically. Our profile goes on to comment on a few other similar technical issues like <language>, but really the bulk of our profile addresses the business of alerting for Canada, such as our event code lists. We just happen to call all of it (the technical and the business) our profile. Therefore, we would expect a transform between our CAP message and event list and the IPAWS CAP message and event list. This is easy to do in a gateway function if IPAWS wants to use our CAP. This is how we make the business side of the message interoperable.
Alternatively, CAP also allows both profiles to be accommodated in a single message through multiple entries of the same element; in other words, I can place two <eventCode> elements in one message, one carrying the SAME code and the other the Canadian code. This is easy to do, especially if value-list URNs or URIs are used. Hence my generation software can accommodate both profiles in one message.
We have also separated our profile into three documents, two of which we wouldn't even dare to forward to OASIS or the TC for comment or approval, as they simply address the business side of the system. The one document that we would send up for review just covers the few technical concerns that we feel do not violate rule 1. And if they are found to do so, we will fix and change them, not rule 1.
We are of the opinion that a CAP validator should not validate a system (such as IPAWS or CAP-CP) but should just validate CAP. This is a technical validator. Additionally, the system that operates a profile should be responsible for providing a system profile validator that validates the business system. My concern is how much of a distinction FEMA/DHS has made in this regard.
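The two-layer split argued for here can be sketched as a pair of composed validators (not IPAWS or CAP-CP code; the well-formedness check stands in for full XSD validation, and the example profile rule is invented):

```python
import xml.etree.ElementTree as ET

CAP_NS = "urn:oasis:names:tc:emergency:cap:1.1"

def valid_cap(xml_text: str) -> bool:
    """Technical layer: stand-in for validating against the CAP XSD."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError:
        return False
    return root.tag == f"{{{CAP_NS}}}alert"

def valid_for_system(xml_text: str, profile_rules) -> bool:
    """Business layer: the operating system supplies its own rules;
    each rule receives the parsed alert and returns True/False."""
    root = ET.fromstring(xml_text)
    return all(rule(root) for rule in profile_rules)

def accept(xml_text: str, profile_rules) -> bool:
    # Valid CAP comes first; a profile may only narrow, never loosen.
    return valid_cap(xml_text) and valid_for_system(xml_text, profile_rules)

# Hypothetical profile rule: the alert must carry a <sender>.
requires_sender = lambda root: root.find(f"{{{CAP_NS}}}sender") is not None
msg = f'<alert xmlns="{CAP_NS}"><sender>ops@example.org</sender></alert>'
print(accept(msg, [requires_sender]))  # True
```

The point of the structure is that the technical layer is shared by everyone, while each system (IPAWS, CAP-CP, ...) plugs in its own business rules without touching the CAP validator.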
Developers who develop to CAP will be the ones far better off.
I’m sorry... Standards are to guarantee interoperability. That’s why they are called standards. HTML, HTTP, XML, TCP, UDP, IP, 802.11, XHTML, Unicode, CSS, SOAP, WSDL, XSLT, XML Schema, Ethernet, DNS, ARP, RIP, ICMP, Telnet, FTP, SMTP, to name a few. What if Cisco made their own profile for RIP? What if Sun made their own profile for TCP/IP in Unix? EDXL-HAVE and RM need to work without a developer pow-wow beforehand. It’s not CIQ’s fault; we just copy-pasted their schema. If we’re all going to go off and make our own profiles… why have the standard? I think when you consider how the standards listed above combine into what the “Internet” is today, you see why. The TC’s official answer to documentation issues and referenced schemas shouldn’t be to tell developers to go off and make their own profiles… I think we are just shooting ourselves in the foot. NIEM is not a standard… it’s a standard process model for developing data interchanges based on standard terminology, similar to what goes on in a TC, or in engineering shops across the world every day. It’s a great process and model for developing defined data interchanges based on a common dataset while allowing for cross-organization reuse.

From: David RR Webber (XML) [mailto:email@example.com]
Sent: Tuesday, March 09, 2010 1:40 PM
To: McGarry, Donald P.
Cc: firstname.lastname@example.org; Dwarkanath,Sukumar - INTL
Subject: RE: [emergency] HAVE Conformance vs. Documentation vs. Released Schemas
I hear you, but I don't believe that a standard can guarantee interoperability - and especially not through the use of XSD schema alone. Maybe if there is only one XML instance that everyone has to adhere to - but that is not what people expect. Notice that OASIS standards in general provide the schema framework for the exchange content - implementers expect to have to test conformance (see Drummond Group work on OASIS conformance testing) and declare interoperability - and someone can still send you something that passes the schema but breaks your backend application. And to Gary's point - yes, optional is not the schema default - but most standards use optional, since the context is unknown; rather than have a situation where a required element is being fudged, it's made optional. CIQ is a case in point - which part of an address is required? That is impossible to determine for all 207 postal authorities and then in-country mail handling. E.g., the USA has 5 possible address formats that the USPS will accept. Mentioning context - that is another weakness in XSD schema design - no explicit context mechanism that allows you to control when something is mandatory or optional. You will be shocked to know that OASIS CAM has explicit context mechanisms - so you can dynamically control that. Don - at this point in the process here, the schema is what it is. My suggestion is to augment that with additional profile tools that can provide the types of interoperability measures you are looking for. BTW - OASIS CIQ now has the v3 format, which is a significant improvement on matching addressing needs and removing the ugly from CIQ v2.
-------- Original Message --------
Subject: RE: [emergency] HAVE Conformance vs. Documentation vs.
From: "McGarry, Donald P." <email@example.com>
Date: Tue, March 09, 2010 12:34 pm
To: "David RR Webber (XML)" <firstname.lastname@example.org>,
"Dwarkanath,Sukumar - INTL" <Sukumar_Dwarkanath@sra.com>
Cc: "email@example.com" <firstname.lastname@example.org>
By this assessment, what distinguishes a standard from a common data dictionary? I envision a standard as defining interoperability in that two systems that have never “met” before can expect exactly what the other system generates, so that each can produce messages for data sharing and process the other system's messages. By this assessment, if I go by the schemas, I have to implement the entire xPIL standard; if I go by the HAVE document, I’m not exactly sure what out of xPIL I have to implement, which means that I could conceivably represent my hospital’s location information by its stock ticker symbol. I don’t think this is what we intended to do as a TC. If we all go off making our own tailored profiles, then when our two systems meet we will discover they can’t interoperate, because the “MITRE” profile only works with stock symbols, while the “Other” profile only works with membership information. This doesn’t seem like what a standard is supposed to do.

You are encountering the limitation of schema itself. Everything has to be defined as optional. If you are following the NIEM IEPD approach, you would publish your IEPD and subset schemas as your profile. The CAM toolkit provides full support for this. Ingest the HAVE XSD into a CAM template - tailor that as you desire - use excludeTree() rules to prune out pieces you don't need (to match EDXL conformance) - and then add other rules as desired to show dependencies on other parts that you do need, and/or your content restrictions. Then run the File / Export / Compress process to complete your template. You can then generate the subset schema, via File / Export / Template to XSD, to build either a flattened schema or a NIEM-compliant subset schema (depending on what type of application development tooling you are using). You can also build the business documentation, XML examples, cross-reference to NIEM spreadsheet, and NIEM wantlist - all as required for NIEM IEPD publishing.
This gives you a true, complete profile of your use of EDXL-HAVE, derived from the original OASIS schema. Interoperability is then dependent on conformance to that profile. There is also the CAMV engine, which you can use in lieu of schema checks for production runtime. This has the added benefit of providing graduated failure levels - error, warning, info - rather than XSD, which only has error. This allows you to tailor the runtime actions of your backend systems to respond to differences in XML instances. An upcoming developerWorks article will be covering this with an example use case.
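The graduated-severity idea is independent of any particular engine. Below is a generic sketch (this is not the CAMV API; the `Finding` type, rule shape, and the dict-based message stand-in are all invented for illustration) of collecting error/warning/info findings instead of XSD's single pass/fail outcome:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    level: str      # "error", "warning", or "info"
    message: str

def run_rules(doc: dict, rules: list) -> list:
    """Apply each rule to the parsed message; collect non-None findings."""
    findings = []
    for rule in rules:
        finding = rule(doc)
        if finding is not None:
            findings.append(finding)
    return findings

# Illustrative rules over a message represented here as a plain dict.
rules = [
    lambda d: Finding("error", "missing sender") if "sender" not in d else None,
    lambda d: Finding("warning", "no expiry set") if "expires" not in d else None,
]

findings = run_rules({"sender": "ops@example.org"}, rules)
print([f.level for f in findings])  # ['warning']
```

The backend can then branch on severity: reject on any error, but log warnings and info while still processing the message - the "tailored runtime actions" described above.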
-------- Original Message --------
Subject: RE: [emergency] HAVE Conformance vs. Documentation vs.
From: "Dwarkanath, Sukumar - INTL" <Sukumar_Dwarkanath@sra.com>
Date: Tue, March 09, 2010 10:08 am
To: "McGarry, Donald P." <email@example.com>,
The restrictions on using CIQ were considered to be business rules, and the intention was not to create a profile, as far as I remember. I am not against creating a CIQ profile, but if we go down that path, we should consider requirements across the other standards such as EDXL-RM, DE, etc. We have dealt with this particular issue quite a few times and it is a balance - offering flexibility vs. ensuring interoperability.

After spending some time doing some coding this weekend, I noticed something that we may want to address:
- HAVE uses xPIL, which in turn uses xAL and xNL
- We included the full versions of all these referenced schemas on the OASIS download page for the standard.
I think the problem here is that when I went to implement this, the documentation states that we are using a “profile” recommendation to limit the choices for xPIL to “maximize interoperability”. It then goes on to state that <have:Organization> should have the sub-elements OrganizationInformation and OrganizationGeoLocation. OrganizationInformation should have the sub-elements as defined in the CIQ standard:
It also states that we won’t use GeoRSS but will use GML in the OrganizationGeoLocation section. It also refers me to Appendix C, which suggests that I refer to the CIQ TC website, and states that for the purposes of HAVE the naming and location elements are used; the use of other elements is left to implementation choices. Conformance is defined in the document as:
- Validating to the schema
- Meets the mandatory requirements of section 3
My concern is that the referenced xPIL schemas (and in turn the xAL and xNL schemas) are the FULL SCHEMAS. There is no restriction in the HAVE schema enforcing our smaller profile of CIQ. Additionally, the reference to the GeoRSS namespace and elements was not removed. Furthermore, the document is somewhat confusing in that it states which elements to use, but then tells the developer that it’s an implementation choice whether to use the other elements or not. Right now, as it stands, I can generate an XML document that has a bunch of xPIL fields that we didn’t include in our documentation but that will validate against our schemas. With the vagueness in the document, I could argue that this was an implementation choice and my document is valid according to the conformance section, but I suspect my document may break some systems. So which is it? If I am building an XML processor to ingest HAVE documents, I need to know what to expect. If I need to be prepared to handle Accounts, Documents, Revenues, Stocks, etc., as defined in xPIL, because some system out there decided that it wanted to do so, that makes HAVE more heavyweight than I think the designers intended. If indeed we are using a CIQ “profile”, we should develop the schema for that profile, post it with the standard, and add some more info to our documentation so it isn’t as vague. I’ll upload my generated sample file as HAVE_FullToSchemaButNotDocument.xml to the TC page so you can check it out. This example validated against the schemas from our page. I added in GeoRSS as well (which will validate if you reference the GeoRSS schema)…
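Until a profile schema exists, the kind of check being asked for can be approximated with an element allow-list on the receiving side. A minimal sketch (the namespace URI, the allow-list contents, and the element names below are illustrative assumptions, not the actual HAVE profile):

```python
import xml.etree.ElementTree as ET

XPIL_NS = "urn:oasis:names:tc:ciq:xpil:3"  # assumed xPIL namespace
ALLOWED = {"OrganisationInfo", "OrganisationName"}  # assumed profile subset

def out_of_profile(org_xml: str) -> list:
    """Return local names of child elements the profile does not allow.
    Such elements may still be schema-valid against the full xPIL XSD."""
    root = ET.fromstring(org_xml)
    return [
        child.tag.split("}")[-1]   # strip the {namespace} prefix
        for child in root
        if child.tag.split("}")[-1] not in ALLOWED
    ]

doc = f"""<Organisation xmlns="{XPIL_NS}">
  <OrganisationName/>
  <Stocks/>
</Organisation>"""
print(out_of_profile(doc))  # ['Stocks']
```

This is exactly the gap described above: the <Stocks/> element passes full-xPIL schema validation, so only a profile-level check (ideally a published subset schema, rather than ad hoc code like this) can flag it.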