Subject: Fwd: XLIFF TC Meeting Notes - 05/11/13


Sorry for the delay in posting these minutes. Thanks to Asanka for taking them.


dF: Quorum reached, 12 out of 14 voters in attendance
Victor (returning from LOA), Helena, Shirley, Tom, Fredrik, DavidF, Yves, Bryan, Joachim, Uwe, DavidW, Asanka


B:
Approve previous meeting minutes, 15 October 2013
https://lists.oasis-open.org/archives/xliff/201310/msg00042.html
Bryan moves, Yves seconds, no objections.
We assigned a few of the items to Fredrik - do you have any
objections? Do you have any bandwidth constraints?
F:
Now I will have some bandwidth and I will go for it and try to send
out proposals or changes during this week.
dF:
Important to state in your timeline, whatever the scenario we go for:
we need to implement items in the specification by next week;
F:
I can't promise I will have the bandwidth to resolve everything by the
next meeting. I will probably make proposals on how to resolve them;
B:
That's good. The owners of the items at least need to have proposals
no later than the start of the next meeting; in cases where we have a
non-controversial issue, a simple call for dissent is fine with a
reasonable time-frame; we don't have Ryan on the call, so I can't
confirm with him; but I will reach out to Uwe and Ryan to make sure the
issues that are assigned to him can be dealt with in a timely manner;
I invite everybody to take a look at the comments tracker and add any
useful feedback;
Next agenda item 2.2: i.e. the timeline; I have URLs for both our
initial timeline and the timeline if we determine that a 3rd public
review is required; when I sent out the note I had lost track of which
of the conversations I numbered are public and which are not publicly
available on the list; as a result of that ...
<noise/feedback/echo>
We had a reply from Yves, stating that at least some of the issues
that he had pointed out would probably result in substantive changes;
David also had concerns about some of the issues; I guess the
definition of a change that would require a further public review
would be a bug in the specification, or a feature that simply will not
work as specified, requiring a non-trivial change to the
specification; let's try to determine whether we think the changes
would merit a third public review;
Yves:
I've sent an email with the IDs of the issues which I thought were significant;
dF: I have it and I was looking at it. Links to those emails:
https://lists.oasis-open.org/archives/xliff/201310/msg00079.html
https://lists.oasis-open.org/archives/xliff/201310/msg00071.html
List of issues: 103, 109, 111, 132 and 140
Yves:
Bryan, maybe as we move forward and try to resolve the issues, the
easiest way to know whether an issue is a significant change or not
is, each time we fix an issue, to decide whether it is editorial or
not and mark it so;
dF:
I agree, sort of ... so far everything that was implemented in the
specification was editorial; we should record whether each change was
substantive or editorial;
B:
I agree with that approach; there is a column in our tracker called
'type of change', and that might be a place where we can specify
this;
dF:
We did that during the first review, and it was not so important
then;
B:
Are we saying that the criteria for any of us implementing one of
these issues is ... we cannot implement unless we have a change to the
schema or something beyond editorial; it would be useful for us to
have a definition, in order to include a check mark to indicate a
substantive change;
dF:
It is not that difficult; if it is a schema change then it is
automatically substantive; if it is anything touching a 'must'
statement or 'must not' statement, that's basically normative, so it
is substantive.
F:
I think we have one more clause; changes to defined values or allowed values;
dF:
Yes, those are schema impacts.
F:
They might not be schema impacts; if you look at item 141, although
it looks like an editorial change, it is a functional change.
B:
141: xliff: prefix in size restriction
F:
If the name of the standard-defined profile changes ... it is
definitely quite editorial, but it impacts implementations;
B:
I have already run into trouble understanding how to implement 139;
that is the change track one; in the specification it is not
explicitly allowed anywhere.
dF:
Most of the issues that have the potential to be substantive haven't
been resolved; well, the change track issue can be resolved in an
editorial manner ...
B:
If Ryan were to take a look at this issue and he thinks, for example,
that change track should be allowed everywhere validation is allowed,
is that an editorial change or a substantive change?
dF:
I think it is substantive, because currently change track has been
defined as a module; the places where it is allowed are only by
general extensibility ... it is a substantive feature change if you
explicitly list the places where it is allowed
B:
That helps me to understand; if the discussion we are having now is
whether we anticipate a third public review or not, I think that issue
alone, change track not having any places where it is allowed ...;
I think the only way a correction will not be substantive is if we
drop the change track module;
dF:
No. We can say that it is by design that change track is only
allowed by means of general extensibility.
B:
I think it is a very valuable module that I want to keep; we have
other issues, for example there is an issue raised that implementers
cannot truly implement core based on the way the specification is
written now. I think that might also lead to a substantive change.
dF:
We should not limit ourselves to making editorial changes to solve
issues. While some issues could be resolved in an editorial manner,
that might not be the best solution, e.g. the group of referencability
issues that I own; I fully expect these will have schema impact
regarding IDs being required and so on;
B:
I agree; Yves' proposal that you seconded was a good one; as we try
to solve each of these issues we should track whether the proposed or
accepted solution is editorial or substantive; if we get any
substantive checks, that means we will have a 3rd public review; if we
do not, then we will be back on our original timeline;
agenda item 2.a.2 remains unresolved; we have a good approach;
moving to agenda item 2.b;
dF:
I think that the statement of use is very important; we cannot really
avoid the 3rd public review; it is better to focus on some of the
issues discussed in the mailing list; <requesting feedback from the
other TC members regarding the mailing list discussions>;
B:
Moving on to agenda item 2.c; Would you like to start with 1.c.1?
dF:
Maybe Yves could give us a brief summary about the translatability of
content, about the algorithm, etc.
Yves:
When you are trying to implement anything that has to do with
translating the content, you have to look at the translate attribute
at some point; in some cases, if any of the translate annotations that
allow you to override the information that you have at the unit level
let you override something, that may be a problem if those annotation
markers are ... outside of the segment itself; so you cannot rely just
on looking at the value of the translate attribute of the current
element/content, but you have to also look at the sibling elements of
the segment itself; that's what I've noticed; then we had some
discussions with some examples ... we came to an agreement between
David and me; we came up with a few ideas that David summarised in the
last email; I think nothing needs to be changed on the default value
definition, but we are probably missing some PRs/constraints saying
something like e.g. the end marker of an annotation should always be
after the start marker; otherwise you will run into problems; the
other one is that we should probably have explicit text somewhere
saying: when you are processing the content, you need to look at the
translate state not just by looking at the attribute of that element,
but by looking also at the whole content in the unit; providing an
example..
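
A minimal sketch of the scan Yves describes (an illustration only, not
the TC's agreed text; it assumes the XLIFF 2.0 draft names unit,
segment, source and mrk, plus sm/em with a startRef attribute for
annotations whose markers sit in a different segment) would walk the
whole unit in document order rather than a single segment:

import xml.etree.ElementTree as ET

# Element and attribute names below follow the XLIFF 2.0 draft inline
# markup (mrk, sm, em, translate) as an assumption for this sketch.
NS = "{urn:oasis:names:tc:xliff:document:2.0}"

def translate_states(unit, side="source"):
    """Yield (text, translatable) per text run, scanning the whole unit."""
    stack = [unit.get("translate", "yes") == "yes"]   # unit-level default

    def walk(elem):
        for child in elem:
            tag = child.tag.replace(NS, "")
            if tag == "mrk" and child.get("translate") is not None:
                # well-formed translate annotation: applies to its content
                stack.append(child.get("translate") == "yes")
                if child.text:
                    yield child.text, stack[-1]
                yield from walk(child)
                stack.pop()
            elif tag == "sm" and child.get("translate") is not None:
                # start marker: the override stays active past segment ends
                stack.append(child.get("translate") == "yes")
            elif tag == "em":
                # end marker closes the latest override (the minutes suggest
                # a constraint that end markers follow their start markers)
                if len(stack) > 1:
                    stack.pop()
            else:
                if child.text:
                    yield child.text, stack[-1]
                yield from walk(child)
            if child.tail:
                yield child.tail, stack[-1]

    for part in unit:                 # <segment> and <ignorable> in order
        content = part.find(NS + side)
        if content is not None:
            if content.text:
                yield content.text, stack[-1]
            yield from walk(content)

unit = ET.fromstring(
    '<unit xmlns="urn:oasis:names:tc:xliff:document:2.0" id="u1">'
    '<segment><source>Keep <sm id="m1" translate="no"/>this code'
    '</source></segment>'
    '<segment><source>and this<em startRef="m1"/> but translate me.'
    '</source></segment></unit>')
for text, ok in translate_states(unit):
    print(repr(text), "translate" if ok else "do not translate")

Here "and this" comes out as not translatable even though its own
segment carries no translate information, which is exactly the case
Yves raises.
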
dF:
This is a good summary; it would be good to have at least two
examples of the algorithm: how to determine the translatable state of
content anywhere; it was not trivial; override mechanisms are more
powerful; Yves described a possible algorithm; I described another
high-level algorithm; it is probably better to have another algorithm
based on XSLT or so.
B:
Anybody else have an opinion on this? this is 147 in the tracker - no
owner has been assigned;
dF:
I made myself an owner;
B:
Thank you.
dF:
I can own it and implement what we agreed with Yves; it is pretty
heavy; it should be reviewed by someone else; preferably we should
have a co-owner who should provide an alternative algorithm; a
standard cannot prescribe an algorithm; it is quite important to be
specific and unambiguous; it is not trivial;
B:
In the actions-to-be-taken column, if we can document the two
algorithms proposed, I am willing to take a look at an XSLT-based
algorithm; so the goal then is not to specify what the algorithm is
but to have a couple of examples in the specification?
dF:
Yves put forward about five use-cases; we have a common understanding
on how overrides should work with the defaults or not; we agreed how
it should work; we should give hints to the implementer on how to
implement it; if Yves implements this in Okapi or the XLIFF Toolkit
that's grand, but we need another low-level implementation to identify
other issues;
F:
I think the most important thing for the spec is that it's perfectly
clear what the .. results of an implementation would be; just having
an example is not strong enough, it is a relatively complicated
topic.. if you can't unambiguously look at the words of the standard
to see if an implementation meets the standard or not, then there is
an issue;
B:
Are you calling for more specific processing requirements?
F:
Processing requirements are more ... an explanation of how things
work; if it needs to be processing expectations, just a paragraph
describing it ...
dF:
Basically the defaults are well defined; the complexity comes with
the behaviour of the overrides through the local .. mark-up
F:
For example, the uniqueness of unit IDs in XLIFF 1.x, w.r.t. multiple
file elements, where there seems to be a 50-50 divide in
implementations as to whether they enforce uniqueness within the file
elements or the whole XLIFF file; because of the wording ... you need
to read it carefully;
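
A small sketch of the 1.x ambiguity Fredrik mentions (illustrative
only, using the XLIFF 1.2 names file and trans-unit): the same
document passes a per-file reading of uniqueness and fails a
whole-document reading.

import xml.etree.ElementTree as ET
from collections import Counter

NS = "{urn:oasis:names:tc:xliff:document:1.2}"

def duplicate_ids(xliff_root, scope="file"):
    """Return trans-unit ids that violate uniqueness under the given scope."""
    if scope == "file":
        dupes = set()
        for f in xliff_root.iter(NS + "file"):
            counts = Counter(tu.get("id") for tu in f.iter(NS + "trans-unit"))
            dupes.update(i for i, n in counts.items() if n > 1)
        return dupes
    # scope == "document": uniqueness across all <file> elements
    counts = Counter(tu.get("id") for tu in xliff_root.iter(NS + "trans-unit"))
    return {i for i, n in counts.items() if n > 1}

doc = ET.fromstring(
    '<xliff xmlns="urn:oasis:names:tc:xliff:document:1.2" version="1.2">'
    '<file original="a" source-language="en" datatype="plaintext"><body>'
    '<trans-unit id="1"><source>Hello</source></trans-unit></body></file>'
    '<file original="b" source-language="en" datatype="plaintext"><body>'
    '<trans-unit id="1"><source>World</source></trans-unit></body></file>'
    '</xliff>')
print(duplicate_ids(doc, "file"))      # set(): fine under the per-file reading
print(duplicate_ids(doc, "document"))  # {'1'}: a clash under the whole-file reading
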
dF:
I don't think we can add anything normative ... the result is encoded
in the specification; you need to ensure everyone reads it in the same
way; you need to provide pseudo-code..
F:
I think just adding ... the information provided by the <start and
end annotation markers?> applies to the whole text ...
dF:
I basically suggested this as a warning in the translate;
F:
It could go there, but also generally under the explanation of
annotations so it is easy to understand; I think it should be in the
text somehow;
Y:
I agree; we need something that is normative; if we look at the
example, nowhere do we say which one takes precedence; whether it is
actually the override that's coming from the previous segment or the
translate value of the segment itself; that's not clear at all; you
could read the spec in both ways; we don't forbid either, and it
definitely needs to be clarified;

B:
Shall we try to clarify now?
Y:
It is clear to me; it is just not in the spec.
dF:
What is not in the spec?
Yves:
In the spec, we talk about the translate attribute; we talk about
default values, etc.; when you implement things, when you reach the
point of finding which default value you need to apply to the content,
then you'll have a problem; suddenly you realise that the value can
come from two different sources .. one of those has to take precedence
and that is not defined anywhere;
dF:
If we need to add anything normative then we should include in the
normative defaults definition ... information about the overrides;
Y:
I think the default value is fine and it does not change.. it's how
you apply it..
dF:
Value will state that the .. normal form markers
Yves:
It is not there currently..
dF:
I thought it would be a normative warning;
Y:
Fredrik is pointing to the right place; it is when we talk about the
translate annotations;
dF:
In the annotations section, there should be a PR saying that the ..
markers take precedence over the ...
Y:
The inline marker should take precedence over the translate attribute
value on the segment, and that applies to the whole content; basically
what Fredrik stated a few minutes ago;
dF:
Defaults are not defined only for segments, they are also defined for mrk;
Yves:
Defaults have no incidence on that; the default just provides the
value; the problem is what to do with the value afterwards;
F:
.. the parent value takes precedence; if you read the standard
interpreting it word by word, thinking in XML terms, the parent of any
segment is definitely not the ... non-wellformed annotations sitting
in sibling elements, it is the unit
Y:
Yes, that's why it is dangerous...
dF:
We need to say that that is not the case; obviously the default is
given in the .... the override is clearly there; we are using the unit
as the logical unit, so that's the direct consequence; normal form
markers will have ...
B:
You don't need a lot of time for the sub-committee meeting debrief? I
think we can continue this discussion;
dF:
I think we should agree that someone proposes, on the mailing list,
the normative text to be added to the translation annotation;
F:
I have a simple PR proposal; the translate state of each piece of
text in the unit should be the same regardless of whether it is
segmented or all segments are merged into one
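
A toy check of the invariant Fredrik proposes (a sketch on a
simplified token model, not real XLIFF markup): because the walk that
assigns translate states ignores segment boundaries, the segmented
unit and the same content merged into one segment yield identical
states.

def translate_states(segments, unit_default=True):
    """segments: list of token lists; 'sm:no'/'sm:yes' open a translate
    override, 'em' closes it, anything else is a text run."""
    stack, out = [unit_default], []
    for segment in segments:
        for token in segment:
            if token in ("sm:no", "sm:yes"):
                stack.append(token == "sm:yes")
            elif token == "em":
                stack.pop()
            else:
                out.append((token, stack[-1]))
    return out

segmented = [["Hello ", "sm:no", "CODE"], ["MORE CODE", "em", " world."]]
merged = [[t for segment in segmented for t in segment]]  # one big segment
assert translate_states(segmented) == translate_states(merged)
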
dF:
that's the idea: the logical unit of text is the unit;
F:
Yes. But if you state that explicitly then there is no room for
interpretation or misinterpretation;
dF:
Yes, then you can add the algorithm examples to this;
Yves:
I agree with the statement, but that changes how the algorithm works;
B:
is the next good step to agree on the statement?
dF:
We cannot agree if it changes the behaviour of the algorithm. We need
to follow up on the mailing list;
Y:
Maybe Fredrik can email the actual statement, so we can have a look
at it and the algorithms and follow up;
F:
I will send it.
B:
Any more discussion on item #147?
dF:
This is good for this item. For the group of issues regarding IDs, I
split that into unrelated issues: a, b, c.. these have been discussed
between Yves and myself; it would be very good if others could follow
up; Tom did actually follow up.. this ended up with the confirmation
of the idea that we cannot use XML IDs; we should stay with NMTOKENS.
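
For context on the xml:id point, a sketch (not from the minutes) under
the assumption, raised just below, that matching inline codes reuse
the same id in the source and target of a unit: xs:ID/xml:id demands
document-wide uniqueness, so an implementation would instead check
plain NMTOKEN-style ids itself, scoped per unit and per side, roughly
like this:

import xml.etree.ElementTree as ET

NS = "{urn:oasis:names:tc:xliff:document:2.0}"

def duplicate_ids_per_side(unit):
    """Return (side, id) pairs where an id is reused within one side."""
    duplicates = []
    for side in ("source", "target"):
        seen = set()                 # reset per side: source and target
        for content in unit.iter(NS + side):   # may legitimately share ids
            for elem in content.iter():
                i = elem.get("id")
                if i is None:
                    continue
                if i in seen:
                    duplicates.append((side, i))
                seen.add(i)
    return duplicates

unit = ET.fromstring(
    '<unit xmlns="urn:oasis:names:tc:xliff:document:2.0" id="u1"><segment>'
    '<source>Press <pc id="1">OK</pc></source>'
    '<target>Appuyez sur <pc id="1">OK</pc></target>'
    '</segment></unit>')
print(duplicate_ids_per_side(unit))  # []: the repeated id "1" is fine here,
                                     # but xs:ID validation would reject it
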
B:
Did Tom's comment also rule out the idea of using ID/IDREF between
target and source, where we can .. have duplicated IDs in target and
source, or non-unique IDs?
Yves:
Not sure if I understand the question;
B:
One of the possible solutions was to overcome the constraint that you
would need to have unique IDs; of course you would want to have the
same IDs, matching IDs, in source and target, so you cannot enforce
uniqueness; I thought somebody proposed that it could be ID/IDREF,
where the uniqueness is enforced on the source element and the target
would not carry the ID, but would rather carry the IDREF..
Y:
You are correct, that's the conclusion we had.
F:
That would also mean ... only exists in target and has no tag in
source; it would look different than a target tag that has ... in
source;
Y:
that's one of the reasons that it is not practical;
B:
e.g. if I wanted to split a segment of bold text into two non-joining
segments, then that would require me to add a new set of markers in
the inline codes that would not match; i.e. I would have an IDREF with
no ID
F:
It is not a problem. You still enforce ID uniqueness across the unit
itself, but it is a relatively major change within the inline mark-up
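
A sketch of the ID/IDREF variant being weighed here (the attribute
name "ref" is a placeholder, not from the spec): ids would live only
in the source, target codes would point back via a reference, and
Bryan's split-segment case shows up as a reference, or a target-only
code, with nothing in the source to point at.

import xml.etree.ElementTree as ET

NS = "{urn:oasis:names:tc:xliff:document:2.0}"

def dangling_target_refs(unit):
    """Return ref values used in <target> that match no id in <source>."""
    # "ref" is an illustrative attribute name, not defined by the spec.
    source_ids = {e.get("id")
                  for s in unit.iter(NS + "source")
                  for e in s.iter()
                  if e.get("id") is not None}
    return [e.get("ref")
            for t in unit.iter(NS + "target")
            for e in t.iter()
            if e.get("ref") is not None and e.get("ref") not in source_ids]

unit = ET.fromstring(
    '<unit xmlns="urn:oasis:names:tc:xliff:document:2.0" id="u1"><segment>'
    '<source><pc id="1">bold text</pc></source>'
    '<target><pc ref="1">bold</pc> <pc ref="2">text</pc></target>'
    '</segment></unit>')
print(dangling_target_refs(unit))  # ['2']: the marker added when splitting
                                   # has no source id to point at
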
dF:
I think that there are three outcomes of the ID discussions so far:
1) stay with NMTOKENS, and that way we manage the uniqueness
constraints by constraints .. PRs; 2) we would like to have segment
IDs required; one more thing that was agreed between Yves and myself
was that different elements cannot compete for the same IDs, to make
referencability simple;
F:
That would only work as long as you only have these unique IDs on
core elements within the unit; an extension would have its own IDs..
dF:
if you are adding modules to the core, I think you need to conform
them to the uniqueness requirements of the unit;
F:
<gives an example scenario taking matches module>
dF:
I think that this is a separate issue with the matches module or the
translation candidates module;
F:
Both that; if you want to have IDs be unique on all elements within a
unit <unique content?>, then you have a problem with allowing any kind
of extensions and modules within.. all you have to say is that the
uniqueness constraint is... for core elements, which kind of limits
the usefulness of the unique IDs
dF:
we don't have  any extended elements under units, we only have module elements;
F:
I think we are ... extension in unit;
B:
Invites David to report on sub-committee;
dF:
Lucia continues to work on the XLIFF 2.0 questionnaire; we'd keep one
questionnaire, shrink the XLIFF 1.2 part to basics, and concentrate on
the 2.0 version; I agreed with the LocWorld organisers that they would
be happy to host FEISGILT in Dublin next year; we will make sure the
registration cost is lower than in London. No SC meeting today.

B:
No new business. Continuing the previous discussion.
dF:
Fredrik said that he thinks it's impractical to require IDs to be
unique disregarding enclosing elements, because of modules and
extensions;
F:
yes, if you limit it to ... that core elements need to have unique
IDs, then it is not a problem, but I am not sure you get a big value
in doing that, since you have other things which ... conflicting IDs
dF: .. tools that don't understand the extended or module-based ...
are not able to control the uniqueness in the whole unit?
F:
In order to control the uniqueness, you need to parse all content
including the content within modules, which is not something that you
should necessarily have to do in order to process the file;
B:
As the schema exists now, we do not have a way to even ... a way to
support just core? This is a separate issue, but Fredrik's
observation is true; ... there is no way to support just core;
dF:
Why do you think so? I know that I own one of the related issues; I
don't think that the candidate annotations ... maybe splitting up the
structure?
F:
It is a conceptual problem; in order to enforce unique IDs, you need
to consider all the content, which means you have to consider content
that is not part of the core.
dF:
all content is just ... isn't core; because you are translating the
content; content is the unit;
F:
Plenty of extensions and modules sitting on units;
dF:
Content is only source and target in segments;
F:
You are talking about having all elements contained in a unit have
unique IDs. If you are talking about all the inline markup contained
within source and target being unique, then it would be different;
dF:
I can see there is an issue; the reason why we wanted it to be unique
was to be able to reference simply by ...
F:
It is a question about what the scope is; if the scope is only within
the translatable content, so the text and the inline markup, then
obviously it is not a problem; you would need to change how the inline
annotations work, and it should technically work;
dF:
why do you need to change it?
F:
we use the same ID in the source and target to link the tags together;
dF:
that's a misunderstanding;
B:
<presents an example scenario>
Y:
I think you are talking about different things: like Fredrik said,
the scope is what you need to define first; once you establish that,
then you can discuss the same ...


Dr. David Filip
=======================
LRC | CNGL | LT-Web | CSIS
University of Limerick, Ireland
telephone: +353-6120-2781
cellphone: +353-86-0222-158
facsimile: +353-6120-2734
http://www.cngl.ie/profile/?i=452
mailto: david.filip@ul.ie


---------- Forwarded message ----------
From: Asanka Wasala <Asanka.Wasala@ul.ie>
Date: Thu, Nov 7, 2013 at 3:09 PM
Subject: XLIFF TC Meeting Notes - 05/11/13
To: "Dr. David Filip" <David.Filip@ul.ie>
Cc: "Schnabel, Bryan S" <bryan.s.schnabel@tektronix.com>


Dear Dr. David,

I have pasted the formatted notes below. I was having problems
grasping Fredrik's and your comments, so bits and pieces are missing
here and there. It would be great if you could verify the notes before
publishing. You might also want to add notes on any discussions that
happened after I left the meeting.

Thank you,
Kind Regards
Asanka