Subject: Re: [docbook-apps] hours to process chunks
On Thu, Sep 1, 2011 at 5:33 PM, Jirka Kosek <jirka@kosek.cz> wrote:
> On 1.9.2011 21:03, Tim Arnold wrote:
>
>> What factors make processing go so slowly? It must not be as simple as
>> file size or olinks.
>
> Hard to say without having your files available. You can try to use the
> --profile option to xsltproc to see which templates are eating the most
> time.
>
> I would also suggest switching to the Saxon XSLT processor. My experience
> is that on large files it outperforms xsltproc, as it has more advanced
> optimizations under the hood.
>
> Generating the index can take some time for larger documents, but the
> other chunks should be created relatively quickly.
>
> Jirka

Hi Jirka,

I used the --profile option on a typical chapter. Here are the top 20 results:

number  match                       name                      mode  Calls  Tot 100us     Avg
     0  d:footnote                  footnote.number                    70    8366230  119517
     1                              gentext.template                54424     434861       7
     2  d:citation                                                    130     372890    2868
     3                              process.footnotes                   1     320460  320460
     4                              l10n.language                   48140     304491       6
     5                              length-magnitude                86800     245441       2
     6                              simple.xlink                     2653     164340      61
     7                              process.image                    3967     155726      39
     8                              gentext.template.exists         46341     140527       3
     9  d:figure|d:table|d:example  label.markup                      160     130026     812
    10  *                           html.title.attribute            15447      95353       6
    11                              filename-extension              31736      65191       2
    12                              mediaobject.filename            11901      59758       5
    13  html:p|p                    unwrap.p                        13692      56589       4
    14                              lookup.key                      25841      49347       1
    15                              length-in-points                11848      45894       3
    16                              filename-basename               31736      43678       1
    17                              length-spec                     11742      39144       3
    18  *                           unwrap.p                        33042      34417       1
    19                              length-units                    11742      29896       2
    20                              select.mediaobject.index         4020      29553       7

Well, at least in my browser the columns don't line up any longer; I'll try to attach a file with the profile results. I'm not sure what to make of the information--does it mean that most of the time was taken up by footnotes?

I was also able to process the chapters by themselves and compare the conversion times to the file sizes--it looks like an exponential relationship.
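The profile's own header says the "Tot" column is in units of 100 microseconds, so the totals can be converted to wall-clock seconds to see where the time actually goes. A minimal sketch (the row values are copied from the top of the profile above; the dict is just an illustration, not xsltproc output format):

```python
# xsltproc --profile reports totals in units of 100 microseconds.
# Convert the top rows of the profile above to seconds.
rows = {
    "footnote.number":   8366230,
    "gentext.template":   434861,
    "d:citation":         372890,
    "process.footnotes":  320460,
}

seconds = {name: tot * 100e-6 for name, tot in rows.items()}
for name, s in sorted(seconds.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} {s:8.1f} s")
# footnote.number alone comes to roughly 837 seconds (~14 minutes)
# for this one chapter, dwarfing everything else in the list.
```

If that arithmetic is right, the footnote-numbering template is by far the dominant cost here, which would support the suspicion that footnotes are what make the full book take so long.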
I guess I can fall back to chapter-level processing, but I would really like to keep processing at the book level. I also have to provide specially formatted toc.xml, index.xml, and htmlhelp supplemental files; if I keep processing at the book level I get all of these, but otherwise I'll have to write my own code to produce the supplemental files. Not terrible, but I'd rather avoid it.

Thanks for any insight into why the book takes so many hours to complete.

--Tim