Compare commits

...

345 Commits

Author SHA1 Message Date
Martin A. Brown 4947542e01
Merge pull request #12 from martin-a-brown/mabrown/version-0.7.15
bump version to 0.7.15
2022-10-23 20:42:44 -07:00
Martin A. Brown 6a00bd0b1e bump version to 0.7.15 2022-10-23 20:41:45 -07:00
Martin A. Brown 8f34526e4f
Merge pull request #11 from martin-a-brown/master
catch up to 0.7.14 (from May 2016) in tLDP repo:  add --version flag
2022-10-23 20:39:08 -07:00
Martin A. Brown 0f70fd1aad
Merge pull request #10 from martin-a-brown/mabrown/support-python-3.10-and-ubuntu-22.04
support Python3.8+: fix import for MutableMapping and other minor fixes
2022-10-23 15:33:28 -07:00
Martin A. Brown 8be6d14517 drop Python2.x testing, support Python3.9+ 2022-10-23 12:12:22 -07:00
Martin A. Brown fafc30ac0e argparse drops private method _ensure_value
The `argparse` library was refactored to improve performance of common use
cases.  Handling a list is not a common scenario and the function
_ensure_value (which was obviously private) disappeared.

This commit simply uses the same strategy that the upstream `argparse`
maintainers used.  Since this is a subclass of a private object type
anyway, this is the expected sort of behaviour.
2022-10-23 18:46:01 +00:00
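The change amounts to inlining the few lines that argparse._ensure_value used to supply. A minimal sketch of that kind of fix (the action class and its use are illustrative, not the project's actual code)::

    import argparse
    import copy

    class AppendListAction(argparse.Action):
        # argparse._ensure_value(namespace, name, default) used to return
        # getattr(namespace, name) after setting it to `default` if it was None.
        # The helper is gone, so the same behaviour is written out inline,
        # mirroring what the upstream argparse maintainers now do themselves.
        def __call__(self, parser, namespace, values, option_string=None):
            items = getattr(namespace, self.dest, None)
            items = copy.copy(items) if items is not None else []
            items.append(values)
            setattr(namespace, self.dest, items)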
Martin A. Brown 4cb5e881d2 MutableMapping moves into collections.abc 2022-10-23 18:28:50 +00:00
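The usual shape of that fix is an import that prefers the new location and falls back for older interpreters; a small sketch::

    try:
        from collections.abc import MutableMapping   # canonical home since Python 3.3
    except ImportError:
        from collections import MutableMapping        # alias removed in Python 3.10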
Martin A. Brown aa17eb26fd Merge pull request #7 from mwhudson/python3.6-compat
fix guess(non-string) with Python 3.6
2017-07-13 17:02:01 -07:00
Michael Hudson-Doyle 21cfd681b9 fix guess(non-string) with Python 3.6
The os.path.* functions now consistently raise TypeError rather than something
more random when called with inappropriate types.

Fixes #6
2017-07-14 11:16:57 +12:00
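A hedged sketch of the guard this implies for a guess()-style helper (the body is illustrative, not the project's actual implementation)::

    import os

    def guess(candidate):
        # On Python 3.6+ the os.path.* helpers raise TypeError for non-string,
        # non-path-like arguments, so catch that instead of letting odd inputs
        # propagate a confusing error from deep inside the guesser.
        try:
            _, ext = os.path.splitext(candidate)
        except TypeError:
            return None
        return ext.lstrip('.') or None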
Martin A. Brown 7b756c7a18 bumping version to tldp-0.7.14 2016-05-16 09:56:15 -07:00
Martin A. Brown beb920dd58 add a test for the new --version CLI 2016-05-16 09:38:09 -07:00
Martin A. Brown 9f5b7c2ded use unicode literals here, too 2016-05-16 09:37:46 -07:00
Martin A. Brown a7fa31e35e add entry for --version to manpage 2016-05-16 09:30:30 -07:00
Martin A. Brown 47b2930264 add --version handling logic in driver.py 2016-05-16 09:30:18 -07:00
Martin A. Brown e879f2e638 adding --version to CLI config 2016-05-16 09:29:53 -07:00
Martin A. Brown f3f06f372f Merge pull request #5 from martin-a-brown/master
corrections to changelog from Gianfranco
2016-05-13 12:49:21 -07:00
Martin A. Brown 058e0562d5 corrections to changelog from Gianfranco 2016-05-13 12:42:03 -07:00
Martin A. Brown 2c15d05f97 Merge pull request #4 from martin-a-brown/master
allowing test suite to succeed when run as root
2016-05-13 09:59:28 -07:00
Martin A. Brown 8a7c041508 Merge branch 'master' of http://github.com/tLDP/python-tldp 2016-05-13 09:56:38 -07:00
Martin A. Brown d15fe74517 add .coverage and .tox to "clean" 2016-05-13 12:42:18 -04:00
Martin A. Brown fb027155ed bumping version to tldp-0.7.13 2016-05-13 12:29:20 -04:00
Martin A. Brown 48daa85fd8 adjust testing for root-run tests (changelog) 2016-05-13 12:27:17 -04:00
Martin A. Brown d78b53c91e accommodate root-run tests 2016-05-13 12:25:45 -04:00
Martin A. Brown 83cbeb4cb5 reset mode during testing; specify errnos to expect 2016-05-11 08:57:54 -07:00
Martin A. Brown 0908985916 add py35 to tox list 2016-05-11 08:57:29 -07:00
Martin A. Brown c0e5144a3c some notes for self 2016-05-01 11:10:49 -07:00
Martin A. Brown 1af6a90146 Merge pull request #3 from martin-a-brown/master
mostly just (re-)arranging for Debian packaging
2016-04-30 15:33:58 -07:00
Martin A. Brown 59fc4484fe bumping version to tldp-0.7.12 2016-04-30 18:18:29 -04:00
Martin A. Brown 0a63392c44 adding updated ChangeLog for 0.7.12 release 2016-04-30 18:18:10 -04:00
Martin A. Brown f0c8a73613 prefer checkout over installed version 2016-04-30 18:13:45 -04:00
Martin A. Brown 7bea460452 Merge branch 'master' of github.com:martin-a-brown/python-tldp 2016-04-30 14:57:29 -07:00
Martin A. Brown d16be1b076 adding 2016 LDP copyright to each file 2016-04-30 17:49:20 -04:00
Martin A. Brown 8626c6d7ae adjusting to reflect changes to packaging and Python versions 2016-04-30 17:48:41 -04:00
Martin A. Brown 9cb6aacbf4 adding the short block for the GFDL-1.2 2016-04-30 17:26:38 -04:00
Martin A. Brown b5717f9861 do not make bad define 2016-04-30 16:25:58 -04:00
Martin A. Brown 704393aee9 correcting the minimal list of (runtime) dependencies 2016-04-29 11:27:38 -04:00
Martin A. Brown e22ec70f6a moved extra clean needs to debian/clean 2016-04-29 11:27:09 -04:00
Martin A. Brown 437ca8953d adding extra directories to list for "clean" (advice from debian-python) 2016-04-29 11:26:43 -04:00
Martin A. Brown f85e395be5 no need to refer to these files in testing; remove 2016-04-29 11:10:32 -04:00
Martin A. Brown d8b6005d04 fdupes will not find these guys now 2016-04-29 11:10:18 -04:00
Martin A. Brown b9e6db4d18 removing unnecessary sample files 2016-04-29 11:09:59 -04:00
Martin A. Brown 0fe6f9c64e adding 2016 LDP copyright 2016-04-29 11:04:43 -04:00
Martin A. Brown 8a554a430d adding 2016 LDP copyright to each file 2016-04-29 11:02:02 -04:00
Martin A. Brown 32d2d55c28 switching to absolute_import 2016-04-29 11:01:37 -04:00
Martin A. Brown da409d8bd7 adding "upstream/metadata" 2016-04-29 09:51:32 -04:00
Martin A. Brown ab8f03d220 correcting path names and omitting symlinks from copyright 2016-04-28 12:22:24 -04:00
Martin A. Brown 36e34764ab another round of trying to correct the copyright file 2016-04-28 12:18:51 -04:00
Martin A. Brown 53c476cebc pyflakes/pep8 adjustments 2016-04-28 12:12:41 -04:00
Martin A. Brown dec201f50c pyflakes/pep8 adjustments 2016-04-28 12:12:27 -04:00
Martin A. Brown 174e759d14 bumping version to 0.7.11 2016-04-28 11:05:36 -04:00
Martin A. Brown 66291ca237 flakes: remove reference to operator 2016-04-28 11:05:15 -04:00
Martin A. Brown 35b80575b6 remove reference to unused statfiles 2016-04-28 11:05:05 -04:00
Martin A. Brown d873a73490 ${python3:Depends} should handle the dependency 2016-04-28 10:53:49 -04:00
Martin A. Brown 395ef66925 removed at Gianfranco's suggestion 2016-04-28 10:53:24 -04:00
Martin A. Brown e60d4859d5 Merge pull request #2 from martin-a-brown/master
handle generation of Debian packaging
2016-04-27 14:47:28 -07:00
Martin A. Brown bc096e4938 Merge branch 'master' of http://github.com/tLDP/python-tldp 2016-04-27 14:46:42 -07:00
Martin A. Brown 32cec7e21f suppress warning about library-package-name-for-application 2016-04-27 16:16:16 -04:00
Martin A. Brown 28e1b9c0fc and get rid of that eggy stuff 2016-04-27 16:00:19 -04:00
Martin A. Brown d8e9ef181f bumping version to 0.7.10 2016-04-27 15:57:33 -04:00
Martin A. Brown 8b6cac5656 commit of ChangeLog created from git history 2016-04-27 15:56:11 -04:00
Martin A. Brown 676fcde2b1 reverting 20432ef524 2016-04-27 14:27:20 -04:00
Martin A. Brown 4e337f2e0b allow to pass through options to debuild 2016-04-27 14:20:00 -04:00
Martin A. Brown 20432ef524 try to make multiple installables 2016-04-27 14:19:51 -04:00
Martin A. Brown f33966956c bumping version to tldp-0.7.9 2016-04-27 14:12:42 -04:00
Martin A. Brown 081c91ce30 fixing the generation of VERSION 2016-04-27 14:11:40 -04:00
Martin A. Brown 0bea124838 generated specfile 2016-04-27 14:03:52 -04:00
Martin A. Brown 211d3b93d7 find the parent ./tldp/ directory 2016-04-27 14:03:43 -04:00
Martin A. Brown a3180238bd move VERSION to a single location 2016-04-27 13:52:56 -04:00
Martin A. Brown e8769f3905 a small script to generate/update the spec file for "rpmbuild -ta" 2016-04-27 13:52:43 -04:00
Martin A. Brown f79a78fc9c generate the RPM thingy at release time 2016-04-27 13:52:09 -04:00
Martin A. Brown 552f2e46f6 tiny shell script to automate debian release 2016-04-27 10:23:30 -07:00
Martin A. Brown 1264a20f97 adjust version number and make initial release again 2016-04-27 10:17:56 -07:00
Martin A. Brown 1c8ff1b109 apparently, this must be an application 2016-04-27 10:17:41 -07:00
Martin A. Brown 0e2ac9125a switching to quilt format; native is forbidden 2016-04-27 10:17:25 -07:00
Martin A. Brown 6db48cd99e suppress lintian complaints
comment out any copyright information about the contents of the source
tarball, which includes the testing suite data and support tools
2016-04-27 09:30:32 -07:00
Martin A. Brown 5e74fbf7d0 Merge pull request #1 from martin-a-brown/master
Debianizing, adding manpage, standardizing tokens for doctype listing
2016-04-26 11:32:30 -07:00
Martin A. Brown 9ebee12b6e correcting date for changelog on 0.7.8 release 2016-04-26 07:58:22 -04:00
Martin A. Brown 3beb166643 bumping version to tldp-0.7.8 2016-04-26 07:53:42 -04:00
Martin A. Brown 7685175f30 adding dependency on sphinx 2016-04-26 07:52:27 -04:00
Martin A. Brown d78430af84 change the name of the source package 2016-04-22 07:04:05 -04:00
Martin A. Brown a54d0ec47e get closer to expectations/convention on copyright file 2016-04-22 07:03:48 -04:00
Martin A. Brown 87fd09b351 head for "unstable"; mention the ITP bug 2016-04-22 07:03:20 -04:00
Martin A. Brown cde9cf2632 decrease verbosity of pybuild 2016-04-21 16:14:58 -04:00
Martin A. Brown 3393a63c8b be just a touch cleaner (when cleaning) 2016-04-21 16:10:08 -04:00
mabrown fdf3b15889 bumping version to tldp-0.7.7 2016-04-21 12:05:18 -04:00
Martin A. Brown fe3c19722c bumping version to tldp-0.7.6 2016-04-21 15:51:53 -07:00
Martin A. Brown fc6069bd32 Merge branch 'debian' 2016-04-21 15:51:04 -07:00
Martin A. Brown 32d5591a73 and put manpage in %files list 2016-04-21 15:50:24 -07:00
Martin A. Brown ae41ac7ec4 include in MANIFEST.in and spell "docs" correctly 2016-04-21 15:37:30 -07:00
Martin A. Brown 034a6caffb include manpage 2016-04-21 15:34:46 -07:00
Martin A. Brown d2bfbf42d7 statically created for inclusion into RPM 2016-04-21 15:34:34 -07:00
Martin A. Brown 48298f3cfb initial crack at debian packaging files 2016-04-21 11:21:49 -07:00
Martin A. Brown 044e140920 adding manpage 2016-04-21 11:19:10 -07:00
Martin A. Brown 3693264ca2 switch to using the doc.__name__, not doc.formatname 2016-04-20 22:38:45 -07:00
Martin A. Brown ed9ec4cc66 adjusting file sources for Debian copyright explanations 2016-04-20 07:34:12 -07:00
Martin A. Brown a1c20ce125 to fix licensing issues 2016-04-19 21:13:57 -07:00
Martin A. Brown 7f3af31a16 bumping version to tldp-0.7.5 2016-04-19 19:09:55 -07:00
Martin A. Brown c290aaad03 fix lifecycle test (since deleting build directory) 2016-04-19 19:09:17 -07:00
Martin A. Brown 5fd42a93a6 bumping version to tldp-0.7.4 2016-04-18 12:26:00 -07:00
Martin A. Brown 872f5a5ed2 match the test to the prior commit 2016-04-18 12:25:31 -07:00
Martin A. Brown c947cfaf57 and remove the build directory, if empty 2016-04-18 12:25:20 -07:00
Martin A. Brown c3e2237539 bumping version to tldp-0.7.3 2016-04-15 07:44:31 -07:00
Martin A. Brown e268b80e84 remove "random" text from MD5SUMS file
remove the date/time and hostname from the MD5SUMS file (why did I ever think
that was a good idea?)
2016-04-15 07:43:29 -07:00
Martin A. Brown 03929b3519 putting MIT License there 2016-04-09 15:51:11 -07:00
Martin A. Brown 66b34000c6 run the new long_inventory.py test, too 2016-04-02 12:45:19 -07:00
Martin A. Brown 363269beb1 bumping version to tldp-0.7.2 2016-04-02 12:27:28 -07:00
Martin A. Brown 72614dd22b test missing output MD5SUMS file, too 2016-04-02 12:24:17 -07:00
Martin A. Brown 4f73310eea comment correction 2016-04-02 12:17:48 -07:00
Martin A. Brown c45fc01109 yes, but make certain it is an ISO-8859-1 file, not just by name 2016-04-02 12:17:29 -07:00
Martin A. Brown 7904944e09 deliberately test the ISO-8859-1 file 2016-04-02 12:15:55 -07:00
Martin A. Brown d922c04d02 exercise another section of the typeguesser 2016-04-02 12:15:01 -07:00
Martin A. Brown 5e06fd5ed6 add a longer, lifecycle test 2016-04-02 12:00:45 -07:00
Martin A. Brown d893f20968 new file which disappears during lifecycle test 2016-04-02 12:00:33 -07:00
Martin A. Brown 42e25ee115 process some images to get better lifecycle test 2016-04-02 12:00:01 -07:00
Martin A. Brown 2913600928 adding a copytree function to the testing framework 2016-04-02 11:59:29 -07:00
Martin A. Brown 6ec7e84c2d remove reference to mtime functions 2016-04-02 11:58:56 -07:00
Martin A. Brown 0b8ae435f8 correct the justification of the text 2016-04-02 11:58:35 -07:00
Martin A. Brown af80925d70 remove references to unused statinfo stuff 2016-04-02 11:58:19 -07:00
Martin A. Brown 5ca7ee9a16 not using statinfo any longer 2016-04-02 11:54:05 -07:00
Martin A. Brown a920a317c7 adding some images for longer lifecycle test 2016-04-02 11:27:58 -07:00
Martin A. Brown d4237eec58 report on failure/success count during the job
this allows somebody to kill the build in the middle, if there were any
failures
2016-04-02 10:54:35 -07:00
Martin A. Brown 44f5afb714 adapt to new output message 2016-04-02 10:54:20 -07:00
Martin A. Brown 239fc83222 minor adjustments to names (easier to find when tests fail) 2016-04-02 10:54:04 -07:00
Martin A. Brown 38c667691f change reference to md5sums 2016-04-02 10:53:38 -07:00
Martin A. Brown 4bfcda8602 update testing tools to wrangle MD5s
add some logic for generating, reading and comparing MD5s since the proper
code base is no longer using statinfo, but rather content checksums to
determine whether a rebuild is necessary
2016-04-02 10:52:39 -07:00
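A minimal sketch of generating such per-file checksums for a source file set (helper names here are illustrative)::

    import hashlib

    def md5_of(fname, blocksize=65536):
        h = hashlib.md5()
        with open(fname, 'rb') as f:
            for block in iter(lambda: f.read(blocksize), b''):
                h.update(block)
        return h.hexdigest()

    def source_md5s(filenames):
        # filename -> checksum; sorted so the generated MD5SUMS data is stable
        return {fname: md5_of(fname) for fname in sorted(filenames)}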
Martin A. Brown 45a0d2120c fetch MD5s for sources; ignore index.sgml files
instead of fetching statinfo, switch to using the MD5 of all files in the
source file set
also ignore any (annoying and) stray index.sgml files
2016-04-02 10:50:54 -07:00
Martin A. Brown 5a06b49967 load the MD5 file, if present
if the MD5 file is not present, then an earlier version of the tldp package
generated the output directory, and we should re-run
if the MD5 file is present in the output directory, load it into the dict()
data structure and return it, so that a stale-check can be completed
2016-04-02 10:49:15 -07:00
Martin A. Brown 5dcc255cc6 calculate stale by MD5s; swap stale/broken
move the stanza that identifies the broken output directories up higher in the
file; it's a simpler chunk of code
adjust the detection of stale-ness by referring to an output MD5 file and
compare with the available source files
2016-04-02 10:47:45 -07:00
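The stale check this describes reduces to a dictionary comparison between the recorded MD5SUMS and the checksums of the current source file set; a hedged sketch, building on the helpers sketched above::

    def is_stale(recorded, current):
        # recorded: dict loaded from the output directory's MD5SUMS file, or
        #           None when an older tldp wrote the output and left no file
        # current:  dict of checksums computed from the present source files
        if recorded is None:
            return True             # no MD5 file: assume an old build, re-run
        return recorded != current  # any added, removed or changed file is stale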
Martin A. Brown dccbab5b39 call the MD5 generation util function 2016-04-02 10:47:07 -07:00
Martin A. Brown 691a4bf8d6 make sure to sort the files by name 2016-04-02 10:46:43 -07:00
Martin A. Brown 93390cd467 add tools for computing and sorting source MD5s 2016-04-02 10:45:53 -07:00
Martin A. Brown 84c703a89e skip the MD5SUM file 2016-04-01 23:23:23 -07:00
Martin A. Brown 5530c8f38f and use the MD5SUMS location specified in the OutputDocument 2016-04-01 22:38:35 -07:00
Martin A. Brown 49b2ee57ae add a place to capture the MD5 data of the source 2016-04-01 22:37:53 -07:00
Martin A. Brown 753774c5e9 compute and generate an MD5SUMS file for each source document 2016-04-01 22:19:25 -07:00
Martin A. Brown 9fac28f160 record md5sum info for each source file 2016-04-01 22:18:39 -07:00
Martin A. Brown 673ddaf3e9 skip any directories on stat expedition 2016-04-01 22:17:53 -07:00
Martin A. Brown f5af96d1bf make pep8 a bit happier 2016-04-01 21:33:23 -07:00
Martin A. Brown aef4d6e3ee and generate the directory listing with full (relative) path 2016-04-01 21:10:43 -07:00
Martin A. Brown 832daee384 preserve the (relative) full path, silly! 2016-04-01 21:10:12 -07:00
Martin A. Brown 7d46e59efa missed the unicode_literals here 2016-04-01 21:09:49 -07:00
Martin A. Brown df38deed8f bumping version to tldp-0.7.0 2016-03-28 14:08:12 -07:00
Martin A. Brown f5cb7c9e8b support better handling of verbose CLI/config
now ldptool understands --verbose, --verbose yes, --verbose false
2016-03-28 14:06:46 -07:00
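One way to accept both a bare --verbose and an explicit yes/no value with argparse, as a hedged sketch (the project's actual option wiring may differ)::

    import argparse

    def truthy(value):
        return str(value).strip().lower() in ('1', 'true', 'yes', 'on')

    parser = argparse.ArgumentParser()
    parser.add_argument('--verbose', nargs='?', const=True, default=False, type=truthy)
    # --verbose          -> True   (bare flag uses const)
    # --verbose yes      -> True
    # --verbose false    -> False
    # (flag omitted)     -> False  (default)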
Martin A. Brown c0e477b5c7 updating the stock and sample, commented config file 2016-03-28 14:06:38 -07:00
Martin A. Brown 6b3a26b366 bumping version to tldp-0.6.8 2016-03-28 11:13:16 -07:00
Martin A. Brown 8cbdeab558 provide directory existence feedback to user
instead of bailing with an obnoxious error message, or silently ignoring a
command-line option of a directory, squawk to STDERR with the problem and
provide (possibly redundant, but maybe informative) traceback
2016-03-28 11:11:22 -07:00
Martin A. Brown a84a285168 pep8 fixes 2016-03-28 11:11:12 -07:00
Martin A. Brown 8de5158eb7 flakes noticed an extra import, removing 2016-03-28 11:10:45 -07:00
Martin A. Brown c442debc51 specify default loglevel in function signature 2016-03-28 11:10:27 -07:00
Martin A. Brown 4658f3101e pep8 improvement 2016-03-28 11:09:43 -07:00
Martin A. Brown 9b5d6674f9 add a TODO item for improving CLI error reporting 2016-03-27 09:53:32 -07:00
Martin A. Brown 62de5354fb comment out the "False" verbose for now 2016-03-27 02:56:23 -07:00
Martin A. Brown 4afdb1bd81 bumping version to 0.6.7 2016-03-27 02:27:03 -07:00
Martin A. Brown 9c3ed36bc2 fix publish function so it propagates return code 2016-03-27 02:26:42 -07:00
Martin A. Brown 74544227b8 toss in the sample broken docbook4xml file 2016-03-27 02:26:15 -07:00
Martin A. Brown 7c98b13db6 add an example to prove that publish() exits non-zero 2016-03-27 02:25:46 -07:00
Martin A. Brown 6d2040c671 add a broken example docbook4xml file 2016-03-27 02:25:23 -07:00
Martin A. Brown 30594d5e4e using unicode everywhere else 2016-03-27 02:14:10 -07:00
Martin A. Brown 2ce65990c4 correcting and moving test because it runs long 2016-03-27 02:11:52 -07:00
Martin A. Brown 8be3395f5f using unicode everywhere else 2016-03-27 02:11:30 -07:00
Martin A. Brown 4960f2a2a2 bumping version to 0.6.6 2016-03-27 01:03:06 -07:00
Martin A. Brown c0233a73b2 deal with 2/3 naming for stringy things 2016-03-27 01:02:44 -07:00
Martin A. Brown b68a1c3ae4 bumping version to 0.6.5 2016-03-27 00:51:49 -07:00
Martin A. Brown deaee034fc make an end run around the XSL/fop problem
teach the DocBook4 XML utility itself to set fop.extensions = 0 and
fop1.extensions = 1 until such time as the upstream ldp-docbook-xsl packages
can be repaired and/or adjusted
2016-03-27 00:44:00 -07:00
Martin A. Brown a4d5c5a4c6 bumping version to 0.6.4 2016-03-26 09:57:47 -07:00
Martin A. Brown 0808a14f14 adding a section on minimal configuration 2016-03-26 09:57:23 -07:00
Martin A. Brown 199db5d91d bumping version to 0.6.3 2016-03-25 15:23:28 -07:00
Martin A. Brown 8e2b480a7b bumping version to 0.6.2 2016-03-24 09:52:44 -07:00
Martin A. Brown 8d7bb3ef84 add link to travis-ci.org build status 2016-03-24 09:45:59 -07:00
Martin A. Brown a2534197f5 long_driver.py not long_tests.py 2016-03-24 09:42:23 -07:00
Martin A. Brown 7844af3576 and run the full test suite (why not?) 2016-03-24 09:37:39 -07:00
Martin A. Brown 9b95c0a071 add html2text to required install set 2016-03-24 09:32:00 -07:00
Martin A. Brown b4c0afc873 add --assume-yes for unattended install 2016-03-24 09:25:43 -07:00
Martin A. Brown 571f25feb5 try to get the 14.04 Ubuntu release 2016-03-24 09:14:58 -07:00
Martin A. Brown 0419849ebf adjust requirements 2016-03-24 09:14:25 -07:00
Martin A. Brown d4a667ce77 set up build environment in travis 2016-03-24 09:01:15 -07:00
Martin A. Brown f2ef7d2184 support --script mode anywhere and don't chdir() 2016-03-24 09:00:20 -07:00
Martin A. Brown 4499cd6181 adding a requirements.txt for travis-ci.org 2016-03-24 07:51:11 -07:00
Martin A. Brown 741d3c448f see if this thing builds on travis-ci.org 2016-03-24 07:39:06 -07:00
Martin A. Brown 9b12e6a9ea correction to --list example 2016-03-18 20:07:23 -07:00
Martin A. Brown 295bb5f147 may as well specify explicitly 2016-03-18 20:07:08 -07:00
Martin A. Brown 746197d954 match up the detail method on the OutputDirectory with the SourceDocument 2016-03-18 20:06:52 -07:00
Martin A. Brown 45ee1cf4c8 and add the README.rst explicitly to the MANIFEST.in 2016-03-18 20:06:05 -07:00
Martin A. Brown d094e9365a specify the footer to use 2016-03-16 16:12:25 -07:00
Martin A. Brown 402bda5fb7 bumping version to 0.6.1 2016-03-15 20:15:11 -07:00
Martin A. Brown 1c7af7b634 fixing a dumb spelling error 2016-03-15 15:49:01 -07:00
Martin A. Brown 0461c27d7a bumping version to 0.6.0 2016-03-15 13:26:19 -07:00
Martin A. Brown e4b5c5d8bb need to fall back to iso-8859-1 for SGML docs 2016-03-15 13:26:03 -07:00
Martin A. Brown 912cda9328 adding a few comments about supported Pythons 2016-03-15 13:04:02 -07:00
Martin A. Brown e98f0db3da more adjustments to README 2016-03-15 12:52:36 -07:00
Martin A. Brown 36a5b6f324 refer to canonical source location; improve flow 2016-03-15 12:38:06 -07:00
Martin A. Brown 6c9f86a364 adjust requirements so tox will run 2016-03-14 22:28:41 -07:00
Martin A. Brown c7b463f21c initial commit of configuration file for testing against Python3 (as well) 2016-03-14 22:27:57 -07:00
Martin A. Brown dfd65b43f3 use an empty (unicode_literal) string to trick future print() [function] to producing unicode strings rather than Py2 byte strings 2016-03-14 22:27:15 -07:00
Martin A. Brown 1eec325d9e Exception.message deprecated; just make sure which returns not None 2016-03-14 22:26:15 -07:00
Martin A. Brown ec472d8e37 Exception.message deprecated 2016-03-14 22:25:43 -07:00
Martin A. Brown 23aab88e95 Exception.message deprecated; use io.StringIO() 2016-03-14 22:25:27 -07:00
Martin A. Brown 3f92a7a95c everybody gets unicode_literals 2016-03-14 22:18:09 -07:00
Martin A. Brown 7c17c0dc5b switch to codecs.open and expect UTF-8 data 2016-03-14 21:51:14 -07:00
Martin A. Brown 0d93e6fca1 switch to codecs.open and expect UTF-8 data 2016-03-14 21:48:01 -07:00
Martin A. Brown 946b839b60 switch to codecs.open and expect UTF-8 data 2016-03-14 21:47:54 -07:00
Martin A. Brown daf272a329 switch to codecs.open and expect UTF-8 data 2016-03-14 21:47:48 -07:00
Martin A. Brown dcb8b3a217 switch to codecs.open and expect UTF-8 data 2016-03-14 21:42:31 -07:00
Martin A. Brown 2afbc7a147 switch to codecs.open and expect UTF-8 data 2016-03-14 21:42:21 -07:00
Martin A. Brown bb7fbccc6b switch to codecs.open and expect UTF-8 data 2016-03-14 21:42:07 -07:00
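The shape of this run of commits, for reference (file name illustrative)::

    import codecs

    with codecs.open('some-document.xml', encoding='utf-8') as f:
        text = f.read()   # unicode text on both Python 2 and Python 3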
Martin A. Brown 2c4f8407b5 convert explicitly to a list before return (Python3) 2016-03-14 21:41:46 -07:00
Martin A. Brown d99448609d prepare to support Python3 and Python2 (utf-8 all in/out) 2016-03-14 21:18:33 -07:00
Martin A. Brown 5824be20da remove import of makefh(), too 2016-03-14 20:52:53 -07:00
Martin A. Brown f20fb1c481 no point in using makefh() now; remove 2016-03-14 20:51:55 -07:00
Martin A. Brown 10854ad6df get rid of another FD leakage (in tests, not a big deal) 2016-03-14 20:50:59 -07:00
Martin A. Brown 514af22aea io.StringIO exists in both Python 2 and Python 3 2016-03-14 20:38:50 -07:00
Martin A. Brown 30a5950564 close the file before passing to guess() 2016-03-14 20:35:23 -07:00
Martin A. Brown 26de64a2bb stop leaking FDs when guessing doctypes 2016-03-14 20:32:42 -07:00
Martin A. Brown a2daee9425 use absolute_import here, too (Python 3) 2016-03-14 20:32:09 -07:00
Martin A. Brown ad681cd618 assertEquals becomes assertEqual 2016-03-14 20:14:24 -07:00
Martin A. Brown a720f4e4b6 switch to io.StringIO for Python 3 and assertEqual (drop the s) 2016-03-14 20:14:10 -07:00
Martin A. Brown 762870209c assertEquals becomes assertEqual 2016-03-14 20:11:17 -07:00
Martin A. Brown 5af0c2a955 use the proper Python 3.x name for [Safe]ConfigParser 2016-03-14 20:11:04 -07:00
Martin A. Brown def752760e use a context to prevent FD leakage 2016-03-14 20:07:40 -07:00
Martin A. Brown bdbccb6823 use symbolic mode composition 2016-03-14 20:05:25 -07:00
Martin A. Brown 04208d1a90 switch to using nose.collector here (prep for tox) 2016-03-14 14:44:11 -07:00
Martin A. Brown a25aee33c8 bumping version to 0.5.5 2016-03-14 11:07:16 -07:00
Martin A. Brown 00b8831186 clarifying TODO item on DocBook XSL for 5.0 2016-03-14 10:35:04 -07:00
Martin A. Brown 6227e7d8da adjusting the reporting of discovered document counts 2016-03-14 10:34:44 -07:00
Martin A. Brown 01756a16ec add support for sgmlcheck (linuxdoc) 2016-03-13 09:41:55 -07:00
Martin A. Brown e35e070cae bumping version to 0.5.4 2016-03-12 19:10:08 -08:00
Martin A. Brown 821967b257 ref. python-epub 2016-03-12 19:00:38 -08:00
Martin A. Brown 61d55a9f69 CLI-tool friendly handling of EPIPE and INT
Also, correct the reporting so it uses the format name processed by the Python
class rather than the name of the Python class itself (class.__name__ vs.
class.formatname).
2016-03-11 14:21:56 -08:00
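The ChangeLog later in this page notes that EPIPE and INT are handled by restoring the default signal dispositions; a minimal sketch::

    import signal

    # Let a broken pipe (e.g. `ldptool --list | head`) and Ctrl-C terminate the
    # process quietly, as a traditional CLI tool would, instead of surfacing
    # IOError/KeyboardInterrupt tracebacks.
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)
    signal.signal(signal.SIGINT, signal.SIG_DFL)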
Martin A. Brown 2f71c66b5a documentation nit 2016-03-11 14:21:45 -08:00
Martin A. Brown 73c1b7ab23 bumping version to 0.5.3 2016-03-10 11:56:26 -08:00
Martin A. Brown 5f6ff9ca3d report the output directory first 2016-03-10 11:55:55 -08:00
Martin A. Brown 57c7eb2b06 minor tweaks to documentation 2016-03-10 11:55:37 -08:00
Martin A. Brown 3a478dea65 bumping version to 0.5.2 2016-03-10 11:46:15 -08:00
Martin A. Brown d6671e7380 improving the documentation and adding refs to --publish 2016-03-10 11:45:49 -08:00
Martin A. Brown 0afacc5da3 must have the docbook-utils for backend support for PDF output from jw 2016-03-10 11:43:25 -08:00
Martin A. Brown d59850d433 skip adding to removals if in --script mode 2016-03-10 11:42:59 -08:00
Martin A. Brown 71fcbb8925 adding Requires: libxslt-tools 2016-03-10 11:19:51 -08:00
Martin A. Brown 7dc96bf6f9 adding Requires: python-networkx 2016-03-10 11:18:40 -08:00
Martin A. Brown f7505627a5 pep8/pyflakes fixes 2016-03-10 11:17:09 -08:00
Martin A. Brown 793a810b8f add some Requires: to the specfile 2016-03-10 11:11:39 -08:00
Martin A. Brown bf543b3ad8 bumping version to 0.5.1 2016-03-10 10:43:31 -08:00
Martin A. Brown 190483d2f6 adjust to slightly different output formatting 2016-03-10 10:42:33 -08:00
Martin A. Brown 90f8d6e690 add format name to --list output (for sources) 2016-03-10 10:38:40 -08:00
Martin A. Brown 7fbe4b80ec add width entry for doctype (output formatting) 2016-03-10 10:38:20 -08:00
Martin A. Brown 42bd3e699f setting version to 0.5.0 2016-03-10 10:29:03 -08:00
Martin A. Brown a1b11e6336 add testing of docbook5xml 2016-03-10 10:27:24 -08:00
Martin A. Brown ab60e4c2b4 figured out suppression of default --configfile 2016-03-10 10:27:08 -08:00
Martin A. Brown c214724fcb group methods a bit more by their similarity
and change naming slightly so log lines at loglevel INFO align
2016-03-10 10:25:39 -08:00
Martin A. Brown 54cb36f31f adding another bad invocation test 2016-03-10 09:30:53 -08:00
Martin A. Brown 70a10a5f7c adding a single index entry 2016-03-10 09:28:55 -08:00
Martin A. Brown bcb38bdd7d tiny index 2016-03-10 09:27:54 -08:00
Martin A. Brown cd7e325c59 add small image file 2016-03-10 08:57:19 -08:00
Martin A. Brown 7bd32896eb create variable DEFAULT_CONFIGFILE
so that at runtime (during testing), it can be overridden
2016-03-10 08:50:31 -08:00
Martin A. Brown 8a99c39d93 switched to set(), use .add() instead of .append() 2016-03-10 08:49:37 -08:00
Martin A. Brown f0cb2c3dfe only try to remove files once 2016-03-10 08:49:03 -08:00
Martin A. Brown 2180755a97 pep8/pyflakes fixes 2016-03-10 08:48:17 -08:00
Martin A. Brown abbd433ea1 removed longer tests; pep8/pyflakes fixes 2016-03-10 08:47:44 -08:00
Martin A. Brown 4ad4fae41b creating separate file for longer-running tests 2016-03-10 08:47:34 -08:00
Martin A. Brown 96aacb8f5c skip these tests during usual testing (try to stay under 1 sec.) 2016-03-10 08:27:53 -08:00
Martin A. Brown a3e3e6106f suppress the "system" config during testing 2016-03-10 08:27:07 -08:00
Martin A. Brown c978cf5dff report on documents by document format, too 2016-03-09 23:41:50 -08:00
Martin A. Brown 1ed38c7c0c improving testing coverage in driver.py 2016-03-09 21:57:25 -08:00
Martin A. Brown 4410fd8fc9 DocBook SGML document with index 2016-03-09 21:49:41 -08:00
Martin A. Brown e2532d4ffb use the already written darned function 2016-03-09 20:53:21 -08:00
Martin A. Brown 7e3fa95813 minor simplifications to testing tools 2016-03-09 20:52:45 -08:00
Martin A. Brown 46d16f4ccb complete the propagation of **kwargs 2016-03-09 20:41:18 -08:00
Martin A. Brown 4f05c202be slight reorganizing of test sets in file 2016-03-09 18:16:02 -08:00
Martin A. Brown 4901ed94d2 improve testing coverage 2016-03-09 18:11:38 -08:00
Martin A. Brown d2657321d6 create generic functions for runtime config 2016-03-09 18:11:04 -08:00
Martin A. Brown 29d5739d1e send runtime parameters to processors 2016-03-09 18:10:23 -08:00
Martin A. Brown 94ab1ac5d2 pass **kwargs through all processor tools
adjust all processor tools so they take runtime parameters through **kwargs
2016-03-09 18:08:56 -08:00
Martin A. Brown 2d75d3c4de remove old boilerplate from markdown and rst stock code 2016-03-09 18:08:18 -08:00
Martin A. Brown c581980aaf remove old boilerplate from markdown and rst stock code 2016-03-09 18:07:52 -08:00
Martin A. Brown a06e1955b7 tweak logging outputs, lower some to debug() 2016-03-09 10:06:59 -08:00
Martin A. Brown 68b16b42d8 simplify function docbuild: logging by caller
make the core docbuild function even simpler; have it determine the result
and return it, as well as the individual build success/failure vector
move all logging logic into the caller function, so that script(), publish()
and build() can log whatever they like
2016-03-09 10:00:25 -08:00
Martin A. Brown c66b325530 improving test coverage of driver.py 2016-03-09 09:35:11 -08:00
Martin A. Brown f4367e943f improve testing coverage of driver.py
adjust calling pattern for prepare_{script,build}_mode so that they are easier
to test
embed the creation of build directories into the prepare_build_mode
2016-03-09 09:33:04 -08:00
Martin A. Brown a488ae53de add testing support for new format Asciidoc 2016-03-09 08:09:52 -08:00
Martin A. Brown 631e8fed83 . 2016-03-09 08:09:36 -08:00
Martin A. Brown 0c433d7306 removing text (will be supported by asciidoc) 2016-03-09 07:54:55 -08:00
Martin A. Brown 885d6a12f3 adding support for format asciidoc
simply using DocBook4XML to provide most of the effort
2016-03-09 07:39:14 -08:00
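A hedged, self-contained sketch of the reuse-by-subclassing idea (class names and the a2x hand-off are illustrative; the project's real doctype classes differ)::

    import subprocess

    class Docbook4XML:
        """Stand-in for the existing DocBook 4.x XML processor."""
        def build(self, xmlfile):
            print('would run the DocBook 4.x pipeline on', xmlfile)

    class Asciidoc(Docbook4XML):
        """Convert with a2x, then let the DocBook 4 machinery do most of the effort."""
        def build(self, adocfile):
            subprocess.check_call(['a2x', '--format', 'docbook', adocfile])
            super().build(adocfile.rsplit('.', 1)[0] + '.xml')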
Martin A. Brown 0117209034 better abstraction of --script/--publish complete 2016-03-08 09:47:01 -08:00
Martin A. Brown 7aa99ec502 moving sameFilesystem to utils.py 2016-03-08 09:45:54 -08:00
Martin A. Brown ea2139373c simplify docbuild function; factor out logic
remove the logic from the docbuild function which executes both the --script
generation and the --build generation as the core loop over the document set.
create ancillary functions to prepare the document set for --script mode or
--build mode
add functions to create and remove the build directories in --build mode
add a --publish cleanup function to leave our --builddir clean (unless there
was a failure)
2016-03-08 09:16:46 -08:00
Martin A. Brown ffe327ced0 removing boilerplate; go from asciidoc to docbook45 2016-03-08 09:16:33 -08:00
Martin A. Brown 62c9cef32f docbooksgml needs to know about --script mode 2016-03-08 09:16:12 -08:00
Martin A. Brown ca8b8e211f minor cosmetic improvements to generated shell file 2016-03-08 09:11:09 -08:00
Martin A. Brown 19f04a2c23 adjusting TODO after the work of today 2016-03-07 22:17:47 -08:00
Martin A. Brown f848040e60 improving overall coverage 2016-03-07 22:13:38 -08:00
Martin A. Brown e7794eea20 relocating function 2016-03-07 22:13:23 -08:00
Martin A. Brown b2e01eec73 removing useless vestige 2016-03-07 22:12:59 -08:00
Martin A. Brown 6a97e4058a improve orphan verbosity testing 2016-03-07 20:01:26 -08:00
Martin A. Brown 0ff6dc5594 support the config.builddir in the testing suite 2016-03-07 20:00:22 -08:00
Martin A. Brown 29674dd7f0 add Asciidoc example to typeguesser 2016-03-07 19:59:48 -08:00
Martin A. Brown 904f0004bd add new coverage for swapdirs; improve coverage, as well 2016-03-07 19:59:27 -08:00
Martin A. Brown c3a2152e6c improve coverage testing of source.py 2016-03-07 19:58:31 -08:00
Martin A. Brown f8bb88c518 improve coverage testing of outputs.py 2016-03-07 19:58:14 -08:00
Martin A. Brown 730010dc5b adding sample Asciidoc document 2016-03-07 19:57:48 -08:00
Martin A. Brown be78491fd6 do not allow None as the b argument (crazy) 2016-03-07 19:57:17 -08:00
Martin A. Brown 99d93fb4ce allow None as the b argument 2016-03-07 19:56:27 -08:00
Martin A. Brown 4fbc08c1be adjust to deal with new action --publish 2016-03-07 18:56:07 -08:00
Martin A. Brown 88f07a1c69 switch from attribute "type" to "doctype" 2016-03-07 18:55:47 -08:00
Martin A. Brown 9dd87b4d7a add an empty entry for "working" attribute 2016-03-07 18:54:32 -08:00
Martin A. Brown bd7b08ce23 OK, so a can be anything, not just a directory 2016-03-07 18:54:08 -08:00
Martin A. Brown 87e1161212 add support to driver for --builddir logic
create the --builddir before building
if --publish, swap the built directory with the output directory
then --remove the old content
2016-03-07 18:52:13 -08:00
Martin A. Brown b2a8ac28d2 adding an empty attribute called "build" 2016-03-07 15:04:54 -08:00
Martin A. Brown 8c6ceba912 adding a function to swap directories 2016-03-07 15:04:16 -08:00
Martin A. Brown 5f24879875 adding a function to swap directories 2016-03-07 15:04:04 -08:00
Martin A. Brown 8064ef71e8 changed the error message 2016-03-07 13:54:55 -08:00
Martin A. Brown 015b3459f4 more refactoring, heading toward --publish
created function removeOrphans() and removeUnknownDoctypes()
and the function runbuild() ugly name; which is called from all three of the
main work functions, build(), publish() and script()
2016-03-07 13:50:15 -08:00
Martin A. Brown a4331ac48e fix the invocation of summary() in the test 2016-03-07 13:24:40 -08:00
Martin A. Brown dab1fc6bbc abstract error-handling; prepare for --publish
abstract the error-handling away from the one-large run() function into each
of the functions (show_doctypes, show_statustypes, detail, summary, etc.)
add the function publish(), which will call build() and ensure success before
running any of the publishing
2016-03-07 13:21:38 -08:00
Martin A. Brown 30d44a75f8 better refactoring of the large "run" method 2016-03-07 12:39:22 -08:00
Martin A. Brown 814dfec181 pep8/pyflakes fixes 2016-03-07 12:12:34 -08:00
Martin A. Brown 76fd27d1fa pep8/pyflakes fixes 2016-03-07 12:10:49 -08:00
Martin A. Brown 517a29b4a8 switch to using os.EX_OK for sys.exit()
also more preparation for switching to use --builddir
2016-03-07 12:02:25 -08:00
Martin A. Brown dfc20c5617 move directory-handling logic to the processor
In preparation for supporting a separate --builddir (allowing minimal
disruption of real output directory during rebuild) factor all output
directory handling logic into the main processor object (BaseDoctype).
Simplify the generate() method.
Centralize all pre-build logic in hook_build_prepare().
Remove all hook logic from the OutputDirectory.
2016-03-07 11:34:09 -08:00
Martin A. Brown c390e71b4a removing all chdir() and resource copying logic
the logic for making sure to chdir() into the build directory has been
sequestered into doctypes/common.py (and output.py); additionally, it is
smarter to put the resource copying logic, there, as well
2016-03-07 10:20:55 -08:00
Martin A. Brown 98b19ac5ce do not mkdir() if we are in --script mode 2016-03-07 10:01:42 -08:00
Martin A. Brown fc4c83307f do not mkdir() if we are in --script mode 2016-03-07 10:01:26 -08:00
Martin A. Brown 99d6232259 simplify text in commented lines 2016-03-07 10:00:51 -08:00
Martin A. Brown f87458c461 rearranging and renaming build setup methods
in preparation for supporting a build-directory, moving the os.chdir() and the
copying of image files into tldp/doctypes/common.py and adding a few hooks in
the main logic for building
2016-03-07 09:50:22 -08:00
Martin A. Brown 7308f331ff parameterize the --resources to copy at build time 2016-03-07 09:39:34 -08:00
Martin A. Brown 9c8746e486 and report on the output directory, if present 2016-03-07 09:15:17 -08:00
Martin A. Brown b30e2af282 require exact signature match; stop with .lower()
was comparing for case insensitive matches when locating signatures; probably
a bad idea; better to simply require an exact match
2016-03-07 09:11:35 -08:00
Martin A. Brown 853aec028b a bit more info, when --verbose 2016-03-07 09:04:25 -08:00
Martin A. Brown 59bcafb874 add --publish option
in preparation for separating --publish and --build, add the option
2016-03-07 09:03:57 -08:00
Martin A. Brown a002fd926d add --publish option
in preparation for separating --publish and --build, add the option
2016-03-07 09:00:15 -08:00
Martin A. Brown 0e955fce06 check --builddir/--pubdir on same filesystem
ensure that both the --builddir and the --pubdir are on the same filesystem
so that we can (reasonably) safely os.rename() after the build is done
2016-03-07 08:39:34 -08:00
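A hedged sketch of the two pieces involved, the same-filesystem check and the rename-based swap (function names echo the commit subjects, but the bodies are illustrative)::

    import os

    def samefilesystem(a, b):
        # os.rename() only works within one filesystem, so compare device numbers
        return os.stat(a).st_dev == os.stat(b).st_dev

    def swapdirs(built, published):
        # Move the freshly built directory into place, keeping the old output
        # around under a temporary name until the swap has succeeded.
        assert samefilesystem(built, os.path.dirname(published) or '.')
        backup = published + '.prior'
        if os.path.isdir(published):
            os.rename(published, backup)
        os.rename(built, published)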
Martin A. Brown bfc8328181 be just a touch more informative about what went wrong 2016-03-07 08:13:50 -08:00
Martin A. Brown 88ee1bf5fa support new option --builddir 2016-03-07 08:06:17 -08:00
Martin A. Brown 0889c79830 adjust the TODO list 2016-03-06 14:52:43 -08:00
Martin A. Brown d1d5b13989 remove one item; add two more 2016-03-06 11:40:20 -08:00
Martin A. Brown e6b6ea7b40 add support for --doctypes and --statustypes
provide CLI-discoverable listing of supported source document types and status
types
2016-03-06 11:29:13 -08:00
Martin A. Brown bfd6c1a0a1 tldp/doctypes/docbook4xml.py
correct dependency listings for validated source removal
2016-03-05 19:24:05 -08:00
Martin A. Brown 760cd392f4 use newer, simpler topo-sort for dependency tracking 2016-03-05 17:08:33 -08:00
Martin A. Brown 4c01ae4af7 simplify topological dependency solution
This patch prepares the way for simplifying the topological sort solution for
the classes which implement the document building logic.  Formerly, each
doctype class had to import networkx itself and the @depends decorator stuffed
the dependencies into a graph in the class variable.

Now, each method tracks its dependencies (same decorator trick), but the
topological sort is not computed until just before running the job.  This is
more flexible, more obvious, simpler and features less code replication.

The next commit or two will convert the remaining doctype classes to use this
technique.
2016-03-05 17:04:45 -08:00
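A hedged sketch of the attributes-on-function-objects idea (the decorator, class and use of graphlib for the sort are illustrative; the project's own code differs in detail)::

    import graphlib   # Python 3.9+; used here only for the illustrative sort

    def depends(*predecessors):
        # Remember prerequisites as an attribute on the function object itself;
        # nothing needs to be imported or sorted at class-definition time.
        def annotate(func):
            func.depends = [p.__name__ for p in predecessors]
            return func
        return annotate

    class BuildSteps:
        @depends()
        def validate(self): ...

        @depends(validate)
        def make_html(self): ...

        @depends(validate)
        def make_pdf(self): ...

    def build_order(cls):
        # Compute the topological sort only when a job is about to run.
        steps = {n: f for n, f in vars(cls).items() if hasattr(f, 'depends')}
        order = graphlib.TopologicalSorter({n: f.depends for n, f in steps.items()})
        return list(order.static_order())

    print(build_order(BuildSteps))   # e.g. ['validate', 'make_html', 'make_pdf']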
Martin A. Brown 7d287b44e5 the required items should say asciidoc_ 2016-03-05 15:58:26 -08:00
Martin A. Brown 428e577c0d try not to go over 80 chars (attempt #3) 2016-03-04 21:44:50 -08:00
Martin A. Brown cec2730e9a and add a2x in the software dependency list 2016-03-04 21:34:59 -08:00
Martin A. Brown 93fb5b3356 quote that argument 2016-03-04 21:34:44 -08:00
Martin A. Brown d8f14c9e55 initial support for asciidoc format 2016-03-04 17:10:30 -08:00
Martin A. Brown f617cca3d3 add support for Asciidoc detection 2016-03-04 17:10:21 -08:00
Martin A. Brown 50af88ebde switch to os.path.exists(), prep for chunked is own subdir 2016-03-04 17:07:36 -08:00
Martin A. Brown 1dbc0e5f8b always log the contents of the tldp-build-* files in --debug mode 2016-03-04 17:06:55 -08:00
81 changed files with 4671 additions and 1297 deletions

41
.travis.yml Normal file

@ -0,0 +1,41 @@
language: python
sudo: required
dist: trusty
before_install:
- sudo apt-get -qq update
- sudo apt-get --assume-yes install htmldoc fop jing xsltproc asciidoc docbook docbook5-xml docbook-xsl-ns linuxdoc-tools-latex linuxdoc-tools-text sgml2x ldp-docbook-xsl ldp-docbook-dsssl html2text
python:
- "2.7"
- "3.4"
script: nosetests --cover-erase --with-coverage --cover-package tldp -- tests/long_driver.py tests/long_inventory.py tests/
# -- comments on install set on an Ubuntu system:
# Here is the full set of packages that need to be installed in order for
# this software to work/build. The leftmost string should say 'ii' for
# each of the packages listed in this command-line:
#
# dpkg-query --list \
# asciidoc \
# docbook \
# docbook-dsssl \
# docbook-xsl \
# docbook-utils \
# docbook-xsl-ns \
# docbook5-xml \
# fop \
# htmldoc \
# htmldoc-common \
# html2text \
# jing \
# ldp-docbook-xsl \
# ldp-docbook-dsssl \
# libxml2-utils \
# linuxdoc-tools \
# linuxdoc-tools-text \
# linuxdoc-tools-latex \
# opensp \
# openjade \
# sgml2x \
# xsltproc \
#
#

220
ChangeLog Normal file

@ -0,0 +1,220 @@
2016-05-13 Martin A. Brown <martin@linux-ip.net>
* bumping version to tldp-0.7.13
* accommodate root-run tests (used by Deb-O-Matic)
2016-04-30 Martin A. Brown <martin@linux-ip.net>
* bumping version to tldp-0.7.12
* adding ChangeLog (this file)
* cosmetic changes; deduplication of test data, copyright in many files
* add contrib/debian-release.sh
* put version number in tldp/__init__.py
* generate specfile after tagging, using contrib/rpm-release.py
* Debian packaging issues largely addressed
2016-04-21 Martin A. Brown <martin@linux-ip.net>
* bumping version to tldp-0.7.7
* Debian packaging attempt #1, created build with 'native' source format
which will not be accepted
* add debian/copyright file
* ldptool manpage (sphinx-generated for Debian; statically installed in RPM)
* switch --detail reporting to use predictable DOCTYPE and STATUSTYPE names
2016-04-09 Martin A. Brown <martin@linux-ip.net>
* bumping version to 0.7.5
* remove 'random' text from .LDP-source-MD5SUMS
* remove the --builddir if empty after complete run
2016-04-02 Martin A. Brown <martin@linux-ip.net>
* bumping version to 0.7.2
* using filesystem age for determining build need will not work; switch
to using content hash (MD5) to determine whether a rebuild is necessary or
not
* create .LDP-source-MD5SUMS in each output directory that lists all of
the hashes of the source files used to create that output directory
* remove testing and references to statfiles() and supporting friends
* add a 'lifecycle' test to the testing suite
* report on running success and failure counts during the run (to allow
interruptability if the user wishes)
2016-03-28 Martin A. Brown <martin@linux-ip.net>
* bumping version to 0.7.0
* support better handling of --verbose; --verbose yes, --verbose false
* update and improve documentation in stock configuration file
* provide better feedback on directory existence (or not) rather than
silently doing something unpredictable
2016-03-27 Martin A. Brown <martin@linux-ip.net>
* bumping version to 0.6.7
* correct situation where publish() was not propagating errors returned
from the build() function; add test
* add broken example Docbook 4 XML file to test suite
* use unicode_literals in all testing code, too
2016-03-24 Martin A. Brown <martin@linux-ip.net>
* bumping version to 0.6.2
* fix all sorts of runtime requirements to build under Ubuntu
and run the full test suite on Travis CI
2016-03-15 Martin A. Brown <martin@linux-ip.net>
* bumping version to 0.6.0
* full support for Python3, all unicode-ified and happy
* add test to fall back to iso-8859-1 for SGML docs
* successful testing with tox under Python 2.7 and 3.4
2016-03-14 Martin A. Brown <martin@linux-ip.net>
* bumping version to 0.5.5
* use sgmlcheck for Linuxdoc sources
* adjust reporting of discovered documents
* use context to prevent more FD leakage
* begin changes to support Python3; e.g. io.StringIO, absolute_import
unicode changes, lots of codecs.open(), unicode_literals,
2016-03-11 Martin A. Brown <martin@linux-ip.net>
* handle EPIPE and INT with signal.SIG_DFL
2016-03-10 Martin A. Brown <martin@linux-ip.net>
* bumping version to 0.5.3
* create long running tests that exercise more of the code in the likely
way that a user would use the utility
* add testing for Docbook 5 XML
* improve look and consistency for --list (--detail) output
* improve README.rst
2016-03-09 Martin A. Brown <martin@linux-ip.net>
* remove unused markdown and rst skeleton processors
* pass **kwargs through all processor tools
2016-03-07 Martin A. Brown <martin@linux-ip.net>
* add support for --builddir, ensure that --builddir is on the same
filesystem as --pubdir
* add new option --publish; can't replace a directory atomically, but
get as close as possible by swapping the newly built output (from
--builddir) with the old one (formerly in --pubdir)
* switch to using 'return os.EX_OK' from functions in driver.py that
can be tested and/or wrapped in sys.exit(function(args))
* testing improvements for Asciidoc and driver.py
2016-03-06 Martin A. Brown <martin@linux-ip.net>
* provide user-discoverable support for --doctypes and --statustypes
* correct removal of Docbook4XML generated source document during build
2016-03-05 Martin A. Brown <martin@linux-ip.net>
* use a simplified technique (arbitrary attributes on function objects)
to generate the DAG used for topological sorting and build order
generation (thanks to Python mailing lists for the idea)
2016-03-04 Martin A. Brown <martin@linux-ip.net>
* bumping version to 0.4.8
* add FO generation XSL
* do not set a system default for --sourcedir / --pubdir (user must
specify, somehow)
* DocBook5/DocBook4: process xincludes before validation with xmllint
* add support for AsciiDoc detection and processing
2016-03-03 Martin A. Brown <martin@linux-ip.net>
* validate all documents (where possible) before processing
* provide support for DocBook 5.0 (XML)
* correct --loglevel handling in driver.py (finally works properly!)
* complete support for --script output
2016-03-02 Martin A. Brown <martin@linux-ip.net>
* bumping version to 0.4.5
* fix handling of STEMs which contain a '.' in the name
* review signature identification in each DOCTYPE processor and
validate and reconcile errors with PUBLIC / SYSTEM identifiers
for the SGML and XML declarations
* make sure that build() exits non-zero if ANY build fails
2016-03-01 Martin A. Brown <martin@linux-ip.net>
* bumping version to 0.4.2
* support a system configuration file /etc/ldptool
* add entry points and make first full installable build
* allow empty OutputDirectory() object
* begin overhauling the porcelain in driver.py
2016-02-29 Martin A. Brown <martin@linux-ip.net>
* overhaul generation of inventory object from sources/outputs
* add command-line features and options; actions in particular
* continue improving coverage, at 100% on utils.py
* complete CascadingConfig object creation
2016-02-26 Martin A. Brown <martin@linux-ip.net>
* generate a DAG for each processor class, so dependencies can
be localized (controlled, abstracted) to each processor class
* use topological sort of the DAG to drive generation of the shellscript,
which leads to massive simplification of the generate() method
* user can specify explicit file to process
* better PDF generation logic (relying on jw)
* provide support for --script outputs (logical equiv. of --dryrun)
* if a document processor is missing prerequisites, gripe to logging
and skip to the next document
* support a SourceDocument named by its directory
* add timing to each processor (some documents take minutes to process,
others just a few seconds; good for users trying to understand which...)
2016-02-25 Martin A. Brown <martin@linux-ip.net>
* overhaul where and how logging module gets called; driver.py is main
* adding --skip feature; can skip STEM, DOCTYPE or STATUSTYPE
* automatically detect configuration fragments in document processors
with object inspection
2016-02-23 Martin A. Brown <martin@linux-ip.net>
* add support for --detail (and --verbose) for both source and output docs
* pass args into all driver functions
* get rid of platform.py and references (not necessary any longer)
* fix FD leakage in function execute() and add test case (prevent reversion)
(and start switching to contextlib 'with' usage to avoid in future)
* start generalizing the build process for all doctypes in common.py
* move all generic functionality into BaseDoctype object
* revise fundamental execution approach; generate a shellscript (which can
be executed or simply printed)
* make logging readability improvements: clarity, relevance and alignment
2016-02-22 Martin A. Brown <martin@linux-ip.net>
* adding ArgumentParser wrapper so can support config file + envars
* all sorts of work to support cascading configuration
* allow each processor to have its own configuration fragment, e.g.
--docbook4xml-xmllint; owned by the Docbook4XML object
* add support for --dump-cfg, --dump-env, --dump-cli, --debug-options
* adding the license text (MIT) and all of that stuff
* creating and fixing the setup.py
2016-02-19 Martin A. Brown <martin@linux-ip.net>
2016-02-18 Martin A. Brown <martin@linux-ip.net>
* process and report on documents in case-insensitive stem-sorted order
* add many docstrings for internal usage
* move all source directory scanning logic out of the SourceCollection
object; easier to test and simpler to understand
2016-02-17 Martin A. Brown <martin@linux-ip.net>
* add logic for testing file age, assuming a fresh checkout of the
source documents; use filesystem age to determine whether or not
a document rebuild is necessary
* initial support for driver.py (eventually, the main user entry point)
and inventory.py (for managing the identification and pairing of
source and output documents)
2016-02-16 Martin A. Brown <martin@linux-ip.net>
* adding tons of testing for document types, edge cases, duplicate
stems, sample valid and broken documents
2016-02-15 Martin A. Brown <martin@linux-ip.net>
* first processor, Linuxdoc, reaches success
* provide better separation between a SourceCollection and the
individual SourceDocuments; analogously, between OutputDirectory
and OutputCollection
* provide similar dict-like behaviour for SourceCollection and
OutputCollection (which is known to the user as --pubdir)
2016-02-12 Martin A. Brown <martin@linux-ip.net>
* first processor, Linuxdoc, fleshed out, created (failed)
* generate skeletons for other supported source document formats
* automate detection of source document format; add initial testing tools
2016-02-11 Martin A. Brown <martin@linux-ip.net>
* core source collection and output directory scanning complete
2016-02-10 Martin A. Brown <martin@linux-ip.net>
* initial commit and basic beginnings


@ -1,3 +1,5 @@
Copyright (c) 2016, Linux Documentation Project
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation files
(the "Software"), to deal in the Software without restriction,


@ -1,2 +1,4 @@
include README.rst
recursive-include etc *
recursive-include docs *
recursive-include contrib *

18
NOTES.rst Normal file

@ -0,0 +1,18 @@
Notes to future self
++++++++++++++++++++
To release a new version for different software consumers.
* commit all of the changes you want
* bump version in tldp/__init__.py
* adjust debian/changelog in accordance with Debian policy
N.B. the version must match what you put in tldp/__init__.py
* run 'python contrib/rpm-release.py' which will regenerate a
contrib/tldp.spec with the correct version
* commit debian/changelog tldp/__init__.py and contrib/tldp.spec
* tag the release
* run 'git push origin master --tags'
* run 'python setup.py sdist upload -r pypi'
* run 'bash contrib/debian-release.sh' (on a Debian-ish box)


@ -1,13 +1,32 @@
tldp - tools for publishing from TLDP sources
=============================================
This package was written for the Linux Documentation Project to help with
management and automation of publication of source documents. The primary
interface provided is a command-line toolset.
The supported source formats can be listed, but contain at least, Linuxdoc,
DocBookSGML and DocBook XML 4.x.
.. image:: https://api.travis-ci.org/martin-a-brown/python-tldp.svg
:target: https://github.com/tLDP/python-tldp
TLDP = The Linux Documentation Project.
.. image:: http://img.shields.io/badge/license-MIT-brightgreen.svg
:target: http://opensource.org/licenses/MIT
:alt: MIT license
This package was written for the Linux Documentation Project (TLDP) to help
with management and publication automation of source documents. The primary
interface provided is a command-line tool called `ldptool`. The canonical
location of this software is:
https://github.com/tLDP/python-tldp/
The `ldptool` executable can:
- crawl through any number of source collection directories
- crawl through a single output collection
- match the sources to the outputs (based on document stem name)
- describe supported source formats (`--formats`)
- describe the meaning of document status (`--statustypes`)
- describe the collection by type and status (`--summary`)
- list out individual document type and status (`--list`)
- build the expected (non-configurable) set of outputs (`--build`)
- build and publish the outputs (`--publish`)
- produce runnable shell script to STDOUT (`--script`)
The tools in this package process source documents in the `TLDP document
repository <https://github.com/tLDP/LDP>`_ and generate the following set of
@ -18,25 +37,149 @@ outputs from each source document.
- -single.html, a one-page HTML document
- .html, a multipage HTML document
(We may add other output formats.)
(We may add other output formats; an epub format is under consideration.)
Supported input formats are:
- Asciidoc
- Linuxdoc
- Docbook SGML 3.x (though deprecated, please no new submissions)
- Docbook SGML 4.x
- Docbook XML 4.x
- Docbook XML 5.x
- Docbook XML 5.x (basic support, as of 2016-03-10)
Behaviour
---------
There's a source repository which has many source directories containing
documents. Each directory containing documents is a (sourcedir).
Example usages
--------------
If your attempts to run the below commands don't work or generate errors, see
also `Minimal configuration`_.
A source document can be a file in a sourcedir or a directory in the
sourcedir. Note that the file Assembly-HOWTO.xml is self-contained. The
directory BRIDGE-STP-HOWTO contains a file BRIDGE-STP-HOWTO.sgml.::
Here are some example usages against a live checkout of the LDP source
repository and a local cache of the output tree:
To see what work needs to be done, `ldptool --list`::
$ ldptool --list
orphan <unknown> Bugzilla-Guide
new DocBook XML 4.x DocBook-Demystification-HOWTO
stale DocBook XML 4.x Linux-Dictionary
broken DocBook SGML 3.x/4.x PHP-Nuke-HOWTO
stale Linuxdoc User-Group-HOWTO
To see publication status of each document:::
$ ldptool --list all | head -n 3
published Linuxdoc 3-Button-Mouse
published Linuxdoc 3D-Modelling
published Linuxdoc 4mb-Laptops
To get more information about the newer or missing files in a specific
document:::
$ ldptool --verbose --list Linux-Dictionary
stale DocBook XML 4.x Linux-Dictionary
doctype <class 'tldp.doctypes.docbook4xml.Docbook4XML'>
output dir /home/mabrown/tmp/en/Linux-Dictionary
source file /home/mabrown/vcs/LDP/LDP/guide/docbook/Linux-Dictionary/Linux-Dictionary.xml
newer source /home/mabrown/vcs/LDP/LDP/guide/docbook/Linux-Dictionary/Contributors.xml
newer source /home/mabrown/vcs/LDP/LDP/guide/docbook/Linux-Dictionary/D.xml
newer source /home/mabrown/vcs/LDP/LDP/guide/docbook/Linux-Dictionary/J.xml
newer source /home/mabrown/vcs/LDP/LDP/guide/docbook/Linux-Dictionary/O.xml
newer source /home/mabrown/vcs/LDP/LDP/guide/docbook/Linux-Dictionary/S.xml
To see what the entire source collection looks like, use `ldptool --summary`:::
$ ldptool --summary
By Status Type
--------------
source 503 3-Button-Mouse, 3D-Modelling, 4mb-Laptops, and 500 more ...
output 503 3-Button-Mouse, 3D-Modelling, 4mb-Laptops, and 500 more ...
published 503 3-Button-Mouse, 3D-Modelling, 4mb-Laptops, and 500 more ...
stale 0
orphan 0
broken 1 HOWTO-INDEX
new 0
By Document Type
----------------
Linuxdoc 226 3-Button-Mouse, 3D-Modelling, and 224 more ...
Docbook4XML 130 8021X-HOWTO, abs-guide, and 128 more ...
Docbook5XML 1 Assembly-HOWTO
DocbookSGML 146 ACP-Modem, and 145 more ...
To build and publish a single document:::
$ ldptool --publish DocBook-Demystification-HOWTO
$ ldptool --publish ~/vcs/LDP/LDP/howto/docbook/Valgrind-HOWTO.xml
To build and publish anything that is new or updated work:::
$ ldptool --publish
$ ldptool --publish work
To (re-)build and publish everything, regardless of state:::
$ ldptool --publish all
To generate a specific output (into a --builddir):::
$ ldptool --build DocBook-Demystification-HOWTO
To generate all outputs into a --builddir (should exist):::
$ ldptool --builddir ~/tmp/scratch-directory/ --build all
To build new/updated work, but pass over a trouble-maker:::
$ ldptool --build --skip HOWTO-INDEX
To loudly generate all outputs, except a trouble-maker:::
$ ldptool --build all --loglevel debug --skip HOWTO-INDEX
To print out a shell script for building a specific document:::
$ ldptool --script TransparentProxy
$ ldptool --script ~/vcs/LDP/LDP/howto/docbook/Assembly-HOWTO.xml
Logging
-------
The `ldptool` utility is largely written to be interactive or a supervised
batch process. It uses STDERR as its logstream and sets the default loglevel
at logging.ERROR. At this log level, in `--script`, `--build` and `--publish`
mode, it should report nothing to STDERR. To increase progress verbosity,
setting the loglevel to info (`--loglevel info`) may help with understanding
what work the tool is performing. If you need to collect diagnostic
information for troubleshooting or bug reports, `ldptool` supports `--loglevel
debug`.
Configuration
-------------
The `ldptool` comes with support for reading its settings from the
command-line, environment or a system and/or user-specified configuration
file. If you want to generate a sample configuration file to edit and use
later, you can run:::
ldptool --dump-cfg > my-ldptool.cfg
ldptool --configfile my-ldptool.cfg --list
LDPTOOL_CONFIGFILE=/path/to/ldptool.cfg ldptool --list
Source document identification
------------------------------
TLDP's source repository contains many separate directories containing
documents (e.g. LDP/howto/docbook, LDP/howto/linuxdoc). Each of these
directories may contain documents; to `ldptool` each of these is a
`--sourcedir`.
A source document (in a `--sourcedir`) can be a file or a directory. Here are
two examples. The Assembly-HOWTO.xml is an entire document stored as a single
file. The directory BRIDGE-STP-HOWTO exists and contains its main document, a
file named BRIDGE-STP-HOWTO.sgml. In the case of a source document that is a
directory, the stem name of the primary document must match the name of the
directory.::
Assembly-HOWTO.xml
BRIDGE-STP-HOWTO/
@ -47,8 +190,10 @@ directory BRIDGE-STP-HOWTO a file BRIDGE-STP-HOWTO.sgml.::
BRIDGE-STP-HOWTO/images/old-hardware-setup.eps
BRIDGE-STP-HOWTO/images/old-hardware-setup.png
Each document can be identified by its stem name. In the above, the stems are
`Assembly-HOWTO` and `BRIDGE-STP-HOWTO`.
Each document for a single run of `ldptool` can be uniquely identified by its
stem name. In the above, the stems are `Assembly-HOWTO` and
`BRIDGE-STP-HOWTO`. It is an error to have two documents with the same stem
name and the second discovered document will be ignored.
There is a directory containing the output collection. Each directory is named
by the stem name of the source document and contains the output formats for
@ -79,80 +224,62 @@ above two documents:::
... and more ...
Example usages:
---------------
Minimal configuration
---------------------
The most important configuration parameters that `ldptool` takes are the set
of source directories (in which to find documents) and the output directory,
in which to create the resulting outputs. It will not be able to run unless
it has at least one --sourcedir and an existing --pubdir directory.
Here are some example usages against a live checkout of the LDP source
repository and a local cache of the output tree:
If you have an LDP checkout in your home directory, here's an example which
would process all of the Linuxdoc HOWTO docs:::
To see what work needs to be done, `ldptool --list`::
mkdir LDP-output-tree
ldptool --sourcedir $HOME/LDP/LDP/howto/linuxdoc --pubdir LDP-output-tree
$ ldptool --list
new DocBook-Demystification-HOWTO
stale Linux-Dictionary
broken PHP-Nuke-HOWTO
orphan Traffic-Control-tcng-HTB-HOWTO
If you would like to create a sample configuration file for use later (or for
copying into the system location, `/etc/ldptool/ldptool.ini`), you can generate
your own config file as follows:::
To see publication status of each document:::
ldptool > sample-ldptool.cfg \
--sourcedir $HOME/LDP/LDP/faq/linuxdoc/ \
--sourcedir $HOME/LDP/LDP/guide/linuxdoc/ \
--sourcedir $HOME/LDP/LDP/howto/linuxdoc/ \
--sourcedir $HOME/LDP/LDP/howto/docbook/ \
--sourcedir $HOME/LDP/LDP/guide/docbook/ \
--sourcedir $HOME/LDP/LDP/ref/docbook/ \
--sourcedir $HOME/LDP/LDP/faq/docbook/ \
--pubdir $HOME/LDP-output/ \
--loglevel info \
--dump-cfg
$ ldptool --list all | head -n 3
published 3-Button-Mouse
published 3D-Modelling
published 4mb-Laptops
Then, you can run the same configuration again with:::
To get more information about the newer or missing files in a specific
document:::
ldptool --configfile sample-ldptool.cfg
$ ldptool --list Linux-Dictionary
stale Linux-Dictionary
newer source /vcs/LDP/LDP/guide/docbook/Linux-Dictionary/Contributors.xml
newer source /vcs/LDP/LDP/guide/docbook/Linux-Dictionary/D.xml
newer source /vcs/LDP/LDP/guide/docbook/Linux-Dictionary/J.xml
newer source /vcs/LDP/LDP/guide/docbook/Linux-Dictionary/O.xml
newer source /vcs/LDP/LDP/guide/docbook/Linux-Dictionary/S.xml
The `ldptool` program tries to locate all of the tools it needs to process
documents. Each source format requires a certain set of tools, for example, to
process DocBook 4.x XML, `ldptool` needs the executables xmllint, xsltproc,
html2text, fop and dblatex. It also requires the XSL files for generating FO,
chunked HTML and single-page HTML. All of the items are configurable on the
command-line or in the configuration file, but here's a sample config file
stanza:::
To get the big picture:::
[ldptool-docbook4xml]
xslchunk = /usr/share/xml/docbook/stylesheet/ldp/html/tldp-sections.xsl
xslsingle = /usr/share/xml/docbook/stylesheet/ldp/html/tldp-one-page.xsl
fop = /usr/bin/fop
dblatex = /usr/bin/dblatex
xsltproc = /usr/bin/xsltproc
html2text = /usr/bin/html2text
xslprint = /usr/share/xml/docbook/stylesheet/ldp/fo/tldp-print.xsl
xmllint = /usr/bin/xmllint
$ ldptool --summary
source 503 3-Button-Mouse, 3D-Modelling, 4mb-Laptops, and 500 more ...
output 503 3-Button-Mouse, 3D-Modelling, 4mb-Laptops, and 500 more ...
published 503 3-Button-Mouse, 3D-Modelling, 4mb-Laptops, and 500 more ...
new 1 DocBook-Demystification-HOWTO
orphan 1 Traffic-Control-tcng-HTB-HOWTO
broken 1 HOWTO-INDEX
stale 1 Linux-Dictionary
To generate a specific output:::
$ ldptool --build DocBook-Demystification-HOWTO
To generate all outputs:::
$ ldptool --build
To generate all outputs, except a trouble-maker:::
$ ldptool --build --skip HOWTO-INDEX
To loudly generate all outputs, except a trouble-maker:::
$ ldptool --build --loglevel debug --skip HOWTO-INDEX
To print out a script of what would be executed:::
$ ldptool --script DocBook-Demystification-HOWTO
Configuration
-------------
The `ldptool` comes with support for reading its settings from the
command-line, environment or a system and/or user-specified configuration
file. If you want to generate a sample configuration file to edit and use
later, you can run:::
ldptool --dump-cfg > my-ldptool.cfg
ldptool --configfile my-ldptool.cfg --list
LDPTOOL_CONFIGFILE=/path/to/ldptool.cfg ldptool --list
The above stanza was generated by running `ldptool --dump-cfg` on an Ubuntu
14.04 system which had all of the software dependencies installed. If your
distribution does not supply ldp-docbook-xsl, for example, you would need to
fetch those files, put them someplace in the filesystem and adjust your
configuration file or command-line invocations accordingly.
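As a sketch, the same adjustments can also be made directly on the command
line; the option names below are the handler options documented in the
manpage, but the paths are only illustrative::
$ ldptool --build Assembly-HOWTO \
    --docbook4xml-xslchunk /opt/ldp-xsl/html/tldp-sections.xsl \
    --docbook4xml-xslsingle /opt/ldp-xsl/html/tldp-one-page.xsl \
    --docbook4xml-xslprint /opt/ldp-xsl/fo/tldp-print.xsl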
Software dependencies
@ -161,24 +288,30 @@ There are a large number of packages listed here in the dependency set. This
is because the supporting software for processing Linuxdoc and the various
DocBook formats is split across many upstream packages and repositories.
The generated Python packages (see below) do not declare these dependencies
explicitly, which would let the package manager (e.g. apt, zypper, dnf)
install them automatically. Adding those declarations would be a nice improvement.
Here are the dependencies needed for this tool to run:
Ubuntu / Debian
+++++++++++++++
- git{,-core,-doc,-man}
- linuxdoc-tools{,-text,-latex}
- docbook{,-dsssl,-xsl,-utils}
- htmldoc{,-common}
- xsltproc
- libxml2-utils
- fop
- sgml2x
- openjade
- opensp
- openjade
- ldp-docbook-xsl
- ldp-docbook-dsssl
- html2text
- docbook5-xml
- docbook-xsl-ns
- jing
- asciidoc
- libxml2-utils
OpenSUSE
++++++++
@ -186,12 +319,14 @@ OpenSUSE
- openjade
- sgmltool
- html2text
- libxml2-tools
- libxslt-tools
- docbook{,5}-xsl-stylesheets
- docbook-dsssl-stylesheets
- docbook-utils-minimal
- docbook-utils
- jing
- asciidoc
- libxml2-tools
- libxslt-tools
There are a few additional data files that are needed, specifically, the TLDP
XSL and DSSSL files that are used by the respective DocBook SGML (openjade) and
@ -201,6 +336,15 @@ On Debian-based systems, there are packages available from the distributor
called ldp-docbook-{xsl,dsssl}. There aren't any such packages for RPM (yet).
Supported Python versions
-------------------------
This package was developed against Python-2.7.8 and Python-3.4.1 (on
OpenSUSE). It has also been used with Python-2.7.6 (Ubuntu-14.04), and with Python-3.4.2 and Python-2.7.9 (Debian 8).
Continuous Integration testing information and coverage can be reviewed at
`this project's Travis CI page <https://travis-ci.org/martin-a-brown/python-tldp/>`_.
Installation
------------
This is a pure-Python package, and you should be able to use your favorite
@ -209,31 +353,44 @@ requires a large number of other packages, most of which are outside of the
Python ecosystem. There's room for improvement here, but here are a few
tidbits.
Build an RPM:::
Build an RPM::
python setup.py bdist_rpm
python setup.py sdist && rpmbuild -ta ./dist/python-tldp-${VERSION}.tar.gz
There's a file, `contrib/tldp.spec`, which makes a few changes to the
setuptools stock-generated specfile. Specifically, the package gets named
`python-tldp` instead of `tldp` and the configuration file is marked
`%config(noreplace)`.
There's a generated file, `contrib/tldp.spec`, which makes a few changes to the
setuptools stock-generated specfile. It adds the dependencies, marks the
configuration file as %config(noreplace), adds a manpage and names the binary
package `python-tldp`.
I know less about packaging for Debian. Relying on python-stdeb yields a
working and usable Debian package which has been tested out on an Ubuntu
14.04.3 system.
Build a DEB::
Build a DEB:::
Check to see if the package is available from upstream. It may be included in
the Debian repositories already::
apt-cache search tldp
The quick and dirty way is as follows::
python setup.py --command-packages=stdeb.command bdist_deb
I have not tried installing the package in a virtualenv or with pip. If you
try that, please let me know any problems you encounter.
But, there is also a `debian` directory. If you are working straight from the
git checkout, you should be able to generate an installable (unsigned) Debian
package with::
bash contrib/debian-release.sh -us -uc
Install using pip:
Unknown. Because the tool relies so heavily on system-installed non-Python
tools, I have not bothered to try installing the package using pip. It should
work just as well as running the program straight from a checkout.
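If you do want to experiment, the usual pip commands (untested here) would
be::
$ pip install tldp
$ pip install --upgrade tldp
$ pip uninstall tldp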
If you learn anything here or have suggestions for me, please feel free to
send them along.
Links
-----
* `Canonical python-tldp repository <https://github.com/tLDP/python-tldp>`_
* `Source tree on GitHub <https://github.com/tLDP/LDP>`_
* `Output documentation tree (sample) <http://www.tldp.org/>`_

TODO
@ -1,29 +1,29 @@
python-tldp TODO
================
Bugs
----
* when running --sourcedir $FILE, the error message is TERRIBLE;
fix it;
user-visible needs
------------------
* add features for --list-doctypes, --list-status-classes
* add a manpage
* add support for .epub3 (or just .epub?)
* add support for .epub3 (or just .epub?) [python-epub ?]
* figure out how/if to build the outputs in a separate place
rather than in the real output directory [prevent the problem that
a bad document update kills the real output directory and leaves
an empty result]
* consider adding support for metadata extraction from documents
* create TLDP DocBook 5.0 XSL files (if we care)
* create TLDP customizations of DocBook 5.0 XSL (namespaced) files
(if we wish to do so)
code internals
--------------
* generate contrib/tldp.spec at build time (?)
* figure out suppression of system configuration (testing borkage)
* SourceDocument and OutputDirectory both have nearly-identical
methods called detail() which define a format string; probably
should be defined once in a parent class or something

contrib/debian-release.sh
@ -0,0 +1,26 @@
#! /bin/bash
#
# Create the upstream .orig tarball from the ${PACKAGE}-${VERSION} git ref,
# then run debuild to build the Debian package.
set -e
set -x
set -o pipefail
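# Read the source package name and upstream version from debian/changelog.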
PACKAGE=$(dpkg-parsechangelog | awk '/Source:/{print $2}')
VERSION=$(dpkg-parsechangelog | awk -F'[- ]' '/Version:/{print $2}')
PREFIX="${PACKAGE}-${VERSION}"
TARBALL="../${PACKAGE}_${VERSION}.orig.tar.xz"
git archive \
--format tar \
--prefix "${PREFIX}/" \
"${PREFIX}" \
| xz \
--compress \
--to-stdout \
> "${TARBALL}"
exec debuild "$@"
# -- end of file

contrib/rpm-release.py
@ -0,0 +1,29 @@
#! /usr/bin/python
#
# Generate contrib/tldp.spec from contrib/tldp.spec.in by replacing the
# @VERSION@ placeholder with the current tldp VERSION.
from __future__ import print_function
import os
import sys
opd = os.path.dirname
opj = os.path.join
sys.path.insert(0, opd(opd(__file__)))
from tldp import VERSION
fin = open(opj(opd(__file__), 'tldp.spec.in'))
fout = open(opj(opd(__file__), 'tldp.spec'), 'w')
def transform(mapping, text):
for tag, replacement in mapping.items():
text = text.replace(tag, replacement)
return text
subst = {'@VERSION@': VERSION}
print(subst)
fout.write(transform(subst, fin.read()))
# -- end of file

contrib/tldp.spec
@ -1,11 +1,11 @@
%define sourcename tldp
%define name python-tldp
%define version 0.4.8
%define unmangled_version 0.4.8
%define unmangled_version 0.4.8
%define version 0.7.15
%define unmangled_version 0.7.15
%define unmangled_version 0.7.15
%define release 1
Summary: tools for processing all TLDP source documents
Summary: automatic publishing tool for DocBook, Linuxdoc and Asciidoc
Name: %{name}
Version: %{version}
Release: %{release}
@ -16,45 +16,35 @@ BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-buildroot
Prefix: %{_prefix}
BuildArch: noarch
Vendor: Martin A. Brown <martin@linux-ip.net>
BuildRequires: python-setuptools
Requires: asciidoc
Requires: jing
Requires: htmldoc
Requires: sgmltool
Requires: openjade
Requires: docbook-utils
Requires: docbook-utils-minimal
Requires: docbook-dsssl-stylesheets
Requires: docbook-xsl-stylesheets
Requires: docbook5-xsl-stylesheets
Requires: libxslt-tools
Requires: python-networkx
%description
tldp - tools for publishing from TLDP sources
=============================================
A toolset for publishing multiple output formats of a source document to an
output directory. The supported source formats can be listed, but contain at
least, Linuxdoc, DocBookSGML and DocBook XML 4.x.
tldp - automatic publishing tool for DocBook, Linuxdoc and Asciidoc
===================================================================
A toolset for publishing multiple output formats (PDF, text, chunked HTML and
single-page HTML) from each source document in one of the supported formats.
* Asciidoc
* Linuxdoc
* Docbook SGML 3.x (though deprecated, please no new submissions)
* Docbook SGML 4.x
* Docbook XML 4.x
* Docbook XML 5.x (basic support, as of 2016-03-10)
TLDP = The Linux Documentation Project.
These are a set of scripts that process committed documents in the
TLDP document source repository to an output tree of choice.
Installation
------------
You can install, upgrade, uninstall tldp tools with these commands::
$ pip install tldp
$ pip install --upgrade tldp
$ pip uninstall tldp
There's also a package for Debian/Ubuntu, but it's not always the
latest version.
Example usages:
---------------
FIXME: Missing examples.
Links
-----
* `Output documentation tree (sample) <http://www.tldp.org/>`_
* `Source tree on GitHub <https://github.com/tLDP/LDP>`_
%prep
%setup -n %{sourcename}-%{unmangled_version}
@ -64,6 +54,7 @@ python setup.py build
%install
python setup.py install --single-version-externally-managed -O1 --root=$RPM_BUILD_ROOT --record=INSTALLED_FILES
install -D --mode 0644 docs/ldptool.1 %{buildroot}%{_mandir}/man1/ldptool.1
perl -pi -e 's,(/etc/ldptool/ldptool.ini),%config(noreplace) $1,' INSTALLED_FILES
%clean
@ -71,3 +62,4 @@ rm -rf $RPM_BUILD_ROOT
%files -f INSTALLED_FILES
%defattr(-,root,root)
%{_mandir}/man1/ldptool.1*

contrib/tldp.spec.in
@ -0,0 +1,65 @@
%define sourcename tldp
%define name python-tldp
%define version @VERSION@
%define unmangled_version @VERSION@
%define unmangled_version @VERSION@
%define release 1
Summary: automatic publishing tool for DocBook, Linuxdoc and Asciidoc
Name: %{name}
Version: %{version}
Release: %{release}
Source0: %{sourcename}-%{unmangled_version}.tar.gz
License: MIT
Group: Development/Libraries
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-buildroot
Prefix: %{_prefix}
BuildArch: noarch
Vendor: Martin A. Brown <martin@linux-ip.net>
BuildRequires: python-setuptools
Requires: asciidoc
Requires: jing
Requires: htmldoc
Requires: sgmltool
Requires: openjade
Requires: docbook-utils
Requires: docbook-utils-minimal
Requires: docbook-dsssl-stylesheets
Requires: docbook-xsl-stylesheets
Requires: docbook5-xsl-stylesheets
Requires: libxslt-tools
Requires: python-networkx
%description
tldp - automatic publishing tool for DocBook, Linuxdoc and Asciidoc
===================================================================
A toolset for publishing multiple output formats (PDF, text, chunked HTML and
single-page HTML) from each source document in one of the supported formats.
* Asciidoc
* Linuxdoc
* Docbook SGML 3.x (though deprecated, please no new submissions)
* Docbook SGML 4.x
* Docbook XML 4.x
* Docbook XML 5.x (basic support, as of 2016-03-10)
TLDP = The Linux Documentation Project.
%prep
%setup -n %{sourcename}-%{unmangled_version}
%build
python setup.py build
%install
python setup.py install --single-version-externally-managed -O1 --root=$RPM_BUILD_ROOT --record=INSTALLED_FILES
install -D --mode 0644 docs/ldptool.1 %{buildroot}%{_mandir}/man1/ldptool.1
perl -pi -e 's,(/etc/ldptool/ldptool.ini),%config(noreplace) $1,' INSTALLED_FILES
%clean
rm -rf $RPM_BUILD_ROOT
%files -f INSTALLED_FILES
%defattr(-,root,root)
%{_mandir}/man1/ldptool.1*

debian/changelog
@ -0,0 +1,21 @@
tldp (0.7.15-1) unstable; urgency=low
* support Python3.8+: fix import for MutableMapping and other minor fixes
tldp (0.7.14-1) unstable; urgency=low
* Add --version option.
-- Martin A. Brown <martin@linux-ip.net> Mon, 16 May 2016 16:54:47 +0000
tldp (0.7.13-1) unstable; urgency=low
* Fix testsuite when run as root (Closes: #824201).
-- Martin A. Brown <martin@linux-ip.net> Fri, 13 May 2016 16:28:22 +0000
tldp (0.7.12-1) unstable; urgency=low
* Initial release (Closes: #822181)
-- Martin A. Brown <martin@linux-ip.net> Wed, 27 Apr 2016 17:09:56 +0000

debian/clean
@ -0,0 +1,4 @@
tldp.egg-info/
docs/_build/
.coverage/
.tox/

debian/compat
@ -0,0 +1 @@
9

debian/control
@ -0,0 +1,55 @@
Source: tldp
Maintainer: Martin A. Brown <martin@linux-ip.net>
Section: text
X-Python3-Version: >= 3.4
Priority: optional
Homepage: https://github.com/tLDP/python-tldp
Build-Depends: debhelper (>= 9),
dh-python,
python3-all,
python3-networkx,
python3-nose,
python3-coverage,
python3-setuptools,
python3-sphinx,
htmldoc,
fop,
jing,
sgml2x,
xsltproc,
asciidoc,
docbook,
docbook5-xml,
docbook-xsl-ns,
linuxdoc-tools-latex,
linuxdoc-tools-text,
ldp-docbook-xsl,
ldp-docbook-dsssl,
html2text
Standards-Version: 3.9.8
Vcs-Git: https://github.com/tLDP/python-tldp.git
Vcs-Browser: https://github.com/tLDP/python-tldp
Package: python3-tldp
Architecture: all
Depends: ${misc:Depends},
${python3:Depends},
fop,
jing,
xsltproc,
docbook,
docbook5-xml,
docbook-xsl-ns,
htmldoc,
html2text,
sgml2x,
asciidoc,
linuxdoc-tools-latex,
linuxdoc-tools-text,
ldp-docbook-xsl,
ldp-docbook-dsssl
Description: automatic publishing tool for DocBook, Linuxdoc and Asciidoc
The Linux Documentation Project (TLDP) stores hundreds of documents in
DocBook SGML, DocBook XML, Linuxdoc and Asciidoc formats. This tool
automatically detects the source format and generates a directory containing
chunked and single-page HTML, a PDF and a plain text output.

debian/copyright
@ -0,0 +1,71 @@
Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: python-tldp
Upstream-Contact: Martin A. Brown <martin@linux-ip.net>
Source: https://github.com/tLDP/python-tldp
Files: *
Copyright: 2016 Linux Documentation Project
License: MIT
Files: extras/dsssl/ldp.dsl
Copyright: 2000-2003 - Greg Ferguson (gferg@metalab.unc.edu)
License: GPL-2.0+
Files: extras/xsl/tldp-*.xsl extras/css/style.css
Copyright: 2000-2002 - David Horton (dhorton@speakeasy.net)
License: GFDL-1.2
Files: tests/sample-documents/DocBook-4.2-WHYNOT/images/* tests/sample-documents/DocBookSGML-Larger/images/bullet.png
Copyright: Copyright (C) 2011-2012 O'Reilly Media
License: MIT
Files: extras/collateindex.pl
Copyright: 1997-2001 Norman Walsh
License: MIT
License: MIT
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
.
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
License: GPL-2.0+
This package is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
.
This package is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>
.
On Debian systems, the complete text of the GNU General
Public License version 2 can be found in "/usr/share/common-licenses/GPL-2".
License: GFDL-1.2
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.2
or any later version published by the Free Software Foundation;
with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
A copy of the license is included in the section entitled "GNU
Free Documentation License".
.
On Debian systems, the complete text of the GFDL-1.2 can be found in
/usr/share/common-licenses/GFDL-1.2

debian/rules
@ -0,0 +1,11 @@
#!/usr/bin/make -f
# export PYBUILD_VERBOSE=1
export PYBUILD_NAME=tldp
%:
dh $@ --with=python3 --buildsystem=pybuild
override_dh_installman:
(cd docs && \
sphinx-build -b man -d _build/doctrees . _build/man)
dh_installman docs/_build/man/ldptool.1

debian/source/format
@ -0,0 +1 @@
3.0 (quilt)

debian/upstream/metadata
@ -0,0 +1,4 @@
Bug-Database: https://github.com/tLDP/LDP/issues
Contact: discuss@en.tldp.org
Name: python-tldp
Repository: https://github.com/tLDP/LDP

debian/watch
@ -0,0 +1,3 @@
version=3
opts=uversionmangle=s/(rc|a|b|c)/~$1/ \
https://pypi.debian.net/tldp/tldp-(.+)\.(?:zip|tgz|tbz|txz|(?:tar\.(?:gz|bz2|xz)))

docs/conf.py
@ -0,0 +1,239 @@
# -*- coding: utf-8 -*-
#
# tox documentation build configuration file, created by
# sphinx-quickstart on Fri Nov 9 19:00:14 2012.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys, os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'ldptool-man'
# General information about the project.
project = u'ldptool'
copyright = u'Manual page (C) 2016, Linux Documentation Project'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '1.9.2'
# The full version, including alpha/beta/rc tags.
release = '1.9.2'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'ldptooldoc'
# -- Options for LaTeX output --------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = []
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('ldptool-man', 'ldptool', u'DocBook, Linuxdoc and Asciidoc build/publishing tool.',
[u'Martin A. Brown <martin@linux-ip.net>',], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output ------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('ldptool-man', 'ldptool', u'ldptool(1)',
u'Martin A. Brown', 'ldptool', 'DocBook, Linuxdoc and Asciidoc build/publishing tool.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'

docs/ldptool-man.rst
@ -0,0 +1,380 @@
:orphan:
ldptool manual page
===================
Synopsis
--------
**ldptool** [*options*] [*pathname* [...]]
Description
-----------
:program:`ldptool` creates chunked HTML, single-page HTML, PDF and plain text
outputs for each source document it is passed as a *pathname*. See
`Source document discovery`_. It will compare the source document and output
document and rebuild an output only if the content has changed.
If it is not passed any arguments, `ldptool` will collect all of the
directories specified with the --sourcedir option and scan through these
directories looking for valid source documents.
The action taken depends on the options passed to the utility. If no options
are passed, then the default `--build` action will be attempted. The options
controlling the overall program are described in the sections `Action
options`_ and `Main options`_. All other options are relegated to the tail of
the manpage, because they are merely configurables for individual document
processors.
The `ldptool` can:
- generate an inventory from multiple source directories (`--sourcedir`)
- crawl through a single output collection (`--pubdir`)
- match the sources to the outputs (based on document stem name)
- describe the collection by type and status (`--summary`)
- list out individual document type and status (`--list`)
- describe supported source formats (`--formats`)
- describe the meaning of document status (`--statustypes`)
- build the expected (non-configurable) set of outputs (`--build`)
- build and publish the outputs (`--publish`)
- produce runnable shell script to STDOUT (`--script`)
- generate configuration files that it can then take as input
Action options
--------------
-h, --help
show a help message and exit
-V, --version
print out the version number
-b, --build
Build LDP documentation into the `--builddir` and exit.
This is the default action if no other action is specified.
-p, --publish
Build LDP documentation into the `--builddir`. If all builds are
successful, then copy the result for each source document into the
`--pubdir`, effectively replacing (and deleting) the older documents;
finally, remove `--builddir`, if empty.
-S, --script
Print a runnable bash script to STDOUT. This will produce a
shell script showing what would be executed upon `--build`.
-l, --detail, --list
Examine the various SOURCEDIRs and the PUBDIR and generate a report
showing the FORMAT of the source document and STATUS of the document.
Add the `--verbose` flag for more information.
-t, --summary
Examine the various SOURCEDIRs and the PUBDIR and generate a short
report summarizing documents by STATUS and by DOCTYPE. Add the
`--verbose` flag for more information.
-T, --doctypes, --formats, --format, --list-doctypes, --list-formats
List the supported DOCTYPEs; there is one processor for each DOCTYPE.
--statustypes, --list-statustypes
List the possible document STATUS types. There are only seven basic status
types, but several synonyms and groups of STATUS types (internally called
'classes').
Main options
------------
-s, --sourcedir, --source-dir, --source-directory SOURCEDIR (default: None)
Specify the name of a SOURCEDIR which contains source documents. See
also `Source document discovery`_.
The `--sourcedir` option may be used more than once.
-o, --pubdir, --output, --outputdir, --outdir PUBDIR (default: None)
Specify the name of a PUBDIR. Used as the publication directory if the build
succeeds. When `--publish` is used with `--pubdir`, the output of
a successful document build will be used to replace any existing document
output directory in PUBDIR.
-d, --builddir, --build-dir, --build-directory BUILDDIR (default: 'ldptool-build')
Specify the name of a BUILDDIR. A scratch directory used to build each
source document; directory is temporary and will be removed if the
build succeeds AND `--publish` has been requested. Under the `--build`
action, all output directories and contents remain in the BUILDDIR for
inspection.
--verbose [True | False] (default: False)
Provide more information in --list and --detail actions. The option can
be thrown without an argument which is equivalent to True. To allow the
CLI to supersede environment or configuration file values, `--verbose
false` is also supported.
--skip [STEM | DOCTYPE | STATUS]
Specify a source document name, document type or document status to skip
during processing. Each document is known by its STEM (see also `Source
document discovery`_), its document DOCTYPE (see list below),
and by the document STATUS (see list below).
The `--skip` option may be used more than once.
DOCTYPE can be one of:
Asciidoc, Docbook4XML, Docbook5XML, DocbookSGML, or Linuxdoc
(See also output of `--doctypes`)
STATUS can be one of:
source, sources, output, outputs, published, stale, broken, new
orphan, orphans, orphaned, problems, work, all
(See also output of `--statustypes`)
--resources RESOURCEDIR (default: ['images', 'resources'])
Some source documents provide images, scripts and other content. These
files are usually stored in a directory such as ./images/ and need to be
copied intact into the output directory. Adjust the set of resource
directories with this option.
The `--resources` option may be used more than once.
--loglevel LOGLEVEL (default: ERROR)
set the loglevel to LOGLEVEL; can be passed as numeric or textual; in
increasing order: CRITICAL (50), ERROR (40), WARNING (30), INFO (20),
DEBUG (10); N.B. the text names are not case-sensitive: 'info' is OK
-c, --configfile, --config-file, --cfg CONFIGFILE (default: None)
Specify the name of a CONFIGFILE containing parameters to be read for
this invocation; an INI-style configuration file. A sample can be
generated with --dump-cfg. Although only one CONFIGFILE can be specified
via the environment or the command-line, the system config file
(/etc/ldptool/ldptool.ini) is always read.
--dump_cli, --dump-cli
Produce the resulting, merged configuration in CLI form (after processing all
configuration sources: defaults, system configuration, user configuration,
environment variables, command-line).
--dump_env, --dump-env
Produce the resulting, merged configuration as a shell environment file.
--dump_cfg, --dump-cfg
Produce the resulting, merged configuration as an INI-configuration file.
--debug_options, --debug-options
Provide lots of debugging information on option-processing; see also
`--loglevel debug`.
Source document discovery
-------------------------
Almost all documentation formats provide the possibility that a document can
span multiple files. Although more than half of the LDP document collection
consists of single-file HOWTO contributions, there are a number of documents
that are composed of dozens, even hundreds of files. In order to accommodate
both the simple documents and these much more complex documents, LDP adopted a
simple (though not unique) naming strategy to allow a single document to span
multiple files::
Each document is referred to by a stem, which is the filename
without any extension. A single file document is simple
STEM.EXT. A document that requires many files must be contained
in a directory with the STEM name. Therefore, the primary
source document will always be called either STEM.EXT or
STEM/STEM.EXT.
(If there is a STEM/STEM.xml and STEM/STEM.sgml in the same directory, that is
an error, and `ldptool` will freak out and shoot pigeons.)
During document discovery, `ldptool` will walk through all of the source
directories specified with `--sourcedir` and build a complete list of all
identifiable source documents. Then, it will walk through the publication
directory `--pubdir` and match up each output directory (by its STEM) with the
corresponding STEM found in one of the source directories.
Then, `ldptool` determines whether any source files are newer. It uses
content-hashing, i.e. MD5, and if a source file is newer, the status is
`stale`. If there is no matching output, the source file is `new`. If
there's an output with no source, it is an `orphan`. See the
`--statustypes` output for the full list of STATUS types.
Examples
--------
To build and publish a single document::
$ ldptool --publish DocBook-Demystification-HOWTO
$ ldptool --publish ~/vcs/LDP/LDP/howto/docbook/Valgrind-HOWTO.xml
To build and publish anything that is new or updated work::
$ ldptool --publish
$ ldptool --publish work
To (re-)build and publish everything, regardless of state::
$ ldptool --publish all
To generate a specific output (into a --builddir)::
$ ldptool --build DocBook-Demystification-HOWTO
To generate all outputs into a --builddir (should exist)::
$ ldptool --builddir ~/tmp/scratch-directory/ --build all
To build new/updated work, but pass over a trouble-maker::
$ ldptool --build --skip HOWTO-INDEX
To loudly generate all outputs, except a trouble-maker::
$ ldptool --build all --loglevel debug --skip HOWTO-INDEX
To print out a shell script for building a specific document::
$ ldptool --script TransparentProxy
$ ldptool --script ~/vcs/LDP/LDP/howto/docbook/Assembly-HOWTO.xml
Environment
-----------
The `ldptool` accepts configuration via environment variables. All such
environment variables are prefixed with the name `LDPTOOL_`.
The name of each variable is constructed from the primary
command-line option name. The `-b` is better known as `--builddir`, so the
environment variable would be `LDPTOOL_BUILDDIR`. Similarly, the environment
variable names for each of the handlers can be derived from the name of the
handler and its option. For example, the Asciidoc processor needs to have
access to the `xmllint` and `asciidoc` utilities.
The environment variable corresponding to the CLI option `--asciidoc-xmllint`
would be `LDPTOOL_ASCIIDOC_XMLLINT`. Similarly, `--asciidoc-asciidoc` should
be `LDPTOOL_ASCIIDOC_ASCIIDOC`.
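For example, an invocation like the following (the paths and directories are
illustrative) overrides the build directory and the xmllint used by the
Asciidoc handler::
LDPTOOL_BUILDDIR=/tmp/ldp-build LDPTOOL_ASCIIDOC_XMLLINT=/usr/bin/xmllint ldptool --build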
Variables accepting multiple options use the comma as a separator::
LDPTOOL_RESOURCES=images,resources
The complete listing of possible environment variables with all current values
can be printed by using `ldptool --dump-env`.
Configuration file
------------------
The system-installed configuration file is `/etc/ldptool/ldptool.ini`. The
format is a simple INI-style configuration file with a block for the main
program and a block for each handler. Here's a partial example::
[ldptool]
resources = images,
resources
loglevel = 40
[ldptool-asciidoc]
asciidoc = /usr/bin/asciidoc
xmllint = /usr/bin/xmllint
Note that the comma separates multiple values for a single option
(`resources`) in the above config fragment.
The complete, current configuration file can be printed by using `ldptool
--dump-cfg`.
Configuration option fragments for each DOCTYPE handler
-------------------------------------------------------
Every source format has a single handler and each DOCTYPE handler may require
a different set of executables and/or data files to complete its job. The
defaults depend on the platform and are detected at runtime. In most cases,
the commands are found in `/usr/bin` (see below). The data files, for example
the LDP XSL files and the docbook.rng, may live in different places on
different systems.
If a given DOCTYPE handler cannot find all of its requirements, it will
complain to STDERR during execution, but will not abort the rest of the run.
If, for some reason, `ldptool` cannot find data files, but you know where they
are, consider generating a configuration file with the `--dump-cfg` option,
adjusting the relevant options and then passing the `--configfile your.ini` to
specify these paths.
Asciidoc
--------
--asciidoc-asciidoc PATH
full path to asciidoc [/usr/bin/asciidoc]
--asciidoc-xmllint PATH
full path to xmllint [/usr/bin/xmllint]
N.B. The Asciidoc processor simply converts the source document to a
Docbook4XML document and then uses the richer Docbook4XML toolchain.
Docbook4XML
-----------
--docbook4xml-xslchunk PATH
full path to LDP HTML chunker XSL
--docbook4xml-xslsingle PATH
full path to LDP HTML single-page XSL
--docbook4xml-xslprint PATH
full path to LDP FO print XSL
--docbook4xml-xmllint PATH
full path to xmllint [/usr/bin/xmllint]
--docbook4xml-xsltproc PATH
full path to xsltproc [/usr/bin/xsltproc]
--docbook4xml-html2text PATH
full path to html2text [/usr/bin/html2text]
--docbook4xml-fop PATH
full path to fop [/usr/bin/fop]
--docbook4xml-dblatex PATH
full path to dblatex [/usr/bin/dblatex]
Docbook5XML
-----------
--docbook5xml-xslchunk PATH
full path to LDP HTML chunker XSL
--docbook5xml-xslsingle PATH
full path to LDP HTML single-page XSL
--docbook5xml-xslprint PATH
full path to LDP FO print XSL
--docbook5xml-rngfile PATH
full path to docbook.rng
--docbook5xml-xmllint PATH
full path to xmllint [/usr/bin/xmllint]
--docbook5xml-xsltproc PATH
full path to xsltproc [/usr/bin/xsltproc]
--docbook5xml-html2text PATH
full path to html2text [/usr/bin/html2text]
--docbook5xml-fop PATH
full path to fop [/usr/bin/fop]
--docbook5xml-dblatex PATH
full path to dblatex [/usr/bin/dblatex]
--docbook5xml-jing PATH
full path to jing [/usr/bin/jing]
DocbookSGML
-----------
--docbooksgml-docbookdsl PATH
full path to html/docbook.dsl
--docbooksgml-ldpdsl PATH
full path to ldp/ldp.dsl [None]
--docbooksgml-jw PATH
full path to jw [/usr/bin/jw]
--docbooksgml-html2text PATH
full path to html2text [/usr/bin/html2text]
--docbooksgml-openjade PATH
full path to openjade [/usr/bin/openjade]
--docbooksgml-dblatex PATH
full path to dblatex [/usr/bin/dblatex]
--docbooksgml-collateindex PATH
full path to collateindex
Linuxdoc
--------
--linuxdoc-sgmlcheck PATH
full path to sgmlcheck [/usr/bin/sgmlcheck]
--linuxdoc-sgml2html PATH
full path to sgml2html [/usr/bin/sgml2html]
--linuxdoc-html2text PATH
full path to html2text [/usr/bin/html2text]
--linuxdoc-htmldoc PATH
full path to htmldoc [/usr/bin/htmldoc]

docs/ldptool.1
@ -0,0 +1,533 @@
.\" Man page generated from reStructuredText.
.
.TH "LDPTOOL" "1" "May 16, 2016" "1.9.2" "ldptool"
.SH NAME
ldptool \- DocBook, Linuxdoc and Asciidoc build/publishing tool.
.
.nr rst2man-indent-level 0
.
.de1 rstReportMargin
\\$1 \\n[an-margin]
level \\n[rst2man-indent-level]
level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
-
\\n[rst2man-indent0]
\\n[rst2man-indent1]
\\n[rst2man-indent2]
..
.de1 INDENT
.\" .rstReportMargin pre:
. RS \\$1
. nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin]
. nr rst2man-indent-level +1
.\" .rstReportMargin post:
..
.de UNINDENT
. RE
.\" indent \\n[an-margin]
.\" old: \\n[rst2man-indent\\n[rst2man-indent-level]]
.nr rst2man-indent-level -1
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.SH SYNOPSIS
.sp
\fBldptool\fP [\fIoptions\fP] [\fIpathname\fP [...]]
.SH DESCRIPTION
.sp
\fBldptool\fP creates chunked HTML, single\-page HTML, PDF and plain text
outputs for each source document it is passed as a \fIpathname\fP\&. See
\fI\%Source document discovery\fP\&. It will compare the source document and output
document and rebuild an output only if the content has changed.
.sp
If it is not passed any arguments, \fIldptool\fP will collect all of the
directories specified with the \-\-sourcedir option and scan through these
directories looking for valid source documents.
.sp
The action taken depends on the options passed to the utility. If no options
are passed, then the default \fI\-\-build\fP action will be attempted. The options
controlling the overall program are described in the sections \fI\%Action
options\fP and \fI\%Main options\fP\&. All other options are relegated to the tail of
the manpage, because they are merely configurables for individual document
processors.
.sp
The \fIldptool\fP can:
.INDENT 0.0
.IP \(bu 2
generate an inventory from multiple source directories (\fI\-\-sourcedir\fP)
.IP \(bu 2
crawl through a single output collection (\fI\-\-pubdir\fP)
.IP \(bu 2
match the sources to the outputs (based on document stem name)
.IP \(bu 2
describe the collection by type and status (\fI\-\-summary\fP)
.IP \(bu 2
list out individual document type and status (\fI\-\-list\fP)
.IP \(bu 2
describe supported source formats (\fI\-\-formats\fP)
.IP \(bu 2
describe the meaning of document status (\fI\-\-statustypes\fP)
.IP \(bu 2
build the expected (non\-configurable) set of outputs (\fI\-\-build\fP)
.IP \(bu 2
build and publish the outputs (\fI\-\-publish\fP)
.IP \(bu 2
produce runnable shell script to STDOUT (\fI\-\-script\fP)
.IP \(bu 2
generate configuration files that it can then take as input
.UNINDENT
.SH ACTION OPTIONS
.INDENT 0.0
.TP
.B \-h\fP,\fB \-\-help
show a help message and exit
.TP
.B \-V\fP,\fB \-\-version
print out the version number
.TP
.B \-b\fP,\fB \-\-build
Build LDP documentation into the \fI\-\-builddir\fP and exit.
This is the default action if no other action is specified.
.TP
.B \-p\fP,\fB \-\-publish
Build LDP documentation into the \fI\-\-builddir\fP\&. If all builds are
successful, then copy the result for each source document into the
\fI\-\-pubdir\fP, effectively replacing (and deleting) the older documents;
finally, remove \fI\-\-builddir\fP, if empty.
.TP
.B \-S\fP,\fB \-\-script
Print a runnable bash script to STDOUT. This will produce a
shell script showing what would be executed upon \fI\-\-build\fP\&.
.TP
.B \-l\fP,\fB \-\-detail\fP,\fB \-\-list
Examine the various SOURCEDIRs and the PUBDIR and generate a report
showing the FORMAT of the source document and STATUS of the document.
Add the \fI\-\-verbose\fP flag for more information.
.TP
.B \-t\fP,\fB \-\-summary
Examine the various SOURCEDIRs and the PUBDIR and generate a short
report summarizing documents by STATUS and by DOCTYPE. Add the
\fI\-\-verbose\fP flag for more information.
.TP
.B \-T\fP,\fB \-\-doctypes\fP,\fB \-\-formats\fP,\fB \-\-format\fP,\fB \-\-list\-doctypes\fP,\fB \-\-list\-formats
List the supported DOCTYPEs; there is one processor for each DOCTYPE.
.TP
.B \-\-statustypes\fP,\fB \-\-list\-statustypes
List the possible document STATUS types. There are only seven basic status
types, but several synonyms and groups of STATUS types (internally called
\(aqclasses\(aq).
.UNINDENT
.SH MAIN OPTIONS
.INDENT 0.0
.TP
.B \-s, \-\-sourcedir, \-\-source\-dir, \-\-source\-directory SOURCEDIR (default: None)
Specify the name of a SOURCEDIR which contains source documents. See
also \fI\%Source document discovery\fP\&.
.sp
The \fI\-\-sourcedir\fP option may be used more than once.
.TP
.B \-o, \-\-pubdir, \-\-output, \-\-outputdir, \-\-outdir PUBDIR (default: None)
Specify the name of a PUBDIR. Used as the publication directory if the build
succeeds. When \fI\-\-publish\fP is used with \fI\-\-pubdir\fP, the output of
a successful document build will be used to replace any existing document
output directory in PUBDIR.
.TP
.B \-d, \-\-builddir, \-\-build\-dir, \-\-build\-directory BUILDDIR (default: \(aqldptool\-build\(aq)
Specify the name of a BUILDDIR. A scratch directory used to build each
source document; directory is temporary and will be removed if the
build succeeds AND \fI\-\-publish\fP has been requested. Under the \fI\-\-build\fP
action, all output directories and contents remain in the BUILDDIR for
inspection.
.TP
.B \-\-verbose [True | False] (default: False)
Provide more information in \-\-list and \-\-detail actions. The option can
be thrown without an argument which is equivalent to True. To allow the
CLI to supersede environment or configuration file values, \fI\-\-verbose
false\fP is also supported.
.TP
.B \-\-skip [STEM | DOCTYPE | STATUS]
Specify a source document name, document type or document status to skip
during processing. Each document is known by its STEM (see also \fI\%Source
document discovery\fP), its document DOCTYPE (see list below),
and by the document STATUS (see list below).
.sp
The \fI\-\-skip\fP option may be used more than once.
.INDENT 7.0
.TP
.B DOCTYPE can be one of:
Asciidoc, Docbook4XML, Docbook5XML, DocbookSGML, or Linuxdoc
(See also output of \fI\-\-doctypes\fP)
.TP
.B STATUS can be one of:
source, sources, output, outputs, published, stale, broken, new
orphan, orphans, orphaned, problems, work, all
(See also output of \fI\-\-statustypes\fP)
.UNINDENT
.TP
.B \-\-resources RESOURCEDIR (default: [\(aqimages\(aq, \(aqresources\(aq])
Some source documents provide images, scripts and other content. These
files are usually stored in a directory such as ./images/ and need to be
copied intact into the output directory. Adjust the set of resource
directories with this option.
.sp
The \fI\-\-resources\fP option may be used more than once.
.TP
.B \-\-loglevel LOGLEVEL (default: ERROR)
set the loglevel to LOGLEVEL; can be passed as numeric or textual; in
increasing order: CRITICAL (50), ERROR (40), WARNING (30), INFO (20),
DEBUG (10); N.B. the text names are not case\-sensitive: \(aqinfo\(aq is OK
.TP
.B \-c, \-\-configfile, \-\-config\-file, \-\-cfg CONFIGFILE (default: None)
Specify the name of a CONFIGFILE containing parameters to be read for
this invocation; an INI\-style configuration file. A sample can be
generated with \-\-dump\-cfg. Although only one CONFIGFILE can be specified
via the environment or the command\-line, the system config file
(/etc/ldptool/ldptool.ini) is always read.
.UNINDENT
.INDENT 0.0
.TP
.B \-\-dump_cli\fP,\fB \-\-dump\-cli
Produce the resulting, merged configuration in CLI form (after processing all
configuration sources: defaults, system configuration, user configuration,
environment variables, command\-line).
.TP
.B \-\-dump_env\fP,\fB \-\-dump\-env
Produce the resulting, merged configuration as a shell environment file.
.TP
.B \-\-dump_cfg\fP,\fB \-\-dump\-cfg
Produce the resulting, merged configuration as an INI\-configuration file.
.TP
.B \-\-debug_options\fP,\fB \-\-debug\-options
Provide lots of debugging information on option\-processing; see also
\fI\-\-loglevel debug\fP\&.
.UNINDENT
.SH SOURCE DOCUMENT DISCOVERY
.sp
Almost all documentation formats provide the possibility that a document can
span multiple files. Although more than half of the LDP document collection
consists of single\-file HOWTO contributions, there are a number of documents
that are composed of dozens, even hundreds of files. In order to accommodate
both the simple documents and these much more complex documents, LDP adopted a
simple (though not unique) naming strategy to allow a single document to span
multiple files:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
Each document is referred to by a stem, which is the filename
without any extension. A single file document is simple
STEM.EXT. A document that requires many files must be contained
in a directory with the STEM name. Therefore, the primary
source document will always be called either STEM.EXT or
STEM/STEM.EXT.
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
(If there is a STEM/STEM.xml and STEM/STEM.sgml in the same directory, that is
an error, and \fIldptool\fP will freak out and shoot pigeons.)
.sp
During document discovery, \fIldptool\fP will walk through all of the source
directories specified with \fI\-\-sourcedir\fP and build a complete list of all
identifiable source documents. Then, it will walk through the publication
directory \fI\-\-pubdir\fP and match up each output directory (by its STEM) with the
corresponding STEM found in one of the source directories.
.sp
Then, \fIldptool\fP determines whether any source files are newer. It uses
content\-hashing, i.e. MD5, and if a source file is newer, the status is
\fIstale\fP\&. If there is no matching output, the source file is \fInew\fP\&. If
there\(aqs an output with no source, it is an \fIorphan\fP\&. See the
\fI\-\-statustypes\fP output for the full list of STATUS types.
.SH EXAMPLES
.sp
To build and publish a single document:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
$ ldptool \-\-publish DocBook\-Demystification\-HOWTO
$ ldptool \-\-publish ~/vcs/LDP/LDP/howto/docbook/Valgrind\-HOWTO.xml
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
To build and publish anything that is new or updated work:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
$ ldptool \-\-publish
$ ldptool \-\-publish work
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
To (re\-)build and publish everything, regardless of state:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
$ ldptool \-\-publish all
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
To generate a specific output (into a \-\-builddir):
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
$ ldptool \-\-build DocBook\-Demystification\-HOWTO
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
To generate all outputs into a \-\-builddir (should exist):
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
$ ldptool \-\-builddir ~/tmp/scratch\-directory/ \-\-build all
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
To build new/updated work, but pass over a trouble\-maker:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
$ ldptool \-\-build \-\-skip HOWTO\-INDEX
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
To loudly generate all outputs, except a trouble\-maker:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
$ ldptool \-\-build all \-\-loglevel debug \-\-skip HOWTO\-INDEX
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
To print out a shell script for building a specific document:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
$ ldptool \-\-script TransparentProxy
$ ldptool \-\-script ~/vcs/LDP/LDP/howto/docbook/Assembly\-HOWTO.xml
.ft P
.fi
.UNINDENT
.UNINDENT
.SH ENVIRONMENT
.sp
The \fIldptool\fP accepts configuration via environment variables. All such
environment variables are prefixed with the name \fILDPTOOL_\fP\&.
.sp
The name of each variable is constructed from the primary command\-line option
name. For example, \fI\-b\fP is better known as \fI\-\-builddir\fP, so the
environment variable is \fILDPTOOL_BUILDDIR\fP\&. Similarly, the environment
variable names for each of the handlers are derived from the name of the
handler and its option. For example, the Asciidoc processor needs access to
the \fIxmllint\fP and \fIasciidoc\fP utilities.
.sp
The environment variable corresponding to the CLI option \fI\-\-asciidoc\-xmllint\fP
is \fILDPTOOL_ASCIIDOC_XMLLINT\fP\&. Similarly, \fI\-\-asciidoc\-asciidoc\fP
maps to \fILDPTOOL_ASCIIDOC_ASCIIDOC\fP\&.
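.sp
As a sketch (the \fIbuilddir\fP path below is only an example, not a default),
the corresponding shell setup might look like this:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
$ export LDPTOOL_BUILDDIR=~/tmp/ldp\-build
$ export LDPTOOL_ASCIIDOC_XMLLINT=/usr/bin/xmllint
$ export LDPTOOL_ASCIIDOC_ASCIIDOC=/usr/bin/asciidoc
$ ldptool \-\-dump\-env
.ft P
.fi
.UNINDENT
.UNINDENT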
.sp
Variables accepting multiple options use the comma as a separator:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
LDPTOOL_RESOURCES=images,resources
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
The complete listing of possible environment variables with all current values
can be printed by using \fIldptool \-\-dump\-env\fP\&.
.SH CONFIGURATION FILE
.sp
The system\-installed configuration file is \fI/etc/ldptool/ldptool.ini\fP\&.
It is a simple INI\-style file with a section for the main program and a
section for each handler. Here\(aqs a partial example:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
[ldptool]
resources = images,
resources
loglevel = 40
[ldptool\-asciidoc]
asciidoc = /usr/bin/asciidoc
xmllint = /usr/bin/xmllint
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Note that the comma separates multiple values for a single option
(\fIresources\fP) in the above config fragment.
.sp
The complete, current configuration file can be printed by using \fIldptool
\-\-dump\-cfg\fP\&.
.SH CONFIGURATION OPTION FRAGMENTS FOR EACH DOCTYPE HANDLER
.sp
Every source format has a single handler and each DOCTYPE handler may require
a different set of executables and/or data files to complete its job. The
defaults depend on the platform and are detected at runtime. In most cases,
the commands are found in \fI/usr/bin\fP (see below). The data files, for example
the LDP XSL files and the docbook.rng, may live in different places on
different systems.
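.sp
A single handler requirement can also be overridden directly on the command
line using the option fragments documented below; for instance (the path shown
is hypothetical):
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
$ ldptool \-\-docbook4xml\-xsltproc /opt/xslt/bin/xsltproc \-\-build all
.ft P
.fi
.UNINDENT
.UNINDENT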
.sp
If a given DOCTYPE handler cannot find all of its requirements, it will
complain to STDERR during execution, but will not abort the rest of the run.
.sp
If, for some reason, \fIldptool\fP cannot find the data files, but you know
where they are, consider generating a configuration file with the
\fI\-\-dump\-cfg\fP option, adjusting the relevant options, and then passing
\fI\-\-configfile your.ini\fP on later runs to specify these paths.
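.sp
A minimal sketch of that workflow (the file name \fIyour.ini\fP is arbitrary):
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
$ ldptool \-\-dump\-cfg > your.ini
$ $EDITOR your.ini
$ ldptool \-\-configfile your.ini \-\-build all
.ft P
.fi
.UNINDENT
.UNINDENT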
.SH ASCIIDOC
.INDENT 0.0
.TP
.BI \-\-asciidoc\-asciidoc \ PATH
full path to asciidoc [/usr/bin/asciidoc]
.TP
.BI \-\-asciidoc\-xmllint \ PATH
full path to xmllint [/usr/bin/xmllint]
.UNINDENT
.sp
N.B. The Asciidoc processor simply converts the source document to a
Docbook4XML document and then uses the richer Docbook4XML toolchain.
.SH DOCBOOK4XML
.INDENT 0.0
.TP
.BI \-\-docbook4xml\-xslchunk \ PATH
full path to LDP HTML chunker XSL
.TP
.BI \-\-docbook4xml\-xslsingle \ PATH
full path to LDP HTML single\-page XSL
.TP
.BI \-\-docbook4xml\-xslprint \ PATH
full path to LDP FO print XSL
.TP
.BI \-\-docbook4xml\-xmllint \ PATH
full path to xmllint [/usr/bin/xmllint]
.TP
.BI \-\-docbook4xml\-xsltproc \ PATH
full path to xsltproc [/usr/bin/xsltproc]
.TP
.BI \-\-docbook4xml\-html2text \ PATH
full path to html2text [/usr/bin/html2text]
.TP
.BI \-\-docbook4xml\-fop \ PATH
full path to fop [/usr/bin/fop]
.TP
.BI \-\-docbook4xml\-dblatex \ PATH
full path to dblatex [/usr/bin/dblatex]
.UNINDENT
.SH DOCBOOK5XML
.INDENT 0.0
.TP
.BI \-\-docbook5xml\-xslchunk \ PATH
full path to LDP HTML chunker XSL
.TP
.BI \-\-docbook5xml\-xslsingle \ PATH
full path to LDP HTML single\-page XSL
.TP
.BI \-\-docbook5xml\-xslprint \ PATH
full path to LDP FO print XSL
.TP
.BI \-\-docbook5xml\-rngfile \ PATH
full path to docbook.rng
.TP
.BI \-\-docbook5xml\-xmllint \ PATH
full path to xmllint [/usr/bin/xmllint]
.TP
.BI \-\-docbook5xml\-xsltproc \ PATH
full path to xsltproc [/usr/bin/xsltproc]
.TP
.BI \-\-docbook5xml\-html2text \ PATH
full path to html2text [/usr/bin/html2text]
.TP
.BI \-\-docbook5xml\-fop \ PATH
full path to fop [/usr/bin/fop]
.TP
.BI \-\-docbook5xml\-dblatex \ PATH
full path to dblatex [/usr/bin/dblatex]
.TP
.BI \-\-docbook5xml\-jing \ PATH
full path to jing [/usr/bin/jing]
.UNINDENT
.SH DOCBOOKSGML
.INDENT 0.0
.TP
.BI \-\-docbooksgml\-docbookdsl \ PATH
full path to html/docbook.dsl
.TP
.BI \-\-docbooksgml\-ldpdsl \ PATH
full path to ldp/ldp.dsl [None]
.TP
.BI \-\-docbooksgml\-jw \ PATH
full path to jw [/usr/bin/jw]
.TP
.BI \-\-docbooksgml\-html2text \ PATH
full path to html2text [/usr/bin/html2text]
.TP
.BI \-\-docbooksgml\-openjade \ PATH
full path to openjade [/usr/bin/openjade]
.TP
.BI \-\-docbooksgml\-dblatex \ PATH
full path to dblatex [/usr/bin/dblatex]
.TP
.BI \-\-docbooksgml\-collateindex \ PATH
full path to collateindex
.UNINDENT
.SH LINUXDOC
.INDENT 0.0
.TP
.BI \-\-linuxdoc\-sgmlcheck \ PATH
full path to sgmlcheck [/usr/bin/sgmlcheck]
.TP
.BI \-\-linuxdoc\-sgml2html \ PATH
full path to sgml2html [/usr/bin/sgml2html]
.TP
.BI \-\-linuxdoc\-html2text \ PATH
full path to html2text [/usr/bin/html2text]
.TP
.BI \-\-linuxdoc\-htmldoc \ PATH
full path to htmldoc [/usr/bin/htmldoc]
.UNINDENT
.SH AUTHOR
Martin A. Brown <martin@linux-ip.net>
.SH COPYRIGHT
Manual page (C) 2016, Linux Documentation Project
.\" Generated by docutils manpage writer.
.

View File

@ -56,63 +56,72 @@ loglevel = ERROR
#
verbose = False
# -- the four main actions, probably ought not to be set in the config file
# -- These are the main actions and they are mutually exclusive. Pick any
# of them that you would like:
#
# publish = False
# build = False
# script = False
# detail = False
# summary = False
# script = False
# build = False
# doctypes = False
# statustypes = False
#
# -- Each of the document types may require different executables and/or data
# files to support processing of a specific document type. The below
# files to support processing of the specific document type. The below
# configuration file section fragments allow each document type processor
# to keep its own configurables separate from other document processors.
#
# -- The ldptool code uses $PATH (from the environment) to locate the
# executables, by default. If the utilities are not installed in the
# system path, then it is possible to configure the full path to each
# executable, here, in this system-wide configuration file.
# executable in your own configuration file or in a system-wide
# configuration file (/etc/ldptool/ldptool.ini).
#
# -- Also, the data files, for example, the DocBook DSSSL and DocBook XSL
# stylesheets may be in a location that ldptool cannot find. If so, it
# will skip building any document type if it is lacking the appropriate
# data files. It is possible to configure the full path to the data files
# here, in this system-wide configuration file.
# -- If specific data files are not discoverable, e.g. the DocBook DSSSL and
# DocBook XSL stylesheets, the ldptool will skip processing that document
# type.
#
[ldptool-linuxdoc]
# htmldoc = /usr/bin/htmldoc
# html2text = /usr/bin/html2text
# sgml2html = /usr/bin/sgml2html
# sgmlcheck = /usr/bin/sgmlcheck
[ldptool-docbooksgml]
# collateindex = /home/mabrown/bin/collateindex.pl
# dblatex = /usr/bin/dblatex
# docbookdsl = /usr/share/sgml/docbook/dsssl-stylesheets/html/docbook.dsl
# html2text = /usr/bin/html2text
# jw = /usr/bin/jw
# ldpdsl = /usr/share/sgml/docbook/stylesheet/dsssl/ldp/ldp.dsl
# openjade = /usr/bin/openjade
[ldptool-docbook4xml]
# xslchunk = /usr/share/xml/docbook/stylesheet/ldp/html/tldp-sections.xsl
# fop = /usr/bin/fop
# dblatex = /usr/bin/dblatex
# xsltproc = /usr/bin/xsltproc
# html2text = /usr/bin/html2text
# xsltproc = /usr/bin/xsltproc
# xslchunk = /usr/share/xml/docbook/stylesheet/ldp/html/tldp-sections.xsl
# xslprint = /usr/share/xml/docbook/stylesheet/ldp/fo/tldp-print.xsl
# xslsingle = /usr/share/xml/docbook/stylesheet/ldp/html/tldp-one-page.xsl
[ldptool-linuxdoc]
# sgml2html = /usr/bin/sgml2html
# htmldoc = /usr/bin/htmldoc
# html2text = /usr/bin/html2text
[ldptool-docbooksgml]
# ldpdsl = /usr/share/sgml/docbook/stylesheet/dsssl/ldp/ldp.dsl
# jw = /usr/bin/jw
# dblatex = /usr/bin/dblatex
# html2text = /usr/bin/html2text
# collateindex = /home/mabrown/bin/collateindex.pl
# docbookdsl = /usr/share/sgml/docbook/dsssl-stylesheets/html/docbook.dsl
# openjade = /usr/bin/openjade
[ldptool-docbook5xml]
# xsltproc = /usr/bin/xsltproc
# dblatex = /usr/bin/dblatex
# xslprint = /usr/share/xml/docbook/stylesheet/docbook-xsl-ns/fo/docbook.xsl
# xmllint = /usr/bin/xmllint
# xslsingle = /usr/share/xml/docbook/stylesheet/docbook-xsl-ns/html/docbook.xsl
# xslchunk = /usr/share/xml/docbook/stylesheet/docbook-xsl-ns/html/chunk.xsl
# rngfile = /usr/share/xml/docbook/schema/rng/5.0/docbook.rng
# fop = /usr/bin/fop
# jing = /usr/bin/jing
# html2text = /usr/bin/html2text
# rngfile = /usr/share/xml/docbook/schema/rng/5.0/docbook.rng
# xmllint = /usr/bin/xmllint
# xslchunk = /usr/share/xml/docbook/stylesheet/docbook-xsl-ns/html/chunk.xsl
# xslprint = /usr/share/xml/docbook/stylesheet/docbook-xsl-ns/fo/docbook.xsl
# xslsingle = /usr/share/xml/docbook/stylesheet/docbook-xsl-ns/html/docbook.xsl
# xsltproc = /usr/bin/xsltproc
[ldptool-asciidoc]
# asciidoc = /usr/bin/asciidoc
# xmllint = /usr/bin/xmllint
# -- end of file

3
requirements.txt Normal file
View File

@ -0,0 +1,3 @@
networkx
nose
coverage

View File

@ -2,7 +2,8 @@
import os
import glob
from setuptools import setup, find_packages
from setuptools import setup
from tldp import VERSION
with open(os.path.join(os.path.dirname(__file__), 'README.rst')) as r_file:
@ -11,25 +12,25 @@ with open(os.path.join(os.path.dirname(__file__), 'README.rst')) as r_file:
setup(
name='tldp',
version='0.4.8',
version=VERSION,
license='MIT',
author='Martin A. Brown',
author_email='martin@linux-ip.net',
url="http://en.tldp.org/",
description='tools for processing all TLDP source documents',
description='automatic publishing tool for DocBook, Linuxdoc and Asciidoc',
long_description=readme,
packages=find_packages(),
test_suite='tests',
install_requires=['networkx',],
include_package_data = True,
package_data = {'extras': ['extras/collateindex.pl'],
'extras/xsl': glob.glob('extras/xsl/*.xsl'),
'extras/css': glob.glob('extras/css/*.css'),
'extras/dsssl': glob.glob('extras/dsssl/*.dsl'),
},
data_files = [('/etc/ldptool', ['etc/ldptool.ini']), ],
entry_points = {
'console_scripts': ['ldptool = tldp.driver:main',],
packages=['tldp', 'tldp/doctypes'],
test_suite='nose.collector',
install_requires=['networkx', 'nose'],
include_package_data=True,
package_data={'extras': ['extras/collateindex.pl'],
'extras/xsl': glob.glob('extras/xsl/*.xsl'),
'extras/css': glob.glob('extras/css/*.css'),
'extras/dsssl': glob.glob('extras/dsssl/*.dsl'),
},
data_files=[('/etc/ldptool', ['etc/ldptool.ini']), ],
entry_points={
'console_scripts': ['ldptool = tldp.driver:main', ],
},
classifiers=[
'Development Status :: 4 - Beta',

View File

@ -1,12 +1,13 @@
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import os
try:
from types import SimpleNamespace
except ImportError:
from utils import SimpleNamespace
import codecs
from argparse import Namespace
import tldp.doctypes
@ -17,59 +18,62 @@ opd = os.path.dirname
opa = os.path.abspath
sampledocs = opa(opj(opd(__file__), 'sample-documents'))
ex_linuxdoc = SimpleNamespace(
type=tldp.doctypes.linuxdoc.Linuxdoc,
filename=opj(sampledocs, 'linuxdoc-simple.sgml'),
)
ex_docbooksgml = SimpleNamespace(
type=tldp.doctypes.docbooksgml.DocbookSGML,
filename=opj(sampledocs, 'docbooksgml-simple.sgml'),
)
def load_content(ex):
with codecs.open(ex.filename, encoding='utf-8') as f:
ex.content = f.read()
ex.stem, ex.ext = stem_and_ext(ex.filename)
ex_docbook4xml = SimpleNamespace(
type=tldp.doctypes.docbook4xml.Docbook4XML,
filename=opj(sampledocs, 'docbook4xml-simple.xml'),
)
ex_docbook5xml = SimpleNamespace(
type=tldp.doctypes.docbook5xml.Docbook5XML,
filename=opj(sampledocs, 'docbook5xml-simple.xml'),
)
ex_linuxdoc = Namespace(
doctype=tldp.doctypes.linuxdoc.Linuxdoc,
filename=opj(sampledocs, 'linuxdoc-simple.sgml'),)
# ex_rst = SimpleNamespace(
# type=tldp.doctypes.rst.RestructuredText,
# filename=opj(sampledocs, 'restructuredtext-simple.rst'),
# )
#
# ex_text = SimpleNamespace(
# type=tldp.doctypes.text.Text,
# filename=opj(sampledocs, 'text-simple.txt'),
# )
#
# ex_markdown = SimpleNamespace(
# type=tldp.doctypes.markdown.Markdown,
# filename=opj(sampledocs, 'markdown-simple.md'),
# )
ex_docbooksgml = Namespace(
doctype=tldp.doctypes.docbooksgml.DocbookSGML,
filename=opj(sampledocs, 'docbooksgml-simple.sgml'),)
ex_linuxdoc_dir = SimpleNamespace(
type=tldp.doctypes.linuxdoc.Linuxdoc,
filename=opj(sampledocs, 'Linuxdoc-Larger',
'Linuxdoc-Larger.sgml'),
)
ex_docbook4xml = Namespace(
doctype=tldp.doctypes.docbook4xml.Docbook4XML,
filename=opj(sampledocs, 'docbook4xml-simple.xml'),)
ex_docbook4xml_dir = SimpleNamespace(
type=tldp.doctypes.docbook4xml.Docbook4XML,
filename=opj(sampledocs, 'DocBook-4.2-WHYNOT',
'DocBook-4.2-WHYNOT.xml'),
)
ex_docbook5xml = Namespace(
doctype=tldp.doctypes.docbook5xml.Docbook5XML,
filename=opj(sampledocs, 'docbook5xml-simple.xml'),)
ex_asciidoc = Namespace(
doctype=tldp.doctypes.asciidoc.Asciidoc,
filename=opj(sampledocs, 'asciidoc-complete.txt'),)
ex_linuxdoc_dir = Namespace(
doctype=tldp.doctypes.linuxdoc.Linuxdoc,
filename=opj(sampledocs, 'Linuxdoc-Larger',
'Linuxdoc-Larger.sgml'),)
ex_docbook4xml_dir = Namespace(
doctype=tldp.doctypes.docbook4xml.Docbook4XML,
filename=opj(sampledocs, 'DocBook-4.2-WHYNOT',
'DocBook-4.2-WHYNOT.xml'),)
ex_docbooksgml_dir = Namespace(
doctype=tldp.doctypes.docbooksgml.DocbookSGML,
filename=opj(sampledocs, 'DocBookSGML-Larger',
'DocBookSGML-Larger.sgml'),)
# -- a bit ugly, but grab each dict
sources = [y for x, y in locals().items() if x.startswith('ex_')]
for ex in sources:
ex.content = open(ex.filename).read()
ex.stem, ex.ext = stem_and_ext(ex.filename)
load_content(ex)
unknown_doctype = Namespace(
doctype=None,
filename=opj(sampledocs, 'Unknown-Doctype.xqf'),)
broken_docbook4xml = Namespace(
doctype=tldp.doctypes.docbook4xml.Docbook4XML,
filename=opj(sampledocs, 'docbook4xml-broken.xml'),)
load_content(broken_docbook4xml)
# -- end of file

158
tests/long_driver.py Normal file
View File

@ -0,0 +1,158 @@
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
import os
from tldptesttools import TestInventoryBase
from tldp.sources import SourceDocument
# -- Test Data
import example
# -- SUT
import tldp.driver
class TestDriverRun(TestInventoryBase):
def test_run_status_selection(self):
self.add_docbook4xml_xsl_to_config()
c = self.config
self.add_stale('Asciidoc-Stale-HOWTO', example.ex_asciidoc)
self.add_new('DocBook4XML-New-HOWTO', example.ex_docbook4xml)
argv = self.argv
argv.extend(['--publish', 'stale'])
argv.extend(['--docbook4xml-xslprint', c.docbook4xml_xslprint])
argv.extend(['--docbook4xml-xslchunk', c.docbook4xml_xslchunk])
argv.extend(['--docbook4xml-xslsingle', c.docbook4xml_xslsingle])
exitcode = tldp.driver.run(argv)
self.assertEqual(exitcode, os.EX_OK)
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
self.assertEqual(1, len(inv.published.keys()))
class TestDriverBuild(TestInventoryBase):
def test_build_one_broken(self):
self.add_docbook4xml_xsl_to_config()
c = self.config
c.build = True
self.add_new('Frobnitz-DocBook-XML-4-HOWTO', example.ex_docbook4xml)
# -- mangle the content of a valid DocBook XML file
borked = example.ex_docbook4xml.content[:-12]
self.add_new('Frobnitz-Borked-XML-4-HOWTO',
example.ex_docbook4xml, content=borked)
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
self.assertEqual(2, len(inv.all.keys()))
docs = inv.all.values()
result = tldp.driver.build(c, docs)
self.assertTrue('Build failed' in result)
def test_build_only_requested_stem(self):
c = self.config
ex = example.ex_linuxdoc
self.add_published('Published-HOWTO', ex)
self.add_new('New-HOWTO', ex)
argv = ['--pubdir', c.pubdir, '--sourcedir', c.sourcedir[0]]
argv.extend(['--build', 'Published-HOWTO'])
tldp.driver.run(argv)
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
self.assertEqual(1, len(inv.published.keys()))
self.assertEqual(1, len(inv.work.keys()))
class TestDriverPublish(TestInventoryBase):
def test_publish_fail_because_broken(self):
c = self.config
c.publish = True
self.add_new('Frobnitz-DocBook-XML-4-HOWTO', example.ex_docbook4xml)
self.add_stale('Broken-DocBook-XML-4-HOWTO', example.broken_docbook4xml)
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
self.assertEqual(2, len(inv.all.keys()))
docs = inv.all.values()
exitcode = tldp.driver.publish(c, docs)
self.assertNotEqual(exitcode, os.EX_OK)
def test_publish_docbook5xml(self):
c = self.config
c.publish = True
self.add_new('Frobnitz-DocBook-XML-5-HOWTO', example.ex_docbook5xml)
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
self.assertEqual(1, len(inv.all.keys()))
docs = inv.all.values()
exitcode = tldp.driver.publish(c, docs)
self.assertEqual(exitcode, os.EX_OK)
doc = docs.pop(0)
self.assertTrue(doc.output.iscomplete)
def test_publish_docbook4xml(self):
self.add_docbook4xml_xsl_to_config()
c = self.config
c.publish = True
self.add_new('Frobnitz-DocBook-XML-4-HOWTO', example.ex_docbook4xml)
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
self.assertEqual(1, len(inv.all.keys()))
docs = inv.all.values()
exitcode = tldp.driver.publish(c, docs)
self.assertEqual(exitcode, os.EX_OK)
doc = docs.pop(0)
self.assertTrue(doc.output.iscomplete)
def test_publish_asciidoc(self):
self.add_docbook4xml_xsl_to_config()
c = self.config
c.publish = True
self.add_new('Frobnitz-Asciidoc-HOWTO', example.ex_asciidoc)
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
self.assertEqual(1, len(inv.all.keys()))
docs = inv.all.values()
c.skip = []
exitcode = tldp.driver.publish(c, docs)
self.assertEqual(exitcode, os.EX_OK)
doc = docs.pop(0)
self.assertTrue(doc.output.iscomplete)
def test_publish_linuxdoc(self):
c = self.config
c.publish = True
self.add_new('Frobnitz-Linuxdoc-HOWTO', example.ex_linuxdoc)
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
self.assertEqual(1, len(inv.all.keys()))
docs = inv.all.values()
c.skip = []
exitcode = tldp.driver.publish(c, docs)
self.assertEqual(exitcode, os.EX_OK)
doc = docs.pop(0)
self.assertTrue(doc.output.iscomplete)
def test_publish_docbooksgml(self):
self.add_docbooksgml_support_to_config()
c = self.config
c.publish = True
self.add_new('Frobnitz-DocBookSGML-HOWTO', example.ex_docbooksgml)
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
self.assertEqual(1, len(inv.all.keys()))
docs = inv.all.values()
exitcode = tldp.driver.publish(c, docs)
self.assertEqual(exitcode, os.EX_OK)
doc = docs.pop(0)
self.assertTrue(doc.output.iscomplete)
def test_publish_docbooksgml_larger(self):
self.add_docbooksgml_support_to_config()
c = self.config
c.publish = True
doc = SourceDocument(example.ex_docbooksgml_dir.filename)
exitcode = tldp.driver.publish(c, [doc])
self.assertEqual(exitcode, os.EX_OK)
self.assertTrue(doc.output.iscomplete)
outputimages = os.path.join(doc.output.dirname, 'images')
self.assertTrue(os.path.exists(outputimages))
#
# -- end of file

93
tests/long_inventory.py Normal file
View File

@ -0,0 +1,93 @@
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
import io
import os
import codecs
import shutil
from tldptesttools import TestInventoryBase, TestSourceDocSkeleton
# -- Test Data
import example
# -- SUT
import tldp.driver
opb = os.path.basename
opj = os.path.join
class TestInventoryHandling(TestInventoryBase):
def test_lifecycle(self):
self.add_docbook4xml_xsl_to_config()
c = self.config
argv = self.argv
argv.extend(['--publish'])
argv.extend(['--docbook4xml-xslprint', c.docbook4xml_xslprint])
argv.extend(['--docbook4xml-xslchunk', c.docbook4xml_xslchunk])
argv.extend(['--docbook4xml-xslsingle', c.docbook4xml_xslsingle])
mysource = TestSourceDocSkeleton(c.sourcedir)
ex = example.ex_docbook4xml_dir
exdir = os.path.dirname(ex.filename)
mysource.copytree(exdir)
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
self.assertEqual(1, len(inv.new.keys()))
# -- run first build (will generate MD5SUMS file
#
exitcode = tldp.driver.run(argv)
self.assertEqual(exitcode, os.EX_OK)
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
self.assertEqual(1, len(inv.published.keys()))
# -- remove the generated MD5SUMS file, ensure rebuild occurs
#
doc = inv.published.values().pop()
os.unlink(doc.output.MD5SUMS)
self.assertEqual(dict(), doc.output.md5sums)
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
self.assertEqual(1, len(inv.stale.keys()))
if not os.path.isdir(c.builddir):
os.mkdir(c.builddir)
exitcode = tldp.driver.run(argv)
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
self.assertEqual(1, len(inv.published.keys()))
# -- remove a source file, add a source file, change a source file
#
main = opj(mysource.dirname, opb(exdir), opb(ex.filename))
disappearing = opj(mysource.dirname, opb(exdir), 'disappearing.xml')
brandnew = opj(mysource.dirname, opb(exdir), 'brandnew.xml')
shutil.copy(disappearing, brandnew)
os.unlink(opj(mysource.dirname, opb(exdir), 'disappearing.xml'))
with codecs.open(main, 'w', encoding='utf-8') as f:
f.write(ex.content.replace('FIXME', 'TOTALLY-FIXED'))
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
self.assertEqual(1, len(inv.stale.keys()))
stdout = io.StringIO()
c.verbose = True
tldp.driver.detail(c, inv.all.values(), file=stdout)
stdout.seek(0)
data = stdout.read()
self.assertTrue('new source' in data)
self.assertTrue('gone source' in data)
self.assertTrue('changed source' in data)
# -- rebuild (why not?)
#
if not os.path.isdir(c.builddir):
os.mkdir(c.builddir)
exitcode = tldp.driver.run(argv)
self.assertEqual(exitcode, os.EX_OK)
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
self.assertEqual(1, len(inv.published.keys()))
# -- remove a file (known extraneous file, build should succeed)
#
# -- end of file

View File

@ -1,7 +1,7 @@
<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook V4.1//EN">
<article>
<articleinfo>
<title>T</title>
<title>Bad Dir Multiple Doctypes (DocBook SGML 4.1)</title>
<author>
<firstname>A</firstname> <surname>B</surname>
<affiliation>

View File

@ -1,5 +1,5 @@
<?xml version="1.0" encoding="utf-8"?>
<article xmlns="http://docbook.org/ns/docbook" version="5.0" xml:lang="en">
<title>Simple article</title>
<title>Bad Dir Multiple Doctypes (DocBook XML 5.0)</title>
<para>This is a ridiculously terse article.</para>
</article>

View File

@ -21,6 +21,18 @@
<sect2>
<title>Intro</title>
<para>Text</para>
<mediaobject>
<imageobject>
<imagedata fileref="images/warning.svg" format="SVG"/>
</imageobject>
<imageobject>
<imagedata fileref="images/warning.png" format="PNG"/>
</imageobject>
<imageobject>
<imagedata fileref="images/warning.jpg" format="JPG"/>
</imageobject>
</mediaobject>
<para>FIXME</para>
</sect2>
</sect1>

View File

@ -0,0 +1,5 @@
<section>
<para>
I am just a disappearing file, for use in the long_inventory.py test.
</para>
</section>

Binary file not shown.

After

Width:  |  Height:  |  Size: 768 B

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.2 KiB

View File

@ -0,0 +1,28 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.1" id="Layer_1"
xmlns="http://www.w3.org/2000/svg"
xmlns:xlink="http://www.w3.org/1999/xlink"
x="0px" y="0px"
width="24px"
height="24px"
viewBox="0 0 24 24"
enable-background="new 0 0 24 24"
xml:space="preserve">
<image id="image0" width="24" height="24" x="0" y="0"
xlink:href="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABgAAAAYCAMAAADXqc3KAAAABGdBTUEAALGPC/xhBQAAACBjSFJN
AAB6JgAAgIQAAPoAAACA6AAAdTAAAOpgAAA6mAAAF3CculE8AAAAolBMVEX////e3t7W1tbv7+/G
xsacUlK1KSmtMTGtlJS1tbWlSkrvCAj3AADGGBilOTmlhIScWlqlpaW1AACUAADWAAC9AAApAAAQ
AABaAAC1paWtAAAYAABCAADeAACUe3vGAAAhAAAIAABSAAAxAABzAADnAABKAACMAAClAABrAACE
AAB7AADOKSmllJS1nJzWKSmce3vOAADGEBDnISH39/e9ra2Ltr4AAAAAAWJLR0QhxGwNFgAAANVJ
REFUKM9tkukSgjAMhFtAAVvEA4knIgre4vn+ryZjGxtmuv1T9usmU1LGlLiDchmV1+n6avkBJWFP
SFRkiOv1jd+QGIk3oH5DhoqEeH40TmjGw/qTFKbC9OEdrDMGSBOsNmOOj6XnC1iu9F5kzOkiWOeb
QhDwT6y2sJO2RFnBXtoSsoCDNSGPi7k9MTkl9h7ngia4uQeYe1yuzL1F+mOUQ1WqbX3njJLTQ/sx
/40jeLb+unxlXM3jHbZInZkRBhHxY25m3vQRpVqXjNNX8v7cta7a/wIS1x0MRP1EswAAACV0RVh0
ZGF0ZTpjcmVhdGUAMjAxNi0wNC0xOVQyMToxMToxNy0wNzowMOsKgVIAAAAldEVYdGRhdGU6bW9k
aWZ5ADIwMTYtMDQtMTlUMjE6MTE6MTctMDc6MDCaVznuAAAAKnRFWHRTaWduYXR1cmUAYzQyYjdk
MmQ1NjRhYWI1ODg4OTE5Nzk3MDNmMDJiNDVPEd+TAAAAQ3RFWHRTb2Z0d2FyZQBAKCMpSW1hZ2VN
YWdpY2sgNC4yLjggOTkvMDgvMDEgY3Jpc3R5QG15c3RpYy5lcy5kdXBvbnQuY29tkbohuAAAAABJ
RU5ErkJggg==" />
</svg>

After

Width:  |  Height:  |  Size: 1.5 KiB

View File

@ -0,0 +1,35 @@
<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook V4.1//EN" [
<!ENTITY index SYSTEM "index.sgml">
]>
<article>
<articleinfo>
<title>T</title>
<author>
<firstname>A</firstname> <surname>B</surname>
<affiliation>
<address><email>devnull@example.org</email></address>
</affiliation>
</author>
<pubdate>2016-02-11</pubdate>
<abstract><para>abstract</para> </abstract>
<revhistory>
<revision>
<revnumber>1.0</revnumber>
<date>2016-02-11</date>
<authorinitials>AB</authorinitials>
<revremark>Initial release.</revremark>
</revision>
</revhistory>
</articleinfo>
<sect1 id="intro"><title>Introduction</title>
<para>Text</para>
<indexterm><primary>Text</primary></indexterm>
<sect2 id="copyright"><title>More stuff</title>
<para>Text</para>
</sect2>
</sect1>
&index;
</article>

Binary file not shown.

After

Width:  |  Height:  |  Size: 167 B

View File

@ -0,0 +1,33 @@
<index id='doc-index'>
<!-- This file was produced by collateindex.pl. -->
<!-- Remove this comment if you edit this file by hand! -->
<!-- ULINK is abused here.
The URL attribute holds the URL that points from the index entry
back to the appropriate place in the output produced by the HTML
stylesheet. (It's much easier to calculate this URL in the first
pass.)
The Role attribute holds the ID (either real or manufactured) of
the corresponding INDEXTERM. This is used by the print backends
to produce page numbers.
The entries below are sorted and collated into the correct order.
Duplicates may be removed in the HTML backend, but in the print
backends, it is impossible to suppress duplicate pages or coalesce
sequences of pages into a range.
-->
<title>Index</title>
<indexdiv><title>T</title>
<indexentry>
<primaryie>Text,
<ulink url="t1.htm#INTRO" role="AEN22">Introduction</ulink>
</primaryie>
</indexentry>
</indexdiv>
</index>

View File

@ -0,0 +1,16 @@
<!doctype linuxdoc system>
<article>
<title>B
<author>A
<date>2016-02-11
<abstract> abstract </abstract>
<toc>
<sect>Introduction
<p>
<sect>Stuff.
<p>
Øh, Ýêåh, wë lóvèþ uß §ÓµË ISO-8859-1.
<p>Pilcrow homage: ¶
<sect>More-stuff.
<p>
</article>

View File

@ -1,7 +1,7 @@
<!doctype linuxdoc system>
<article>
<title>B
<author>A
<title>Linuxdoc Larger Document
<author>Another Author
<date>2016-02-11
<abstract> abstract </abstract>
<toc>

Binary file not shown.

Before

Width:  |  Height:  |  Size: 71 B

After

Width:  |  Height:  |  Size: 42 B

View File

@ -0,0 +1 @@
../../DocBookSGML-Larger/images/bullet.png

Before

Width:  |  Height:  |  Size: 71 B

After

Width:  |  Height:  |  Size: 42 B

Binary file not shown.

Before

Width:  |  Height:  |  Size: 71 B

After

Width:  |  Height:  |  Size: 42 B

View File

@ -0,0 +1 @@
../../DocBookSGML-Larger/images/bullet.png

Before

Width:  |  Height:  |  Size: 71 B

After

Width:  |  Height:  |  Size: 42 B

View File

@ -0,0 +1,3 @@
This is a deliberately weird file...without document signature and
with a completely unknown extension.

View File

@ -0,0 +1,138 @@
The Article Title
=================
Author's Name <authors@email.address>
v1.0, 2003-12
This is the optional preamble (an untitled section body). Useful for
writing simple sectionless documents consisting only of a preamble.
NOTE: The abstract, preface, appendix, bibliography, glossary and
index section titles are significant ('specialsections').
[abstract]
Example Abstract
----------------
The optional abstract (one or more paragraphs) goes here.
This document is an AsciiDoc article skeleton containing briefly
annotated element placeholders plus a couple of example index entries
and footnotes.
:numbered:
The First Section
-----------------
Article sections start at level 1 and can be nested up to four levels
deep.
footnote:[An example footnote.]
indexterm:[Example index entry]
And now for something completely different: ((monkeys)), lions and
tigers (Bengal and Siberian) using the alternative syntax index
entries.
(((Big cats,Lions)))
(((Big cats,Tigers,Bengal Tiger)))
(((Big cats,Tigers,Siberian Tiger)))
Note that multi-entry terms generate separate index entries.
Here are a couple of image examples: an image:images/smallnew.png[]
example inline image followed by an example block image:
.Tiger block image
image::images/tiger.png[Tiger image]
Followed by an example table:
.An example table
[width="60%",options="header"]
|==============================================
| Option | Description
| -a 'USER| GROUP' | Add 'USER' to 'GROUP'.
| -R 'GROUP' | Disables access to 'GROUP'.
|==============================================
.An example example
===============================================
Lorum ipum...
===============================================
[[X1]]
Sub-section with Anchor
~~~~~~~~~~~~~~~~~~~~~~~
Sub-section at level 2.
A Nested Sub-section
^^^^^^^^^^^^^^^^^^^^
Sub-section at level 3.
Yet another nested Sub-section
++++++++++++++++++++++++++++++
Sub-section at level 4.
This is the maximum sub-section depth supported by the distributed
AsciiDoc configuration.
footnote:[A second example footnote.]
The Second Section
------------------
Article sections are at level 1 and can contain sub-sections nested up
to four deep.
An example link to anchor at start of the <<X1,first sub-section>>.
indexterm:[Second example index entry]
An example link to a bibliography entry <<taoup>>.
:numbered!:
[appendix]
Example Appendix
----------------
AsciiDoc article appendices are just just article sections with
'specialsection' titles.
Appendix Sub-section
~~~~~~~~~~~~~~~~~~~~
Appendix sub-section at level 2.
[bibliography]
Example Bibliography
--------------------
The bibliography list is a style of AsciiDoc bulleted list.
[bibliography]
- [[[taoup]]] Eric Steven Raymond. 'The Art of Unix
Programming'. Addison-Wesley. ISBN 0-13-142901-9.
- [[[walsh-muellner]]] Norman Walsh & Leonard Muellner.
'DocBook - The Definitive Guide'. O'Reilly & Associates. 1999.
ISBN 1-56592-580-7.
[glossary]
Example Glossary
----------------
Glossaries are optional. Glossaries entries are an example of a style
of AsciiDoc labeled lists.
[glossary]
A glossary term::
The corresponding (indented) definition.
A second glossary term::
The corresponding (indented) definition.
ifdef::backend-docbook[]
[index]
Example Index
-------------
////////////////////////////////////////////////////////////////
The index is normally left completely empty, it's contents being
generated automatically by the DocBook toolchain.
////////////////////////////////////////////////////////////////
endif::backend-docbook[]

View File

@ -0,0 +1,31 @@
<?xml version="1.0" standalone="no"?>
<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<article>
<articleinfo>
<title>T</title>
<author><firstname>A</firstname><surname>B</surname></author>
<authorinitials>AB</authorinitials>
<revhistory> <revision>
<revnumber>v0.0</revnumber>
<date>2016-02-11</date>
<authorinitials>AB</authorinitials>
<revremark> Initial release. </revremark>
</revision> </revhistory>
<abstract> <para> abstract </para> </abstract>
</articleinfo>
<sect1 id="intro">
<title>Intro</title>
<para>Text</para>
<sect2>
<title>Intro</title>
<para>Text</para>
</sect2>
</sect1>
<!-- HERE IS THE PROBLEM!
You've got a ventricle in your article!
You should see a doctor about that.
-->
</ventricle>

View File

@ -1 +0,0 @@
No content.

View File

@ -1 +0,0 @@
No content.

View File

@ -1 +0,0 @@
No content.

View File

@ -1,5 +1,9 @@
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import unittest
import argparse
@ -79,18 +83,17 @@ class CascadingConfigBasicTest(CCTestTools):
ap.add_argument('--size', default=9, type=int)
c = Namespace(
tag='tag',
argparser=ap,
argv=''.split(),
env=dict(),
cfg='',
exp_config=Namespace(size=9),
exp_args=[],
)
tag='tag',
argparser=ap,
argv=''.split(),
env=dict(),
cfg='',
exp_config=Namespace(size=9),
exp_args=[],)
cc = CascadingConfig(c.tag, c.argparser, argv=c.argv, env=c.env)
config, args = cc.parse()
self.assertEquals(c.exp_config.size, config.size)
self.assertEqual(c.exp_config.size, config.size)
def test_cfg_is_read_passed_by_env(self):
ap = DefaultFreeArgumentParser()
@ -98,20 +101,19 @@ class CascadingConfigBasicTest(CCTestTools):
ap.add_argument('--size', default=9, type=int)
c = Namespace(
tag='tag',
argparser=ap,
argv=''.split(),
env=dict(),
cfg='[tag]\nsize = 8',
exp_config=Namespace(size=8),
exp_args=[],
)
tag='tag',
argparser=ap,
argv=''.split(),
env=dict(),
cfg='[tag]\nsize = 8',
exp_config=Namespace(size=8),
exp_args=[],)
self.writeconfig(c)
c.env.setdefault('TAG_CONFIGFILE', c.configfile)
cc = CascadingConfig(c.tag, c.argparser, argv=c.argv, env=c.env)
config, args = cc.parse()
self.assertEquals(c.exp_config.size, config.size)
self.assertEqual(c.exp_config.size, config.size)
def test_cfg_is_read_passed_by_argv(self):
ap = DefaultFreeArgumentParser()
@ -121,19 +123,18 @@ class CascadingConfigBasicTest(CCTestTools):
import logging
logging.getLogger().setLevel(logging.DEBUG)
c = Namespace(
tag='tag',
argparser=ap,
argv=''.split(),
env=dict(),
cfg='[tag]\nsize = 8',
exp_config=Namespace(size=8),
exp_args=[],
)
tag='tag',
argparser=ap,
argv=''.split(),
env=dict(),
cfg='[tag]\nsize = 8',
exp_config=Namespace(size=8),
exp_args=[],)
self.writeconfig(c)
c.argv.extend(['--configfile', c.configfile])
cc = CascadingConfig(c.tag, c.argparser, argv=c.argv, env=c.env)
config, args = cc.parse()
self.assertEquals(c.exp_config.size, config.size)
self.assertEqual(c.exp_config.size, config.size)
def test_precedence_env_cfg(self):
ap = DefaultFreeArgumentParser()
@ -143,19 +144,18 @@ class CascadingConfigBasicTest(CCTestTools):
import logging
logging.getLogger().setLevel(logging.DEBUG)
c = Namespace(
tag='tag',
argparser=ap,
argv=''.split(),
env=dict(TAG_SIZE=7, ),
cfg='[tag]\nsize = 8',
exp_config=Namespace(size=7),
exp_args=[],
)
tag='tag',
argparser=ap,
argv=''.split(),
env=dict(TAG_SIZE=7, ),
cfg='[tag]\nsize = 8',
exp_config=Namespace(size=7),
exp_args=[],)
self.writeconfig(c)
c.argv.extend(['--configfile', c.configfile])
cc = CascadingConfig(c.tag, c.argparser, argv=c.argv, env=c.env)
config, args = cc.parse()
self.assertEquals(c.exp_config.size, config.size)
self.assertEqual(c.exp_config.size, config.size)
def test_precedence_argv_env_cfg(self):
ap = DefaultFreeArgumentParser()
@ -165,54 +165,51 @@ class CascadingConfigBasicTest(CCTestTools):
import logging
logging.getLogger().setLevel(logging.DEBUG)
c = Namespace(
tag='tag',
argparser=ap,
argv='--size 6'.split(),
env=dict(TAG_SIZE=7, ),
cfg='[tag]\nsize = 8',
exp_config=Namespace(size=6),
exp_args=[],
)
tag='tag',
argparser=ap,
argv='--size 6'.split(),
env=dict(TAG_SIZE=7, ),
cfg='[tag]\nsize = 8',
exp_config=Namespace(size=6),
exp_args=[],)
self.writeconfig(c)
c.argv.extend(['--configfile', c.configfile])
cc = CascadingConfig(c.tag, c.argparser, argv=c.argv, env=c.env)
config, args = cc.parse()
self.assertEquals(c.exp_config.size, config.size)
self.assertEqual(c.exp_config.size, config.size)
def test_basic_emptydefault(self):
ap = DefaultFreeArgumentParser()
ap.add_argument('--source', default='', action='append', type=str)
c = Namespace(
tag='tag',
argparser=ap,
argv=''.split(),
env=dict(),
cfg='',
exp_config=Namespace(source=''),
exp_args=[],
)
tag='tag',
argparser=ap,
argv=''.split(),
env=dict(),
cfg='',
exp_config=Namespace(source=''),
exp_args=[],)
cc = CascadingConfig(c.tag, c.argparser, argv=c.argv, env=c.env)
config, args = cc.parse()
self.assertEquals(c.exp_config, config)
self.assertEquals(c.exp_args, args)
self.assertEqual(c.exp_config, config)
self.assertEqual(c.exp_args, args)
def test_basic_argv(self):
ap = DefaultFreeArgumentParser()
ap.add_argument('--source', default='', action='append', type=str)
c = Namespace(
tag='tag',
argparser=ap,
argv='--source /some/path'.split(),
env=dict(),
cfg='',
exp_config=Namespace(source=['/some/path']),
exp_args=[],
)
tag='tag',
argparser=ap,
argv='--source /some/path'.split(),
env=dict(),
cfg='',
exp_config=Namespace(source=['/some/path']),
exp_args=[],)
cc = CascadingConfig(c.tag, c.argparser, argv=c.argv, env=c.env)
config, args = cc.parse()
self.assertEquals(c.exp_config, config)
self.assertEquals(c.exp_args, args)
self.assertEqual(c.exp_config, config)
self.assertEqual(c.exp_args, args)
# -- end of file

View File

@ -1,5 +1,10 @@
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import unittest
from argparse import Namespace
@ -17,7 +22,14 @@ class TestConfigWorks(unittest.TestCase):
def test_singleoptarg(self):
config, args = collectconfiguration('tag', ['--pubdir', '.'])
self.assertEquals(config.pubdir, '.')
self.assertEqual(config.pubdir, '.')
def test_nonexistent_directory(self):
argv = ['--pubdir', '/path/to/nonexistent/directory']
with self.assertRaises(ValueError) as ecm:
config, args = collectconfiguration('tag', argv)
e = ecm.exception
self.assertTrue("/path/to/nonexistent/directory" in e.args[0])
#
# -- end of file

View File

@ -1,12 +1,27 @@
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import io
import os
import uuid
import errno
import codecs
import random
import unittest
from cStringIO import StringIO
from tempfile import NamedTemporaryFile as ntf
from argparse import Namespace
from tldptesttools import TestInventoryBase
from tldptesttools import TestInventoryBase, TestToolsFilesystem
from tldp.typeguesser import knowndoctypes
from tldp.inventory import stypes, status_types
from tldp.sources import SourceDocument
from tldp.outputs import OutputDirectory
from tldp import VERSION
# -- Test Data
import example
@ -15,10 +30,12 @@ import example
import tldp.config
import tldp.driver
# -- shorthand
opj = os.path.join
opd = os.path.dirname
opa = os.path.abspath
extras = opa(opj(opd(opd(__file__)), 'extras'))
sampledocs = opj(opd(__file__), 'sample-documents')
widths = Namespace(status=20, stem=50)
@ -27,38 +44,268 @@ class TestDriverDetail(TestInventoryBase):
def test_stale_detail_verbosity(self):
c = self.config
self.add_stale('Frobnitz-HOWTO', example.ex_docbook4xml)
self.add_stale('Stale-HOWTO', example.ex_docbook4xml)
c.verbose = True,
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
docs = inv.all.values()
stdout = StringIO()
stdout = io.StringIO()
tldp.driver.detail(c, docs, file=stdout)
stdout.seek(0)
self.assertTrue('newer source' in stdout.read())
self.assertTrue('changed source' in stdout.read())
def test_broken_detail_verbosity(self):
c = self.config
self.add_broken('Frobnitz-HOWTO', example.ex_docbook4xml)
self.add_broken('Broken-HOWTO', example.ex_docbook4xml)
c.verbose = True,
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
docs = inv.all.values()
stdout = StringIO()
stdout = io.StringIO()
tldp.driver.detail(c, docs, file=stdout)
stdout.seek(0)
self.assertTrue('missing output' in stdout.read())
def test_orphan_verbosity(self):
c = self.config
self.add_orphan('Orphan-HOWTO', example.ex_docbook4xml)
c.verbose = True,
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
docs = inv.all.values()
stdout = io.StringIO()
tldp.driver.detail(c, docs, file=stdout)
stdout.seek(0)
self.assertTrue('missing source' in stdout.read())
def test_run_detail(self):
self.add_published('Published-HOWTO', example.ex_linuxdoc)
self.add_new('New-HOWTO', example.ex_linuxdoc)
self.add_stale('Stale-HOWTO', example.ex_linuxdoc)
self.add_orphan('Orphan-HOWTO', example.ex_linuxdoc)
self.add_broken('Broken-HOWTO', example.ex_linuxdoc)
argv = self.argv
argv.append('--detail')
exitcode = tldp.driver.run(argv)
self.assertEqual(exitcode, os.EX_OK)
class TestDriverShowDoctypes(TestToolsFilesystem):
def test_show_doctypes(self):
tf = ntf(dir=self.tempdir, prefix='doctypes-', delete=False)
tf.close()
with codecs.open(tf.name, 'w', encoding='utf-8') as f:
result = tldp.driver.show_doctypes(Namespace(), file=f)
self.assertEqual(result, os.EX_OK)
with codecs.open(f.name, encoding='utf-8') as x:
stdout = x.read()
for doctype in knowndoctypes:
self.assertTrue(doctype.formatname in stdout)
def test_show_doctypes_extraargs(self):
result = tldp.driver.show_doctypes(Namespace(), 'bogus')
self.assertTrue('Extra arguments' in result)
def test_run_doctypes(self):
exitcode = tldp.driver.run(['--doctypes'])
self.assertEqual(exitcode, os.EX_OK)
class TestDriverShowStatustypes(TestToolsFilesystem):
def test_show_statustypes(self):
stdout = io.StringIO()
result = tldp.driver.show_statustypes(Namespace(), file=stdout)
self.assertEqual(result, os.EX_OK)
stdout.seek(0)
data = stdout.read()
for status in status_types:
self.assertTrue(stypes[status] in data)
def test_show_statustypes_extraargs(self):
result = tldp.driver.show_statustypes(Namespace(), 'bogus')
self.assertTrue('Extra arguments' in result)
def test_run_statustypes(self):
exitcode = tldp.driver.run(['--statustypes'])
self.assertEqual(exitcode, os.EX_OK)
class TestDriverShowVersion(unittest.TestCase):
def test_show_version(self):
stdout = io.StringIO()
result = tldp.driver.show_version(Namespace(), file=stdout)
self.assertEqual(result, os.EX_OK)
stdout.seek(0)
data = stdout.read().strip()
for status in status_types:
self.assertEqual(VERSION, data)
def test_run_statustypes(self):
exitcode = tldp.driver.run(['--version'])
self.assertEqual(exitcode, os.EX_OK)
class TestDriverSummary(TestInventoryBase):
def test_summary(self):
def test_run_summary(self):
self.add_published('Published-HOWTO', example.ex_linuxdoc)
self.add_new('New-HOWTO', example.ex_linuxdoc)
self.add_stale('Stale-HOWTO', example.ex_linuxdoc)
self.add_orphan('Orphan-HOWTO', example.ex_linuxdoc)
self.add_broken('Broken-HOWTO', example.ex_linuxdoc)
argv = self.argv
argv.append('--summary')
exitcode = tldp.driver.run(argv)
self.assertEqual(exitcode, os.EX_OK)
def test_summary_extraargs(self):
result = tldp.driver.summary(Namespace(), 'bogus')
self.assertTrue('Extra arguments' in result)
def test_summary_pubdir(self):
self.config.pubdir = None
result = tldp.driver.summary(self.config)
self.assertTrue('Option --pubdir' in result)
def test_summary_sourcedir(self):
self.config.sourcedir = None
result = tldp.driver.summary(self.config)
self.assertTrue('Option --sourcedir' in result)
def publishDocumentsWithLongNames(self, count):
names = list()
for _ in range(count):
x = str(uuid.uuid4())
names.append(x)
self.add_published(x, random.choice(example.sources))
return names
def test_summary_longnames(self):
c = self.config
self.add_new('Frobnitz-DocBook-XML-4-HOWTO', example.ex_docbook4xml)
stdout = StringIO()
tldp.driver.summary(c, None, file=stdout)
names = self.publishDocumentsWithLongNames(5)
stdout = io.StringIO()
result = tldp.driver.summary(c, file=stdout)
self.assertEqual(result, os.EX_OK)
stdout.seek(0)
parts = stdout.read().split()
idx = parts.index('new')
self.assertEqual(['new', '1'], parts[idx:idx+2])
data = stdout.read()
self.assertTrue('and 4 more' in data)
c.verbose = True
stdout = io.StringIO()
result = tldp.driver.summary(c, file=stdout)
self.assertEqual(result, os.EX_OK)
stdout.seek(0)
data = stdout.read()
for name in names:
self.assertTrue(name in data)
def publishDocumentsWithShortNames(self, count):
names = list()
for _ in range(count):
x = hex(random.randint(0, 2**32))
names.append(x)
self.add_published(x, random.choice(example.sources))
return names
def test_summary_short(self):
c = self.config
names = self.publishDocumentsWithShortNames(20)
stdout = io.StringIO()
result = tldp.driver.summary(c, file=stdout)
self.assertEqual(result, os.EX_OK)
stdout.seek(0)
data = stdout.read()
self.assertTrue('and 16 more' in data)
c.verbose = True
stdout = io.StringIO()
result = tldp.driver.summary(c, file=stdout)
self.assertEqual(result, os.EX_OK)
stdout.seek(0)
data = stdout.read()
for name in names:
self.assertTrue(name in data)
class TestcreateBuildDirectory(TestToolsFilesystem):
def test_createBuildDirectory(self):
d = os.path.join(self.tempdir, 'child', 'grandchild')
ready, error = tldp.driver.createBuildDirectory(d)
self.assertFalse(ready)
self.assertEqual(error, errno.ENOENT)
class Testbuilddir_setup(TestToolsFilesystem):
def test_builddir_setup_default(self):
config = Namespace()
_, config.pubdir = self.adddir('pubdir')
config.builddir = None
ready, error = tldp.driver.builddir_setup(config)
self.assertTrue(ready)
def test_builddir_setup_specified(self):
config = Namespace()
_, config.pubdir = self.adddir('pubdir')
_, config.builddir = self.adddir('builddir')
ready, error = tldp.driver.builddir_setup(config)
self.assertTrue(ready)
class TestremoveUnknownDoctypes(TestToolsFilesystem):
def test_removeUnknownDoctypes(self):
docs = list()
docs.append(SourceDocument(opj(sampledocs, 'Unknown-Doctype.xqf')))
docs.append(SourceDocument(opj(sampledocs, 'linuxdoc-simple.sgml')))
result = tldp.driver.removeUnknownDoctypes(docs)
self.assertEqual(1, len(result))
class Test_prepare_docs_script_mode(TestToolsFilesystem):
def test_prepare_docs_script_mode_basic(self):
config = Namespace(pubdir=self.tempdir)
doc = SourceDocument(opj(sampledocs, 'linuxdoc-simple.sgml'))
self.assertIsNone(doc.working)
tldp.driver.prepare_docs_script_mode(config, [doc])
self.assertIsInstance(doc.working, OutputDirectory)
def test_prepare_docs_script_mode_existing_output(self):
config = Namespace(pubdir=self.tempdir)
doc = SourceDocument(opj(sampledocs, 'linuxdoc-simple.sgml'))
doc.output = OutputDirectory.fromsource(config.pubdir, doc)
self.assertIsNone(doc.working)
tldp.driver.prepare_docs_script_mode(config, [doc])
self.assertIs(doc.working, doc.output)
class Test_prepare_docs_build_mode(TestInventoryBase):
def test_prepare_docs_build_mode(self):
c = self.config
doc = SourceDocument(opj(sampledocs, 'linuxdoc-simple.sgml'))
self.assertIsNone(doc.working)
tldp.driver.prepare_docs_build_mode(c, [doc])
self.assertIsInstance(doc.working, OutputDirectory)
def test_prepare_docs_build_mode_nobuilddir(self):
c = self.config
os.rmdir(c.builddir)
doc = SourceDocument(opj(sampledocs, 'linuxdoc-simple.sgml'))
ready, error = tldp.driver.prepare_docs_build_mode(c, [doc])
self.assertFalse(ready)
class Test_post_publish_cleanup(TestInventoryBase):
def test_post_publish_cleanup_enotempty(self):
c = self.config
doc = SourceDocument(opj(sampledocs, 'linuxdoc-simple.sgml'))
tldp.driver.prepare_docs_build_mode(c, [doc])
with open(opj(doc.dtworkingdir, 'annoyance-file.txt'), 'w'):
pass
tldp.driver.post_publish_cleanup([doc.dtworkingdir])
self.assertTrue(os.path.isdir(doc.dtworkingdir))
class TestDriverRun(TestInventoryBase):
@ -71,19 +318,30 @@ class TestDriverRun(TestInventoryBase):
self.add_stale('Stale-HOWTO', ex)
self.add_orphan('Orphan-HOWTO', ex)
self.add_broken('Broken-HOWTO', ex)
argv = ['--pubdir', c.pubdir, '--sourcedir', c.sourcedir[0]]
fullpath = opj(self.tempdir, 'sources', 'New-HOWTO.sgml')
argv.extend(['--build', 'stale', 'Orphan-HOWTO', fullpath])
tldp.driver.run(argv)
argv = self.argv
argv.extend(['--publish', 'stale', 'Orphan-HOWTO', fullpath])
exitcode = tldp.driver.run(argv)
self.assertEqual(exitcode, os.EX_OK)
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
self.assertEquals(4, len(inv.published.keys()))
self.assertEquals(1, len(inv.broken.keys()))
self.assertEqual(4, len(inv.published.keys()))
self.assertEqual(1, len(inv.broken.keys()))
def test_run_no_work(self):
self.add_published('Published-HOWTO', example.ex_linuxdoc)
exitcode = tldp.driver.run(self.argv)
# -- improvement: check for 'No work to do.' from logger
self.assertEqual(exitcode, os.EX_OK)
def test_run_loglevel_resetting(self):
'''just exercise the loglevel settings'''
argv = ['--doctypes', '--loglevel', 'debug']
tldp.driver.run(argv)
def test_run_extra_args(self):
c = self.config
self.add_new('New-HOWTO', example.ex_linuxdoc)
argv = ['--pubdir', c.pubdir, '--sourcedir', c.sourcedir[0]]
fullpath = opj(self.tempdir, 'sources', 'New-HOWTO.sgml')
argv = self.argv
argv.extend(['--build', 'stale', 'Orphan-HOWTO', fullpath, 'extra'])
val = tldp.driver.run(argv)
self.assertTrue('Unknown arguments' in val)
@ -92,26 +350,33 @@ class TestDriverRun(TestInventoryBase):
c = self.config
ex = example.ex_linuxdoc
self.add_new('New-HOWTO', ex)
argv = ['--pubdir', c.pubdir, '--sourcedir', c.sourcedir[0]]
tldp.driver.run(argv)
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
self.assertEquals(1, len(inv.published.keys()))
tldp.driver.run(self.argv)
docbuilddir = opj(c.builddir, ex.doctype.__name__)
inv = tldp.inventory.Inventory(docbuilddir, c.sourcedir)
self.assertEqual(1, len(inv.published.keys()))
def test_run_oops_no_sourcedir(self):
c = self.config
argv = ['--pubdir', c.pubdir]
ex = example.ex_linuxdoc
self.add_new('New-HOWTO', ex)
argv = ['--pubdir', c.pubdir]
exit = tldp.driver.run(argv)
self.assertTrue('required for inventory' in exit)
exitcode = tldp.driver.run(argv)
self.assertTrue('required for inventory' in exitcode)
def test_run_oops_no_pubdir(self):
c = self.config
ex = example.ex_linuxdoc
self.add_new('New-HOWTO', ex)
argv = ['--sourcedir', c.sourcedir[0]]
exit = tldp.driver.run(argv)
self.assertTrue('required for inventory' in exit)
self.add_new('New-HOWTO', example.ex_linuxdoc)
exitcode = tldp.driver.run(argv)
self.assertTrue('required for inventory' in exitcode)
def test_run_build_no_pubdir(self):
c = self.config
argv = ['--sourcedir', c.sourcedir[0]]
fname = opj(sampledocs, 'linuxdoc-simple.sgml')
argv.append(fname)
exitcode = tldp.driver.run(argv)
self.assertTrue('to --build' in exitcode)
class TestDriverProcessSkips(TestInventoryBase):
@ -127,8 +392,8 @@ class TestDriverProcessSkips(TestInventoryBase):
inc, exc = tldp.driver.processSkips(c, docs)
self.assertTrue(1, len(exc))
excluded = exc.pop()
self.assertEquals(excluded.stem, 'Stale-HOWTO')
self.assertEquals(len(inc) + 1, len(inv.all.keys()))
self.assertEqual(excluded.stem, 'Stale-HOWTO')
self.assertEqual(len(inc) + 1, len(inv.all.keys()))
def test_skipDocuments_stem(self):
c = self.config
@ -141,8 +406,8 @@ class TestDriverProcessSkips(TestInventoryBase):
inc, exc = tldp.driver.processSkips(c, docs)
self.assertTrue(1, len(exc))
excluded = exc.pop()
self.assertEquals(excluded.stem, 'Published-HOWTO')
self.assertEquals(len(inc) + 1, len(inv.all.keys()))
self.assertEqual(excluded.stem, 'Published-HOWTO')
self.assertEqual(len(inc) + 1, len(inv.all.keys()))
def test_skipDocuments_doctype(self):
c = self.config
@ -154,82 +419,55 @@ class TestDriverProcessSkips(TestInventoryBase):
inc, exc = tldp.driver.processSkips(c, docs)
self.assertTrue(1, len(exc))
excluded = exc.pop()
self.assertEquals(excluded.stem, 'Docbook4XML-HOWTO')
self.assertEquals(len(inc) + 1, len(inv.all.keys()))
self.assertEqual(excluded.stem, 'Docbook4XML-HOWTO')
self.assertEqual(len(inc) + 1, len(inv.all.keys()))
@unittest.skip("Except when you want to spend time....")
class TestDriverBuild(TestInventoryBase):
class TestDriverScript(TestInventoryBase):
def test_build_linuxdoc(self):
def test_script(self):
c = self.config
c.build = True
self.add_new('Frobnitz-Linuxdoc-HOWTO', example.ex_linuxdoc)
c.script = True
stdout = io.StringIO()
self.add_published('Published-HOWTO', example.ex_linuxdoc)
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
self.assertEquals(1, len(inv.all.keys()))
docs = inv.all.values()
c.skip = []
tldp.driver.build(c, docs)
doc = docs.pop(0)
self.assertTrue(doc.output.iscomplete)
tldp.driver.script(c, inv.all.values(), file=stdout)
stdout.seek(0)
data = stdout.read()
self.assertTrue('Published-HOWTO' in data)
def test_build_docbooksgml(self):
def test_script_no_pubdir(self):
c = self.config
c.build = True
self.add_new('Frobnitz-DocBook-SGML-HOWTO', example.ex_docbooksgml)
c.docbooksgml_collateindex = opj(extras, 'collateindex.pl')
c.docbooksgml_ldpdsl = opj(extras, 'dsssl', 'ldp.dsl')
c.script = True
stdout = io.StringIO()
self.add_published('New-HOWTO', example.ex_linuxdoc)
c.pubdir = None
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
self.assertEquals(1, len(inv.all.keys()))
docs = inv.all.values()
tldp.driver.build(c, docs)
doc = docs.pop(0)
self.assertTrue(doc.output.iscomplete)
tldp.driver.script(c, inv.all.values(), file=stdout)
stdout.seek(0)
data = stdout.read()
self.assertTrue('New-HOWTO' in data)
def add_docbook4xml_xsl_to_config(self):
c = self.config
c.docbook4xml_xslprint = opj(extras, 'xsl', 'tldp-print.xsl')
c.docbook4xml_xslsingle = opj(extras, 'xsl', 'tldp-one-page.xsl')
c.docbook4xml_xslchunk = opj(extras, 'xsl', 'tldp-chapters.xsl')
def test_run_script(self):
self.add_published('Published-HOWTO', example.ex_linuxdoc)
self.add_new('New-HOWTO', example.ex_linuxdoc)
self.add_stale('Stale-HOWTO', example.ex_linuxdoc)
self.add_orphan('Orphan-HOWTO', example.ex_linuxdoc)
self.add_broken('Broken-HOWTO', example.ex_linuxdoc)
argv = self.argv
argv.append('--script')
exitcode = tldp.driver.run(argv)
self.assertEqual(exitcode, os.EX_OK)
def test_build_docbook4xml(self):
self.add_docbook4xml_xsl_to_config()
def test_script_bad_invocation(self):
c = self.config
c.build = True
self.add_new('Frobnitz-DocBook-XML-4-HOWTO', example.ex_docbook4xml)
c.script = False
self.add_published('Published-HOWTO', example.ex_linuxdoc)
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
self.assertEquals(1, len(inv.all.keys()))
docs = inv.all.values()
tldp.driver.build(c, docs)
doc = docs.pop(0)
self.assertTrue(doc.output.iscomplete)
def test_build_one_broken(self):
self.add_docbook4xml_xsl_to_config()
c = self.config
c.build = True
self.add_new('Frobnitz-DocBook-XML-4-HOWTO', example.ex_docbook4xml)
# -- mangle the content of a valid DocBook XML file
borked = example.ex_docbook4xml.content[:-12]
self.add_new('Frobnitz-Borked-XML-4-HOWTO',
example.ex_docbook4xml, content=borked)
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
self.assertEquals(2, len(inv.all.keys()))
docs = inv.all.values()
result = tldp.driver.build(c, docs)
self.assertEquals(1, result)
def test_build_only_requested_stem(self):
c = self.config
ex = example.ex_linuxdoc
self.add_published('Published-HOWTO', ex)
self.add_new('New-HOWTO', ex)
argv = ['--pubdir', c.pubdir, '--sourcedir', c.sourcedir[0]]
argv.extend(['--build', 'Published-HOWTO'])
tldp.driver.run(argv)
inv = tldp.inventory.Inventory(c.pubdir, c.sourcedir)
self.assertEquals(1, len(inv.published.keys()))
self.assertEquals(1, len(inv.work.keys()))
with self.assertRaises(Exception) as ecm:
tldp.driver.script(c, inv.all.values())
e = ecm.exception
self.assertTrue("neither --build nor --script" in e.args[0])
#
# -- end of file

View File

@ -1,5 +1,10 @@
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import random
@ -41,57 +46,57 @@ class TestInventoryUsage(TestInventoryBase):
def test_detect_status_published(self):
c = self.config
ex = random.choice(example.sources)
self.add_published('Frobnitz-HOWTO', ex)
self.add_published('Frobnitz-Published-HOWTO', ex)
i = Inventory(c.pubdir, c.sourcedir)
self.assertEquals(0, len(i.stale))
self.assertEquals(1, len(i.published))
self.assertEquals(0, len(i.new))
self.assertEquals(0, len(i.orphan))
self.assertEquals(0, len(i.broken))
self.assertEqual(0, len(i.stale))
self.assertEqual(1, len(i.published))
self.assertEqual(0, len(i.new))
self.assertEqual(0, len(i.orphan))
self.assertEqual(0, len(i.broken))
def test_detect_status_new(self):
c = self.config
ex = random.choice(example.sources)
self.add_new('Frobnitz-HOWTO', ex)
self.add_new('Frobnitz-New-HOWTO', ex)
i = Inventory(c.pubdir, c.sourcedir)
self.assertEquals(0, len(i.stale))
self.assertEquals(0, len(i.published))
self.assertEquals(1, len(i.new))
self.assertEquals(0, len(i.orphan))
self.assertEquals(0, len(i.broken))
self.assertEqual(0, len(i.stale))
self.assertEqual(0, len(i.published))
self.assertEqual(1, len(i.new))
self.assertEqual(0, len(i.orphan))
self.assertEqual(0, len(i.broken))
def test_detect_status_orphan(self):
c = self.config
ex = random.choice(example.sources)
self.add_orphan('Frobnitz-HOWTO', ex)
self.add_orphan('Frobnitz-Orphan-HOWTO', ex)
i = Inventory(c.pubdir, c.sourcedir)
self.assertEquals(0, len(i.stale))
self.assertEquals(0, len(i.published))
self.assertEquals(0, len(i.new))
self.assertEquals(1, len(i.orphan))
self.assertEquals(0, len(i.broken))
self.assertEqual(0, len(i.stale))
self.assertEqual(0, len(i.published))
self.assertEqual(0, len(i.new))
self.assertEqual(1, len(i.orphan))
self.assertEqual(0, len(i.broken))
def test_detect_status_stale(self):
c = self.config
ex = random.choice(example.sources)
self.add_stale('Frobnitz-HOWTO', ex)
self.add_stale('Frobnitz-Stale-HOWTO', ex)
i = Inventory(c.pubdir, c.sourcedir)
self.assertEquals(1, len(i.stale))
self.assertEquals(1, len(i.published))
self.assertEquals(0, len(i.new))
self.assertEquals(0, len(i.orphan))
self.assertEquals(0, len(i.broken))
self.assertEqual(1, len(i.stale))
self.assertEqual(1, len(i.published))
self.assertEqual(0, len(i.new))
self.assertEqual(0, len(i.orphan))
self.assertEqual(0, len(i.broken))
def test_detect_status_broken(self):
c = self.config
ex = random.choice(example.sources)
self.add_broken('Frobnitz-HOWTO', ex)
self.add_broken('Frobnitz-Broken-HOWTO', ex)
i = Inventory(c.pubdir, c.sourcedir)
self.assertEquals(0, len(i.stale))
self.assertEquals(1, len(i.published))
self.assertEquals(0, len(i.new))
self.assertEquals(0, len(i.orphan))
self.assertEquals(1, len(i.broken))
self.assertEqual(0, len(i.stale))
self.assertEqual(1, len(i.published))
self.assertEqual(0, len(i.new))
self.assertEqual(0, len(i.orphan))
self.assertEqual(1, len(i.broken))
#
# -- end of file


@@ -1,5 +1,9 @@
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import os
import errno
@@ -20,6 +24,7 @@ class TestOutputNamingConvention(unittest.TestCase):
onc = OutputNamingConvention("/path/to/output/", "Stem")
self.assertTrue(onc.name_txt.endswith(".txt"))
self.assertTrue(onc.name_pdf.endswith(".pdf"))
self.assertTrue(onc.name_epub.endswith(".epub"))
self.assertTrue(onc.name_html.endswith(".html"))
self.assertTrue(onc.name_htmls.endswith("-single.html"))
self.assertTrue(onc.name_indexhtml.endswith("index.html"))
@@ -32,13 +37,13 @@ class TestOutputCollection(TestToolsFilesystem):
with self.assertRaises(IOError) as ecm:
OutputCollection(missing)
e = ecm.exception
self.assertEquals(errno.ENOENT, e.errno)
self.assertEqual(errno.ENOENT, e.errno)
def test_file_in_output_collection(self):
reldir, absdir = self.adddir('collection')
self.addfile('collection', __file__, stem='non-directory')
self.addfile('collection', __file__, stem='non-directory')
oc = OutputCollection(absdir)
self.assertEquals(0, len(oc))
self.assertEqual(0, len(oc))
def test_manyfiles(self):
reldir, absdir = self.adddir('manyfiles')
@@ -46,7 +51,7 @@ class TestOutputCollection(TestToolsFilesystem):
for x in range(count):
self.adddir('manyfiles/Document-Stem-' + str(x))
oc = OutputCollection(absdir)
self.assertEquals(count, len(oc))
self.assertEqual(count, len(oc))
class TestOutputDirectory(TestToolsFilesystem):
@@ -56,7 +61,7 @@ class TestOutputDirectory(TestToolsFilesystem):
with self.assertRaises(IOError) as ecm:
OutputDirectory(odoc)
e = ecm.exception
self.assertEquals(errno.ENOENT, e.errno)
self.assertEqual(errno.ENOENT, e.errno)
def test_iscomplete(self):
reldir, absdir = self.adddir('outputs/Frobnitz-HOWTO')
@@ -68,6 +73,7 @@ class TestOutputDirectory(TestToolsFilesystem):
with open(fname, 'w'):
pass
self.assertTrue(o.iscomplete)
self.assertTrue('Frobnitz' in str(o))
#
# -- end of file


@@ -1,17 +1,16 @@
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import os
import errno
import random
import unittest
from cStringIO import StringIO
try:
from types import SimpleNamespace
except ImportError:
from utils import SimpleNamespace
from argparse import Namespace
from io import StringIO
from tldptesttools import TestToolsFilesystem
@@ -21,41 +20,40 @@ import example
# -- SUT
from tldp.sources import SourceCollection, SourceDocument
from tldp.sources import scansourcedirs, sourcedoc_fromdir
from tldp.sources import arg_issourcedoc
sampledocs = os.path.join(os.path.dirname(__file__), 'sample-documents')
widths = SimpleNamespace(status=20, stem=50)
class TestFileSourceCollectionMultiDir(TestToolsFilesystem):
def test_multidir_finding_singlefiles(self):
ex = random.choice(example.sources)
doc0 = SimpleNamespace(reldir='LDP/howto', stem="A-Unique-Stem")
doc1 = SimpleNamespace(reldir='LDP/guide', stem="A-Different-Stem")
doc0 = Namespace(reldir='LDP/howto', stem="A-Unique-Stem")
doc1 = Namespace(reldir='LDP/guide', stem="A-Different-Stem")
documents = (doc0, doc1)
for d in documents:
d.reldir, d.absdir = self.adddir(d.reldir)
_, _ = self.addfile(d.reldir, ex.filename, stem=d.stem)
s = scansourcedirs([x.absdir for x in documents])
self.assertEquals(2, len(s))
self.assertEqual(2, len(s))
expected = set([x.stem for x in documents])
found = set(s.keys())
self.assertEquals(expected, found)
self.assertEqual(expected, found)
def test_multidir_finding_namecollision(self):
ex = random.choice(example.sources)
doc0 = SimpleNamespace(reldir='LDP/howto', stem="A-Non-Unique-Stem")
doc1 = SimpleNamespace(reldir='LDP/guide', stem="A-Non-Unique-Stem")
doc0 = Namespace(reldir='LDP/howto', stem="A-Non-Unique-Stem")
doc1 = Namespace(reldir='LDP/guide', stem="A-Non-Unique-Stem")
documents = (doc0, doc1)
for d in documents:
d.reldir, d.absdir = self.adddir(d.reldir)
_, _ = self.addfile(d.reldir, ex.filename, stem=d.stem)
s = scansourcedirs([x.absdir for x in documents])
self.assertEquals(1, len(s))
self.assertEqual(1, len(s))
expected = set([x.stem for x in documents])
found = set(s.keys())
self.assertEquals(expected, found)
self.assertEqual(expected, found)
class TestFileSourceCollectionOneDir(TestToolsFilesystem):
@@ -65,7 +63,7 @@ class TestFileSourceCollectionOneDir(TestToolsFilesystem):
reldir, absdir = self.adddir(maindir)
os.mkfifo(os.path.join(absdir, 'non-dir-non-file.xml'))
s = scansourcedirs([absdir])
self.assertEquals(0, len(s))
self.assertEqual(0, len(s))
def test_finding_singlefile(self):
ex = random.choice(example.sources)
@@ -73,7 +71,7 @@ class TestFileSourceCollectionOneDir(TestToolsFilesystem):
reldir, absdir = self.adddir(maindir)
_, _ = self.addfile(reldir, ex.filename)
s = scansourcedirs([absdir])
self.assertEquals(1, len(s))
self.assertEqual(1, len(s))
def test_skipping_misnamed_singlefile(self):
ex = random.choice(example.sources)
@@ -81,7 +79,7 @@ class TestFileSourceCollectionOneDir(TestToolsFilesystem):
reldir, absdir = self.adddir(maindir)
self.addfile(reldir, ex.filename, ext=".mis")
s = scansourcedirs([absdir])
self.assertEquals(1, len(s))
self.assertEqual(1, len(s))
def test_multiple_stems_of_different_extensions(self):
ex = random.choice(example.sources)
@@ -91,7 +89,7 @@ class TestFileSourceCollectionOneDir(TestToolsFilesystem):
self.addfile(reldir, ex.filename, stem=stem, ext=".xml")
self.addfile(reldir, ex.filename, stem=stem, ext=".md")
s = scansourcedirs([absdir])
self.assertEquals(1, len(s))
self.assertEqual(1, len(s))
class TestNullSourceCollection(TestToolsFilesystem):
@@ -121,34 +119,58 @@ class TestInvalidSourceCollection(TestToolsFilesystem):
def testEmptyDir(self):
s = scansourcedirs([self.tempdir])
self.assertEquals(0, len(s))
self.assertEqual(0, len(s))
class TestSourceDocument(unittest.TestCase):
class Test_sourcedoc_fromdir(unittest.TestCase):
def test_init(self):
for ex in example.sources:
fullpath = ex.filename
fn = os.path.basename(fullpath)
doc = SourceDocument(fullpath)
self.assertIsInstance(doc, SourceDocument)
self.assertTrue(fn in str(doc))
self.assertTrue(fn in doc.statinfo)
def test_sourcedoc_fromdir(self):
dirname = os.path.dirname(example.ex_linuxdoc_dir.filename)
doc = SourceDocument(dirname)
self.assertIsInstance(doc, SourceDocument)
def test_sourcedoc_fromdir_missingdir(self):
dirname = os.path.dirname('/frobnitz/path/to/extremely/unlikely/file')
self.assertIsNone(sourcedoc_fromdir(dirname))
def test_sourcedoc_fromdir_withdots(self):
dirname = os.path.dirname(example.ex_docbook4xml_dir.filename)
doc = sourcedoc_fromdir(dirname)
self.assertIsNotNone(doc)
class Test_arg_issourcedoc(unittest.TestCase):
def test_arg_issourcedoc_fromdir(self):
fname = example.ex_linuxdoc_dir.filename
dirname = os.path.dirname(fname)
self.assertEqual(fname, arg_issourcedoc(dirname))
class TestSourceDocument(TestToolsFilesystem):
def test_init(self):
for ex in example.sources:
fullpath = ex.filename
fn = os.path.relpath(fullpath, start=example.sampledocs)
doc = SourceDocument(fullpath)
self.assertIsInstance(doc, SourceDocument)
self.assertTrue(fn in str(doc))
self.assertTrue(fn in doc.md5sums)
def test_fromfifo_should_fail(self):
fifo = os.path.join(self.tempdir, 'fifofile')
os.mkfifo(fifo)
with self.assertRaises(ValueError) as ecm:
SourceDocument(fifo)
e = ecm.exception
self.assertTrue('not identifiable' in e.args[0])
def test_fromdir(self):
dirname = os.path.dirname(example.ex_linuxdoc_dir.filename)
doc = SourceDocument(dirname)
self.assertIsInstance(doc, SourceDocument)
def test_detail(self):
ex = example.ex_linuxdoc_dir
s = SourceDocument(ex.filename)
fout = StringIO()
widths = Namespace(status=20, doctype=20, stem=50)
s.detail(widths, False, file=fout)
fout.seek(0)
result = fout.read()
@@ -161,7 +183,7 @@ class TestSourceDocument(unittest.TestCase):
with self.assertRaises(Exception) as ecm:
SourceDocument(fullpath)
e = ecm.exception
self.assertTrue('multiple document choices' in e.message)
self.assertTrue('multiple document choices' in e.args[0])
class TestMissingSourceDocuments(TestToolsFilesystem):
@@ -171,13 +193,13 @@ class TestMissingSourceDocuments(TestToolsFilesystem):
with self.assertRaises(IOError) as ecm:
SourceDocument(missing)
e = ecm.exception
self.assertEquals(errno.ENOENT, e.errno)
self.assertEqual(errno.ENOENT, e.errno)
def test_init_wrongtype(self):
with self.assertRaises(ValueError) as ecm:
SourceDocument(self.tempdir)
e = ecm.exception
self.assertTrue('not identifiable' in e.message)
self.assertTrue('not identifiable' in e.args[0])
#
# -- end of file


@@ -1,7 +1,12 @@
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import os
import codecs
import unittest
from tempfile import NamedTemporaryFile as ntf
@@ -12,30 +17,41 @@ import example
from tldp.typeguesser import guess
from tldp.doctypes.common import SignatureChecker
# -- shorthand
opj = os.path.join
opd = os.path.dirname
opa = os.path.abspath
sampledocs = opj(opd(__file__), 'sample-documents')
def genericGuessTest(content, ext):
f = ntf(prefix='tldp-guesser-test-', suffix=ext, delete=False)
f.write(content)
f.flush()
dt = guess(f.name)
f.close()
os.unlink(f.name)
tf = ntf(prefix='tldp-guesser-test-', suffix=ext, delete=False)
tf.close()
with codecs.open(tf.name, 'w', encoding='utf-8') as f:
f.write(content)
dt = guess(tf.name)
os.unlink(tf.name)
return dt
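The helper above only routes content through tldp.typeguesser.guess(); called directly on a filename, guess() returns the matching doctype class or None when neither signature nor extension matches (the path below is illustrative):

from tldp.typeguesser import guess

doctype = guess('/path/to/Frobnitz-HOWTO.sgml')    # illustrative path
if doctype is None:
    print('no doctype could be inferred from signature or extension')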
class TestDoctypes(unittest.TestCase):
def testISO_8859_1(self):
dt = guess(opj(sampledocs, 'ISO-8859-1.sgml'))
self.assertIsNotNone(dt)
def testDetectionBySignature(self):
for ex in example.sources:
if isinstance(ex.type, SignatureChecker):
if isinstance(ex.doctype, SignatureChecker):
dt = genericGuessTest(ex.content, ex['ext'])
self.assertEqual(ex.type, dt)
self.assertEqual(ex.doctype, dt)
def testDetectionByExtension(self):
for ex in example.sources:
if not isinstance(ex.type, SignatureChecker):
if not isinstance(ex.doctype, SignatureChecker):
dt = genericGuessTest(ex.content, ex.ext)
self.assertEqual(ex.type, dt)
self.assertEqual(ex.doctype, dt)
def testDetectionBogusExtension(self):
dt = genericGuessTest('franks-cheese-shop', '.wmix')
@@ -52,13 +68,18 @@ class TestDoctypes(unittest.TestCase):
dt = genericGuessTest('<valid class="bogus">XML</valid>', '.xml')
self.assertIsNone(dt)
def testGuessSingleMatchAsciidoc(self):
ex = example.ex_asciidoc
dt = genericGuessTest(ex.content, '.txt')
self.assertEqual(ex.doctype, dt)
def testGuessTooManyMatches(self):
a = example.ex_docbook4xml.content
b = example.ex_docbook5xml.content
four, fourdt = a + b, example.ex_docbook4xml.type
four, fourdt = a + b, example.ex_docbook4xml.doctype
dt = genericGuessTest(four, '.xml')
self.assertIs(dt, fourdt)
five, fivedt = b + a, example.ex_docbook5xml.type
five, fivedt = b + a, example.ex_docbook5xml.doctype
dt = genericGuessTest(five, '.xml')
self.assertIs(dt, fivedt)


@@ -1,20 +1,29 @@
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import os
import stat
import uuid
import errno
import posix
import unittest
from tempfile import mkdtemp
from tempfile import NamedTemporaryFile as ntf
from tldptesttools import TestToolsFilesystem
# -- SUT
from tldp.utils import makefh, which, execute
from tldp.utils import which, execute
from tldp.utils import statfile, statfiles, stem_and_ext
from tldp.utils import arg_isexecutable, isexecutable
from tldp.utils import arg_isreadablefile, isreadablefile
from tldp.utils import arg_isdirectory, arg_isloglevel
from tldp.utils import arg_isstr
from tldp.utils import swapdirs
class Test_isexecutable_and_friends(unittest.TestCase):
@@ -39,23 +48,40 @@ class Test_isreadablefile_and_friends(unittest.TestCase):
def test_isreadablefile(self):
f = ntf(prefix='readable-file')
self.assertTrue(isreadablefile(f.name))
mode = os.stat(f.name).st_mode
os.chmod(f.name, 0)
self.assertFalse(isreadablefile(f.name))
if 0 == os.getuid():
self.assertTrue(isreadablefile(f.name))
else:
self.assertFalse(isreadablefile(f.name))
os.chmod(f.name, mode)
def test_arg_isreadablefile(self):
f = ntf(prefix='readable-file')
self.assertEquals(f.name, arg_isreadablefile(f.name))
self.assertEqual(f.name, arg_isreadablefile(f.name))
mode = os.stat(f.name).st_mode
os.chmod(f.name, 0)
self.assertIsNone(arg_isreadablefile(f.name))
if 0 == os.getuid():
self.assertEqual(f.name, arg_isreadablefile(f.name))
else:
self.assertIsNone(arg_isreadablefile(f.name))
os.chmod(f.name, mode)
class Test_arg_isstr(unittest.TestCase):
def test_arg_isstr(self):
self.assertEqual('s', arg_isstr('s'))
self.assertEqual(None, arg_isstr(7))
class Test_arg_isloglevel(unittest.TestCase):
def test_arg_isloglevel_integer(self):
self.assertEquals(7, arg_isloglevel(7))
self.assertEquals(40, arg_isloglevel('frobnitz'))
self.assertEquals(20, arg_isloglevel('INFO'))
self.assertEquals(10, arg_isloglevel('DEBUG'))
self.assertEqual(7, arg_isloglevel(7))
self.assertEqual(40, arg_isloglevel('frobnitz'))
self.assertEqual(20, arg_isloglevel('INFO'))
self.assertEqual(10, arg_isloglevel('DEBUG'))
class Test_arg_isdirectory(TestToolsFilesystem):
@@ -73,6 +99,22 @@ class Test_execute(TestToolsFilesystem):
result = execute([exe], logdir=self.tempdir)
self.assertEqual(0, result)
def test_execute_stdout_to_devnull(self):
exe = which('cat')
cmd = [exe, '/etc/hosts']
devnull = open('/dev/null', 'w')
result = execute(cmd, stdout=devnull, logdir=self.tempdir)
devnull.close()
self.assertEqual(0, result)
def test_execute_stderr_to_devnull(self):
exe = which('cat')
cmd = [exe, '/etc/hosts']
devnull = open('/dev/null', 'w')
result = execute(cmd, stderr=devnull, logdir=self.tempdir)
devnull.close()
self.assertEqual(0, result)
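As exercised here, tldp.utils.execute() takes the command as an argv list, optionally redirected stdout/stderr file objects, and a logdir that must be an existing directory for its log files; it returns the child's exit code. A minimal sketch (the logdir is only an example):

from tldp.utils import which, execute

exe = which('true')                    # absolute path to an executable, or None
rc = execute([exe], logdir='/tmp')     # '/tmp' stands in for a real log directory
assert rc == 0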
def test_execute_returns_nonzero(self):
exe = which('false')
result = execute([exe], logdir=self.tempdir)
@@ -83,7 +125,7 @@ class Test_execute(TestToolsFilesystem):
with self.assertRaises(ValueError) as ecm:
execute([exe], logdir=None)
e = ecm.exception
self.assertTrue('logdir must be a directory' in e.message)
self.assertTrue('logdir must be a directory' in e.args[0])
def test_execute_exception_when_logdir_enoent(self):
exe = which('true')
@@ -98,7 +140,7 @@ class Test_which(unittest.TestCase):
def test_good_which_python(self):
python = which('python')
self.assertIsInstance(python, str)
self.assertIsNotNone(python)
self.assertTrue(os.path.isfile(python))
qualified_python = which(python)
self.assertEqual(python, qualified_python)
@@ -112,7 +154,8 @@ class Test_which(unittest.TestCase):
f.close()
notfound = which(f.name)
self.assertIsNone(notfound)
os.chmod(f.name, 0755)
mode = stat.S_IRWXU | stat.S_IRGRP | stat.S_IROTH
os.chmod(f.name, mode)
found = which(f.name)
self.assertEqual(f.name, found)
os.unlink(f.name)
@@ -125,7 +168,8 @@ class Test_statfiles(unittest.TestCase):
here = os.path.dirname(os.path.abspath(__file__))
statinfo = statfiles(here, relative=here)
self.assertIsInstance(statinfo, dict)
self.assertTrue(os.path.basename('sample-documents') in statinfo)
adoc = 'sample-documents/asciidoc-complete.txt'
self.assertTrue(adoc in statinfo)
def test_statfiles_dir_rel(self):
here = os.path.dirname(os.path.abspath(__file__))
@@ -155,7 +199,7 @@ class Test_statfiles(unittest.TestCase):
this = os.path.join(here, str(uuid.uuid4()))
statinfo = statfiles(this)
self.assertIsInstance(statinfo, dict)
self.assertEquals(0, len(statinfo))
self.assertEqual(0, len(statinfo))
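These tests suggest that statfiles() walks a directory and returns a dict keyed by path (relative to the relative= argument) with stat results as values; a short sketch under that assumption:

import os
from tldp.utils import statfiles

here = os.path.dirname(os.path.abspath(__file__))
statinfo = statfiles(here, relative=here)
for relpath, stbuf in sorted(statinfo.items()):
    print(relpath, stbuf.st_size)      # assumes values are os.stat() results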
class Test_statfile(TestToolsFilesystem):
@@ -168,6 +212,19 @@ class Test_statfile(TestToolsFilesystem):
f = ntf(dir=self.tempdir)
self.assertIsNone(statfile(f.name + '-ENOENT_TEST'))
def test_statfile_exception(self):
f = ntf(dir=self.tempdir)
omode = os.stat(self.tempdir).st_mode
os.chmod(self.tempdir, 0)
if 0 != os.getuid():
with self.assertRaises(Exception) as ecm:
statfile(f.name)
e = ecm.exception
self.assertIn(e.errno, (errno.EPERM, errno.EACCES))
os.chmod(self.tempdir, omode)
stbuf = statfile(f.name)
self.assertIsInstance(stbuf, posix.stat_result)
class Test_stem_and_ext(unittest.TestCase):
@@ -182,23 +239,37 @@ class Test_stem_and_ext(unittest.TestCase):
self.assertEqual(r0, r1)
class Test_att_statinfo(unittest.TestCase):
class Test_swapdirs(TestToolsFilesystem):
def test_max_mtime(self):
pass
def test_swapdirs_bogusarg(self):
with self.assertRaises(OSError) as ecm:
swapdirs('/path/to/frickin/impossible/dir', None)
e = ecm.exception
self.assertTrue(errno.ENOENT is e.errno)
def test_swapdirs_b_missing(self):
a = mkdtemp(dir=self.tempdir)
b = a + '-B'
self.assertFalse(os.path.exists(b))
swapdirs(a, b)
self.assertTrue(os.path.exists(b))
class Test_makefh(unittest.TestCase):
def test_makefh(self):
f = ntf(prefix='tldp-makefh-openfile-test-', delete=False)
# fprime = makefh(f.file)
# self.assertIs(f, fprime)
# del fprime
f.close()
fprime = makefh(f.name)
self.assertIs(f.name, fprime.name)
os.unlink(f.name)
def test_swapdirs_with_file(self):
a = mkdtemp(dir=self.tempdir)
afile = os.path.join(a, 'silly')
b = mkdtemp(dir=self.tempdir)
bfile = os.path.join(b, 'silly')
with open(afile, 'w'):
pass
self.assertTrue(os.path.exists(a))
self.assertTrue(os.path.exists(afile))
self.assertTrue(os.path.exists(b))
self.assertFalse(os.path.exists(bfile))
swapdirs(a, b)
self.assertTrue(os.path.exists(a))
self.assertFalse(os.path.exists(afile))
self.assertTrue(os.path.exists(b))
self.assertTrue(os.path.exists(bfile))
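Taken together, these tests imply that swapdirs(a, b) exchanges two directory trees, creating the second if it does not yet exist; a plausible, purely illustrative use is promoting a freshly built output tree into the published location:

from tldp.utils import swapdirs

# hypothetical paths: promote a new build; the old tree ends up under the
# build path for inspection or rollback
swapdirs('/path/to/builddir/Frobnitz-HOWTO', '/path/to/pubdir/Frobnitz-HOWTO')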
#
# -- end of file


@@ -1,16 +1,30 @@
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import os
import time
import codecs
import random
import shutil
import unittest
from tempfile import mkdtemp
from tempfile import NamedTemporaryFile as ntf
from tldp.config import collectconfiguration
import tldp.config
from tldp.outputs import OutputNamingConvention
from tldp.utils import writemd5sums, md5file
# -- short names
#
opa = os.path.abspath
opb = os.path.basename
opd = os.path.dirname
opj = os.path.join
extras = opa(opj(opd(opd(__file__)), 'extras'))
def stem_and_ext(name):
@@ -87,10 +101,11 @@ class CCTestTools(unittest.TestCase):
shutil.rmtree(self.tempdir)
def writeconfig(self, case):
f = ntf(prefix=case.tag, suffix='.cfg', dir=self.tempdir, delete=False)
f.write(case.cfg)
f.close()
case.configfile = f.name
tf = ntf(prefix=case.tag, suffix='.cfg', dir=self.tempdir, delete=False)
tf.close()
with codecs.open(tf.name, 'w', encoding='utf-8') as f:
f.write(case.cfg)
case.configfile = tf.name
class TestOutputDirSkeleton(OutputNamingConvention):
@@ -99,6 +114,9 @@ class TestOutputDirSkeleton(OutputNamingConvention):
if not os.path.isdir(self.dirname):
os.mkdir(self.dirname)
def create_md5sum_file(self, md5s):
writemd5sums(self.MD5SUMS, md5s)
def create_expected_docs(self):
for name in self.expected:
fname = getattr(self, name)
@@ -116,25 +134,51 @@ class TestSourceDocSkeleton(object):
self.dirname = dirname
if not os.path.isdir(self.dirname):
os.mkdir(self.dirname)
self.md5s = dict()
def copytree(self, source):
dst = opj(self.dirname, opb(source))
shutil.copytree(source, dst)
def create_stale(self, fname):
l = list(self.md5s[fname])
random.shuffle(l)
if l == self.md5s[fname]:
self.invalidate_checksum(fname)
self.md5s[fname] = ''.join(l)
@property
def md5sums(self):
return self.md5s
def addsourcefile(self, filename, content):
fname = os.path.join(self.dirname, filename)
if os.path.isfile(content):
shutil.copy(content, fname)
else:
with open(fname, 'w') as f:
with codecs.open(fname, 'w', encoding='utf-8') as f:
f.write(content)
relpath = os.path.relpath(fname, start=self.dirname)
self.md5s[relpath] = md5file(fname)
class TestInventoryBase(unittest.TestCase):
def setUp(self):
self.makeTempdir()
self.config, _ = collectconfiguration('ldptool', [])
tldp.config.DEFAULT_CONFIGFILE = None
self.config, _ = tldp.config.collectconfiguration('ldptool', [])
c = self.config
c.pubdir = os.path.join(self.tempdir, 'outputs')
c.builddir = os.path.join(self.tempdir, 'builddir')
c.sourcedir = os.path.join(self.tempdir, 'sources')
for d in (c.sourcedir, c.pubdir):
argv = list()
argv.extend(['--builddir', c.builddir])
argv.extend(['--pubdir', c.pubdir])
argv.extend(['--sourcedir', c.sourcedir])
self.argv = argv
# -- and make some directories
for d in (c.sourcedir, c.pubdir, c.builddir):
if not os.path.isdir(d):
os.mkdir(d)
c.sourcedir = [c.sourcedir]
@@ -150,20 +194,24 @@ class TestInventoryBase(unittest.TestCase):
def add_stale(self, stem, ex):
c = self.config
mysource = TestSourceDocSkeleton(c.sourcedir)
fname = stem + ex.ext
mysource.addsourcefile(fname, ex.filename)
mysource.create_stale(fname)
myoutput = TestOutputDirSkeleton(os.path.join(c.pubdir, stem), stem)
myoutput.mkdir()
myoutput.create_expected_docs()
time.sleep(0.001)
mysource = TestSourceDocSkeleton(c.sourcedir)
mysource.addsourcefile(stem + ex.ext, ex.filename)
myoutput.create_md5sum_file(mysource.md5sums)
def add_broken(self, stem, ex):
c = self.config
mysource = TestSourceDocSkeleton(c.sourcedir)
mysource.addsourcefile(stem + ex.ext, ex.filename)
fname = stem + ex.ext
mysource.addsourcefile(fname, ex.filename)
myoutput = TestOutputDirSkeleton(os.path.join(c.pubdir, stem), stem)
myoutput.mkdir()
myoutput.create_expected_docs()
myoutput.create_md5sum_file(mysource.md5sums)
prop = random.choice(myoutput.expected)
fname = getattr(myoutput, prop, None)
assert fname is not None
@@ -177,6 +225,14 @@ class TestInventoryBase(unittest.TestCase):
else:
mysource.addsourcefile(stem + ex.ext, ex.filename)
def add_unknown(self, stem, ext, content=None):
c = self.config
mysource = TestSourceDocSkeleton(c.sourcedir)
if content:
mysource.addsourcefile(stem + ext, content)
else:
mysource.addsourcefile(stem + ext, '')
def add_orphan(self, stem, ex):
c = self.config
myoutput = TestOutputDirSkeleton(os.path.join(c.pubdir, stem), stem)
@@ -190,6 +246,18 @@ class TestInventoryBase(unittest.TestCase):
myoutput = TestOutputDirSkeleton(os.path.join(c.pubdir, stem), stem)
myoutput.mkdir()
myoutput.create_expected_docs()
myoutput.create_md5sum_file(mysource.md5sums)
def add_docbooksgml_support_to_config(self):
c = self.config
c.docbooksgml_collateindex = opj(extras, 'collateindex.pl')
c.docbooksgml_ldpdsl = opj(extras, 'dsssl', 'ldp.dsl')
def add_docbook4xml_xsl_to_config(self):
c = self.config
c.docbook4xml_xslprint = opj(extras, 'xsl', 'tldp-print.xsl')
c.docbook4xml_xslsingle = opj(extras, 'xsl', 'tldp-one-page.xsl')
c.docbook4xml_xslchunk = opj(extras, 'xsl', 'tldp-chapters.xsl')
#
# -- end of file


@@ -1,16 +0,0 @@
from __future__ import absolute_import, division, print_function
class SimpleNamespace:
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
def __repr__(self):
keys = sorted(self.__dict__)
items = ("{}={!r}".format(k, self.__dict__[k]) for k in keys)
return "{}({})".format(type(self).__name__, ", ".join(items))
def __eq__(self, other):
return self.__dict__ == other.__dict__


@@ -1,4 +1,13 @@
import config
import outputs
import sources
import inventory
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import
from __future__ import unicode_literals
import tldp.config
import tldp.outputs
import tldp.sources
import tldp.inventory
VERSION = "0.7.15"


@@ -1,7 +1,10 @@
#! /usr/bin/python
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import os
import sys
@@ -17,7 +20,7 @@ CLISEP = CFGSEP = '-' # -- dash -
MULTIVALUESEP = ','
try:
from configparser import SafeConfigParser as ConfigParser
from configparser import ConfigParser as ConfigParser
except ImportError:
from ConfigParser import SafeConfigParser as ConfigParser


@@ -1,61 +1,155 @@
#! /usr/bin/python
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import os
import argparse
import logging
from tldp.utils import arg_isdirectory, arg_isloglevel, arg_isreadablefile
from tldp.utils import arg_isloglevel, arg_isreadablefile
from tldp.cascadingconfig import CascadingConfig, DefaultFreeArgumentParser
import tldp.typeguesser
logger = logging.getLogger(__name__)
DEFAULT_CONFIGFILE = '/etc/ldptool/ldptool.ini'
class DirectoriesExist(argparse._AppendAction):
def __call__(self, parser, namespace, values, option_string=None):
if not os.path.isdir(values):
message = "No such directory: %r for option %r, aborting..."
message = message % (values, option_string)
logger.critical(message)
raise ValueError(message)
items = getattr(namespace, self.dest, [])
items.append(values)
setattr(namespace, self.dest, items)
class DirectoryExists(argparse._StoreAction):
def __call__(self, parser, namespace, values, option_string=None):
if not os.path.isdir(values):
message = "No such directory: %r for option %r, aborting..."
message = message % (values, option_string)
logger.critical(message)
raise ValueError(message)
setattr(namespace, self.dest, values)
class StoreTrueOrNargBool(argparse._StoreAction):
_boolean_states = {'1': True, 'yes': True, 'true': True, 'on': True,
'0': False, 'no': False, 'false': False, 'off': False}
def __init__(self, *args, **kwargs):
super(argparse._StoreAction, self).__init__(*args, **kwargs)
def __call__(self, parser, namespace, values, option_string=None):
if values is None:
setattr(namespace, self.dest, True)
else:
boolval = self._boolean_states.get(values.lower(), None)
if boolval is None:
message = "Non-boolean value: %r for option %r, aborting..."
message = message % (values, option_string)
logger.critical(message)
raise ValueError(message)
else:
setattr(namespace, self.dest, boolval)
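A minimal sketch of how this action behaves once wired into a parser (the parser and option below are illustrative, not the real ldptool configuration):

import argparse

ap = argparse.ArgumentParser()
ap.add_argument('--verbose', action=StoreTrueOrNargBool, nargs='?', default=False)
assert ap.parse_args(['--verbose']).verbose is True           # bare flag
assert ap.parse_args(['--verbose', 'off']).verbose is False   # explicit boolean word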
def collectconfiguration(tag, argv):
ap = DefaultFreeArgumentParser()
'''main specification of command-line (and config file) shape'''
ap = DefaultFreeArgumentParser()
ap.add_argument('--sourcedir', '--source-dir', '--source-directory',
'-s',
default=[], action=DirectoriesExist,
help='a directory containing LDP source documents')
ap.add_argument('--pubdir', '--output', '--outputdir', '--outdir',
'-o',
default=None, action=DirectoryExists,
help='a directory containing LDP output documents')
ap.add_argument('--builddir', '--build-dir', '--build-directory',
'-d',
default=None, action=DirectoryExists,
help='a scratch directory used for building')
ap.add_argument('--configfile', '--config-file', '--cfg',
'-c',
default=DEFAULT_CONFIGFILE,
type=arg_isreadablefile,
help='a configuration file')
ap.add_argument('--loglevel',
default=logging.ERROR, type=arg_isloglevel,
help='set the loglevel')
ap.add_argument('--verbose',
action=StoreTrueOrNargBool, nargs='?', default=False,
help='more info in --list/--detail [%(default)s]')
ap.add_argument('--skip',
default=[], action='append', type=str,
help='skip this stem during processing')
ap.add_argument('--resources',
default=['images', 'resources'], action='append', type=str,
help='subdirs to copy during build [%(default)s]')
# -- and the distinct, mutually exclusive actions this script can perform
#
g = ap.add_mutually_exclusive_group()
g.add_argument('--publish',
'-p',
action='store_true', default=False,
help='build and publish LDP documentation [%(default)s]')
g.add_argument('--build',
'-b',
action='store_true', default=False,
help='build LDP documentation [%(default)s]')
g.add_argument('--script',
'-S',
action='store_true', default=False,
help='dump runnable script [%(default)s]')
g.add_argument('--detail', '--list',
'-l',
action='store_true', default=False,
help='list elements of LDP system [%(default)s]')
g.add_argument('--summary',
'-t',
action='store_true', default=False,
help='dump inventory summary report [%(default)s]')
ap.add_argument('--verbose',
action='store_true', default=False,
help='more info in --list/--detail [%(default)s]')
ap.add_argument('--loglevel',
default=logging.ERROR, type=arg_isloglevel,
help='set the loglevel')
ap.add_argument('--skip',
default=[], action='append', type=str,
help='skip this stem during processing')
ap.add_argument('--sourcedir', '--source-dir', '--source-directory',
'-s',
action='append', default='', type=arg_isdirectory,
help='a directory containing LDP source documents')
ap.add_argument('--pubdir', '--output', '--outputdir', '--outdir',
'-o',
default=None, type=arg_isdirectory,
help='a directory containing LDP output documents')
ap.add_argument('--configfile', '--config-file', '--cfg',
'-c',
default='/etc/ldptool/ldptool.ini',
type=arg_isreadablefile,
help='a configuration file')
g.add_argument('--doctypes', '--formats', '--format',
'--list-doctypes', '--list-formats',
'-T',
action='store_true', default=False,
help='show supported doctypes [%(default)s]')
g.add_argument('--statustypes', '--list-statustypes',
action='store_true', default=False,
help='show status types and classes [%(default)s]')
g.add_argument('--version',
'-V',
action='store_true', default=False,
help='print out the version number [%(default)s]')
# -- collect up the distributed configuration fragments
#


@@ -1,7 +1,11 @@
# from .text import Text
# from .rst import RestructuredText
# from .markdown import Markdown
from .linuxdoc import Linuxdoc
from .docbooksgml import DocbookSGML
from .docbook4xml import Docbook4XML
from .docbook5xml import Docbook5XML
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import
from tldp.doctypes.asciidoc import Asciidoc
from tldp.doctypes.linuxdoc import Linuxdoc
from tldp.doctypes.docbooksgml import DocbookSGML
from tldp.doctypes.docbook4xml import Docbook4XML
from tldp.doctypes.docbook5xml import Docbook5XML

View File

@@ -1,30 +1,53 @@
#! /usr/bin/python
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import logging
from tldp.doctypes.common import BaseDoctype
from tldp.utils import which
from tldp.utils import arg_isexecutable, isexecutable
from tldp.doctypes.common import depends
from tldp.doctypes.docbook4xml import Docbook4XML
logger = logging.getLogger(__name__)
class Asciidoc(BaseDoctype):
class Asciidoc(Docbook4XML):
formatname = 'AsciiDoc'
extensions = ['.txt']
signatures = []
tools = ['asciidoc', 'a2x']
def create_txt(self):
logger.info("Creating txt for %s", self.source.stem)
required = {'asciidoc_asciidoc': isexecutable,
'asciidoc_xmllint': isexecutable,
}
required.update(Docbook4XML.required)
def create_pdf(self):
logger.info("Creating PDF for %s", self.source.stem)
def make_docbook45(self, **kwargs):
s = '''"{config.asciidoc_asciidoc}" \\
--backend docbook45 \\
--out-file {output.validsource} \\
"{source.filename}"'''
return self.shellscript(s, **kwargs)
def create_html(self):
logger.info("Creating chunked HTML for %s", self.source.stem)
@depends(make_docbook45)
def make_validated_source(self, **kwargs):
s = '"{config.asciidoc_xmllint}" --noout --valid "{output.validsource}"'
return self.shellscript(s, **kwargs)
def create_htmls(self):
logger.info("Creating single page HTML for %s", self.source.stem)
@classmethod
def argparse(cls, p):
descrip = 'executables and data files for %s' % (cls.formatname,)
g = p.add_argument_group(title=cls.__name__, description=descrip)
g.add_argument('--asciidoc-asciidoc', type=arg_isexecutable,
default=which('asciidoc'),
help='full path to asciidoc [%(default)s]')
g.add_argument('--asciidoc-xmllint', type=arg_isexecutable,
default=which('xmllint'),
help='full path to xmllint [%(default)s]')
#
# -- end of file


@@ -1,18 +1,25 @@
#! /usr/bin/python
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import os
import sys
import stat
import time
import errno
import codecs
import shutil
import logging
import inspect
from tempfile import NamedTemporaryFile as ntf
from functools import wraps
import networkx as nx
from tldp.utils import execute, logtimings
from tldp.utils import execute, logtimings, writemd5sums
logger = logging.getLogger(__name__)
@@ -23,19 +30,16 @@ set -o pipefail
'''
postamble = '''
# -- end of file
'''
# -- end of file'''
def depends(graph, *predecessors):
def depends(*predecessors):
'''decorator to be used for constructing build order graph'''
def anon(f):
for dep in predecessors:
graph.add_edge(dep.__name__, f.__name__)
@wraps(f)
def method(self, *args, **kwargs):
return f(self, *args, **kwargs)
method.depends = [x.__name__ for x in predecessors]
return method
return anon
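A small sketch of the intended pattern (class and method names invented for illustration): each build step declares its predecessors, the decorator merely records them on the method, and determinebuildorder() below turns those annotations into a topological build order.

class ExampleDoctype(object):
    def make_validated_source(self, **kwargs):
        return True

    @depends(make_validated_source)
    def make_name_htmls(self, **kwargs):
        return True

# the decorator only annotates; ordering happens later, at build time
assert ExampleDoctype.make_name_htmls.depends == ['make_validated_source']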
@@ -43,18 +47,16 @@ def depends(graph, *predecessors):
class SignatureChecker(object):
@classmethod
def signatureLocation(cls, f):
f.seek(0)
buf = f.read(1024).lower()
def signatureLocation(cls, buf, fname):
for sig in cls.signatures:
try:
sindex = buf.index(sig.lower())
sindex = buf.index(sig)
logger.debug("YES FOUND signature %r in %s at %s; doctype %s.",
sig, f.name, sindex, cls)
sig, fname, sindex, cls)
return sindex
except ValueError:
logger.debug("not found signature %r in %s for type %s",
sig, f.name, cls.__name__)
sig, fname, cls.__name__)
return None
@@ -67,14 +69,12 @@ class BaseDoctype(object):
self.source = kwargs.get('source', None)
self.output = kwargs.get('output', None)
self.config = kwargs.get('config', None)
self.removals = list()
self.removals = set()
assert self.source is not None
assert self.output is not None
assert self.config is not None
def cleanup(self):
if self.config.script:
return
stem = self.source.stem
removals = getattr(self, 'removals', None)
if removals:
@@ -90,6 +90,8 @@ class BaseDoctype(object):
def build_precheck(self):
classname = self.__class__.__name__
if self.config.script:
return True
for tool, validator in self.required.items():
thing = getattr(self.config, tool, None)
logger.debug("%s, tool = %s, thing = %s", classname, tool, thing)
@@ -100,11 +102,104 @@ class BaseDoctype(object):
assert validator(thing)
return True
def clear_output(self, **kwargs):
'''remove the entire output directory
This method must be --script aware. The method execute_shellscript()
generates scripts into the directory that would be removed. Thus, the
behaviour is different depending on --script mode or --build mode.
'''
logger.debug("%s removing dir %s.",
self.output.stem, self.output.dirname)
if self.config.script:
s = 'test -d "{output.dirname}" && rm -rf -- "{output.dirname}"'
return self.shellscript(s, **kwargs)
if os.path.exists(self.output.dirname):
shutil.rmtree(self.output.dirname)
return True
def mkdir_output(self, **kwargs):
'''create a new output directory
This method must be --script aware. The method execute_shellscript()
generates scripts into the directory that would be removed. Thus, the
behaviour is different depending on --script mode or --build mode.
'''
logger.debug("%s creating dir %s.",
self.output.stem, self.output.dirname)
if self.config.script:
s = 'mkdir -p -- "{output.logdir}"'
return self.shellscript(s, **kwargs)
for d in (self.output.dirname, self.output.logdir):
if not os.path.isdir(d):
os.mkdir(d)
return True
def chdir_output(self, **kwargs):
'''chdir to the output directory (or write the script that would)'''
logger.debug("%s chdir to dir %s.",
self.output.stem, self.output.dirname)
if self.config.script:
s = '''
# - - - - - {source.stem} - - - - - -
cd -- "{output.dirname}"'''
return self.shellscript(s, **kwargs)
os.chdir(self.output.dirname)
return True
def generate_md5sums(self, **kwargs):
logger.debug("%s generating MD5SUMS in %s.",
self.output.stem, self.output.dirname)
timestr = time.strftime('%F-%T', time.gmtime())
md5file = self.output.MD5SUMS
if self.config.script:
l = list()
for fname, hashval in sorted(self.source.md5sums.items()):
l.append('# {} {}'.format(hashval, fname))
md5s = '\n'.join(l)
s = '''# -- MD5SUMS file from source tree at {}
#
# md5sum > {} -- {}
#
{}
#'''
s = s.format(timestr,
md5file,
' '.join(self.source.md5sums.keys()),
md5s)
return self.shellscript(s, **kwargs)
header = '# -- MD5SUMS for {}'.format(self.source.stem)
writemd5sums(md5file, self.source.md5sums, header=header)
return True
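In the non --script branch, the file is written by writemd5sums() from tldp.utils; a hedged sketch of that call, with illustrative paths and a digest computed via md5file():

from tldp.utils import md5file, writemd5sums

sums = {'Frobnitz-HOWTO.xml': md5file('/path/to/sources/Frobnitz-HOWTO.xml')}
writemd5sums('/path/to/outputs/Frobnitz-HOWTO/MD5SUMS', sums,
             header='# -- MD5SUMS for Frobnitz-HOWTO')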
def copy_static_resources(self, **kwargs):
logger.debug("%s copy resources %s.",
self.output.stem, self.output.dirname)
source = list()
for d in self.config.resources:
fullpath = os.path.join(self.source.dirname, d)
fullpath = os.path.abspath(fullpath)
if os.path.isdir(fullpath):
source.append('"' + fullpath + '"')
if not source:
logger.debug("%s no images or resources to copy", self.source.stem)
return True
s = 'rsync --archive --verbose %s ./' % (' '.join(source))
return self.shellscript(s, **kwargs)
def hook_build_success(self):
self.cleanup()
stem = self.output.stem
logdir = self.output.logdir
dirname = self.output.dirname
logger.info("%s build SUCCESS %s.", stem, dirname)
logger.debug("%s removing logs %s)", stem, logdir)
if os.path.isdir(logdir):
shutil.rmtree(logdir)
return True
def hook_build_failure(self):
self.cleanup()
pass
def shellscript(self, script, **kwargs):
if self.config.build:
@@ -116,17 +211,20 @@ class BaseDoctype(object):
raise Exception(etext % (self.source.stem,))
@logtimings(logger.debug)
def dump_shellscript(self, script, preamble=preamble, postamble=postamble):
def dump_shellscript(self, script, preamble=preamble,
postamble=postamble, **kwargs):
source = self.source
output = self.output
config = self.config
file = kwargs.get('file', sys.stdout)
s = script.format(output=output, source=source, config=config)
print('', file=sys.stdout)
print(s, file=sys.stdout)
print('', file=file)
print(s, file=file)
return True
@logtimings(logger.debug)
def execute_shellscript(self, script, preamble=preamble, postamble=postamble):
def execute_shellscript(self, script, preamble=preamble,
postamble=postamble, **kwargs):
source = self.source
output = self.output
config = self.config
@@ -136,12 +234,13 @@ class BaseDoctype(object):
s = script.format(output=output, source=source, config=config)
tf = ntf(dir=logdir, prefix=prefix, suffix='.sh', delete=False)
if preamble:
tf.write(preamble)
tf.write(s)
if postamble:
tf.write(postamble)
tf.close()
with codecs.open(tf.name, 'w', encoding='utf-8') as f:
if preamble:
f.write(preamble)
f.write(s)
if postamble:
f.write(postamble)
mode = stat.S_IXUSR | stat.S_IRUSR | stat.S_IWUSR
os.chmod(tf.name, mode)
@@ -149,62 +248,93 @@ class BaseDoctype(object):
cmd = [tf.name]
result = execute(cmd, logdir=logdir)
if result != 0:
with open(tf.name) as f:
with codecs.open(tf.name, encoding='utf-8') as f:
for line in f:
logger.info("Script: %s", line.rstrip())
return False
return True
@logtimings(logger.debug)
def buildall(self):
def build_prepare(self, **kwargs):
stem = self.source.stem
order = nx.dag.topological_sort(self.graph)
logger.debug("%s build order %r", self.source.stem, order)
for dep in order:
method = getattr(self, dep, None)
classname = self.__class__.__name__
order = ['build_precheck',
'clear_output',
'mkdir_output',
'chdir_output',
'generate_md5sums',
'copy_static_resources',
]
for methname in order:
method = getattr(self, methname, None)
assert method is not None
logger.info("%s calling method %s.%s",
stem, classname, method.__name__)
if not method(**kwargs):
logger.error("%s called method %s.%s failed, skipping...",
stem, classname, method.__name__)
return False
return True
def determinebuildorder(self):
graph = nx.DiGraph()
d = dict(inspect.getmembers(self, inspect.ismethod))
for name, member in d.items():
predecessors = getattr(member, 'depends', None)
if predecessors:
for pred in predecessors:
method = d.get(pred, None)
assert method is not None
graph.add_edge(method, member)
order = nx.dag.topological_sort(graph)
return order
@logtimings(logger.debug)
def build_fullrun(self, **kwargs):
stem = self.source.stem
order = self.determinebuildorder()
logger.debug("%s build order %r", self.source.stem, order)
for method in order:
classname = self.__class__.__name__
logger.info("%s calling method %s.%s", stem, classname, dep)
if not method():
logger.error("%s reported method %s failure, skipping...",
stem, dep)
logger.info("%s calling method %s.%s",
stem, classname, method.__name__)
if not method(**kwargs):
logger.error("%s called method %s.%s failed, skipping...",
stem, classname, method.__name__)
return False
return True
@logtimings(logger.info)
def generate(self):
stem = self.source.stem
classname = self.__class__.__name__
# -- the output directory gets to prepare; must return True
def generate(self, **kwargs):
# -- perform build preparation steps;
# - check for all executables and data files
# - clear output dir
# - make output dir
# - chdir to output dir
# - copy source images/resources to output dir
#
# -- the processor gets to prepare; must return True
#
if not self.build_precheck():
logger.warning("%s %s failed (%s), skipping to next build",
stem, 'build_precheck', classname)
if not self.config.script:
opwd = os.getcwd()
if not self.build_prepare():
return False
if not self.output.hook_prebuild():
logger.warning("%s %s failed (%s), skipping to next build",
stem, 'hook_prebuild', classname)
return False
opwd = os.getcwd()
os.chdir(self.output.dirname)
# -- now, we can try to build everything; this is the BIG WORK!
# -- build
#
result = self.buildall()
result = self.build_fullrun(**kwargs)
# -- always clean the kitchen
#
self.cleanup()
# -- report on result and/or cleanup
#
if result:
self.hook_build_success() # -- processor
self.output.hook_build_success() # -- output document
self.hook_build_success()
else:
self.hook_build_failure() # -- processor
self.output.hook_build_failure() # -- output document
self.hook_build_failure()
os.chdir(opwd)
if not self.config.script:
os.chdir(opwd)
return result


@@ -1,15 +1,17 @@
#! /usr/bin/python
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import os
import logging
import networkx as nx
from tldp.utils import which, firstfoundfile
from tldp.utils import arg_isexecutable, isexecutable
from tldp.utils import arg_isreadablefile, isreadablefile
from tldp.utils import arg_isstr, isstr
from tldp.doctypes.common import BaseDoctype, SignatureChecker, depends
@@ -18,20 +20,24 @@ logger = logging.getLogger(__name__)
def xslchunk_finder():
l = ['/usr/share/xml/docbook/stylesheet/ldp/html/tldp-sections.xsl',
'http://docbook.sourceforge.net/release/xsl/current/html/chunk.xsl',
]
return firstfoundfile(l)
def xslsingle_finder():
l = ['/usr/share/xml/docbook/stylesheet/ldp/html/tldp-one-page.xsl',
'http://docbook.sourceforge.net/release/xsl/current/html/docbook.xsl',
]
return firstfoundfile(l)
def xslprint_finder():
l = ['/usr/share/xml/docbook/stylesheet/ldp/fo/tldp-print.xsl',
l = ['http://docbook.sourceforge.net/release/xsl/current/fo/docbook.xsl',
# '/usr/share/xml/docbook/stylesheet/ldp/fo/tldp-print.xsl',
]
return firstfoundfile(l)
return l[0]
# return firstfoundfile(l)
class Docbook4XML(BaseDoctype, SignatureChecker):
@@ -49,42 +55,20 @@ class Docbook4XML(BaseDoctype, SignatureChecker):
'docbook4xml_xmllint': isexecutable,
'docbook4xml_xslchunk': isreadablefile,
'docbook4xml_xslsingle': isreadablefile,
'docbook4xml_xslprint': isreadablefile,
'docbook4xml_xslprint': isstr,
}
graph = nx.DiGraph()
def chdir_output(self):
os.chdir(self.output.dirname)
return True
@depends(graph, chdir_output)
def make_validated_source(self):
def make_validated_source(self, **kwargs):
s = '''"{config.docbook4xml_xmllint}" > "{output.validsource}" \\
--nonet \\
--noent \\
--xinclude \\
--postvalid \\
"{source.filename}"'''
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_validated_source)
def copy_static_resources(self):
source = list()
for d in ('images', 'resources'):
fullpath = os.path.join(self.source.dirname, d)
fullpath = os.path.abspath(fullpath)
if os.path.isdir(fullpath):
source.append('"' + fullpath + '"')
if not source:
logger.debug("%s no images or resources to copy",
self.source.stem)
return True
s = 'rsync --archive --verbose %s ./' % (' '.join(source))
return self.shellscript(s)
@depends(graph, copy_static_resources)
def make_name_htmls(self):
@depends(make_validated_source)
def make_name_htmls(self, **kwargs):
'''create a single page HTML output'''
s = '''"{config.docbook4xml_xsltproc}" > "{output.name_htmls}" \\
--nonet \\
@@ -92,62 +76,65 @@ class Docbook4XML(BaseDoctype, SignatureChecker):
--stringparam base.dir . \\
"{config.docbook4xml_xslsingle}" \\
"{output.validsource}"'''
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_name_htmls)
def make_name_txt(self):
@depends(make_name_htmls)
def make_name_txt(self, **kwargs):
'''create text output'''
s = '''"{config.docbook4xml_html2text}" > "{output.name_txt}" \\
-style pretty \\
-nobs \\
"{output.name_htmls}"'''
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, copy_static_resources)
def make_fo(self):
@depends(make_validated_source)
def make_fo(self, **kwargs):
'''generate the Formatting Objects intermediate output'''
s = '''"{config.docbook4xml_xsltproc}" > "{output.name_fo}" \\
--stringparam fop.extensions 0 \\
--stringparam fop1.extensions 1 \\
"{config.docbook4xml_xslprint}" \\
"{output.validsource}"'''
self.removals.append(self.output.name_fo)
return self.shellscript(s)
if not self.config.script:
self.removals.add(self.output.name_fo)
return self.shellscript(s, **kwargs)
# -- this is conditionally built--see logic in make_name_pdf() below
# @depends(graph, make_fo)
def make_pdf_with_fop(self):
# @depends(make_fo)
def make_pdf_with_fop(self, **kwargs):
'''use FOP to create a PDF'''
s = '''"{config.docbook4xml_fop}" \\
-fo "{output.name_fo}" \\
-pdf "{output.name_pdf}"'''
return self.shellscript(s)
return self.shellscript(s, **kwargs)
# -- this is conditionally built--see logic in make_name_pdf() below
# @depends(graph, chdir_output)
def make_pdf_with_dblatex(self):
# @depends(make_validated_source)
def make_pdf_with_dblatex(self, **kwargs):
'''use dblatex (fallback) to create a PDF'''
s = '''"{config.docbook4xml_dblatex}" \\
-F xml \\
-t pdf \\
-o "{output.name_pdf}" \\
"{output.validsource}"'''
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_fo)
def make_name_pdf(self):
@depends(make_validated_source, make_fo)
def make_name_pdf(self, **kwargs):
stem = self.source.stem
classname = self.__class__.__name__
logger.info("%s calling method %s.%s",
stem, classname, 'make_pdf_with_fop')
if self.make_pdf_with_fop():
if self.make_pdf_with_fop(**kwargs):
return True
logger.error("%s %s failed creating PDF, falling back to dblatex...",
stem, self.config.docbook4xml_fop)
logger.info("%s calling method %s.%s",
stem, classname, 'make_pdf_with_dblatex')
return self.make_pdf_with_dblatex()
return self.make_pdf_with_dblatex(**kwargs)
@depends(graph, make_name_htmls)
def make_html(self):
@depends(make_validated_source)
def make_chunked_html(self, **kwargs):
'''create chunked HTML output'''
s = '''"{config.docbook4xml_xsltproc}" \\
--nonet \\
@@ -155,54 +142,55 @@ class Docbook4XML(BaseDoctype, SignatureChecker):
--stringparam base.dir . \\
"{config.docbook4xml_xslchunk}" \\
"{output.validsource}"'''
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_html)
def make_name_html(self):
@depends(make_chunked_html)
def make_name_html(self, **kwargs):
'''rename DocBook XSL's index.html to LDP standard STEM.html'''
s = 'mv -v --no-clobber -- "{output.name_indexhtml}" "{output.name_html}"'
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_name_html)
def make_name_indexhtml(self):
@depends(make_name_html)
def make_name_indexhtml(self, **kwargs):
'''create final index.html symlink'''
s = 'ln -svr -- "{output.name_html}" "{output.name_indexhtml}"'
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_name_html)
def remove_validated_source(self):
@depends(make_name_html, make_name_pdf, make_name_htmls, make_name_txt)
def remove_validated_source(self, **kwargs):
'''remove the validated (xincluded) source file'''
s = 'rm --verbose -- "{output.validsource}"'
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@classmethod
def argparse(cls, p):
descrip = 'executables and data files for %s' % (cls.formatname,)
g = p.add_argument_group(title=cls.__name__, description=descrip)
g.add_argument('--docbook4xml-xslchunk', type=arg_isreadablefile,
default=xslchunk_finder(),
help='full path to LDP HTML chunker XSL [%(default)s]')
g.add_argument('--docbook4xml-xslsingle', type=arg_isreadablefile,
default=xslsingle_finder(),
help='full path to LDP HTML single-page XSL [%(default)s]')
g.add_argument('--docbook4xml-xslprint', type=arg_isreadablefile,
default=xslprint_finder(),
help='full path to LDP FO print XSL [%(default)s]')
g.add_argument('--docbook4xml-xmllint', type=arg_isexecutable,
default=which('xmllint'),
help='full path to xmllint [%(default)s]')
g.add_argument('--docbook4xml-xsltproc', type=arg_isexecutable,
default=which('xsltproc'),
help='full path to xsltproc [%(default)s]')
g.add_argument('--docbook4xml-html2text', type=arg_isexecutable,
default=which('html2text'),
help='full path to html2text [%(default)s]')
g.add_argument('--docbook4xml-fop', type=arg_isexecutable,
default=which('fop'),
help='full path to fop [%(default)s]')
g.add_argument('--docbook4xml-dblatex', type=arg_isexecutable,
default=which('dblatex'),
help='full path to dblatex [%(default)s]')
gadd = g.add_argument
gadd('--docbook4xml-xslchunk', type=arg_isreadablefile,
default=xslchunk_finder(),
help='full path to LDP HTML chunker XSL [%(default)s]')
gadd('--docbook4xml-xslsingle', type=arg_isreadablefile,
default=xslsingle_finder(),
help='full path to LDP HTML single-page XSL [%(default)s]')
gadd('--docbook4xml-xslprint', type=arg_isstr,
default=xslprint_finder(),
help='full path to LDP FO print XSL [%(default)s]')
gadd('--docbook4xml-xmllint', type=arg_isexecutable,
default=which('xmllint'),
help='full path to xmllint [%(default)s]')
gadd('--docbook4xml-xsltproc', type=arg_isexecutable,
default=which('xsltproc'),
help='full path to xsltproc [%(default)s]')
gadd('--docbook4xml-html2text', type=arg_isexecutable,
default=which('html2text'),
help='full path to html2text [%(default)s]')
gadd('--docbook4xml-fop', type=arg_isexecutable,
default=which('fop'),
help='full path to fop [%(default)s]')
gadd('--docbook4xml-dblatex', type=arg_isexecutable,
default=which('dblatex'),
help='full path to dblatex [%(default)s]')
#
# -- end of file


@@ -1,13 +1,13 @@
#! /usr/bin/python
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import os
import logging
import networkx as nx
from tldp.utils import which, firstfoundfile
from tldp.utils import arg_isexecutable, isexecutable
from tldp.utils import arg_isreadablefile, isreadablefile
@@ -65,46 +65,24 @@ class Docbook5XML(BaseDoctype, SignatureChecker):
'docbook5xml_xslsingle': isreadablefile,
}
graph = nx.DiGraph()
def chdir_output(self):
os.chdir(self.output.dirname)
return True
@depends(graph, chdir_output)
def make_xincluded_source(self):
def make_xincluded_source(self, **kwargs):
s = '''"{config.docbook5xml_xmllint}" > "{output.validsource}" \\
--nonet \\
--noent \\
--xinclude \\
"{source.filename}"'''
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_xincluded_source)
def validate_source(self):
@depends(make_xincluded_source)
def validate_source(self, **kwargs):
'''consider lxml.etree and other validators'''
s = '''"{config.docbook5xml_jing}" \\
"{config.docbook5xml_rngfile}" \\
"{output.validsource}"'''
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_xincluded_source)
def copy_static_resources(self):
source = list()
for d in ('images', 'resources'):
fullpath = os.path.join(self.source.dirname, d)
fullpath = os.path.abspath(fullpath)
if os.path.isdir(fullpath):
source.append('"' + fullpath + '"')
if not source:
logger.debug("%s no images or resources to copy",
self.source.stem)
return True
s = 'rsync --archive --verbose %s ./' % (' '.join(source))
return self.shellscript(s)
@depends(graph, copy_static_resources)
def make_name_htmls(self):
@depends(validate_source)
def make_name_htmls(self, **kwargs):
'''create a single page HTML output'''
s = '''"{config.docbook5xml_xsltproc}" > "{output.name_htmls}" \\
--nonet \\
@@ -112,62 +90,65 @@ class Docbook5XML(BaseDoctype, SignatureChecker):
--stringparam base.dir . \\
"{config.docbook5xml_xslsingle}" \\
"{output.validsource}"'''
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_name_htmls)
def make_name_txt(self):
@depends(make_name_htmls)
def make_name_txt(self, **kwargs):
'''create text output'''
s = '''"{config.docbook5xml_html2text}" > "{output.name_txt}" \\
-style pretty \\
-nobs \\
"{output.name_htmls}"'''
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, copy_static_resources)
def make_fo(self):
@depends(validate_source)
def make_fo(self, **kwargs):
'''generate the Formatting Objects intermediate output'''
s = '''"{config.docbook5xml_xsltproc}" > "{output.name_fo}" \\
--stringparam fop.extensions 0 \\
--stringparam fop1.extensions 1 \\
"{config.docbook5xml_xslprint}" \\
"{output.validsource}"'''
self.removals.append(self.output.name_fo)
return self.shellscript(s)
if not self.config.script:
self.removals.add(self.output.name_fo)
return self.shellscript(s, **kwargs)
# -- this is conditionally built--see logic in make_name_pdf() below
# @depends(graph, make_fo)
def make_pdf_with_fop(self):
# @depends(make_fo)
def make_pdf_with_fop(self, **kwargs):
'''use FOP to create a PDF'''
s = '''"{config.docbook5xml_fop}" \\
-fo "{output.name_fo}" \\
-pdf "{output.name_pdf}"'''
return self.shellscript(s)
return self.shellscript(s, **kwargs)
# -- this is conditionally built--see logic in make_name_pdf() below
# @depends(graph, chdir_output)
def make_pdf_with_dblatex(self):
# @depends(validate_source)
def make_pdf_with_dblatex(self, **kwargs):
'''use dblatex (fallback) to create a PDF'''
s = '''"{config.docbook5xml_dblatex}" \\
-F xml \\
-t pdf \\
-o "{output.name_pdf}" \\
"{output.validsource}"'''
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_fo)
def make_name_pdf(self):
@depends(make_fo, validate_source)
def make_name_pdf(self, **kwargs):
stem = self.source.stem
classname = self.__class__.__name__
logger.info("%s calling method %s.%s",
stem, classname, 'make_pdf_with_fop')
if self.make_pdf_with_fop():
if self.make_pdf_with_fop(**kwargs):
return True
logger.error("%s %s failed creating PDF, falling back to dblatex...",
stem, self.config.docbook5xml_fop)
logger.info("%s calling method %s.%s",
stem, classname, 'make_pdf_with_dblatex')
return self.make_pdf_with_dblatex()
return self.make_pdf_with_dblatex(**kwargs)
@depends(graph, make_name_htmls)
def make_html(self):
@depends(make_name_htmls, validate_source)
def make_chunked_html(self, **kwargs):
'''create chunked HTML output'''
s = '''"{config.docbook5xml_xsltproc}" \\
--nonet \\
@ -175,61 +156,62 @@ class Docbook5XML(BaseDoctype, SignatureChecker):
--stringparam base.dir . \\
"{config.docbook5xml_xslchunk}" \\
"{output.validsource}"'''
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_html)
def make_name_html(self):
@depends(make_chunked_html)
def make_name_html(self, **kwargs):
'''rename DocBook XSL's index.html to LDP standard STEM.html'''
s = 'mv -v --no-clobber -- "{output.name_indexhtml}" "{output.name_html}"'
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_name_html)
def make_name_indexhtml(self):
@depends(make_name_html)
def make_name_indexhtml(self, **kwargs):
'''create final index.html symlink'''
s = 'ln -svr -- "{output.name_html}" "{output.name_indexhtml}"'
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_name_indexhtml, make_name_pdf)
def remove_xincluded_source(self):
@depends(make_name_htmls, make_name_html, make_name_pdf, make_name_txt)
def remove_xincluded_source(self, **kwargs):
'''remove the xincluded source file'''
s = 'rm --verbose -- "{output.validsource}"'
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@classmethod
def argparse(cls, p):
descrip = 'executables for %s' % (cls.formatname,)
g = p.add_argument_group(title=cls.__name__, description=descrip)
g.add_argument('--docbook5xml-xslchunk', type=arg_isreadablefile,
default=xslchunk_finder(),
help='full path to LDP HTML chunker XSL [%(default)s]')
g.add_argument('--docbook5xml-xslsingle', type=arg_isreadablefile,
default=xslsingle_finder(),
help='full path to LDP HTML single-page XSL [%(default)s]')
g.add_argument('--docbook5xml-xslprint', type=arg_isreadablefile,
default=xslprint_finder(),
help='full path to LDP FO print XSL [%(default)s]')
gadd = g.add_argument
gadd('--docbook5xml-xslchunk', type=arg_isreadablefile,
default=xslchunk_finder(),
help='full path to LDP HTML chunker XSL [%(default)s]')
gadd('--docbook5xml-xslsingle', type=arg_isreadablefile,
default=xslsingle_finder(),
help='full path to LDP HTML single-page XSL [%(default)s]')
gadd('--docbook5xml-xslprint', type=arg_isreadablefile,
default=xslprint_finder(),
help='full path to LDP FO print XSL [%(default)s]')
g.add_argument('--docbook5xml-rngfile', type=arg_isreadablefile,
default=rngfile_finder(),
help='full path to docbook.rng [%(default)s]')
g.add_argument('--docbook5xml-xmllint', type=arg_isexecutable,
default=which('xmllint'),
help='full path to xmllint [%(default)s]')
g.add_argument('--docbook5xml-xsltproc', type=arg_isexecutable,
default=which('xsltproc'),
help='full path to xsltproc [%(default)s]')
g.add_argument('--docbook5xml-html2text', type=arg_isexecutable,
default=which('html2text'),
help='full path to html2text [%(default)s]')
g.add_argument('--docbook5xml-fop', type=arg_isexecutable,
default=which('fop'),
help='full path to fop [%(default)s]')
g.add_argument('--docbook5xml-dblatex', type=arg_isexecutable,
default=which('dblatex'),
help='full path to dblatex [%(default)s]')
g.add_argument('--docbook5xml-jing', type=arg_isexecutable,
default=which('jing'),
help='full path to jing [%(default)s]')
gadd('--docbook5xml-rngfile', type=arg_isreadablefile,
default=rngfile_finder(),
help='full path to docbook.rng [%(default)s]')
gadd('--docbook5xml-xmllint', type=arg_isexecutable,
default=which('xmllint'),
help='full path to xmllint [%(default)s]')
gadd('--docbook5xml-xsltproc', type=arg_isexecutable,
default=which('xsltproc'),
help='full path to xsltproc [%(default)s]')
gadd('--docbook5xml-html2text', type=arg_isexecutable,
default=which('html2text'),
help='full path to html2text [%(default)s]')
gadd('--docbook5xml-fop', type=arg_isexecutable,
default=which('fop'),
help='full path to fop [%(default)s]')
gadd('--docbook5xml-dblatex', type=arg_isexecutable,
default=which('dblatex'),
help='full path to dblatex [%(default)s]')
gadd('--docbook5xml-jing', type=arg_isexecutable,
default=which('jing'),
help='full path to jing [%(default)s]')
#
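Throughout the doctype classes above, the @depends(...) decorators drop the explicit graph argument and now name only the prerequisite build steps. A minimal sketch of what such a prerequisite-recording decorator can look like (illustrative only; the attribute name is an assumption, and the real implementation lives in tldp.doctypes.common):

def depends(*prerequisites):
    '''record which build steps must succeed before the decorated step runs'''
    def wrapper(method):
        # store prerequisite step names on the method for a runner to inspect
        method.prerequisites = [p.__name__ for p in prerequisites]
        return method
    return wrapper

A runner such as generate() can then order the decorated methods so that, for example, make_name_txt only runs after make_name_htmls has returned True.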


@ -1,11 +1,13 @@
#! /usr/bin/python
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import os
import logging
import networkx as nx
from tldp.utils import which, firstfoundfile
from tldp.utils import arg_isexecutable, isexecutable
@ -48,29 +50,7 @@ class DocbookSGML(BaseDoctype, SignatureChecker):
'docbooksgml_docbookdsl': isreadablefile,
}
graph = nx.DiGraph()
def chdir_output(self):
os.chdir(self.output.dirname)
return True
@depends(graph, chdir_output)
def copy_static_resources(self):
source = list()
for d in ('images', 'resources'):
fullpath = os.path.join(self.source.dirname, d)
fullpath = os.path.abspath(fullpath)
if os.path.isdir(fullpath):
source.append('"' + fullpath + '"')
if not source:
logger.debug("%s no images or resources to copy",
self.source.stem)
return True
s = 'rsync --archive --verbose %s ./' % (' '.join(source))
return self.shellscript(s)
@depends(graph, copy_static_resources)
def make_blank_indexsgml(self):
def make_blank_indexsgml(self, **kwargs):
indexsgml = os.path.join(self.source.dirname, 'index.sgml')
self.indexsgml = os.path.isfile(indexsgml)
if self.indexsgml:
@ -80,10 +60,10 @@ class DocbookSGML(BaseDoctype, SignatureChecker):
-N \\
-o \\
"index.sgml"'''
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_blank_indexsgml)
def move_blank_indexsgml_into_source(self):
@depends(make_blank_indexsgml)
def move_blank_indexsgml_into_source(self, **kwargs):
'''move a blank index.sgml file into the source tree'''
if self.indexsgml:
return True
@ -91,10 +71,13 @@ class DocbookSGML(BaseDoctype, SignatureChecker):
--no-clobber \\
--verbose \\
-- "index.sgml" "{source.dirname}/index.sgml"'''
return self.shellscript(s)
indexsgml = os.path.join(self.source.dirname, 'index.sgml')
if not self.config.script:
self.removals.add(indexsgml)
return self.shellscript(s, **kwargs)
@depends(graph, move_blank_indexsgml_into_source)
def make_data_indexsgml(self):
@depends(move_blank_indexsgml_into_source)
def make_data_indexsgml(self, **kwargs):
'''collect document's index entries into a data file (HTML.index)'''
if self.indexsgml:
return True
@ -103,10 +86,10 @@ class DocbookSGML(BaseDoctype, SignatureChecker):
-V html-index \\
-d "{config.docbooksgml_docbookdsl}" \\
"{source.filename}"'''
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_data_indexsgml)
def make_indexsgml(self):
@depends(make_data_indexsgml)
def make_indexsgml(self, **kwargs):
'''generate the final document index file (index.sgml)'''
if self.indexsgml:
return True
@ -117,10 +100,10 @@ class DocbookSGML(BaseDoctype, SignatureChecker):
-o "index.sgml" \\
"HTML.index" \\
"{source.filename}"'''
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_indexsgml)
def move_indexsgml_into_source(self):
@depends(make_indexsgml)
def move_indexsgml_into_source(self, **kwargs):
'''move the generated index.sgml file into the source tree'''
if self.indexsgml:
return True
@ -129,27 +112,30 @@ class DocbookSGML(BaseDoctype, SignatureChecker):
--verbose \\
--force \\
-- "index.sgml" "{source.dirname}/index.sgml"'''
moved = self.shellscript(s)
if moved:
logger.debug("%s created %s", self.source.stem, indexsgml)
self.removals.append(indexsgml)
return True
return False
logger.debug("%s creating %s", self.source.stem, indexsgml)
if not self.config.script:
self.removals.add(indexsgml)
return self.shellscript(s, **kwargs)
@depends(graph, move_indexsgml_into_source)
def cleaned_indexsgml(self):
@depends(move_indexsgml_into_source)
def cleaned_indexsgml(self, **kwargs):
'''clean the junk from the output dir after building the index.sgml'''
# -- be super cautious before removing a bunch of files
cwd = os.getcwd()
if not os.path.samefile(cwd, self.output.dirname):
logger.error("%s (cowardly) refusing to clean directory %s", cwd)
logger.error("%s expected to find %s", self.output.dirname)
return False
s = '''find . -mindepth 1 -maxdepth 1 -not -type d -delete -print'''
return self.shellscript(s)
if not self.config.script:
cwd = os.getcwd()
if not os.path.samefile(cwd, self.output.dirname):
logger.error("%s (cowardly) refusing to clean directory %s",
self.source.stem, cwd)
logger.error("%s expected to find %s",
self.source.stem, self.output.dirname)
return False
preserve = os.path.basename(self.output.MD5SUMS)
s = '''find . -mindepth 1 -maxdepth 1 -not -type d -not -name {} -delete -print'''
s = s.format(preserve)
return self.shellscript(s, **kwargs)
@depends(graph, cleaned_indexsgml)
def make_htmls(self):
@depends(cleaned_indexsgml)
def make_htmls(self, **kwargs):
'''create a single page HTML output (with incorrect name)'''
s = '''"{config.docbooksgml_jw}" \\
-f docbook \\
@ -160,57 +146,57 @@ class DocbookSGML(BaseDoctype, SignatureChecker):
-V '%stock-graphics-extension%=.png' \\
--output . \\
"{source.filename}"'''
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_htmls)
def make_name_htmls(self):
@depends(make_htmls)
def make_name_htmls(self, **kwargs):
'''correct the single page HTML output name'''
s = 'mv -v --no-clobber -- "{output.name_html}" "{output.name_htmls}"'
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_name_htmls)
def make_name_txt(self):
@depends(make_name_htmls)
def make_name_txt(self, **kwargs):
'''create text output (from single-page HTML)'''
s = '''"{config.docbooksgml_html2text}" > "{output.name_txt}" \\
-style pretty \\
-nobs \\
"{output.name_htmls}"'''
return self.shellscript(s)
return self.shellscript(s, **kwargs)
def make_pdf_with_jw(self):
def make_pdf_with_jw(self, **kwargs):
'''use jw (openjade) to create a PDF'''
s = '''"{config.docbooksgml_jw}" \\
-f docbook \\
-b pdf \\
--output . \\
"{source.filename}"'''
return self.shellscript(s)
return self.shellscript(s, **kwargs)
def make_pdf_with_dblatex(self):
def make_pdf_with_dblatex(self, **kwargs):
'''use dblatex (fallback) to create a PDF'''
s = '''"{config.docbooksgml_dblatex}" \\
-F sgml \\
-t pdf \\
-o "{output.name_pdf}" \\
"{source.filename}"'''
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, cleaned_indexsgml)
def make_name_pdf(self):
@depends(cleaned_indexsgml)
def make_name_pdf(self, **kwargs):
stem = self.source.stem
classname = self.__class__.__name__
logger.info("%s calling method %s.%s",
stem, classname, 'make_pdf_with_jw')
if self.make_pdf_with_jw():
if self.make_pdf_with_jw(**kwargs):
return True
logger.error("%s jw failed creating PDF, falling back to dblatex...",
stem)
logger.info("%s calling method %s.%s",
stem, classname, 'make_pdf_with_dblatex')
return self.make_pdf_with_dblatex()
return self.make_pdf_with_dblatex(**kwargs)
@depends(graph, make_name_htmls)
def make_html(self):
@depends(make_name_htmls)
def make_html(self, **kwargs):
'''create chunked HTML outputs'''
s = '''"{config.docbooksgml_jw}" \\
-f docbook \\
@ -220,19 +206,19 @@ class DocbookSGML(BaseDoctype, SignatureChecker):
-V '%stock-graphics-extension%=.png' \\
--output . \\
"{source.filename}"'''
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_html)
def make_name_html(self):
@depends(make_html)
def make_name_html(self, **kwargs):
'''rename openjade's index.html to LDP standard name STEM.html'''
s = 'mv -v --no-clobber -- "{output.name_indexhtml}" "{output.name_html}"'
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_name_html)
def make_name_indexhtml(self):
@depends(make_name_html)
def make_name_indexhtml(self, **kwargs):
'''create final index.html symlink'''
s = 'ln -svr -- "{output.name_html}" "{output.name_indexhtml}"'
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@classmethod
def argparse(cls, p):


@ -1,7 +1,10 @@
#! /usr/bin/python
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import logging
@ -14,19 +17,6 @@ class Frobnitz(BaseDoctype):
formatname = 'Frobnitz'
extensions = ['.fb']
signatures = ['{{Frobnitz-Format 2.3}}']
tools = ['executablename', 'another']
def create_txt(self):
logger.info("Creating txt for %s", self.source.stem)
def create_pdf(self):
logger.info("Creating PDF for %s", self.source.stem)
def create_html(self):
logger.info("Creating chunked HTML for %s", self.source.stem)
def create_htmls(self):
logger.info("Creating single page HTML for %s", self.source.stem)
#
# -- end of file


@ -1,11 +1,12 @@
#! /usr/bin/python
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import os
import logging
import networkx as nx
from tldp.utils import which
from tldp.utils import arg_isexecutable, isexecutable
@ -20,78 +21,66 @@ class Linuxdoc(BaseDoctype, SignatureChecker):
signatures = ['<!doctype linuxdoc system', ]
required = {'linuxdoc_sgml2html': isexecutable,
'linuxdoc_sgmlcheck': isexecutable,
'linuxdoc_html2text': isexecutable,
'linuxdoc_htmldoc': isexecutable,
}
graph = nx.DiGraph()
def validate_source(self, **kwargs):
s = '"{config.linuxdoc_sgmlcheck}" "{source.filename}"'
return self.shellscript(s, **kwargs)
def chdir_output(self):
os.chdir(self.output.dirname)
return True
@depends(graph, chdir_output)
def copy_static_resources(self):
source = list()
for d in ('images', 'resources'):
fullpath = os.path.join(self.source.dirname, d)
fullpath = os.path.abspath(fullpath)
if os.path.isdir(fullpath):
source.append('"' + fullpath + '"')
if not source:
logger.debug("%s no images or resources to copy",
self.source.stem)
return True
s = 'rsync --archive --verbose %s ./' % (' '.join(source))
return self.shellscript(s)
@depends(graph, copy_static_resources)
def make_htmls(self):
@depends(validate_source)
def make_htmls(self, **kwargs):
'''create a single page HTML output (with incorrect name)'''
s = '"{config.linuxdoc_sgml2html}" --split=0 "{source.filename}"'
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_htmls)
def make_name_htmls(self):
@depends(make_htmls)
def make_name_htmls(self, **kwargs):
'''correct the single page HTML output name'''
s = 'mv -v --no-clobber -- "{output.name_html}" "{output.name_htmls}"'
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_name_htmls)
def make_name_txt(self):
@depends(make_name_htmls)
def make_name_txt(self, **kwargs):
'''create text output (from single-page HTML)'''
s = '''"{config.linuxdoc_html2text}" > "{output.name_txt}" \\
-style pretty \\
-nobs \\
"{output.name_htmls}"'''
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_name_htmls)
def make_name_pdf(self):
@depends(make_name_htmls)
def make_name_pdf(self, **kwargs):
s = '''"{config.linuxdoc_htmldoc}" \\
--size universal \\
--firstpage p1 \\
--format pdf \\
--footer c.1 \\
--outfile "{output.name_pdf}" \\
"{output.name_htmls}"'''
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_name_htmls)
def make_name_html(self):
'''create final index.html symlink'''
@depends(make_name_htmls)
def make_name_html(self, **kwargs):
'''create chunked output'''
s = '"{config.linuxdoc_sgml2html}" "{source.filename}"'
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@depends(graph, make_name_html)
def make_name_indexhtml(self):
@depends(make_name_html)
def make_name_indexhtml(self, **kwargs):
'''create final index.html symlink'''
s = 'ln -svr -- "{output.name_html}" "{output.name_indexhtml}"'
return self.shellscript(s)
return self.shellscript(s, **kwargs)
@classmethod
def argparse(cls, p):
descrip = 'executables and data files for %s' % (cls.formatname,)
g = p.add_argument_group(title=cls.__name__, description=descrip)
g.add_argument('--linuxdoc-sgmlcheck', type=arg_isexecutable,
default=which('sgmlcheck'),
help='full path to sgmlcheck [%(default)s]')
g.add_argument('--linuxdoc-sgml2html', type=arg_isexecutable,
default=which('sgml2html'),
help='full path to sgml2html [%(default)s]')


@ -1,7 +1,10 @@
#! /usr/bin/python
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import logging
@ -16,17 +19,5 @@ class Markdown(BaseDoctype):
signatures = []
tools = ['pandoc']
def create_txt(self):
logger.info("Creating txt for %s", self.source.stem)
def create_pdf(self):
logger.info("Creating PDF for %s", self.source.stem)
def create_html(self):
logger.info("Creating chunked HTML for %s", self.source.stem)
def create_htmls(self):
logger.info("Creating single page HTML for %s", self.source.stem)
#
# -- end of file


@ -1,7 +1,10 @@
#! /usr/bin/python
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import logging
@ -14,19 +17,7 @@ class RestructuredText(BaseDoctype):
formatname = 'reStructuredText'
extensions = ['.rst']
signatures = []
tools = ['rst2html']
def create_txt(self):
logger.info("Creating txt for %s", self.source.stem)
def create_pdf(self):
logger.info("Creating PDF for %s", self.source.stem)
def create_html(self):
logger.info("Creating chunked HTML for %s", self.source.stem)
def create_htmls(self):
logger.info("Creating single page HTML for %s", self.source.stem)
#
# -- end of file


@ -1,32 +0,0 @@
#! /usr/bin/python
# -*- coding: utf8 -*-
from __future__ import absolute_import, division, print_function
import logging
from tldp.doctypes.common import BaseDoctype
logger = logging.getLogger(__name__)
class Text(BaseDoctype):
formatname = 'plain text'
extensions = ['.txt']
signatures = []
tools = ['pandoc']
def create_txt(self):
logger.info("Creating txt for %s", self.source.stem)
def create_pdf(self):
logger.info("Creating PDF for %s", self.source.stem)
def create_html(self):
logger.info("Creating chunked HTML for %s", self.source.stem)
def create_htmls(self):
logger.info("Creating single page HTML for %s", self.source.stem)
#
# -- end of file


@ -1,36 +1,125 @@
#! /usr/bin/python
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import os
import sys
import errno
import signal
import shutil
import logging
import inspect
import collections
from argparse import Namespace
from tldp.typeguesser import knowndoctypes
from tldp.sources import SourceDocument, arg_issourcedoc
from tldp.outputs import OutputDirectory
from tldp.inventory import Inventory, status_classes, status_types
from tldp.inventory import Inventory, status_classes, status_types, stypes
from tldp.config import collectconfiguration
from tldp.utils import arg_isloglevel
from tldp.utils import arg_isloglevel, arg_isdirectory
from tldp.utils import swapdirs, sameFilesystem
from tldp.doctypes.common import preamble, postamble
from tldp import VERSION
logformat = '%(levelname)-9s %(name)s %(filename)s#%(lineno)s %(funcName)s %(message)s'
# -- Don't freak out with IOError when our STDOUT is closed early by
# head, sed, awk, grep, etc.; and deal with a user's Ctrl-C the same
# way (i.e. no traceback, just stop)
#
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
signal.signal(signal.SIGINT, signal.SIG_DFL)
logformat = '%(levelname)-9s %(name)s %(filename)s#%(lineno)s ' \
+ '%(funcName)s %(message)s'
logging.basicConfig(stream=sys.stderr, format=logformat, level=logging.ERROR)
logger = logging.getLogger(__name__)
# -- short names
#
opa = os.path.abspath
opb = os.path.basename
opd = os.path.dirname
opj = os.path.join
def summary(config, inv=None, **kwargs):
# -- error message prefixes
#
ERR_NEEDPUBDIR = "Option --pubdir (and --sourcedir) required "
ERR_NEEDSOURCEDIR = "Option --sourcedir (and --pubdir) required "
ERR_UNKNOWNARGS = "Unknown arguments received: "
ERR_EXTRAARGS = "Extra arguments received: "
def show_version(config, *args, **kwargs):
file = kwargs.get('file', sys.stdout)
print(VERSION, file=file)
return os.EX_OK
def show_doctypes(config, *args, **kwargs):
if args:
return ERR_EXTRAARGS + ' '.join(args)
file = kwargs.get('file', sys.stdout)
print("Supported source document types:", file=file)
print('', file=file)
for doctype in knowndoctypes:
classname = doctype.__name__
fname = os.path.abspath(inspect.getmodule(doctype).__file__)
extensions = ', '.join(doctype.extensions)
print('{}'.format(classname), file=file)
print(' format name: {}'.format(doctype.formatname), file=file)
print(' code location: {}'.format(fname), file=file)
print(' file extensions: {}'.format(extensions), file=file)
for signature in doctype.signatures:
print(' signature: {}'.format(signature), file=file)
print('', file=file)
print('', file=file)
return os.EX_OK
def show_statustypes(config, *args, **kwargs):
if args:
return ERR_EXTRAARGS + ' '.join(args)
file = kwargs.get('file', sys.stdout)
width = 2 + max([len(x) for x in status_types])
print("Basic status types:", file=file)
print('', file=file)
for status, descrip in stypes.items():
fmt = '{status:>{width}}: {descrip}'
text = fmt.format(status=status, descrip=descrip, width=width)
print(text, file=file)
print('', file=file)
print("Synonyms and groups:", file=file)
print('', file=file)
for status, descrip in status_classes.items():
fmt = '{status:>{width}}: {descrip}'
descrip = ', '.join(descrip)
text = fmt.format(status=status, descrip=descrip, width=width)
print(text, file=file)
print('', file=file)
return os.EX_OK
def summary(config, *args, **kwargs):
if args:
return ERR_EXTRAARGS + ' '.join(args)
if not config.pubdir:
return ERR_NEEDPUBDIR + "for --summary"
if not config.sourcedir:
return ERR_NEEDSOURCEDIR + "for --summary"
file = kwargs.get('file', sys.stdout)
inv = kwargs.get('inv', None)
if inv is None:
inv = Inventory(config.pubdir, config.sourcedir)
file = kwargs.get('file', sys.stdout)
width = Namespace()
width.doctype = max([len(x.__name__) for x in knowndoctypes])
width.status = max([len(x) for x in status_types])
width.count = len(str(len(inv.source.keys())))
print('By Document Status (STATUS)', '---------------------------',
sep='\n', file=file)
for status in status_types:
if status == 'all':
continue
count = len(getattr(inv, status, 0))
s = '{0:{w.status}} {1:{w.count}} '.format(status, count, w=width)
print(s, end="", file=file)
@ -41,16 +130,45 @@ def summary(config, inv=None, **kwargs):
s = ''
if abbrev:
s = s + abbrev.pop(0)
while abbrev and len(s) < 40:
while abbrev:
if (len(s) + len(abbrev[0])) > 48:
break
s = s + ', ' + abbrev.pop(0)
if abbrev:
s = s + ', and %d more ...' % (len(abbrev))
print(s, file=file)
return 0
print('', 'By Document Type (DOCTYPE)', '--------------------------',
sep='\n', file=file)
summarybytype = collections.defaultdict(list)
for doc in inv.source.values():
name = doc.doctype.__name__
summarybytype[name].append(doc.stem)
for doctype, docs in summarybytype.items():
count = len(docs)
s = '{0:{w.doctype}} {1:{w.count}} '.format(doctype, count, w=width)
print(s, end="", file=file)
if config.verbose:
print(', '.join(docs), file=file)
else:
abbrev = docs
s = ''
if abbrev:
s = s + abbrev.pop(0)
while abbrev:
if (len(s) + len(abbrev[0])) > 36:
break
s = s + ', ' + abbrev.pop(0)
if abbrev:
s = s + ', and %d more ...' % (len(abbrev))
print(s, file=file)
print('', file=file)
return os.EX_OK
def detail(config, docs, **kwargs):
file = kwargs.get('file', sys.stdout)
width = Namespace()
width.doctype = max([len(x.__name__) for x in knowndoctypes])
width.status = max([len(x) for x in status_types])
width.stem = max([len(x.stem) for x in docs])
# -- if user just said "list" with no args, then give the user something
@ -58,45 +176,181 @@ def detail(config, docs, **kwargs):
# "all" seems to be less surprising
#
for doc in docs:
stdout = kwargs.get('file', sys.stdout)
doc.detail(width, config.verbose, file=stdout)
return 0
doc.detail(width, config.verbose, file=file)
return os.EX_OK
def build(config, docs, **kwargs):
def removeOrphans(docs):
sources = list()
for x, doc in enumerate(docs, 1):
if not isinstance(doc, SourceDocument):
logger.info("%s (%d of %d) removing: no source for orphan",
doc.stem, x, len(docs))
continue
sources.append(doc)
return sources
def removeUnknownDoctypes(docs):
sources = list()
for x, doc in enumerate(docs, 1):
if not doc.doctype:
logger.info("%s (%d of %d) removing: unknown doctype",
doc.stem, x, len(docs))
continue
sources.append(doc)
return sources
def createBuildDirectory(d):
if not arg_isdirectory(d):
logger.debug("Creating build directory %s.", d)
try:
os.mkdir(d)
except OSError as e:
logger.critical("Could not make --builddir %s.", d)
return False, e.errno
return True, d
def builddir_setup(config):
'''create --builddir; ensure it shares a filesystem with --pubdir'''
if not config.builddir:
builddir = opj(opd(opa(config.pubdir)), 'ldptool-build')
ready, error = createBuildDirectory(builddir)
if not ready:
return ready, error
config.builddir = builddir
if not sameFilesystem(config.pubdir, config.builddir):
return False, "--pubdir and --builddir must be on the same filesystem"
return True, None
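builddir_setup() defaults --builddir to a sibling of --pubdir named ldptool-build, and then insists both live on one filesystem so that the later os.rename()-based publish step can succeed. The path arithmetic, shown standalone (directory names invented; opa/opd/opj are the os.path shorthands defined near the top of this module):

import os
opa, opd, opj = os.path.abspath, os.path.dirname, os.path.join

pubdir = '/srv/www/ldp/publish'
builddir = opj(opd(opa(pubdir)), 'ldptool-build')
print(builddir)  # -> /srv/www/ldp/ldptool-build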
def create_dtworkingdir(config, docs):
for source in docs:
classname = source.doctype.__name__
source.dtworkingdir = opj(config.builddir, classname)
ready, error = createBuildDirectory(source.dtworkingdir)
if not ready:
return ready, error
return True, None
def post_publish_cleanup(workingdirs):
'''clean up empty directories left under --builddir'''
for d in workingdirs:
if os.path.isdir(d):
try:
logger.debug("removing build dir %s", d)
os.rmdir(d)
except OSError as e:
if e.errno != errno.ENOTEMPTY:
raise
logger.debug("Could not remove %s; files still present", d)
def prepare_docs_script_mode(config, docs):
for source in docs:
if not source.output:
fromsource = OutputDirectory.fromsource
if not config.pubdir:
source.working = fromsource(source.dirname, source)
else:
source.working = fromsource(config.pubdir, source)
else:
source.working = source.output
return True, None
def prepare_docs_build_mode(config, docs):
ready, error = create_dtworkingdir(config, docs)
if not ready:
return ready, error
for source in docs:
d = source.dtworkingdir
source.working = OutputDirectory.fromsource(d, source)
if not source.output:
source.output = OutputDirectory.fromsource(config.pubdir, source)
return True, None
def docbuild(config, docs, **kwargs):
buildsuccess = False
result = list()
for x, source in enumerate(docs, 1):
if not isinstance(source, SourceDocument):
logger.info("%s (%d of %d) skipping, no source for orphan",
source.stem, x, len(docs))
continue
if not source.doctype:
logger.warning("%s (%d of %d) skipping unknown doctype",
source.stem, x, len(docs))
continue
if not source.output:
dirname = os.path.join(config.pubdir, source.stem)
source.output = OutputDirectory.fromsource(config.pubdir, source)
output = source.output
runner = source.doctype(source=source, output=output, config=config)
logger.info("%s (%d of %d) initiating build",
source.stem, x, len(docs))
result.append(runner.generate())
working = source.working
runner = source.doctype(source=source, output=working, config=config)
status = 'progress, %d failures, %d successes'
status = status % (result.count(False), result.count(True),)
logger.info("%s (%d of %d) initiating build [%s]",
source.stem, x, len(docs), status)
result.append(runner.generate(**kwargs))
if all(result):
return 0
for errcode, source in zip(result, docs):
if not errcode:
logger.error("%s build failed", source.stem)
return 1
buildsuccess = True
return buildsuccess, list(zip(result, docs))
def script(config, docs, **kwargs):
if preamble:
print(preamble, file=sys.stdout)
ready, error = prepare_docs_script_mode(config, docs)
if not ready:
return error
file = kwargs.get('file', sys.stdout)
print(preamble, file=file)
buildsuccess, results = docbuild(config, docs, **kwargs)
print(postamble, file=file)
for errcode, source in results:
if not errcode:
logger.error("Could not generate script for %s", source.stem)
if buildsuccess:
return os.EX_OK
else:
return "Script generation failed."
def build(config, docs, **kwargs):
if not config.pubdir:
return ERR_NEEDPUBDIR + "to --build"
ready, error = builddir_setup(config)
if not ready:
return error
ready, error = prepare_docs_build_mode(config, docs)
if not ready:
return error
buildsuccess, results = docbuild(config, docs, **kwargs)
for x, (buildcode, source) in enumerate(results, 1):
if buildcode:
logger.info("success (%d of %d) available in %s",
x, len(results), source.working.dirname)
else:
logger.info("FAILURE (%d of %d) available in %s",
x, len(results), source.working.dirname)
if buildsuccess:
return os.EX_OK
else:
return "Build failed, see logging output in %s." % (config.builddir,)
def publish(config, docs, **kwargs):
config.build = True
result = build(config, docs, **kwargs)
if postamble:
print(postamble, file=sys.stdout)
return result
if result != os.EX_OK:
return result
for x, source in enumerate(docs, 1):
logger.info("Publishing (%d of %d) to %s.",
x, len(docs), source.output.dirname)
# -- swapdirs must raise an error if there are problems
#
swapdirs(source.working.dirname, source.output.dirname)
if os.path.isdir(source.working.dirname):
logger.debug("%s removing old directory %s",
source.stem, source.working.dirname)
shutil.rmtree(source.working.dirname)
workingdirs = list(set([x.dtworkingdir for x in docs]))
workingdirs.append(config.builddir)
post_publish_cleanup(workingdirs)
return os.EX_OK
def getDocumentNames(args):
@ -189,40 +443,7 @@ def extractExplicitDocumentArgs(config, args):
return docs, remainder
def run(argv):
# -- may want to see option parsing, so set --loglevel as
# soon as possible
if '--loglevel' in argv:
levelarg = 1 + argv.index('--loglevel')
level = arg_isloglevel(argv[levelarg])
# -- set the root logger's level
logging.getLogger().setLevel(level)
# -- produce a configuration from CLI, ENV and CFG
#
tag = 'ldptool'
config, args = collectconfiguration(tag, argv)
# -- and reset the loglevel (after reading envar, and config)
logging.getLogger().setLevel(config.loglevel)
logger.debug("Received the following configuration:")
for param, value in sorted(vars(config).items()):
logger.debug(" %s = %r", param, value)
logger.debug(" args: %r", args)
# -- summary does not require any args
if config.summary:
if args:
return "Unknown args received for --summary: " + ' '.join(args)
if not config.pubdir:
return "Option --pubdir (and --sourcedir) required for --summary."
if not config.sourcedir:
return "Option --sourcedir (and --pubdir) required for --summary."
return summary(config)
def collectWorkset(config, args):
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# -- argument handling logic; try to avoid creating an inventory unless it
# is necessary
@ -238,7 +459,7 @@ def run(argv):
if not workset:
need_inventory = True
# -- by default, we only --list, --script or --build on work-to-be-done
# -- We only --list, --script, --build, or --publish on work-to-be-done
# so, if there have been no special arguments at this point, we will
# simply grab the work to be done; see below the line that says:
#
@ -250,23 +471,20 @@ def run(argv):
#
if need_inventory:
if not config.pubdir:
return " --pubdir (and --sourcedir) required for inventory."
return None, ERR_NEEDPUBDIR + "for inventory"
if not config.sourcedir:
return " --sourcedir (and --pubdir) required for inventory."
return None, ERR_NEEDSOURCEDIR + "for inventory"
inv = Inventory(config.pubdir, config.sourcedir)
logger.info("Collected inventory containing %s documents.",
len(inv.all.keys()))
logger.info("Inventory contains %s source and %s output documents.",
len(inv.source.keys()), len(inv.output.keys()))
else:
inv = None
if stati:
oldsize = len(workset)
for status in stati:
collection = getattr(inv, status)
workset.update(collection.values())
growth = len(workset) - oldsize
if growth:
logger.info("Added %d docs, found by status class .", growth)
docs = getDocumentsByStatus(inv.all.values(), stati)
workset.update(docs)
if docs:
logger.info("Added %d docs, found by status class .", len(docs))
unknownargs = None
if remainder:
@ -275,8 +493,7 @@ def run(argv):
logger.info("Added %d docs, found by stem name.", len(docs))
if unknownargs:
return "Unknown arguments (neither stem, file, nor status_class): " \
+ ' '.join(remainder)
return None, ERR_UNKNOWNARGS + ' '.join(unknownargs)
# -- without any arguments (no files, no stems, no status_classes), the
# default behaviour is to either --build, --list or --script any
@ -289,29 +506,83 @@ def run(argv):
# -- and, of course, apply the skipping logic
#
workset, excluded = processSkips(config, workset)
workset, _ = processSkips(config, workset)
if not workset:
logger.info("No work to do.")
return 0
# -- listify the set and sort it
#
docs = sorted(workset, key=lambda x: x.stem.lower())
return docs, None
def handleArgs(config, args):
if config.version:
return show_version(config, *args)
if config.doctypes:
return show_doctypes(config, *args)
if config.statustypes:
return show_statustypes(config, *args)
if config.summary:
return summary(config, *args)
docs, error = collectWorkset(config, args)
if error:
return error
if not docs:
logger.info("No work to do.")
return os.EX_OK
if config.detail:
return detail(config, docs)
# -- build(), script() and publish() will not be able to deal
# with orphans or with unknown source document types
#
docs = removeUnknownDoctypes(removeOrphans(docs))
if config.script:
return script(config, docs, preamble=preamble, postamble=postamble)
return script(config, docs)
if config.publish:
return publish(config, docs)
if not config.build:
logger.info("Assuming --build, since no other action was specified...")
config.build = True
if not config.pubdir:
return " --pubdir required to --build."
return build(config, docs)
if config.build:
return build(config, docs)
return "Fell through handleArgs(); programming error."
def run(argv):
# -- may want to see option parsing, so set --loglevel as
# soon as possible
if '--loglevel' in argv:
levelarg = 1 + argv.index('--loglevel')
level = arg_isloglevel(argv[levelarg])
# -- set the root logger's level
logging.getLogger().setLevel(level)
# -- produce a configuration from CLI, ENV and CFG
#
tag = 'ldptool'
config, args = collectconfiguration(tag, argv)
# -- and reset the loglevel (after reading envar, and config)
#
logging.getLogger().setLevel(config.loglevel)
logger.debug("Received the following configuration:")
for param, value in sorted(vars(config).items()):
logger.debug(" %s = %r", param, value)
logger.debug(" args: %r", args)
return handleArgs(config, args)
def main():


@ -1,12 +1,14 @@
#! /usr/bin/python
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import copy
import logging
from tldp.utils import max_mtime, mtime_gt
from collections import OrderedDict
from tldp.sources import SourceCollection
from tldp.outputs import OutputCollection
@ -16,28 +18,29 @@ logger = logging.getLogger(__name__)
# -- any individual document (source or output) will have a status
# from the following list of status_types
#
status_types = [
'source',
'output',
'published',
'new',
'orphan',
'broken',
'stale',
]
stypes = OrderedDict()
stypes['source'] = 'found in source repository'
stypes['output'] = 'found in output repository'
stypes['published'] = 'matching stem in source/output; doc is up to date'
stypes['stale'] = 'matching stem in source/output; but source is newer'
stypes['orphan'] = 'stem located in output, but no source found (i.e. old?)'
stypes['broken'] = 'output is missing an expected output format (e.g. PDF)'
stypes['new'] = 'stem located in source, but missing in output; unpublished'
status_types = stypes.keys()
# -- the user probably doesn't usually care (too much) about listing
# every single published document and source document, but is probably
# mostly interested in specific documents grouped by status; so the
# status_classes are just sets of status_types
#
status_classes = dict(zip(status_types, [[x] for x in status_types]))
status_classes = OrderedDict(zip(status_types, [[x] for x in status_types]))
status_classes['outputs'] = ['output']
status_classes['sources'] = ['source']
status_classes['problems'] = ['orphan', 'broken', 'stale']
status_classes['work'] = ['new', 'orphan', 'broken', 'stale']
status_classes['orphans'] = ['orphan']
status_classes['orphaned'] = ['orphan']
status_classes['problems'] = ['orphan', 'broken', 'stale']
status_classes['work'] = ['new', 'orphan', 'broken', 'stale']
status_classes['all'] = ['published', 'new', 'orphan', 'broken', 'stale']
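In other words, stypes preserves the definition order of the basic statuses, and status_classes layers plural and synonym spellings on top of them; for example (values copied from the assignments above):

>>> list(stypes)[:3]
['source', 'output', 'published']
>>> status_classes['problems']
['orphan', 'broken', 'stale']
>>> status_classes['orphaned']
['orphan']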
@ -70,8 +73,7 @@ class Inventory(object):
len(self.orphan),
len(self.new),
len(self.stale),
len(self.broken),
)
len(self.broken),)
def __init__(self, pubdir, sourcedirs):
'''construct an Inventory
@ -120,23 +122,7 @@ class Inventory(object):
self.published = s
logger.debug("Identified %d published documents.", len(self.published))
# -- stale identification
#
self.stale = SourceCollection()
for stem, sdoc in s.items():
odoc = sdoc.output
mtime = max_mtime(odoc.statinfo)
fset = mtime_gt(mtime, sdoc.statinfo)
if fset:
sdoc.newer = fset
for f in fset:
logger.debug("%s found updated source file %s", stem, f)
odoc.status = sdoc.status = 'stale'
self.stale[stem] = sdoc
logger.debug("Identified %d stale documents: %r.", len(self.stale),
self.stale.keys())
# -- stale identification
# -- broken identification
#
self.broken = SourceCollection()
for stem, sdoc in s.items():
@ -146,6 +132,31 @@ class Inventory(object):
logger.debug("Identified %d broken documents: %r.", len(self.broken),
self.broken.keys())
# -- stale identification
#
self.stale = SourceCollection()
for stem, sdoc in s.items():
odoc = sdoc.output
omd5, smd5 = odoc.md5sums, sdoc.md5sums
if omd5 != smd5:
logger.debug("%s differing MD5 sets %r %r", stem, smd5, omd5)
changed = set()
for gone in set(omd5.keys()).difference(smd5.keys()):
logger.debug("%s gone %s", stem, gone)
changed.add(('gone', gone))
for new in set(smd5.keys()).difference(omd5.keys()):
changed.add(('new', new))
for sfn in set(smd5.keys()).intersection(omd5.keys()):
if smd5[sfn] != omd5[sfn]:
changed.add(('changed', sfn))
for why, sfn in changed:
logger.debug("%s differing source %s (%s)", stem, sfn, why)
odoc.status = sdoc.status = 'stale'
sdoc.differing = changed
self.stale[stem] = sdoc
logger.debug("Identified %d stale documents: %r.", len(self.stale),
self.stale.keys())
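Staleness is now decided by comparing per-file MD5 dictionaries instead of mtimes; the loop above classifies every difference as gone, new, or changed. The same set arithmetic, standalone (file names and hash values invented):

source_md5 = {'HOWTO.xml': 'aaa1', 'images/fig1.png': 'bbb2'}
output_md5 = {'HOWTO.xml': 'aaa0', 'images/old.png': 'ccc3'}

changed = set()
for gone in set(output_md5) - set(source_md5):
    changed.add(('gone', gone))
for new in set(source_md5) - set(output_md5):
    changed.add(('new', new))
for fname in set(source_md5) & set(output_md5):
    if source_md5[fname] != output_md5[fname]:
        changed.add(('changed', fname))

print(sorted(changed))
# [('changed', 'HOWTO.xml'), ('gone', 'images/old.png'), ('new', 'images/fig1.png')]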
def getByStatusClass(self, status_class):
desired = status_classes.get(status_class, None)
assert isinstance(desired, list)


@ -1,12 +1,19 @@
#! /usr/bin/python
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
import collections
import sys
if sys.version_info[:2] >= (3, 8): # pragma: no cover
from collections.abc import MutableMapping
else: # pragma: no cover
from collections import MutableMapping
class LDPDocumentCollection(collections.MutableMapping):
class LDPDocumentCollection(MutableMapping):
'''a dict-like container for DocumentCollection objects
Intended to be subclassed.


@ -1,16 +1,19 @@
#! /usr/bin/python
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import os
import sys
import errno
import shutil
import codecs
import logging
from tldp.ldpcollection import LDPDocumentCollection
from tldp.utils import logdir, statfiles
from tldp.utils import logdir
logger = logging.getLogger(__name__)
@ -31,6 +34,10 @@ class OutputNamingConvention(object):
self.dirname = dirname
self.stem = stem
@property
def MD5SUMS(self):
return os.path.join(self.dirname, '.LDP-source-MD5SUMS')
@property
def name_txt(self):
return os.path.join(self.dirname, self.stem + '.txt')
@ -70,7 +77,7 @@ class OutputNamingConvention(object):
for prop in self.expected:
name = getattr(self, prop, None)
assert name is not None
present.append(os.path.isfile(name))
present.append(os.path.exists(name))
return all(present)
@property
@ -84,6 +91,21 @@ class OutputNamingConvention(object):
missing.add(name)
return missing
@property
def md5sums(self):
d = dict()
try:
with codecs.open(self.MD5SUMS, encoding='utf-8') as f:
for line in f:
if line.startswith('#'):
continue
hashval, fname = line.strip().split()
d[fname] = hashval
except IOError as e:
if e.errno != errno.ENOENT:
raise
return d
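The MD5SUMS file parsed here is the one written by writemd5sums() in tldp.utils: one "<md5> <relative filename>" pair per line, with header lines skipped via the leading '#' (assuming the caller passes a '#'-prefixed header, which is what this reader ignores). A tiny parsing illustration with invented contents:

sample = '\n'.join([
    '# -- MD5SUMS of the source files (illustrative header)',
    'd41d8cd98f00b204e9800998ecf8427e Sample-HOWTO.xml',
    '900150983cd24fb0d6963f7d28e17f72 images/fig1.png',
])

d = dict()
for line in sample.splitlines():
    if line.startswith('#'):
        continue
    hashval, fname = line.strip().split()
    d[fname] = hashval

print(d['Sample-HOWTO.xml'])  # d41d8cd98f00b204e9800998ecf8427e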
class OutputDirectory(OutputNamingConvention):
'''A class providing a container for each set of output documents
@ -120,45 +142,16 @@ class OutputDirectory(OutputNamingConvention):
if not os.path.isdir(parent):
logger.critical("Missing output collection directory %s.", parent)
raise IOError(errno.ENOENT, os.strerror(errno.ENOENT), parent)
self.statinfo = statfiles(self.dirname, relative=self.dirname)
self.status = 'output'
self.source = source
self.logdir = os.path.join(self.dirname, logdir)
def clean(self):
'''remove the output directory for this document
This is done as a matter of course when the output documents must be
regenerated. Better to start fresh.
'''
if os.path.isdir(self.dirname):
logger.debug("%s removing dir %s.", self.stem, self.dirname)
shutil.rmtree(self.dirname)
def hook_prebuild(self):
self.clean()
for d in (self.dirname, self.logdir):
if not os.path.isdir(d):
logger.debug("%s creating dir %s.", self.stem, d)
os.mkdir(d)
logger.info("%s ready to build %s.", self.stem, self.dirname)
return True
def hook_build_failure(self):
logger.error("%s FAILURE, see logs in %s", self.stem, self.logdir)
return True
def hook_build_success(self):
logger.info("%s build SUCCESS %s.", self.stem, self.dirname)
logger.debug("%s removing logs %s)", self.stem, self.logdir)
if os.path.isdir(self.logdir):
shutil.rmtree(logdir)
return True
def detail(self, widths, verbose, file=sys.stdout):
template = '{s.status:{w.status}} {s.stem:{w.stem}}'
outstr = template.format(s=self, w=widths)
print(outstr)
template = ' '.join(('{s.status:{w.status}}',
'{u:{w.doctype}}',
'{s.stem:{w.stem}}'))
outstr = template.format(s=self, w=widths, u="<unknown>")
print(outstr, file=file)
if verbose:
print(' missing source', file=file)


@ -1,7 +1,10 @@
#! /usr/bin/python
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import os
import sys
@ -10,11 +13,13 @@ import logging
from tldp.ldpcollection import LDPDocumentCollection
from tldp.utils import statfiles, stem_and_ext
from tldp.utils import md5files, stem_and_ext
from tldp.typeguesser import guess, knownextensions
logger = logging.getLogger(__name__)
IGNORABLE_SOURCE = ('index.sgml')
def scansourcedirs(dirnames):
'''return a dict() of all SourceDocuments discovered in dirnames
@ -81,6 +86,8 @@ def scansourcedirs(dirnames):
def arg_issourcedoc(filename):
filename = os.path.abspath(filename)
if os.path.isfile(filename):
if os.path.basename(filename) in IGNORABLE_SOURCE:
return None
return filename
elif os.path.isdir(filename):
return sourcedoc_fromdir(filename)
@ -184,25 +191,34 @@ class SourceDocument(object):
self.doctype = guess(self.filename)
self.status = 'source'
self.output = None
self.newer = set()
self.working = None
self.differing = set()
self.dirname, self.basename = os.path.split(self.filename)
self.stem, self.ext = stem_and_ext(self.basename)
parentbase = os.path.basename(self.dirname)
logger.debug("%s found source %s", self.stem, self.filename)
if parentbase == self.stem:
self.statinfo = statfiles(self.dirname, relative=self.dirname)
parentdir = os.path.dirname(self.dirname)
self.md5sums = md5files(self.dirname, relative=parentdir)
else:
self.statinfo = statfiles(self.filename, relative=self.dirname)
self.md5sums = md5files(self.filename, relative=self.dirname)
def detail(self, widths, verbose, file=sys.stdout):
'''produce a small tabular output about the document'''
template = '{s.status:{w.status}} {s.stem:{w.stem}}'
template = ' '.join(('{s.status:{w.status}}',
'{s.doctype.__name__:{w.doctype}}',
'{s.stem:{w.stem}}'))
outstr = template.format(s=self, w=widths)
print(outstr, file=file)
if verbose:
for f in sorted(self.newer):
print(' doctype {}'.format(self.doctype), file=file)
if self.output:
print(' output dir {}'.format(self.output.dirname),
file=file)
print(' source file {}'.format(self.filename), file=file)
for why, f in sorted(self.differing):
fname = os.path.join(self.dirname, f)
print(' newer source {}'.format(fname), file=file)
print(' {:>7} source {}'.format(why, fname), file=file)
if self.output:
for f in sorted(self.output.missing):
print(' missing output {}'.format(f), file=file)


@ -1,14 +1,15 @@
#! /usr/bin/python
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
import os
import codecs
import inspect
import logging
from tldp.utils import makefh
import tldp.doctypes
logger = logging.getLogger(__name__)
@ -32,10 +33,10 @@ def getDoctypeClasses():
return getDoctypeMembers(inspect.isclass)
def guess(thing):
def guess(fname):
'''return a tldp.doctype class which is a best guess for document type
thing: Could be a filename or an open file.
:param fname: A filename.
The guess function will try to guess the document type (doctype) from the
file extension. If extension matching produces multiple possible doctype
@ -55,11 +56,10 @@ def guess(thing):
* It could/should use heuristics or something richer than signatures.
'''
try:
f = makefh(thing)
except TypeError:
stem, ext = os.path.splitext(fname)
except (AttributeError, TypeError):
return None
stem, ext = os.path.splitext(f.name)
if not ext:
logger.debug("%s no file extension, skipping %s.", stem, ext)
return None
@ -77,19 +77,27 @@ def guess(thing):
# -- for this extension, multiple document types, probably SGML, XML
#
logger.debug("%s multiple possible doctypes for extension %s on file %s.",
stem, ext, f.name)
stem, ext, fname)
for doctype in possible:
logger.debug("%s extension %s could be %s.", stem, ext, doctype)
try:
with codecs.open(fname, encoding='utf-8') as f:
buf = f.read(1024)
except UnicodeDecodeError:
# -- a wee bit ugly, but many SGML docs used iso-8859-1, so fall back
with codecs.open(fname, encoding='iso-8859-1') as f:
buf = f.read(1024)
guesses = list()
for doctype in possible:
sindex = doctype.signatureLocation(f)
sindex = doctype.signatureLocation(buf, fname)
if sindex is not None:
guesses.append((sindex, doctype))
if not guesses:
logger.warning("%s no matching signature found for %s.",
stem, f.name)
stem, fname)
return None
if len(guesses) == 1:
_, doctype = guesses.pop()
@ -100,10 +108,10 @@ def guess(thing):
# first signature in the file as the more likely document type.
#
guesses.sort()
logger.info("%s multiple doctype guesses for file %s", stem, f.name)
logger.info("%s multiple doctype guesses for file %s", stem, fname)
for sindex, doctype in guesses:
logger.info("%s could be %s (sig at pos %s)", stem, doctype, sindex)
logger.info("%s going to guess %s for %s", stem, doctype, f.name)
logger.info("%s going to guess %s for %s", stem, doctype, fname)
_, doctype = guesses.pop(0)
return doctype
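guess() now accepts only a filename; non-string arguments return None instead of raising, and ambiguous extensions are resolved by reading the first 1024 bytes and matching doctype signatures. A small usage sketch (the sample document and its expected classification are illustrative):

import os
import tempfile
from tldp.typeguesser import guess

# non-string input yields None rather than a TypeError from os.path.*
assert guess(None) is None
assert guess(0) is None

# an extension shared by several doctypes (.sgml could be Linuxdoc or
# DocBook SGML) is settled by the signature found near the top of the file
tmp = tempfile.mkdtemp()
fname = os.path.join(tmp, 'Sample-HOWTO.sgml')
with open(fname, 'w') as f:
    f.write('<!doctype linuxdoc system>\n<article>...</article>\n')
print(guess(fname))  # expected: the Linuxdoc doctype class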


@ -1,20 +1,28 @@
#! /usr/bin/python
# -*- coding: utf8 -*-
#
# Copyright (c) 2016 Linux Documentation Project
from __future__ import absolute_import, division, print_function
from __future__ import unicode_literals
import os
import io
import time
import errno
import operator
import codecs
import hashlib
import subprocess
import functools
from functools import wraps
from tempfile import mkstemp
from tempfile import mkstemp, mkdtemp
import logging
logger = logging.getLogger(__name__)
opa = os.path.abspath
opb = os.path.basename
opd = os.path.dirname
opj = os.path.join
logdir = 'tldp-document-build-logs'
@ -40,7 +48,7 @@ def firstfoundfile(locations):
return None
def arg_isloglevel(l):
def arg_isloglevel(l, defaultlevel=logging.ERROR):
try:
level = int(l)
return level
@ -48,10 +56,16 @@ def arg_isloglevel(l):
pass
level = getattr(logging, l.upper(), None)
if not level:
level = logging.ERROR
level = defaultlevel
return level
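arg_isloglevel() accepts either a numeric level or a level name, and now takes its fallback as a parameter; for example:

>>> arg_isloglevel('10')
10
>>> arg_isloglevel('info') == logging.INFO
True
>>> arg_isloglevel('bogus', defaultlevel=logging.WARNING) == logging.WARNING
True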
def arg_isstr(s):
if isstr(s):
return s
return None
def arg_isreadablefile(f):
if isreadablefile(f):
return f
@ -70,11 +84,50 @@ def arg_isexecutable(f):
return None
def sameFilesystem(d0, d1):
return os.stat(d0).st_dev == os.stat(d1).st_dev
def stem_and_ext(name):
'''return (stem, ext) for any relative or absolute filename'''
return os.path.splitext(os.path.basename(os.path.normpath(name)))
def swapdirs(a, b):
'''use os.rename() to make "a" become "b"'''
if not os.path.isdir(a):
raise OSError(errno.ENOENT, os.strerror(errno.ENOENT), a)
tname = None
if os.path.exists(b):
tdir = mkdtemp(prefix='swapdirs-', dir=opd(opa(a)))
logger.debug("Created tempdir %s.", tdir)
tname = opj(tdir, opb(b))
logger.debug("About to rename %s to %s.", b, tname)
os.rename(b, tname)
logger.debug("About to rename %s to %s.", a, b)
os.rename(a, b)
if tname:
logger.debug("About to rename %s to %s.", tname, a)
os.rename(tname, a)
logger.debug("About to remove %s.", tdir)
os.rmdir(tdir)
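swapdirs() is what publish() uses to exchange the freshly built tree with the live one: each os.rename() is atomic on a single filesystem, which is why builddir_setup() enforces the sameFilesystem() check. A throwaway usage sketch (directory names invented; assumes swapdirs is imported from tldp.utils):

import os
import tempfile
from tldp.utils import swapdirs

base = tempfile.mkdtemp(prefix='swapdirs-demo-')
build = os.path.join(base, 'build')
live = os.path.join(base, 'publish')
os.mkdir(build)
os.mkdir(live)
open(os.path.join(build, 'index.html'), 'w').close()

swapdirs(build, live)       # "build" becomes "publish"
print(os.listdir(live))     # ['index.html']
print(os.listdir(build))    # [] -- the old publish tree ends up here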
def logfilecontents(logmethod, prefix, fname):
'''log all lines of a file with a prefix '''
with codecs.open(fname, encoding='utf-8') as f:
for line in f:
logmethod("%s: %s", prefix, line.rstrip())
def conditionallogging(result, prefix, fname):
if logger.isEnabledFor(logging.DEBUG):
logfilecontents(logger.debug, prefix, fname) # -- always
elif logger.isEnabledFor(logging.INFO):
if result != 0:
logfilecontents(logger.info, prefix, fname) # -- error
def execute(cmd, stdin=None, stdout=None, stderr=None,
logdir=None, env=os.environ):
'''(yet another) wrapper around subprocess.Popen()
@ -154,16 +207,10 @@ def execute(cmd, stdin=None, stdout=None, stderr=None,
logger.error("Find STDOUT/STDERR in %s/%s*", logdir, prefix)
if isinstance(stdout, int) and stdoutname:
os.close(stdout)
if result != 0:
with open(stdoutname) as f:
for line in f:
logger.info("STDOUT: %s", line.rstrip())
conditionallogging(result, 'STDOUT', stdoutname)
if isinstance(stderr, int) and stderrname:
os.close(stderr)
if result != 0:
with open(stderrname) as f:
for line in f:
logger.info("STDERR: %s", line.rstrip())
conditionallogging(result, 'STDERR', stderrname)
return result
@ -177,6 +224,16 @@ def isreadablefile(f):
return os.path.isfile(f) and os.access(f, os.R_OK)
def isstr(s):
'''True if argument is stringy (unicode or string)'''
try:
unicode
stringy = (str, unicode)
except NameError:
stringy = (str,) # -- python3
return isinstance(s, stringy)
def which(program):
'''return None or the full path to an executable (respecting $PATH)
http://stackoverflow.com/questions/377017/test-if-executable-exists-in-python/377028#377028
@ -193,16 +250,25 @@ http://stackoverflow.com/questions/377017/test-if-executable-exists-in-python/37
return None
def makefh(thing):
'''return a file object; given an existing filename name or file object'''
if isinstance(thing, io.IOBase):
f = thing
elif isinstance(thing, str) and os.path.isfile(thing):
f = open(thing)
else:
raise TypeError("Cannot make file from %s of %r" %
(type(thing), thing,))
return f
def writemd5sums(fname, md5s, header=None):
'''write an MD5SUM file from [(filename, MD5), ...]'''
with codecs.open(fname, 'w', encoding='utf-8') as file:
if header:
print(header, file=file)
for fname, hashval in sorted(md5s.items()):
print(hashval + ' ' + fname, file=file)
def md5file(name):
'''return MD5 hash for a single file name'''
with open(name, 'rb') as f:
bs = f.read()
md5 = hashlib.md5(bs).hexdigest()
try:
md5 = unicode(md5)
except NameError:
pass # -- python3
return md5
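md5file() and writemd5sums() are the pair behind the .LDP-source-MD5SUMS bookkeeping used by the inventory and output classes. A short round-trip sketch (file contents invented; assumes both helpers are imported from tldp.utils):

import os
import codecs
import tempfile
from tldp.utils import md5file, writemd5sums

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, 'Sample-HOWTO.xml')
with codecs.open(src, 'w', encoding='utf-8') as f:
    f.write('<article/>\n')

sums = {'Sample-HOWTO.xml': md5file(src)}
writemd5sums(os.path.join(tmp, '.LDP-source-MD5SUMS'), sums,
             header='# -- MD5SUMS of source files (illustration)')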
def statfile(name):
@ -216,7 +282,24 @@ def statfile(name):
return st
def md5files(name, relative=None):
'''get all of the MD5s for files from here downtree'''
return fileinfo(name, relative=relative, func=md5file)
def statfiles(name, relative=None):
'''
>>> statfiles('./docs/x509').keys()
['./docs/x509/tutorial.rst', './docs/x509/reference.rst', './docs/x509/index.rst']
>>> statfiles('./docs/x509', relative='./').keys()
['docs/x509/reference.rst', 'docs/x509/tutorial.rst', 'docs/x509/index.rst']
>>> statfiles('./docs/x509', relative='./docs/x509/').keys()
['index.rst', 'tutorial.rst', 'reference.rst']
'''
return fileinfo(name, relative=relative, func=statfile)
def fileinfo(name, relative=None, func=statfile):
'''return a dict() with keys being filenames and posix.stat_result values
Required:
@ -239,27 +322,18 @@ def statfiles(name, relative=None):
least we can try to rely on them as best we can--mostly, by just
excluding any files (in the output dict()) which did not return a valid
posix.stat_result.
Examples:
>>> statfiles('./docs/x509').keys()
['./docs/x509/tutorial.rst', './docs/x509/reference.rst', './docs/x509/index.rst']
>>> statfiles('./docs/x509', relative='./').keys()
['docs/x509/reference.rst', 'docs/x509/tutorial.rst', 'docs/x509/index.rst']
>>> statfiles('./docs/x509', relative='./docs/x509/').keys()
['index.rst', 'tutorial.rst', 'reference.rst']
'''
statinfo = dict()
info = dict()
if not os.path.exists(name):
return statinfo
return info
if not os.path.isdir(name):
if relative:
relpath = os.path.relpath(name, start=relative)
else:
relpath = name
statinfo[relpath] = statfile(name)
if statinfo[relpath] is None:
del statinfo[relpath]
info[relpath] = func(name)
if info[relpath] is None:
del info[relpath]
else:
for root, dirs, files in os.walk(name):
inodes = list()
@ -267,48 +341,16 @@ def statfiles(name, relative=None):
inodes.extend(files)
for x in inodes:
foundpath = os.path.join(root, x)
if os.path.isdir(foundpath):
continue
if relative:
relpath = os.path.relpath(foundpath, start=relative)
else:
relpath = foundpath
statinfo[relpath] = statfile(foundpath)
if statinfo[relpath] is None:
del statinfo[relpath]
return statinfo
def att_statinfo(statinfo, attr='st_mtime', func=max):
if statinfo:
return func([getattr(v, attr) for v in statinfo.values()])
else:
return 0
max_size = functools.partial(att_statinfo, attr='st_size', func=max)
min_size = functools.partial(att_statinfo, attr='st_size', func=min)
max_mtime = functools.partial(att_statinfo, attr='st_mtime', func=max)
min_mtime = functools.partial(att_statinfo, attr='st_mtime', func=min)
max_ctime = functools.partial(att_statinfo, attr='st_ctime', func=max)
min_ctime = functools.partial(att_statinfo, attr='st_ctime', func=min)
max_atime = functools.partial(att_statinfo, attr='st_atime', func=max)
min_atime = functools.partial(att_statinfo, attr='st_atime', func=min)
def sieve(operand, statinfo, attr='st_mtime', func=operator.gt):
result = set()
for fname, stbuf in statinfo.items():
if func(getattr(stbuf, attr), operand):
result.add(fname)
return result
mtime_gt = functools.partial(sieve, attr='st_mtime', func=operator.gt)
mtime_lt = functools.partial(sieve, attr='st_mtime', func=operator.lt)
size_gt = functools.partial(sieve, attr='st_size', func=operator.gt)
size_lt = functools.partial(sieve, attr='st_size', func=operator.lt)
info[relpath] = func(foundpath)
if info[relpath] is None:
del info[relpath]
return info
#
# -- end of file

tox.ini (new file, +13 lines)

@ -0,0 +1,13 @@
# Tox (http://tox.testrun.org/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.
[tox]
envlist = py39, py310
skip_missing_interpreters = True
[testenv]
commands = {envpython} setup.py test
deps =
networkx