Import Gnome document meta-data from their OMF files.

Split document_file up into document_file and sourcefile.
File reports can now be disabled for non-CVS files.
Enabled autocommit in the database for performance.
Cleaned up the docfiles HTML table to fit in a reasonable browser width.
Various user-interface improvements.
Improved meta-data gathering and management.
david 2002-07-25 10:56:40 +00:00
parent 7f84dea824
commit f408059b5f
26 changed files with 571 additions and 448 deletions

View File

@ -24,8 +24,6 @@ Display the mode information in octal, and as a string so it's a bit more readable
The last_update field should mean the date of the most recent file that was updated,
and add a new field for last_published.
Lintadas must update format_code in document to match the top file.
Add ScrollKeeper categories, per Eric Baudais. We need to keep existing categories
so we can import LDPDB data. Maybe the way to go is to provide for more than
one categorization scheme? LDP, OMF, Trove?
@ -37,6 +35,21 @@ Update db2omf to support ScrollKeeper extensions and work with Gnome.
Nonspecific:
Allow running of file reports on nonlocal files by using the mirrored copy.
Use shared memory to cache Lampadas data so that a) it is not reloaded by
every Apache process, b) everyone sees up-to-date data, and c) we avoid
collisions, where one user saves data and another then changes it.
To completely avoid overwriting data, look at each record's timestamp
before saving it. If the timestamp in the database has changed since we loaded
the record, abort the save and inform the user. This will require timestamps on
every file, which is already called for elsewhere on this list.
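One way the timestamp check described above might look (a sketch only, in the
style of the existing data layer; it assumes a "modified" timestamp column on
the document table, which this list says still has to be added, and reuses the
existing db, time2str and Doc.save helpers):

def save_if_unchanged(doc, loaded_modified):
    # Re-read the record's timestamp as it stands in the database right now.
    sql = 'SELECT modified FROM document WHERE doc_id=' + str(doc.id)
    row = db.select(sql).fetchone()
    if row and time2str(row[0]) != loaded_modified:
        # Someone else saved after we loaded the record; abort and tell the user.
        return 0
    doc.save()
    return 1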
Translation report. Show a list of documents that have translations.
Indicate the master and list the children.
Clean up menus. We no longer need strange constructs to test out functionality.
It's been working well for over a week now.
@ -87,6 +100,13 @@ them. The admin has to have a way to reset the server if things lock up.
UNSORTED
========
Rename file_error to sourcefile_error.
Prevent the same file from being assigned to a doc twice, both in the database
and in the object layer.
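The database side of this is already covered by the (doc_id, filename) primary
key on document_file; a sketch of the matching guard in the object layer,
reusing the existing DocFiles collection (illustrative only), might be:

def add_docfile_once(doc, filename, top):
    # Attach the file unless the document already has it, in which case
    # hand back the existing DocFile instead of inserting it again.
    if filename in doc.files.keys():
        return doc.files[filename]
    return doc.files.add(doc.id, filename, top)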
Don't call sql for every count() method.
Add the ability to refer to local documents using file:// URLs. This
lets us publish documents outside the CVS tree.
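As a rough illustration (the path below is made up), a file:// URL would map
straight onto a local path outside the CVS root, much as the calc_local()
logic elsewhere in this commit does:

url = 'file:///home/ldp/local-docs/Sample-HOWTO.sgml'   # hypothetical example
if url[:7] == 'file://':
    localname = url[7:]    # '/home/ldp/local-docs/Sample-HOWTO.sgml', not under CVS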
@ -204,6 +224,12 @@ the stabilization process:
DONE:
Import Gnome document meta-data from their omf files.
-- Done 2002-07-25 DCM.
Lintadas must update format_code in document to match the top file.
-- Done 2002-07-24 DCM.
Add search form using doctable API.
-- Done 2002-07-16 DCM.
It could still be extended.

View File

@ -28,19 +28,19 @@ import re
import string
# Set to 1 to get debugging messages, 2 to get more
DEBUG = 1
DEBUG = 0
# These tags don't contain text, so we just mark them
# in the tag stack, but don't extract anything from them.
IGNORE_TAGS = ('resource',
'subject',
'person',
'#text')
'person')
class Person:
def __init__(self):
self.firstname = ''
self.middlename = ''
self.lastname = ''
self.email =''
@ -50,11 +50,21 @@ class OMF:
self.title = ''
self.categories = []
self.creators = []
self.maintainers = []
self.contributors = []
self.mime = ''
self.language = ''
self.url = ''
self.description = ''
self.type = ''
self.date= ''
self.version= ''
self.version_date= ''
self.version_description= ''
self.seriesid=''
self.license = ''
self.license_version = ''
self.copyright_holder = ''
self.tags = []
@ -75,6 +85,10 @@ class OMF:
p = re.compile('<\/omf>.*')
temp = p.sub('', temp)
# Throw away comments
p = re.compile('<!--.*?-->')
temp = p.sub('', temp)
self.parse_tags(temp)
def parse_tags(self, xml):
@ -101,16 +115,52 @@ class OMF:
creator = Person()
self.creators = self.creators + [creator]
self.parse_tags(contents)
elif tag=='maintainer':
maintainer = Person()
self.maintainers = self.maintainers + [maintainer]
self.parse_tags(contents)
elif tag=='contributor':
contributor = Person()
self.contributors = self.contributors + [contributor]
self.parse_tags(contents)
elif tag=='firstname':
self.creators[-1].firstname = contents
person = self.get_last_person()
person.firstname = contents
elif tag=='lastname':
self.creators[-1].lastname = contents
person = self.get_last_person()
person.lastname = contents
elif tag=='email':
self.creators[-1].email = contents
person = self.get_last_person()
person.email = contents
elif tag=='description':
self.description = contents
elif tag=='type':
self.type = contents
elif tag=='date':
self.date = contents
elif tag=='version':
self.version = elements['identifier']
self.version_date = elements['date']
self.version_description = elements['description']
elif tag=='relation':
self.seriesid = elements['seriesid']
elif tag=='rights':
self.license = elements['type']
self.license_version = elements['license.version']
self.copyright_holder = elements['holder']
elif tag=='#text':
if self.tags[-2]=='creator':
person = self.creators[-1]
person.firstname, person.middlename, person.lastname, person.email = self.parse_person(contents)
elif self.tags[-2]=='maintainer':
person = self.maintainers[-1]
person.firstname, person.middlename, person.lastname, person.email = self.parse_person(contents)
elif self.tags[-2]=='contributor':
person = self.contributors[-1]
person.firstname, person.middlename, person.lastname, person.email = self.parse_person(contents)
else:
print 'ERROR: this belongs to what? ' + contents
sys.exit(1)
else:
print 'ERROR: cannot handle tag %s' % tag
sys.exit(1)
@ -122,9 +172,12 @@ class OMF:
self.tags.pop()
def parse_next_tag(self, xml):
p = re.compile('<(\w+)\s*.*?>')
m = p.match(xml)
tag = m.group(1)
if xml[0]=='<':
p = re.compile('<(\w+)\s*.*?>')
m = p.match(xml)
tag = m.group(1)
else:
return '#text', '', xml, ''
p = re.compile('<' + tag + '\s*(.*?)>(.*?)<\/' + tag + '>(.*)')
m = p.match(xml)
@ -142,7 +195,7 @@ class OMF:
contents = ''
outside = m.group(2)
else:
print 'ERROR: ' + xml
print 'ERROR: cannot find either a full or a shortcut element in ' + xml
sys.exit(1)
# lowercase tag once we're done matching it.
@ -157,39 +210,125 @@ class OMF:
return tag, elements, contents, outside
def parse_elements(self, xml):
elements = {}
name, value, remainder = self.parse_next_element(xml)
if DEBUG >= 2:
print 'ELEMENTS: '
print 'name: ' + name
print 'value: ' + value
print 'remainder: ' + remainder
elements = {}
elements[name] = value
if remainder > '':
self.parse_elements(remainder)
newelements = self.parse_elements(remainder)
keys = newelements.keys()
for key in keys:
elements[key] = newelements[key]
return elements
def parse_next_element(self, xml):
if xml=='':
return '', '', ''
p = re.compile('(\w+)="(.*?)"\s*(.*)')
p = re.compile('([\w|\.]+)="(.*?)"\s*(.*)')
m = p.match(xml)
name = m.group(1)
value = m.group(2)
remainder = m.group(3)
return name, value, remainder
def parse_person(self, xml):
"""
Sometimes a <creator>, <maintainer> or <contributor> tag contains only text.
This parses it to extract firstname, middlename, lastname and email.
Example text: kevin@kevindumpscore.com (Kevin Conder)
"""
if DEBUG >= 2:
print 'NAME PARSING: ' + xml
# Find an email address
p = re.compile('(.*?)([^\s]+@[^\s]+)(.*)')
m = p.match(xml)
if m:
email = m.group(2)
name = trim(m.group(1) + m.group(3))
if DEBUG >= 2:
print 'name: ' + name
else:
email = ''
name = xml
# Discard parentheses around name
name = name.replace('(','')
name = name.replace(')','')
spaces = name.count(' ')
if spaces==0:
firstname = name
middlename = ''
lastname = ''
elif spaces==1:
firstname, lastname = name.split()
middlename = ''
elif spaces==2:
firstname, middlename, lastname = name.split()
else:
print 'ERROR: too many names ' + str(spaces + 1) + ' in ' + name
sys.exit(1)
if DEBUG >= 2:
print 'firstname: ' + firstname
print 'middlename: ' + middlename
print 'lastname: ' + lastname
print 'email: ' + email
return firstname, middlename, lastname, email
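# Example (illustrative): parse_person('kevin@kevindumpscore.com (Kevin Conder)')
# should return ('Kevin', '', 'Conder', 'kevin@kevindumpscore.com').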
def get_last_person(self):
last_person_tag = ''
for tag in self.tags:
if tag=='creator' or tag=='maintainer' or tag=='contributor':
last_person_tag = tag
if last_person_tag=='creator':
person = self.creators[-1]
elif last_person_tag=='maintainer':
person = self.maintainers[-1]
elif last_person_tag=='contributor':
person = self.contributors[-1]
else:
print 'ERROR: this belongs to who? ' + self.tags[-1]
sys.exit(1)
return person
def print_debug(self):
print 'title: %s' % self.title
print 'title: %s' % self.title
for category in self.categories:
print 'category: %s' % category
print 'category: %s' % category
for creator in self.creators:
print 'creator: %s, %s, %s' % (creator.firstname, creator.lastname, creator.email)
print 'mime: %s' % self.mime
print 'language: %s' % self.language
print 'url: %s' % self.url
print 'description: %s' % self.description
print 'type: %s' % self.type
print 'creator: %s, %s, %s' % (creator.firstname, creator.lastname, creator.email)
for maintainer in self.maintainers:
print 'maintainer: %s, %s, %s' % (maintainer.firstname, maintainer.lastname, maintainer.email)
for contributor in self.contributors:
print 'contributor: %s, %s, %s' % (contributor.firstname, contributor.lastname, contributor.email)
print 'mime: %s' % self.mime
print 'language: %s' % self.language
print 'url: %s' % self.url
print 'description: %s' % self.description
print 'type: %s' % self.type
print 'date: %s' % self.date
print 'version: %s' % self.version
print 'version_date: %s' % self.version_date
print 'version_description: %s' % self.version_description
print 'seriesid: %s' % self.seriesid
print 'license: %s' % self.license
print 'license_version: %s' % self.license_version
print 'copyright_holder: %s' % self.copyright_holder
def callback(arg, directory, files):
for file in files:
if fnmatch.fnmatch(file, arg):
if DEBUG >= 1:
print '===================================='
print 'Processing %s/%s' % (directory, file)
fh = open(os.path.abspath(os.path.join(directory, file)))
xml = fh.read()
@ -202,35 +341,39 @@ def callback(arg, directory, files):
if omf.language=='C':
omf.language = 'EN'
omf.language = omf.language[:2]
if omf.license=='GNU FDL':
omf.license = 'gfdl'
if omf.url > '':
omf.url = 'file://%s/%s' % (directory, omf.url)
if omf.type=='manual':
omf.type = 'userguide'
elif omf.type=='user\'s guide':
omf.type = 'userguide'
if DEBUG >= 1:
omf.print_debug()
doc = lampadas.docs.add(omf.title,
'', # short_title,
omf.type, # type_code
'', # format_code
'', # dtd_code
'', # dtd_version
'', # version
'', # last_update
'', # isbn
'N', # pub_status_code
'', # review_status_code
'', # tickle_date
'', # pub_date
'', # tech_review_status_code
'', # license_code
'', # license_version
'', # copyright_holder
omf.description, # abstract
'', # short_desc
omf.language, # lang
'' # sk_seriesid
'', # short_title,
omf.type, # type_code
'', # format_code
'', # dtd_code
'', # dtd_version
omf.version, # version
omf.date, # last_update
'', # isbn
'N', # pub_status_code
'', # review_status_code
'', # tickle_date
omf.version_date, # pub_date
'', # tech_review_status_code
omf.license, # license_code
omf.license_version, # license_version
omf.copyright_holder, # copyright_holder
omf.description, # abstract
'', # short_desc
omf.language, # lang
omf.seriesid # sk_seriesid
)
if omf.url > '':
docfile = doc.files.add(doc.id, omf.url, 1)
@ -251,5 +394,6 @@ if len(sys.argv) <> 2:
gnome_dir = sys.argv[1]
# Read in the omf files.
print 'Loading all omf files...'
os.path.walk(gnome_dir, callback, '*.omf')

View File

@ -529,6 +529,8 @@ def copy_document_files():
if type=='QUICK': filename = 'ref/' + filename
if type=='TEMPLATE': filename = 'howto/' + filename
create_source_file(filename)
sql = 'INSERT INTO document_file(doc_id, filename, top)'
sql += ' VALUES(' + str(doc_id) + ', ' + wsq(filename) + ', ' + wsq('t') + ')'
if DEBUG > 0:
@ -537,6 +539,17 @@ def copy_document_files():
lampadas_db.commit()
def create_source_file(filename):
# Create the file if it doesn't already exist
sql = 'SELECT COUNT(*) FROM sourcefile WHERE filename=' + wsq(filename)
cursor = lampadas_db.select(sql)
row = cursor.fetchone()
if row[0]==0:
sql = 'INSERT INTO sourcefile(filename) VALUES (' + wsq(filename) + ')'
lampadas_db.runsql(sql)
lampadas_db.commit()
def copy_notes():
note_id = lampadas_db.max_id('notes', 'note_id')
sql = 'SELECT doc_id, date_entered, notes, username FROM notes'

View File

@ -1,14 +1,8 @@
CREATE TABLE document_file
(
doc_id INT4 NOT NULL REFERENCES document(doc_id),
filename TEXT NOT NULL UNIQUE,
filename TEXT NOT NULL REFERENCES sourcefile(filename),
top BOOLEAN DEFAULT False,
format_code CHAR(20) REFERENCES format(format_code),
dtd_code CHAR(12) REFERENCES dtd(dtd_code),
dtd_version CHAR(12),
filesize INT4,
filemode CHAR(20),
modified TIMESTAMP,
PRIMARY KEY (doc_id, filename)
);

View File

@ -1,3 +1,3 @@
m4_define(insert,
[INSERT INTO document_file(doc_id, filename, top, format_code, dtd_code, dtd_version)
VALUES ($1, '$2', '$3', '$4', '$5', '$6');])m4_dnl
[INSERT INTO document_file(doc_id, filename, top)
VALUES ($1, '$2', '$3');])m4_dnl

View File

@ -1,6 +1,6 @@
CREATE TABLE file_error
(
filename TEXT NOT NULL REFERENCES document_file(filename),
filename TEXT NOT NULL REFERENCES sourcefile(filename),
err_id INT4 NOT NULL REFERENCES error(err_id),
date_entered TIMESTAMP NOT NULL DEFAULT now(),

View File

@ -1,3 +1,5 @@
insert([cvs_log], [/home/david/ldp/cvs/LDP/lampadas/bin/file_reports/cvs_log])
insert([file_listing], [/home/david/ldp/cvs/LDP/lampadas/bin/file_reports/file_listing])
insert([cvs_log], t,
[/home/david/ldp/cvs/LDP/lampadas/bin/file_reports/cvs_log])
insert([file_listing], f,
[/home/david/ldp/cvs/LDP/lampadas/bin/file_reports/file_listing])

View File

@ -1,6 +1,7 @@
CREATE TABLE file_report
(
report_code CHAR(20) NOT NULL,
only_cvs BOOLEAN DEFAULT False,
command TEXT,
PRIMARY KEY (report_code)

View File

@ -1,2 +1,2 @@
m4_define(insert, [INSERT INTO file_report(report_code, command)
VALUES ('$1', '$2');])m4_dnl
m4_define(insert, [INSERT INTO file_report(report_code, only_cvs, command)
VALUES ('$1', '$2', '$3');])m4_dnl

View File

@ -42,5 +42,5 @@ insert(password_mailed, default, [], 0, [], f,
insert(logged_in, default, [], 0, [], f, f, f)
insert(logged_out, default, [], 0, [], f, f, f)
insert(type, default, [], 0, [type], f, f, f)
insert(file_reports, default, [], 0, [filename], f, f, f)
insert(sourcefile, default, [], 0, [filename], f, f, f)
insert(file_report, default, [], 0, [report filename], f, f, f)

View File

@ -377,13 +377,13 @@ insert([subtopic], [Liste der Unterthemen], [],
insert([editdoc], [Metadaten eines Dokuments ändern], [Metadaten ändern],
[
|tabeditdoc|
<p>|tabdocerrors|
<p>|tabdocfiles|
<p>|tabdocfileerrors|
<p>|tabdocusers|
<p>|tabdocversions|
<p>|tabdoctopics|
<p>|tabdocnotes|
<p>|tabdocerrors|
], 2)
insert([404], [Fehler 404, Seite nicht gefunden], Fehler,
@ -453,7 +453,7 @@ insert([type], [|type.name|], [],
|tabtypedocs|
], 1)
insert([file_reports], [Report von Dateien], [],
insert([sourcefile], [Source File], [],
[
|tabfile_reports|
], 2)

View File

@ -380,13 +380,13 @@ insert([subtopic], [View Subtopic], [],
insert([editdoc], [Edit Document Meta-data], [Edit Document Meta-data],
[
|tabeditdoc|
<p>|tabdocerrors|
<p>|tabdocfiles|
<p>|tabdocfileerrors|
<p>|tabdocusers|
<p>|tabdocversions|
<p>|tabdoctopics|
<p>|tabdocnotes|
<p>|tabdocerrors|
], 2)
insert([404], [Error 404, Page Not Found], Error,
@ -452,10 +452,10 @@ insert([type], [|type.name|], [],
|tabtypedocs|
], 1)
insert([file_reports], [File Reports], [],
insert([sourcefile], [Source File], [],
[
|tabfile_reports|
], 2)
], 3)
insert([file_report], [File Report], [],
[

View File

@ -278,13 +278,13 @@ insert([subtopic], [View Subtopic], [],
insert([editdoc], [Méta-données du doc], [Méta-données du doc],
[
|tabeditdoc|
<p>|tabdocerrors|
<p>|tabdocfiles|
<p>|tabdocfileerrors|
<p>|tabdocusers|
<p>|tabdocversions|
<p>|tabdoctopics|
<p>|tabdocnotes|
<p>|tabdocerrors|
], 2])
insert([404], [Introuvable], [Introuvable],
@ -343,7 +343,7 @@ insert([type], [|type.name|], [],
|tabtypedocs|
], 1)
insert([file_reports], [File Reports], [],
insert([sourcefile], [Source File], [],
[
|tabfile_reports|
], 1)

View File

@ -96,3 +96,4 @@ insert(strerrors)
insert(strfilesize)
insert(strfilemode)
insert(strsearch)
insert(strunknown)

View File

@ -96,3 +96,4 @@ insert(strerrors, [Fehler])
insert(strfilesize, [Dateigröße])
insert(strfilemode, [Dateimodus])
insert(strsearch, [Suche])
insert(strunknown, [Unknown])

View File

@ -96,3 +96,4 @@ insert(strerrors, [Errors])
insert(strfilesize, [Filesize])
insert(strfilemode, [File Mode])
insert(strsearch, [Search])
insert(strunknown, [Unknown])

View File

@ -96,3 +96,4 @@ insert(strerrors, [Errors])
insert(strfilesize, [Filesize])
insert(strfilemode, [File Mode])
insert(strsearch, [Search])
insert(strunknown, [Unknown])

View File

@ -32,10 +32,11 @@ performed through this layer.
# the imported module changes or makes a mistake --nico
from Globals import *
from BaseClasses import *
from Config import config
from Database import db
from Log import log
from BaseClasses import *
from SourceFiles import sourcefiles
import string
import os.path
@ -203,8 +204,7 @@ class Docs(LampadasCollection):
self[doc.id] = doc
self.load_errors()
self.load_users()
self.load_files()
self.load_file_errors()
self.load_docfiles()
self.load_versions()
self.load_ratings()
self.load_topics()
@ -235,8 +235,8 @@ class Docs(LampadasCollection):
doc.users[docuser.username] = docuser
def load_files(self):
sql = "SELECT doc_id, filename, top, format_code, dtd_code, dtd_version, filesize, filemode, modified FROM document_file"
def load_docfiles(self):
sql = "SELECT doc_id, filename, top FROM document_file"
cursor = db.select(sql)
while (1):
row = cursor.fetchone()
@ -248,23 +248,6 @@ class Docs(LampadasCollection):
doc.files[docfile.filename] = docfile
def load_file_errors(self):
doc_ids = self.keys()
sql = 'SELECT filename, err_id, date_entered FROM file_error'
cursor = db.select(sql)
while (1):
row = cursor.fetchone()
if row==None: break
filename = trim(row[0])
for doc_id in doc_ids:
doc = self[doc_id]
if doc.files[filename]:
fileerr = FileErr()
fileerr.load_row(row)
doc.files[filename].errors[fileerr.err_id] = fileerr
break
def load_versions(self):
sql = "SELECT doc_id, rev_id, version, pub_date, initials, notes FROM document_rev"
cursor = db.select(sql)
@ -319,13 +302,13 @@ class Docs(LampadasCollection):
# rather than passing in all these parameters. --nico
def add(self, title, short_title, type_code, format_code, dtd_code, dtd_version, version, last_update, isbn, pub_status_code, review_status_code, tickle_date, pub_date, tech_review_status_code, license_code, license_version, copyright_holder, abstract, short_desc, lang, sk_seriesid):
self.id = db.next_id('document', 'doc_id')
id = db.next_id('document', 'doc_id')
# FIXME: use cursor.execute(sql,params) instead! --nico
sql = "INSERT INTO document(doc_id, title, short_title, type_code, format_code, dtd_code, dtd_version, version, last_update, isbn, pub_status, review_status, tickle_date, pub_date, tech_review_status, license_code, license_version, copyright_holder, abstract, short_desc, lang, sk_seriesid) VALUES (" + str(self.id) + ", " + wsq(title) + ", " + wsq(short_title) + ', ' + wsq(type_code) + ", " + wsq(format_code) + ", " + wsq(dtd_code) + ", " + wsq(dtd_version) + ", " + wsq(version) + ", " + wsq(last_update) + ", " + wsq(isbn) + ", " + wsq(pub_status_code) + ", " + wsq(review_status_code) + ", " + wsq(tickle_date) + ", " + wsq(pub_date) + ", " + wsq(tech_review_status_code) + ", " + wsq(license_code) + ", " + wsq(license_version) + ', ' + wsq(copyright_holder) + ', ' + wsq(abstract) + ", " + wsq(short_desc) + ', ' + wsq(lang) + ", " + wsq(sk_seriesid) + ")"
sql = "INSERT INTO document(doc_id, title, short_title, type_code, format_code, dtd_code, dtd_version, version, last_update, isbn, pub_status, review_status, tickle_date, pub_date, tech_review_status, license_code, license_version, copyright_holder, abstract, short_desc, lang, sk_seriesid) VALUES (" + str(id) + ", " + wsq(title) + ", " + wsq(short_title) + ', ' + wsq(type_code) + ", " + wsq(format_code) + ", " + wsq(dtd_code) + ", " + wsq(dtd_version) + ", " + wsq(version) + ", " + wsq(last_update) + ", " + wsq(isbn) + ", " + wsq(pub_status_code) + ", " + wsq(review_status_code) + ", " + wsq(tickle_date) + ", " + wsq(pub_date) + ", " + wsq(tech_review_status_code) + ", " + wsq(license_code) + ", " + wsq(license_version) + ', ' + wsq(copyright_holder) + ', ' + wsq(abstract) + ", " + wsq(short_desc) + ', ' + wsq(lang) + ", " + wsq(sk_seriesid) + ")"
assert db.runsql(sql)==1
db.commit()
doc = Doc(self.id)
self[self.id] = doc
doc = Doc(id)
self[id] = doc
return doc
def delete(self, id):
@ -448,6 +431,14 @@ class Doc:
db.runsql(sql)
db.commit()
def file_error_count(self):
error_count = 0
docfiles = self.files.keys()
for docfile in docfiles:
sourcefile = sourcefiles[docfile]
error_count = error_count + sourcefile.errors.count()
return error_count
# DocErrs
@ -524,34 +515,29 @@ class DocFiles(LampadasCollection):
def load(self):
# FIXME: use cursor.execute(sql,params) instead! --nico
sql = "SELECT doc_id, filename, top, format_code, dtd_code, dtd_version FROM document_file WHERE doc_id=" + str(self.doc_id)
sql = "SELECT doc_id, filename, top FROM document_file WHERE doc_id=" + str(self.doc_id)
cursor = db.select(sql)
while (1):
row = cursor.fetchone()
if row==None: break
docfile = DocFile()
docfile.load_row(row)
docfile.errors = FileErr(docfile.filename)
self.data[docfile.filename] = docfile
def error_count(self):
count = 0
for key in self.keys():
count = count + self[key].errors.count()
return count
def add(self, doc_id, filename, top):
# First, add a sourcefile record if it doesn't exist
sourcefile = sourcefiles[filename]
if sourcefile==None:
sourcefiles.add(filename)
def add(self, doc_id, filename, top, format_code=None, dtd_code=None, dtd_version=None):
# FIXME: use cursor.execute(sql,params) instead! --nico
sql = 'INSERT INTO document_file (doc_id, filename, top, format_code, dtd_code, dtd_version) VALUES (' + str(doc_id) + ', ' + wsq(filename) + ', ' + wsq(bool2tf(top)) + ', ' + wsq(format_code) + ', ' + wsq(dtd_code) + ', ' + wsq(dtd_version) + ')'
sql = 'INSERT INTO document_file (doc_id, filename, top) VALUES (' + str(doc_id) + ', ' + wsq(filename) + ', ' + wsq(bool2tf(top)) + ')'
assert db.runsql(sql)==1
db.commit()
file = DocFile()
file.doc_id = doc_id
file.filename = filename
file.errors.filename = filename
file.top = top
file.format_code = format_code
file.calc_local()
file.save()
self.data[file.filename] = file
return file
@ -571,155 +557,45 @@ class DocFiles(LampadasCollection):
db.commit()
self.data = {}
def error_count(self):
count = 0
for key in self.keys():
sourcefile = sourcefiles[key]
count = count + sourcefile.errors.count()
return count
class DocFile:
"""
An association between a document and a file.
"""
def __init__(self, filename=''):
self.filename = filename
self.calc_local()
self.format_code = ''
self.dtd_code = ''
self.dtd_version = ''
self.filesize = 0
self.filemode = ''
self.modified = ''
self.errors = FileErrs()
self.errors.filename = self.filename
if filename=='': return
self.load(filename)
def load(self, filename):
sql = 'SELECT doc_id, filename, top, format_code, dtd_code, dtd_version filesize, filemode, modified FROM document_file WHERE doc_id=' + str(self.doc_id) + ' AND filename=' + wsq(filename)
cursor = db.select(sql)
row = cursor.fetchone()
if row==None: return
self.load_row(row)
self.errors = FileErrs(self.filename)
def load_row(self, row):
self.doc_id = row[0]
self.filename = trim(row[1])
self.calc_local()
self.top = tf2bool(row[2])
self.format_code = trim(row[3])
self.dtd_code = trim(row[4])
self.dtd_version = trim(row[5])
self.filesize = safeint(row[6])
self.filemode = trim(row[7])
self.modified = time2str(row[8])
self.file_only = os.path.split(self.filename)[1]
self.basename = os.path.splitext(self.file_only)[0]
self.errors.filename = self.filename
def save(self):
# FIXME -- trying to start replacing wsq(), etc. --nico
#sql = 'UPDATE document_file SET top=' + wsq(bool2tf(self.top)) + ', format_code=' + wsq(self.format_code) + ' WHERE doc_id='+ str(self.doc_id) + ' AND filename='+ wsq(self.filename)
#db.runsql(sql)
dict = {'top':bool2tf(self.top),
'format_code':self.format_code,
'dtd_code':self.dtd_code,
'dtd_version':self.dtd_version,
'doc_id':self.doc_id,
'filename':self.filename,
'filesize':999,
'filemode':self.filemode,
'modified':self.modified,
}
sql = sqlgen.update('document_file',dict,['doc_id','filename'])
db.execute(sql,dict)
db.commit()
def calc_local(self):
if self.filename[:7]=='http://' or self.filename[:6]=='ftp://':
self.local = 0
self.localname = ''
self.in_cvs = 0
self.cvsname = ''
else:
self.local = 1
if self.filename[:7]=='file://':
self.localname = self.filename[7:]
self.in_cvs = 0
self.cvsname = ''
else:
self.localname = self.filename
self.in_cvs = 1
self.cvsname = config.cvs_root + self.filename
# FileErrs
class FileErrs(LampadasCollection):
"""
A collection object providing access to all file errors, as identified by the
Lintadas subsystem.
"""
def __init__(self, filename=''):
self.data = {}
self.filename = filename
if filename > '':
self.load()
def load(self):
# FIXME: use cursor.execute(sql,params) instead! --nico
sql = "SELECT filename, err_id, date_entered FROM file_error WHERE filename=" + wsq(self.filename)
cursor = db.select(sql)
while (1):
row = cursor.fetchone()
if row==None: break
file_err = FileErr()
file_err.load_row(row)
self.data[file_err.err_id] = file_err
def clear(self):
# FIXME: use cursor.execute(sql,params) instead! --nico
sql = "DELETE FROM file_error WHERE filename=" + wsq(self.filename)
db.runsql(sql)
db.commit()
self.data = {}
def count(self):
return db.count('file_error','filename=' + wsq(self.filename))
# FIXME: Try instantiating a FileErr object, then adding it to the *document*
# rather than passing all these parameters here. --nico
def add(self, err_id):
# FIXME: use cursor.execute(sql,params) instead! --nico
sql = "INSERT INTO file_error(filename, err_id) VALUES (" + wsq(self.filename) + ", " + str(err_id) + ')'
assert db.runsql(sql)==1
file_err = FileErr()
file_err.filename = self.filename
file_err.err_id = err_id
file_err.date_entered = now_string()
self.data[file_err.err_id] = file_err
db.commit()
class FileErr:
"""
An error filed against a document by the Lintadas subsystem.
"""
def __init__(self, filename=''):
self.filename = filename
if filename=='': return
self.load()
def load(self):
sql = 'SELECT filename, err_id, date_entered FROM file_error WHERE filename=' + wsq(self.filename)
sql = 'SELECT doc_id, filename, top FROM document_file WHERE doc_id=' + str(self.doc_id) + ' AND filename=' + wsq(self.filename)
cursor = db.select(sql)
while (1):
row = cursor.fetchone()
if row==None: break
self.load_row(row)
row = cursor.fetchone()
if row==None: return
self.load_row(row)
def load_row(self, row):
self.filename = trim(row[0])
self.err_id = safeint(row[1])
self.date_entered = time2str(row[2])
self.doc_id = row[0]
self.filename = trim(row[1])
self.top = tf2bool(row[2])
def save(self):
# FIXME -- trying to start replacing wsq(), etc. --nico
#sql = 'UPDATE document_file SET top=' + wsq(bool2tf(self.top)) + ', format_code=' + wsq(self.format_code) + ' WHERE doc_id='+ str(self.doc_id) + ' AND filename='+ wsq(self.filename)
#db.runsql(sql)
dict = {'doc_id':self.doc_id,
'filename':self.filename,
'top':bool2tf(self.top)}
sql = sqlgen.update('document_file',dict,['doc_id','filename'])
db.execute(sql,dict)
db.commit()
# DocUsers

View File

@ -24,6 +24,11 @@ Lampadas Database Module
This module generates a Database object for accessing a back-end RDBMS
"""
# Globals ##################################################################
AUTOCOMMIT = 1
# Modules ##################################################################
import pyPgSQL
@ -104,7 +109,8 @@ class Database:
def commit(self):
log(3, 'Committing database')
self.connection.commit()
if AUTOCOMMIT==0:
self.connection.commit()
# Specific derived DB classes ##################################################
@ -114,7 +120,7 @@ class PgSQLDatabase(Database):
def __init__(self,db_name):
from pyPgSQL import PgSQL
self.connection = PgSQL.connect(database=db_name)
self.connection.autocommit = AUTOCOMMIT
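# With AUTOCOMMIT set to 1, the connection commits each statement as it
# executes, so the explicit commit() calls sprinkled through the data layer
# fall through to a no-op. A minimal illustration (the database name here is
# only a placeholder):
#
#     db = PgSQLDatabase('lampadas')
#     db.runsql("UPDATE document SET title='Test' WHERE doc_id=1")
#     db.commit()    # no-op while AUTOCOMMIT==1; the UPDATE is already committed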
class MySQLDatabase(Database):

View File

@ -32,6 +32,7 @@ from Config import config
from Log import log
from URLParse import URI
from DataLayer import *
from SourceFiles import sourcefiles
from WebLayer import lampadasweb
from Lintadas import lintadas
from Sessions import sessions
@ -136,37 +137,6 @@ class ComboFactory:
combo.write("</select>\n")
return combo.get_value()
# def dtd(self, value, lang):
# combo = WOStringIO('<select name="dtd_code">\n' \
# '<option></option>')
# keys = lampadas.dtds.sort_by('dtd_code')
# for key in keys:
# dtd = lampadas.dtds[key]
# assert not dtd==None
# combo.write("<option ")
# if dtd.dtd_code==value:
# combo.write("selected ")
# combo.write("value='%s'>%s</option>\n"
# % (dtd.dtd_code,dtd.dtd_code))
# combo.write("</select>\n")
# return combo.get_value()
#
# def format(self, value, lang):
# combo = WOStringIO('<select name="format_code">\n' \
# '<option></option>')
# keys = lampadas.formats.sort_by_lang('name', lang)
# for key in keys:
# format = lampadas.formats[key]
# assert not format==None
# combo.write("<option ")
# if format.code==value:
# combo.write("selected ")
# combo.write("value='" + str(format.code) + "'>")
# combo.write(format.name[lang])
# combo.write("</option>\n")
# combo.write("</select>")
# return combo.get_value()
def language(self, value, lang):
combo = WOStringIO("<select name='lang'>\n")
keys = lampadas.languages.sort_by_lang('name', lang)
@ -297,7 +267,8 @@ class TableFactory:
box = WOStringIO()
if uri.id > 0:
lintadas.check(uri.id)
lintadas.check_doc(uri.id)
lintadas.import_doc_metadata(uri.id)
doc = lampadas.docs[uri.id]
box.write('<form method=GET action="/data/save/document" '\
'name="document">')
@ -430,63 +401,73 @@ class TableFactory:
log(3, 'Creating docfiles table')
doc = lampadas.docs[uri.id]
box = '''
<table class="box" width="100%">
<tr><th colspan="8">|strdocfiles|</th></tr>
<tr>
<th class="collabel">|strfilename|</th>
<th class="collabel">|strprimary|</th>
<th class="collabel">|strformat|</th>
<th class="collabel">|strupdated|</th>
<th class="collabel">|strfilesize|</th>
<th class="collabel">|strfilemode|</th>
<th class="collabel" colspan="2">|straction|</th>
</tr>
<tr><th colspan="6">|strdocfiles|</th></tr>
'''
doc = lampadas.docs[uri.id]
keys = doc.files.sort_by('filename')
for key in keys:
file = doc.files[key]
lintadas.check_file(key)
docfile = doc.files[key]
sourcefile = sourcefiles[key]
box = box + '<form method=GET action="/data/save/document_file" name="document_file">'
box = box + '<input name="doc_id" type=hidden value=' + str(doc.id) + '>\n'
box = box + '<input type=hidden name="filename" size=30 style="width:100%" value="' + file.filename + '">\n'
box = box + '<input type=hidden name="doc_id" value=' + str(doc.id) + '>\n'
box = box + '<input type=hidden name="filename" size=30 style="width:100%" value="' + docfile.filename + '">\n'
box = box + '<tr>\n'
if file.errors.count() > 0:
box = box + '<td class="error">' + file.filename + '</td>\n'
if sourcefile.errors.count() > 0:
box = box + '<td class="sectionlabel error" colspan="6">' + docfile.filename + '</td>\n'
else:
box = box + '<td><a href="/file_reports/' + file.filename + uri.lang_ext + '">' + file.filename + '</a></td>\n'
box = box + '<td>' + combo_factory.tf('top', file.top, uri.lang) + '</td>\n'
if file.format_code > '':
box = box + '<td>' + lampadas.formats[file.format_code].name[uri.lang] + '</td>\n'
box = box + '<td class="sectionlabel" colspan="6"><a href="/sourcefile/' + docfile.filename + uri.lang_ext + '">' + docfile.filename + '</a></td>\n'
box = box + '</tr>\n'
box = box + '<tr>\n'
box = box + '<th class="label">|strprimary|</th>'
box = box + '<td>' + combo_factory.tf('top', docfile.top, uri.lang) + '</td>\n'
box = box + '<th class="label">|strfilesize|</th>'
box = box + '<td>' + str(sourcefile.filesize) + '</td>\n'
box = box + '<th class="label">|strupdated|</th>'
if sourcefile.modified > '':
box = box + '<td>' + sourcefile.modified + '</td>\n'
else:
box = box + '<td></td>\n'
box = box + '<td>' + file.modified + '</td>\n'
box = box + '<td>' + str(file.filesize) + '</td>\n'
box = box + '<td>' + str(file.filemode) + '</td>\n'
box = box + '<td>|strunknown|</td>\n'
box = box + '</tr>\n'
box = box + '<tr>\n'
box = box + '<th class="label">|strformat|</th>'
if sourcefile.format_code > '':
box = box + '<td>' + lampadas.formats[sourcefile.format_code].name[uri.lang] + '</td>\n'
else:
box = box + '<td>|strunknown|</td>\n'
box = box + '<th class="label">|strfilemode|</th>'
if sourcefile.filemode > '':
box = box + '<td>' + str(sourcefile.filemode) + '</td>\n'
else:
box = box + '<td>|strunknown|</td>\n'
box = box + '''
<td><input type="checkbox" name="delete">|strdelete|</td>
<td><input type="submit" name="action" value="|strsave|">
</td>
<td><input type="submit" name="action" value="|strsave|"></td>
</tr>
</form>
'''
box = box + '</form>'
# Add a new docfile
box = box + '<tr>\n'
box = box + '<form method=GET action="/data/save/newdocument_file" name="document_file">'
box = box + '<input name="doc_id" type="hidden" value="' + str(doc.id) + '">\n'
box = box + '<td colspan="6"><input type="text" name="filename" size="30" style="width:100%"></td>\n'
box = box + '</tr>\n'
box = box + '<tr>\n'
box = box + '<td><input type="text" name="filename" size="30" style="width:100%"></td>\n'
box = box + '<th class="label">|strprimary|</th>'
box = box + '<td>' + combo_factory.tf('top', 0, uri.lang) + '</td>\n'
box = box + '<td></td>\n'
box = box + '<td></td>\n'
box = box + '<td></td>\n'
box = box + '<td></td>\n'
box = box + '''
<td></td>
<td><input type="submit" name="action" value="|stradd|"></td>
</tr>
</form>
</table>
'''
box = box + '</table>\n'
return box
@ -517,7 +498,13 @@ class TableFactory:
box = box + '<input type=hidden name="doc_id" value=' + str(doc.id) + '>\n'
box = box + '<input type=hidden name="username" value=' + docuser.username + '>\n'
box = box + '<tr>\n'
box = box + '<td>' + docuser.username + '</td>\n'
if sessions.session:
if sessions.session.user.admin==1 or sessions.session.user.sysadmin==1:
box = box + '<td><a href="/user/' + docuser.username + '">' + docuser.username + '</a></td>\n'
else:
box = box + '<td>' + docuser.username + '</td>\n'
else:
box = box + '<td>' + docuser.username + '</td>\n'
box = box + '<td>' + combo_factory.tf('active', docuser.active, uri.lang) + '</td>\n'
box = box + '<td>' + combo_factory.role(docuser.role_code, uri.lang) + '</td>\n'
box = box + '<td><input type=text name=email size=15 value="' +docuser.email + '"></td>\n'
@ -638,23 +625,15 @@ class TableFactory:
if sessions.session.user.can_edit(doc_id=doc_id)==0:
continue
if doc.lang==uri.lang:
show_doc = 0
show_files = 0
if doc.errors.count() > 0:
show_doc = 1
else:
filenames = doc.files.keys()
for filename in filenames:
if doc.files[filename].errors.count() > 0:
show_files = 1
break
if show_doc==1 or show_files==1:
uri.id = doc_id
doctable = self.docerrors(uri)
filestable = self.docfileerrors(uri)
if doctable > '' or filestable > '':
box = box + '<h1>' + doc.title + '</h1>'
uri.id = doc_id
if show_doc==1:
box = box + '<p>' + self.docerrors(uri)
if show_files==1:
box = box + '<p>' + self.docfileerrors(uri)
if doctable > '':
box = box + '<p>' + doctable
if filestable > '':
box = box + '<p>' + filestable
return box
def docerrors(self, uri):
@ -665,6 +644,10 @@ class TableFactory:
log(3, 'Creating docerrors table')
doc = lampadas.docs[uri.id]
if doc.errors.count()==0:
return ''
box = ''
box = box + '<table class="box" width="100%">'
box = box + '<tr><th colspan="2">|strdocerrs|</th></tr>\n'
@ -690,18 +673,22 @@ class TableFactory:
return '|blknopermission|'
log(3, 'Creating filereports table')
sourcefile = sourcefiles[uri.filename]
box = ''
box = box + '<table class="box" width="100%">'
box = box + '<tr><th colspan="2">|strfilereports| |uri.filename|</th></tr>\n'
report_codes = lampadasweb.file_reports.sort_by('name')
box = box + '<tr><th colspan="2">|strfilereports|</th></tr>\n'
box = box + '<tr><th colspan="2" class="sectionlabel">|uri.filename|</th></tr>\n'
report_codes = lampadasweb.file_reports.sort_by_lang('name', uri.lang)
for report_code in report_codes:
report = lampadasweb.file_reports[report_code]
box = box + '<tr>\n'
box = box + '<td><a href="/file_report/' + report.code + '/'
box = box + uri.filename + uri.lang_ext + '">'
box = box + report.name[uri.lang] + '</a></td>\n'
box = box + '<td>' + report.description[uri.lang] + '</td>\n'
box = box + '</tr>\n'
if report.only_cvs==0 or sourcefile.in_cvs==1:
box = box + '<tr>\n'
box = box + '<td><a href="/file_report/' + report.code + '/'
box = box + uri.filename + uri.lang_ext + '">'
box = box + report.name[uri.lang] + '</a></td>\n'
box = box + '<td>' + report.description[uri.lang] + '</td>\n'
box = box + '</tr>\n'
box = box + '</table>\n'
return box
@ -716,9 +703,10 @@ class TableFactory:
# Build and execute the command
report = lampadasweb.file_reports[uri.code]
command = report.command
sourcefile = sourcefiles[uri.filename]
fh = open('/tmp/lampadas_filename.txt', 'w')
fh.write(uri.filename + '\n')
fh.write(sourcefile.localname + '\n')
fh.close()
child_stdin, child_stdout, child_stderr = os.popen3(command)
@ -745,6 +733,10 @@ class TableFactory:
log(3, 'Creating docfileerrors table')
doc = lampadas.docs[uri.id]
if doc.file_error_count()==0:
return ''
box = ''
box = box + '<table class="box" width="100%">'
box = box + '<tr><th colspan="3">|strfileerrs|</th></tr>\n'
@ -755,13 +747,14 @@ class TableFactory:
box = box + '</tr>\n'
filenames = doc.files.sort_by('filename')
for filename in filenames:
file = doc.files[filename]
err_ids = file.errors.sort_by('date_entered')
sourcefile = sourcefiles[filename]
err_ids = sourcefile.errors.sort_by('date_entered')
for err_id in err_ids:
fileerror = file.errors[err_id]
fileerror = sourcefile.errors[err_id]
error = lampadas.errors[err_id]
box = box + '<tr>\n'
box = box + '<td>' + str(fileerror.err_id) + '</td>\n'
box = box + '<td>' + sourcefile.filename + '</td>\n'
box = box + '<td>' + error.name[uri.lang] + '</td>\n'
box = box + '</tr>\n'
box = box + '</table>\n'
@ -784,6 +777,8 @@ class TableFactory:
return '|tabnopermission|'
elif sessions.session.user.admin==0 and sessions.session.user.sysadmin==0:
return '|tabnopermission|'
elif uri.letter=='':
return ''
log(3, 'Creating users table')
box = '<table class="box" width="100%"><tr><th colspan=2>|strusers|</th></tr>\n'
box = box + '<tr>\n'
@ -936,11 +931,18 @@ class TableFactory:
return box
def userdocs(self, uri, username=''):
"""
Displays a DocTable containing documents linked to a user.
The default is to display docs for the logged-on user.
"""
if sessions.session==None:
return '|nopermission|'
if sessions.session.user.can_edit(username=username)==0:
return '|nopermission|'
return self.doctable(uri, username=sessions.session.username)
if username > '':
return self.doctable(uri, username=username)
else:
return self.doctable(uri, username=sessions.session.username)
def section_menu(self, uri, section_code):
log(3, "Creating section menu: " + section_code)
@ -1368,7 +1370,7 @@ class PageFactory:
newstring = '|blknotfound|'
if token=='user.docs':
if sessions.session:
newstring = self.tablef.userdocs(uri, sessions.session.username)
newstring = self.tablef.userdocs(uri, uri.username)
else:
newstring = '|blknotfound|'

View File

@ -32,6 +32,7 @@ from Globals import *
from Config import config
from Log import log
from DataLayer import lampadas
from SourceFiles import sourcefiles
import os
import stat
import string
@ -55,6 +56,16 @@ ERR_FILE_NOT_READABLE = 6
# Lintadas
class Lintadas:
"""
Updates file and document meta-data and analyzes it for errors.
NOTE: You must always check files *before* checking documents,
so that the file meta-data is up to date. Checking the documents
will pull the top file's meta-data over into the document.
Alternatively, you can call import_docs_metadata() at any time
to only upload meta-data from files to documents.
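A full pass would therefore run check_files() first, then check_docs(),
and finally import_docs_metadata().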
"""
# This is a list of file extensions and the file types
# they represent.
@ -66,118 +77,65 @@ class Lintadas:
'texi': 'texinfo',
}
def check_all(self):
def check_files(self):
keys = sourcefiles.keys()
for key in keys:
self.check_file(key)
def check_docs(self):
keys = lampadas.docs.keys()
for key in keys:
self.check(key)
self.check_doc(key)
def check(self, doc_id):
"""
Check for errors at the document level.
"""
def check_file(self, filename):
log(3, 'Running Lintadas on file ' + filename)
sourcefile = sourcefiles[filename]
log(3, 'Running Lintadas on document ' + str(doc_id))
doc = lampadas.docs[doc_id]
filenames = doc.files.keys()
usernames = doc.users.keys()
# See if the document is maintained
maintained = 0
for username in usernames:
docuser = doc.users[username]
if docuser.active==1 and (docuser.role_code=='author' or docuser.role_code=='maintainer'):
maintained = 1
doc.maintained = maintained
# Clear out errors before checking
sourcefile.errors.clear()
# Clear any existing errors
doc.errors.clear()
for filename in filenames:
file = doc.files[filename]
file.errors.clear()
# If document is not active or archived, do not flag
# any errors against it.
if doc.pub_status_code<>'N' and doc.pub_status_code<>'A':
# Do not check remote files.
# FIXME: It should check the local file if it has been
# downloaded already.
if sourcefile.local==0:
log(3, 'Skipping remote file ' + filename)
return
filename = sourcefile.localname
# If the file is missing, flag error and stop.
if os.access(filename, os.F_OK)==0:
sourcefile.errors.add(ERR_FILE_NOT_FOUND)
return
# Flag an error against the *doc* if there are no files.
if doc.files.count()==0:
doc.errors.add(ERR_NO_SOURCE_FILE)
# If the file is not readable, flag error and stop.
if os.access(filename, os.R_OK)==0:
sourcefile.errors.add(ERR_FILE_NOT_READABLE)
return
# Read file information
filestat = os.stat(filename)
sourcefile.filesize = filestat[stat.ST_SIZE]
sourcefile.filemode = filestat[stat.ST_MODE]
sourcefile.modified = time.ctime(filestat[stat.ST_MTIME])
# Determine file format.
file_extension = string.lower(string.split(filename, '.')[-1])
if self.extensions.has_key(file_extension) > 0:
sourcefile.format_code = self.extensions[file_extension]
# If we could not determine the format code, flag an error.
if sourcefile.format_code=='':
sourcefile.errors.add(ERR_NO_FORMAT_CODE)
# Determine DTD for SGML and XML files
if sourcefile.format_code=='xml' or sourcefile.format_code=='sgml':
sourcefile.dtd_code, sourcefile.dtd_version = self.read_file_dtd(filename)
else:
# Count the number of top files. There must be exactly one.
# This takes advantage of the fact that true=1 and false=0.
top = 0
for filename in filenames:
top = top + doc.files[filename].top
if top==0:
doc.errors.add(ERR_NO_PRIMARY_FILE)
if top > 1:
doc.errors.add(ERR_TWO_PRIMARY_FILES)
for filename in filenames:
file = doc.files[filename]
file.errors.clear()
# Do not check remote files.
# FIXME: It should check the local file if it has been
# downloaded already.
if file.local==0:
log(3, 'Skipping remote file ' + filename)
continue
log(3, 'Checking filename ' + filename)
if file.in_cvs==1:
filename = file.cvsname
else:
filename = file.localname
# If the file is missing, flag error and stop.
if os.access(filename, os.F_OK)==0:
file.errors.add(ERR_FILE_NOT_FOUND)
continue
# If the file is not readable, flag error and stop.
if os.access(filename, os.R_OK)==0:
file.errors.add(ERR_FILE_NOT_READABLE)
continue
# Read file information
filestat = os.stat(filename)
file.filesize = filestat[stat.ST_SIZE]
file.filemode = filestat[stat.ST_MODE]
file.modified = time.ctime(filestat[stat.ST_MTIME])
# Determine file format.
file_extension = string.lower(string.split(filename, '.')[-1])
if self.extensions.has_key(file_extension) > 0:
file.format_code = self.extensions[file_extension]
# If we were able to read format code, post it to the document,
if file.format_code=='':
file.errors.add(ERR_NO_FORMAT_CODE)
# Determine DTD for SGML and XML files
if file.format_code=='xml' or file.format_code=='sgml':
file.dtd_code, file.dtd_version = self.read_file_dtd(filename)
else:
file.dtd_code = 'N/A'
file.dtd_version = ''
# If this was the top file, post to document.
if file.top==1:
doc.format_code = file.format_code
doc.dtd_code = file.dtd_code
doc.dtd_version = file.dtd_version
# FIXME: need a way to keep track of who is managing these fields.
# Probably it should be managed by Lampadas, but allow the user
# the ability to override it with their setting.
file.save()
doc.save()
log(3, 'Lintadas run on document ' + str(doc_id) + ' complete')
sourcefile.dtd_code = 'N/A'
sourcefile.dtd_version = ''
sourcefile.save()
def read_file_dtd(self, filename):
"""
@ -221,6 +179,68 @@ class Lintadas:
return dtd_code, dtd_version
def check_doc(self, doc_id):
"""
Check for errors at the document level.
"""
log(3, 'Running Lintadas on document ' + str(doc_id))
doc = lampadas.docs[doc_id]
filenames = doc.files.keys()
usernames = doc.users.keys()
# See if the document is maintained
maintained = 0
for username in usernames:
docuser = doc.users[username]
if docuser.active==1 and (docuser.role_code=='author' or docuser.role_code=='maintainer'):
maintained = 1
doc.maintained = maintained
# Clear any existing errors
doc.errors.clear()
# If document is not active or archived, do not flag
# any errors against it.
if doc.pub_status_code<>'N' and doc.pub_status_code<>'A':
return
# Flag an error against the *doc* if there are no files.
if doc.files.count()==0:
doc.errors.add(ERR_NO_SOURCE_FILE)
else:
# Count the number of top files. There must be exactly one.
# This takes advantage of the fact that true=1 and false=0.
top = 0
for filename in filenames:
if doc.files[filename].top:
top = top + 1
if top==0:
doc.errors.add(ERR_NO_PRIMARY_FILE)
if top > 1:
doc.errors.add(ERR_TWO_PRIMARY_FILES)
doc.save()
log(3, 'Lintadas run on document ' + str(doc_id) + ' complete')
def import_docs_metadata(self):
doc_ids = lampadas.docs.keys()
for doc_id in doc_ids:
self.import_doc_metadata(doc_id)
def import_doc_metadata(self, doc_id):
doc = lampadas.docs[doc_id]
filenames = doc.files.keys()
for filename in filenames:
docfile = doc.files[filename]
if docfile.top==1:
sourcefile = sourcefiles[filename]
doc.format_code = sourcefile.format_code
doc.dtd_code = sourcefile.dtd_code
doc.dtd_version = sourcefile.dtd_version
doc.save()
lintadas = Lintadas()
@ -236,12 +256,16 @@ def main():
config.log_level = 3
docs = sys.argv[1:]
if len(docs)==0:
print "Running Lintadas on all documents..."
lintadas.check_all()
print 'Running Lintadas on all documents...'
lintadas.check_docs()
print 'Running Lintadas on all files...'
lintadas.check_files()
print 'Updating Meta-data on all documents...'
lintadas.import_docs_metadata()
else:
for doc_id in docs:
print "Running Lintadas on document " + str(doc_id)
lintadas.check(int(doc_id))
lintadas.check_doc(int(doc_id))
def usage():
print "Lintadas version " + VERSION

View File

@ -82,7 +82,7 @@ class Mirror:
os.mkdir(cachedir)
file = doc.files[filekey]
filename = file.filename
filename = file.localname
file_only = file.file_only
cachename = cachedir + file_only
@ -91,11 +91,11 @@ class Mirror:
# It is expensive to copy local documents into a cache directory,
# but it avoids publishing documents directly out of CVS.
# Some publishing tools leave clutter in the directory on failure.
if not os.access(config.cvs_root + filename, os.F_OK):
if not os.access(filename, os.F_OK):
log(2, 'Cannot mirror missing file: ' + filename)
continue
log(3, 'mirroring local file ' + filename)
command = 'cd ' + cachedir + '; cp -pu ' + config.cvs_root + filename + ' .'
command = 'cd ' + cachedir + '; cp -pu ' + filename + ' .'
os.system(command)
else:

View File

@ -242,7 +242,7 @@ class FileReports(LampadasCollection):
def __init__(self):
self.data = {}
sql = 'SELECT report_code, command FROM file_report'
sql = 'SELECT report_code, only_cvs, command FROM file_report'
cursor = db.select(sql)
while (1):
row = cursor.fetchone()
@ -268,8 +268,9 @@ class FileReport:
self.description = LampadasCollection()
def load_row(self, row):
self.code = trim(row[0])
self.command = trim(row[1])
self.code = trim(row[0])
self.only_cvs = tf2bool(row[1])
self.command = trim(row[2])
# WebLayer

View File

@ -3,7 +3,12 @@
# This script runs the passed command line 10 times and reports the results.
#
echo `date` > /tmp/lampadas-timer
for x in 1 2 3 4 5 6 7 8 9 10
do
/usr/bin/time -f "%e" $1 $2 $3 $4 $5
done
echo `date` >> /tmp/lampadas-timer
cat /tmp/lampadas-timer

View File

@ -54,6 +54,7 @@ TH {
color: white;
background-color: brown;
vertical-align: top;
font-size: 12pt;
}
.box .label {
@ -69,6 +70,16 @@ TH {
text-align: left;
color: brown;
background: transparent;
vertical-align: bottom;
font-size: 11pt;
font-weight: bold;
}
.box .sectionlabel {
padding-top: 20px;
vertical-align: bottom;
text-align: center;
font-size: 11pt;
font-weight: bold;
}
TD {

View File

@ -49,13 +49,15 @@ TH {
color: black;
background-color: #BBCCEE;
vertical-align: top;
font-size: 12pt;
font-weight: bold;
}
.box .label {
background-color: transparent;
text-align: right;
vertical-align: top;
font-size: 10pt;
font-size: 11pt;
font-weight: bold;
color: #003355;
}
@ -63,6 +65,18 @@ TH {
.box .collabel {
text-align: left;
color: black;
background: transparent;
vertical-align: bottom;
font-size: 11pt;
}
.box .sectionlabel {
padding-top: 20px;
vertical-align: bottom;
text-align: center;
background: transparent;
font-size: 11pt;
font-weight: bold;
}
TD {