Components of a running system

This chapter reviews the components of a running C-News+NNTPd server. Analogous components will be found in an INN-based system too. We invite readers familiar with INN to add their pieces to this chapter.
<literal>/var/lib/news</literal>: the C-News control area

This directory is more popularly known as $NEWSCTL. It contains configuration, log and status files; no articles or binaries are kept here. Let's see what some of the files are meant for.

sys: one line per system/NDN, listing the newsgroup hierarchies each system subscribes to. Each line is prefixed with the system name, and the one beginning with ME: indicates what we ourselves receive. See the newssys manpage.

explist: entries indicating which newsgroups' articles expire, when, and whether they are to be archived. The order in which the newsgroups are listed is important. See the expire manpage for the file format.

batchparms: details of how to feed other sites/NDNs, such as the size of batches and the mode of transmission (UUCP/NNTP). See the newsbatch manpage.

controlperm: if you wish to authenticate a control message before any action is taken on it, the authentication-related information goes here. The controlperm manpage lists all the fields in detail.

mailpaths: the e-mail address of the moderator of each moderated newsgroup, who is responsible for approving or rejecting articles posted to it. The sample mailpaths file in the tarball will give you an idea of how entries are made.

nntp_access/user_access: these files list the server names and usernames to which restrictions apply when accessing newsgroups. Again, the sample files in the tarball explain the format.

log, errlog: log files that keep growing with each batch received. The log file has one entry per article, telling you whether it was accepted or rejected by your news server; to understand the format, refer to Chapter 2.2 of the C-News guide. Errors, if any, encountered while digesting articles are logged in errlog. These log files have to be rolled regularly, as they hog a lot of disk space.

nntplog: logs the activity of the NNTP daemon, giving details of when connections were established or broken and what commands were issued. This logging has to be configured in syslog, and the syslog daemon must be running.

active: one line per newsgroup carried by your news server. Among other things, it tells you how many articles are currently present in each newsgroup (a small parsing sketch appears at the end of this section). It is updated when each batch is digested and when articles are expired. The active manpage furnishes more details about the other parameters.

history: one line per article, mapping the message-ID to the newsgroup name and the article's number in that newsgroup. It is updated each time a feed is digested and when doexpire is run. It plays a key role in loop detection and serves as the article database. See the newsdb and doexpire manpages for the file format.

newsgroups: a one-line description of each newsgroup, explaining what kind of posts go into it. Ideally, it should cover all the newsgroups found in the active file.

Miscellaneous files: files like mailname, organisation and whoami contain information needed to form some of an article's headers. The contents of mailname form the From: header, those of organisation form the Organisation: header, and whoami contains the name of the news system.

Refer to chapter 2.1 of guide.ps for a detailed list of files in the $NEWSCTL area, and read RFC 1036 for a description of article headers.
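As an illustration of the kind of data kept in $NEWSCTL, here is a minimal Python sketch that reads the active file and prints a rough per-group article count. It assumes the common four-field active format (group name, highest article number, lowest article number, flag) and the path /var/lib/news/active; adjust both for your installation.

<programlisting>
#!/usr/bin/env python3
# Minimal sketch: summarise an active file (group, hi-mark, lo-mark, flag).
# The four-field format and the path below are assumptions; adjust as needed.

ACTIVE = "/var/lib/news/active"   # $NEWSCTL/active on a typical installation

with open(ACTIVE) as f:
    for line in f:
        fields = line.split()
        if len(fields) < 4:
            continue                                  # skip malformed lines
        group, hi, lo, flag = fields[:4]
        approx = max(int(hi) - int(lo) + 1, 0)        # rough upper bound on article count
        print(f"{group:<40} ~{approx} articles (flag={flag})")
</programlisting>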
<literal>/var/spool/news</literal>: the article repository

This is also known as the $NEWSARTS or $NEWSSPOOL directory. This is where the articles reside on your disk; no binaries or control files belong here. Enough space should be allocated to this directory, as the number of articles keeps increasing with each batch that is digested. An explanation of the following sub-directories will give you an overview:

in.coming: feeds, batches and articles from NDNs reside in this directory on arrival, before they are processed. After processing, they appear in $NEWSARTS, or in its bad sub-directory if there were errors.

out.going: batches/feeds to be sent to your NDNs, i.e. feeds to be pushed to your neighbouring sites, reside here before they are transmitted. It contains one sub-directory per NDN mentioned in the sys file. These sub-directories contain files called togo, which record information about each article queued for transmission, such as its message-ID or article number.

newsgroup directories: for each newsgroup hierarchy the news server has subscribed to, a directory is created under $NEWSARTS, with further sub-directories under it to hold the articles of specific newsgroups. For instance, for a newsgroup like comp.music.compose, the parent directory comp will appear in $NEWSARTS, with a sub-directory called music under it; the music sub-directory in turn contains a sub-directory called compose, where all articles of comp.music.compose reside. In effect, article 242 of newsgroup comp.music.compose maps to the file $NEWSARTS/comp/music/compose/242 (see the sketch after this list).

control: the control directory houses only the control messages that have been received by this site. A control message carries one of the following in the subject line of the article: newgroup, rmgroup, checkgroup or cancel.

junk: the junk directory contains all articles that the news server has received and decided, after processing, do not belong to any of the hierarchies it has subscribed to. The news server passes the articles in this directory on to NDNs that have subscribed to the junk hierarchy.
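The newsgroup-to-directory mapping is purely mechanical, as this small Python sketch illustrates; the $NEWSARTS location used here is an assumption and should be adjusted to your spool directory.

<programlisting>
import os

NEWSARTS = "/var/spool/news"   # assumed spool location; adjust for your system

def article_path(newsgroup: str, artnum: int) -> str:
    """Map a newsgroup name and article number to its file in the spool.

    Dots in the group name become directory separators, and the article
    number is the file name, e.g. comp.music.compose article 242 maps to
    /var/spool/news/comp/music/compose/242.
    """
    return os.path.join(NEWSARTS, *newsgroup.split("."), str(artnum))

print(article_path("comp.music.compose", 242))
</programlisting>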
<literal>/usr/lib/newsbin</literal>: the executables
<literal>crontab and cron jobs</literal>

The heart of the Usenet news server is the set of scripts that run at regular intervals, processing articles, digesting or rejecting them, and transmitting them to NDNs. I shall try to enumerate the ones that are important enough to be cronned. :) A sample crontab sketch follows this list.

newsrun: the key script. It picks up the batches in the in.coming directory, uncompresses them if necessary and feeds them to relaynews, which then processes each article, digesting and batching it and logging any errors. This script needs to run from cron as frequently as you want feeds to be digested; every half hour should suffice for a non-critical requirement.

sendbatches: run to transmit the batches listed in the togo files of the out.going directory to your NDNs. It reads the batchparms file to find out exactly how, and to whom, the batches need to be transmitted. The frequency, again, can be set according to your requirements; once an hour should be sufficient.

newsdaily: does maintenance chores like rolling and saving logs, reporting errors and anomalies, and doing cleanup jobs. It should typically run once a day.

newswatch: looks for news problems at a more detailed level than newsdaily, such as persistent lock files, whether there is enough space for a minimum number of files, whether there is a huge queue of unattended batches, and the like. This should typically run once every hour. For more on this and the above, read the newsmaint manpage.

doexpire: expires old articles as determined by the control file explist, and updates the active file. This is necessary if you do not want unwanted articles hogging your disk space. Run it once a day. See the expire manpage.

newsrunning off/on: shuts down or restarts news processing. You could choose to add this to your crontab if you find that the news server takes up a lot of CPU time during peak hours and you wish to keep a check on it.
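By way of illustration, here is a minimal crontab sketch for the news user. The script paths (under /usr/lib/newsbin) and the timings are assumptions for this example; adjust them to match your installation and feed volume.

<programlisting>
# Illustrative crontab for the "news" user -- paths and timings are assumptions only.
# Digest incoming batches every half hour.
0,30 * * * *   /usr/lib/newsbin/input/newsrun
# Transmit outgoing batches to NDNs once an hour.
15 * * * *     /usr/lib/newsbin/batch/sendbatches
# Hourly sanity checks, daily maintenance and daily expiry.
45 * * * *     /usr/lib/newsbin/maint/newswatch
10 0 * * *     /usr/lib/newsbin/maint/newsdaily
20 1 * * *     /usr/lib/newsbin/expire/doexpire
</programlisting>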
<literal>newsrun</literal> and <literal>relaynews</literal>: digesting received articles

The heart and soul of the Usenet News system, newsrun picks up the batches and articles in the in.coming directory of $NEWSARTS, uncompresses them (if required) and calls relaynews. It should run from cron. relaynews reads each article in turn through stdin, determines whether it belongs to a subscribed group by looking up the sys file, checks the history file to make sure the article does not already exist locally, digests it (updating the active and history files) and batches it for neighbouring sites. It logs errors when it encounters problems while processing an article, and takes appropriate action if the article happens to be a control message. More information is in the relaynews manpage; a simplified sketch of the duplicate check and filing step follows.
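The Python sketch below shows, in a highly simplified form, the duplicate check and filing step described above. The file locations, the history line layout and the hard-coded article number are assumptions for illustration only; the real relaynews does considerably more (locking, proper article numbering from the active file, cross-posting, batching and control-message handling).

<programlisting>
import email, os, sys, time

NEWSCTL = "/var/lib/news"        # assumed locations; adjust for your system
NEWSARTS = "/var/spool/news"

def already_seen(msgid: str) -> bool:
    """Return True if the message ID already appears in the history file."""
    with open(os.path.join(NEWSCTL, "history")) as hist:
        return any(line.split("\t", 1)[0] == msgid for line in hist)

def file_article(raw: bytes) -> None:
    msg = email.message_from_bytes(raw)
    msgid = msg["Message-ID"]
    if not msgid or not msg["Newsgroups"]:
        return                              # not a usable news article
    if already_seen(msgid):
        return                              # duplicate: drop it
    group = msg["Newsgroups"].split(",")[0].strip()
    # File under the first group only, with a made-up article number, just to
    # illustrate the spool layout; relaynews assigns real numbers itself.
    path = os.path.join(NEWSARTS, *group.split("."), "1")
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as out:
        out.write(raw)
    with open(os.path.join(NEWSCTL, "history"), "a") as hist:
        hist.write(f"{msgid}\t{int(time.time())}\t{group}/1\n")

if __name__ == "__main__":
    file_article(sys.stdin.buffer.read())
</programlisting>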
<literal>doexpire</literal> and <literal>expire</literal>: removing old articles

A good way to get rid of old or unwanted articles in the $NEWSARTS area is to run doexpire once a day. It reads the explist file from the $NEWSCTL directory to determine which articles expire today, and it can archive an expiring article if so configured. It then updates the active and history files accordingly. If you wish to retain an article's entry in the history file after it has expired, so that it is not re-digested as a new article, add a special /expired/ line to the control file. More on the options and functioning can be found in the expire manpage.
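The core idea behind expiry can be shown in a few lines of Python: walk a newsgroup's spool directory and delete article files older than a cutoff. This is only a sketch under assumed paths and retention periods; the real doexpire is driven by explist, can archive articles, and keeps the history and active files consistent.

<programlisting>
import os, time

NEWSARTS = "/var/spool/news"       # assumed spool location
DAYS = 14                          # assumed retention period

def expire_group(newsgroup: str, days: int = DAYS) -> None:
    """Delete article files in one newsgroup older than `days` days."""
    groupdir = os.path.join(NEWSARTS, *newsgroup.split("."))
    cutoff = time.time() - days * 86400
    for name in os.listdir(groupdir):
        path = os.path.join(groupdir, name)
        if name.isdigit() and os.path.getmtime(path) < cutoff:
            os.remove(path)

expire_group("comp.music.compose")
</programlisting>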
<literal>nntpd</literal> and <literal>msgidd</literal>: managing the NNTP interface

As has already been discussed in the chapter on setting up the software, nntpd is a TCP-based server daemon which runs under inetd. It is fired by inetd whenever there is an incoming connection on the NNTP port, and it takes over the dialogue from there. It reads the C-News configuration and data files in $NEWSCTL, article files from $NEWSARTS, and receives incoming posts and transfers, which it dutifully queues in $NEWSARTS/in.coming, either as batch files or as single-article files. It is important that inetd be configured to fire nntpd as user news, not as root the way it does for other daemons like telnetd or ftpd; if this is not done correctly, it can cause a lot of problems in the functioning of the C-News system later.

nntpd is fired each time a new NNTP connection is received, and dies once the NNTP client closes its connection. Thus, if one nntpd receives a few articles through an incoming batch feed (not a POST but an XFER), another nntpd will not know about the receipt of these articles till the batches are digested. This would hamper duplicate detection if multiple upstream NDNs were feeding our server the same set of articles over NNTP. To fix this, nntpd uses an ally: msgidd, the message-ID daemon. This daemon is fired once at server bootup time through newsboot, and keeps running quietly in the background, listening on a named Unix socket in the $NEWSCTL area. It keeps in memory a list of all message IDs which various incarnations of nntpd have asked it to remember. Thus, when one copy of nntpd receives an incoming feed of news articles, it updates msgidd with the message IDs of those articles through the Unix socket. When another copy of nntpd is fired later and its NNTP client tries to feed it some more articles, that nntpd checks each message ID against msgidd. Since msgidd stores all these IDs in memory, the lookup is very fast, and duplicate articles are blocked at the NNTP interface itself.

On a running system, expect to see one instance of nntpd for each active NNTP connection, and just one instance of msgidd running quietly in the background, hardly consuming any CPU resources. Our nntpd is configured to die if the NNTP connection is idle for more than a few minutes, thus conserving server resources; this does not inconvenience the user, because modern NNTP clients simply re-connect. If an nntpd instance is found to have been running for days, it is either hung due to a network error, or is receiving a very long incoming NNTP feed from your upstream server. We used to receive our primary incoming feed from our service provider through NNTP sessions lasting 18 to 20 hours without a break, every day.
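The following Python sketch illustrates the idea behind msgidd: a daemon listening on a Unix socket that keeps message IDs in memory and answers whether it has seen one before. The socket name and the one-ID-per-connection exchange are assumptions made for this toy example; they do not reflect msgidd's actual socket name or on-the-wire protocol.

<programlisting>
import os, socket

SOCKET_PATH = "/var/lib/news/msgid.sock"   # assumed name; the real msgidd socket differs
seen = set()                                # message IDs are held only in memory

def serve() -> None:
    """Toy message-ID registry: each connection sends one message ID and gets
    back 'dup' or 'new'. This illustrates the idea, not msgidd's protocol."""
    if os.path.exists(SOCKET_PATH):
        os.unlink(SOCKET_PATH)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCKET_PATH)
    srv.listen(5)
    while True:
        conn, _ = srv.accept()
        with conn:
            msgid = conn.recv(1024).decode().strip()
            conn.sendall(b"dup\n" if msgid in seen else b"new\n")
            seen.add(msgid)

if __name__ == "__main__":
    serve()
</programlisting>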
<literal>nov</literal>, the News Overview system

NOV, the News Overview system, is a recent augmentation to the C-News and NNTP systems and to the NNTP protocol. This subsystem maintains one file per active newsgroup, with one line of text per current article. The line carries some key meta-data about the article, e.g. the contents of the From, Subject and Date headers, the article size, and the message ID. This speeds up NNTP responses enormously. The nov library has been integrated into the nntpd code and into key binaries of C-News, thus providing seamless maintenance of the News Overview database as articles are added to or deleted from the repository. When newsrun adds an article to starcom.test, it also updates $NEWSARTS/starcom/test/.overview, adding a line with the relevant data, tab-separated. When nntpd comes to life to serve an NNTP client and sees the XOVER NNTP command, it reads this .overview file and returns the relevant lines to the client. When expire deletes an article, it also removes the corresponding line from the .overview file. Thus, the maintenance of the NOV database is seamless.
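Here is a short Python sketch that reads a group's .overview file and prints article numbers and subjects. It assumes the customary NOV field order (article number, Subject, From, Date, Message-ID, References, byte count, line count), one tab-separated record per line, and the spool path used earlier; check your overview format before relying on the field order.

<programlisting>
import os

NEWSARTS = "/var/spool/news"     # assumed spool location

def read_overview(newsgroup: str):
    """Yield (article number, subject, sender, message-id) from a group's
    .overview file, assuming the customary tab-separated NOV field order."""
    path = os.path.join(NEWSARTS, *newsgroup.split("."), ".overview")
    with open(path) as f:
        for line in f:
            fields = line.rstrip("\n").split("\t")
            if len(fields) < 8:
                continue                     # skip malformed records
            artnum, subject, sender, date, msgid, refs, nbytes, nlines = fields[:8]
            yield artnum, subject, sender, msgid

for artnum, subject, sender, msgid in read_overview("starcom.test"):
    print(artnum, subject)
</programlisting>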
Batching feeds with UUCP and NNTP

Some information about batching feeds has been provided in earlier sections. More will be added here later.