Linux Gazette... making Linux just a little more fun!

Copyright © 1996-97 Specialized Systems Consultants, Inc. linux@ssc.com


Welcome to Linux Gazette!(tm)


Published by:

Linux Journal


Sponsored by:

InfoMagic

S.u.S.E.

Red Hat

Our sponsors make financial contributions toward the costs of publishing Linux Gazette. If you would like to become a sponsor of LG, e-mail us at sponsor@ssc.com.


Table of Contents
October 1997 Issue #22


The Answer Guy
The Weekend Mechanic will be back next month


The Whole Damn Thing 1 (text)
The Whole Damn Thing 2 (HTML)
are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.


Got any great ideas for improvements? Send us your comments, criticisms, suggestions and ideas.


This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com

"Linux Gazette...making Linux just a little more fun!"


 The Mailbag!

Write the Gazette at gazette@ssc.com

Contents:


Help Wanted -- Article Ideas


 Date: Mon, 25 Aug 1997 15:02:14 -0700
From: cooldude cooldude@digitalcave.com
Subject: how do

how do i set up a linux server from scratch?
my friend has the t1 connection and i'm gonna admin it with his permission. need to know A.S.A.P.
=)
thanks


 Date: Mon, 1 Sep 97 18:59:51 UT
From: Richard Wang rzlw1@classic.msn.com
Hi,
I have just set up a system for Red Hat Linux, but I am finding that getting real support for this system is very difficult. In fact, I cannot even set up my web page via SLIP from the manuals I have. Red Hat seems to go against its competitor Caldera, and I am finding it hard to find the right manuals and guides for this system.
Do you have an online help person I can write to?
Looking forward to your reply,

Richard Wang
Cambridge
United Kingdom


 Date: Wed, 17 Sep 1997 19:49:55 -0700
From: Garry Jackson gjackson@home.com
Subject: Linux Problem.

I'm a Linux newbie and I'm having major problems. I have a monitor that is capable of 800x600, and I don't know anything else about it. I also have a Trio 32/64. I cannot get X Windows to run, so what should I do?

Also, I'm having a problem with my SB16 PnP and I can't get that to work, and I can't get a Supra 28.8 PnP or an SN-3200, which is an NE-2000 clone, working either. If you could give me any tips on getting this stuff to work, it would be appreciated.

Garry Jackson


 Date: Wed, 3 Sep 1997 19:28:20 -0400
From: Prow Prowlyr@mindspring.com
Subject: Just some really basic help please.

I want to learn about Unix but really don't know where to start. Can I get a free version somewhere to get me started? Do you know of a good Unix-for-dummies site that might help? Would greatly appreciate any reply via e-mail. Thanx in advance.


 Date: Tue, 09 Sep 1997 00:49:50 +0200
From: Michael Stumpf ms@astat.de
Subject: Linux Kernel

I'm searching for information about the status of the current kernel (release and/or development). Do you have the Web address of an up-to-date site? I used to look at "http://www.linuxhq.com" for this, but it seems that it is forever down.

tia

Michael


 Date: Sat, 27 Sep 1997 11:02:04 -0400
From: Dave Runnels drunnels@panix.com
Subject: 3com509b problems

I recently added a 3com509b Ethernet card to my Win95/Linux machine. I run the machine in PnP mode and the RedHat 4.2 install process won't recognize the card. RedHat's solution was to disable PnP for the machine. While this might be fine for Linux, I am forced to use Win95 for a number of things and turning off PnP (which works great for me on Win95) will be a real pain in the ass.

Is there a way I might have my cake and eat it too? I do know which IRQ the card is being assigned to.

Thanks, Dave


 Date: Mon, 22 Sep 1997 10:06:04 +0200
From: Erwin Penders ependers@cobweb.nl
Subject: email only

Hi,

My name is Erwin Penders and I'm working for a local ISP in the Netherlands. I don't know if I sent this mail to the right place, but I have a question about a Linux problem. I want to know how to set up an email-only account (so you can call the ISP, make a connection and send/receive email) without the possibility of WWW, Telnet, etc. The main problem is that I don't know how to set up the connection (the normal accounts get a /etc/ppp/ppplogin).

Can anybody help me with this problem!?

Thanks,

Erwin Penders
(CobWeb)


 Date: Sat, 20 Sep 1997 22:00:38 +0200
From: Richard Torkar richard.torkar@goteborg.mail.telia.com
Subject: Software for IDE cd-r?

First of all, thanks for a great e-zine!

And then to my question... (You didn't really think that I wrote to you just to be friendly did you? ;-)

Is there any software written for IDE cd-r for example Mitsumi CR2600TE?

I found two programs; Xcdroast and CDRecord for Linux, but unfortunately they don't support IDE cd-r :-(

I haven't found anything regarding this problem, and I've used darned near all the search tools on the net... Any answer would be appreciated. If the answer is no, can I solve this problem somehow?

Regards,
Richard Torkar from the lovely land of ice beers .. ;-)


 Date: Thu, 18 Sep 1997 16:03:04 -0400 (EDT)
From: Eric Maude sabre2@mindspring.com
Subject: Redhat Linux 4.3 Installation Help

I am trying to install Red Hat Linux 4.3 on a Windows 95 (not OSR 2) machine. I do want to set this machine up as dual boot, but that's not really my problem. I have been totally unable to set up Linux because I am unable to create the non-MS-DOS partition that Linux requires. I am pretty new to Linux and would appreciate detailed step-by-step instructions on how to go about setting up Red Hat Linux. I would call Red Hat directly, but I am at work during their operating hours and not near the machine I need help with. Please, somebody help me out!!

Thanks!!


General Mail


 Date: Fri, 29 Aug 1997 11:02:39 -0300
From: Mario Storti mstorti@minerva.unl.edu.ar
Subject: acknowledge to GNU software

(Sorry if this is off-topic)

From now on I will mention the GNU (and free in general) software I make use of in the "acknowledgments" section of my (scientific) papers, and I suggest that all those who are working on scientific applications do the same. Since Linux is getting stronger every day in the scientific community, this could represent important support, especially when requesting funding. Even better would be a database of all these "acknowledgments" on a Web site or something similar. Does anyone know of something like this that is already working? Any comments?

Mario


 Date: Sun, 07 Sep 1997 23:58:16 -0500
From: mike shimanski mshiman@xnet.com
Subject: Fun

I just discovered Linux in July and am totally pleased. After years of DOS, Win 3.1, OS/2 and Win95 (I won't discuss my experience with Apple), I think I have found an operating system I can believe in. I cannot make this thing crash!

The Linux Gazette has been a rich source of information and makes being a newbie a great deal easier. I want to thank you for the time and effort you put into this publication. It has made my induction into the Linux world a lot easier.

Did I mention I am having way too much fun exploring this operating system? Am I weird or what?

Again, thanks for a great resource.

Mike Shimanski


 Date: Sat, 06 Sep 1997 18:01:52 -0700
From: George Smith gbs@swdc.stratus.com
Subject: Issue 21

THANKS! Thanks! Thank You!

Issue 21 was great! I loved it! I most appreciate the ability to download it to local disk and read it without my network connection being live, and with the speed of a local disk. Please keep offering this feature - I wish everyone did. BTW, I have been a subscriber to Linux Journal since issue 1 and enjoy it immensely also.

Thanks again.


 Date: Wed, 03 Sep 1997 19:34:29 -0500
From: Mark C. Zolton trustno1@kansas.net
Subject: Thank you Linux Gazette

Hello There,

I just wanted to thank you for producing such a wonderful publication. As a relative newbie to Linux, I have found your magazine of immense use in answering the plethora of questions I have. Keep up the good work. Maybe one day I'll be experienced enough to write for you.

Mark


 Date: Mon, 1 Sep 1997 00:09:53 -0500 (CDT)
From: Arnold Hennig amjh@qns.com
Subject: Response to req. for help - defrag

I saw the request for information about the (lack of) need for defragging in issue 20, and have just been studying the disk layout a bit anyway.

Hope the following is helpful:

In reference to the question titled "Disk defrag?" in issue 20 of the Linux Gazette:

I had the same question in the back of my mind once I finally got Linux up and running after some years of running a DOS-based computer. After I was asked the same question by someone else, I poked around a bit and did find a defrag utility buried someplace on sunsite. The documentation pretty much indicated that with the ext2 file system it is rarely necessary to use the utility (the author wrote it prior to the general use of ext2fs). He gave a bit of an explanation, and I found some additional information the other day following links that (I believe) originated in the Gazette.

Basically, DOS does not keep a map of the disk usage in memory; each new write simply starts from the next available free cluster (block), writes until it gets to the end of the free space, and then jumps to the next free space and continues. After it reaches the end of the disk, or at the next reboot, the "next free cluster" becomes the "first free cluster", which is probably where something was deleted, and may or may not be an appropriate amount of free space for the next write. There is no planning ahead, either for using appropriately sized available spaces or for clustering related files together. The result is that the use of space on the disk gets fragmented and disorganized rather quickly, and the defrag utilities are a necessary remedy.

In fairness to DOS, it was originally written for a computer with precious little memory, and this method of allocating write locations didn't strain the resources much.

The mounting requirement under unices allows the kernel to keep a map of the disk usage and allocate disk space more intelligently. The Ext2 filesystem allocates writes in "groups" spread across the area of the disk, and allocates files in the same group as the directory to which they belong. This way the disk optimization is done as the files are written to disk, and a separate utility is not needed to accomplish it.

Your other probable source of problems is unanticipated shutdowns (the power went out; Dosemu froze the console and you don't have a way to dial in through the modem to kill it - it kills clean, BTW ;-); or your one-year-old niece discovered the reset button). These will tend to cause lost-cluster-type problems with the files you had open at the time, but the startup scripts almost universally run fsck, which will fix them. You WILL notice the difference in the startup time when you have had an improper shutdown.

So, yes, you may sleep with peace of mind in this respect.

Arnold M.J. Hennig
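
Ed. Note: if you're curious how fragmented an ext2 filesystem actually is, e2fsck will tell you. A minimal sketch; the device name is only an example, and the check should be run read-only on an unmounted (or read-only mounted) partition:

e2fsck -n /dev/hda1     # -n: read-only check; substitute your own partition

Look for the "non-contiguous" percentage in the summary line it prints; on ext2 it is typically only a few percent.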


 Date: Wed, 3 Sep 1997 16:19:17 -0600 (MDT)
From: Mark Midgley midgley@pht.com
Subject: Commercial Distribution

Mo'Linux, a monthly Linux distribution produced by Pacific HiTech, Inc., includes current Linux Gazette issues. They are copied whole, in accordance with the copyright notice.

Mark


 Date: Thu, 11 Sep 1997 12:26:53 -0400
From: Brian Connors connorbd@bc.edu
Subject: Linux and Mac worlds vs Microsoft?

Michael Hammel made an interesting comment in the September letters column about aligning with Mac users against Microsoft. The situation's not nearly as rosy as all that, what with Steve Jobs' latest activity in the Mac world. As a Mac diehard, I'm facing the prospect of a good platform being wiped out by its own creator, whether that's really his intention or not. IMHO the Linux world should be pushing for things like cheap RISC hardware (which IBM and Motorola have but aren't pushing) and support from companies like Adobe. I know that in my case, if the MacOS is robbed of a future, I won't be turning to Windows for anything but games...


 Date: Thu, 11 Sep 1997 22:59:19 +0900
From: mark stuart mark@www.hotmail.com
Subject: article ideas

Why not an issue on Linux on SPARC and Alpha (especially for scientific applications)? And how about an issue on SMP with Linux?


 Date: Sat, 27 Sep 1997 01:57:09 -0700 (PDT)
From: Ian Justman ianj@chocobo.org

Except for the SNA server, all I've got to say about Linux with all the necessary software is: "Eat your heart out, BackOffice!"

--Ian.


 Date: Wed, 24 Sep 1997 21:49:28 -0700
From: Matt Easton measton@lausd.k12.ca.us
Subject: Thanks

Thank you for Linux Gazette. I learn a lot there; and also feel more optimistic about things not Linux after visiting.


 Date: Fri, 26 Sep 1997 13:24:29 -0500
From: "Samuel Gonzalez, Jr." buzz@pdq.net
Subject: Excellent Job

Excellent job !!!

Sam


Published in Linux Gazette Issue 22, October 1997





More 2¢ Tips!


Send Linux Tips and Tricks to gazette@ssc.com


Contents:


Netscape and Seyon questions

Date: Mon, 8 Sep 1997 11:23:51 -0600 (MDT)
From: "Michael J. Hammel" mjhammel@long.emass.com

Lynn Danielson asked:

I downloaded Netscape Communicator just a few weeks ago from the Netscape site. I'm not sure older versions of Netscape are still available. I'm probably wrong, but I was under the impression that only the most current beta versions were freely available.

Answer:

A quick search through AltaVista for Netscape mirrors showed a couple of different listings of mirror sites. I perused a few and found most either didn't have anything or had only non-English versions, etc. One site I did find with all the appropriate pieces is:

ftp://ftp.adelaide.edu.au/pub/WWW/Netscape/pub/

It's a long way to go to get it (Australia), but that's all I could find. If you want to go directly to the latest (4.03b8) Communicator directory, try:

ftp://ftp.adelaide.edu.au/pub/WWW/Netscape/pub/communicator/4.03/4.03b8/english/unix/

I did notice once while trying to download from Netscape that older versions were available, although I didn't try to download them. I noticed this while looking for the latest download of Communicator through their web sites. Can't remember how I found that, though.

The 3.x version is available commercially from Caldera. I expect that the 4.x versions will be as well, though I don't know if Caldera keeps the beta versions on their anonymous ftp sites.

BTW, the Page Composer is pretty slick, although it has no interface for doing JavaScript. It has a few bugs, but it's the best WYSIWYG interface for HTML composition on Linux that I've seen. It's better than Applix's HTML Editor, although that one does allow exporting to non-HTML formats. Collabra Discussions sucks; the old news reader was better at most things. I'd still like to be able to mark a newsgroup read up to a certain point instead of the all-or-nothing bit.

For anyone who is interested - 4.x now supports CSS (Cascading Style Sheets) and layers. Both of these are *very* cool. They are the future of Web design and, IMHO, a very good way to create Multimedia applications for distribution on CDs. One of C|Net's web pages (I think) has some info on these items, including a demo of layers (moves an image all over the screen *over* the underlying text - way cool). The only C|Net URL I ever remember is www.news.com, but you can get to the rest of their sites from there.

-- Michael J. Hammel


Keeping track of tips

Date: Tue, 26 Aug 1997 16:29:13 +0200
From: Ivo Saviane saviane@astrpd.pd.astro.it

Dear LG,

It always happens to me that I spend a lot of time finding out how to do a certain thing under Linux/Unix, and then I forget it. The next time I need that information I start the whole `find . ...', `grep xxx *' process again and waste the same amount of time!

To me, the best way to avoid that is to send myself a mail describing how to do that particular operation. But mail folders get messy and, moreover, are not useful to other users who might need that same information.

Finally I found something that contributes to solving this problem. I set up a dummy user who reads his mail and puts it in WWW-readable form. Now it is easy for me to send a mail to news@machine as soon as I learn something, and be sure that I will be able to find that information again just by clicking on the appropriate link. It would also be easy to set up a grep script and link it to the same page (a sketch of such a script appears at the end of this tip).

The only caveat is to put a meaningful `Subject:' on the mail, since this string will be written beside the link.

I am presently not aware of anything similar - at least, not that simple. If you know of something, let me know too!

If you want to see how this works, visit

http://obelix.pd.astro.it/~news

A quick description of the basic operations needed is given below.

--------------------------------------------------------------------------

The following lines briefly describe how to set up the lightweight news server.

1. Create a new user named `news'

2. Login as news and create the directories ~/public_html and ~/public_html/folders (I assume that your http server is configured so that `http://machine/~user' will point to `public_html' in the user's $HOME).

3. Put the wmanager.sh script in the $HOME/bin directory. The script follows the main body of this message as attachment [1]. The script does work under bash.

The relevant variables are grouped at the beginning of the script. These should be changed according to the machine/user setup.

4. The script uses splitmail.c to break the mail file into sub-folders. The binary should be put in the $HOME/bin dir. See attachment [2].

5. Finally, add a line in the `news' user crontab, like the following

00 * * * * /news_bin_dir/wmanager.sh

where `news_bin_dir' stands for $HOME/bin. In this case the mail will be checked once every hour.

---------------------------------- attachment [1]

#!/bin/sh

# wmanager.sh

# Updates the www news page reading the user's mails 
# (c) 1997 Ivo Saviane

# requires splitmail (attachment [2]) 

## --- environment setup

BIN=/home/obelnews/bin                  # contains all the executables
MDIR=/usr/spool/mail                    # mail files directory
USR=news                                # user's login name
MFOLDER=$MDIR/$USR                      # user's mail file
MYFNAME=`date +%y~%m~%d~%H:%M:%S.fld`   # filename for mail storage under www

FLD=folders                             # final dir root name
PUB=public_html                         # httpd declared public directory
PUBDIR=$HOME/$PUB/$FLD                  
MYFOLDER=$PUBDIR/$MYFNAME
INDEX=$HOME/$PUB/index.html

## --- determines the mailfile size

MSIZE=`ls -l $MFOLDER | awk '{print $5}'`

## --- if new mail arrived goes on; otherwise does nothing

if [ "$MSIZE" != "0" ]; then 

## --- writes the header of index.html in the pub dir

 echo "<html><head><title> News! </title></head>" > $INDEX
 echo "<h2> Internal news archive </h2> <p><p>" >> $INDEX
 echo "Last update: <i>`date`</i> <hr>" >> $INDEX

## --- breaks the mail file into single folders; splitmail.c must be
##     compiled (splitmail reads the mailbox on stdin and writes the
##     messages out as $MFOLDER.1, $MFOLDER.2, ...)

 $BIN/splitmail $MFOLDER < $MFOLDER

## --- each folder is copied into the folder dir, under the pub dir,
##     and given a unique name

 for f in $MFOLDER.*; do
   NR=`echo $f | cut -d. -f2`
   MYFNAME=`date +%y~%m~%d~%H:%M:%S.$NR.fld`
   MYFOLDER=$PUBDIR/$MYFNAME
   mv $f $MYFOLDER
 done

## --- prepares the mailfile for future messages

 rm $MFOLDER
 touch $MFOLDER 

## --- Now creates the body of the www index page, searching the
##     folders dir.  (The cut field numbers assume a path like
##     /home/obelnews/public_html/folders/file.fld; adjust them if
##     your $HOME sits at a different depth.)

 for f in `ls $PUBDIR/* | grep -v index`; do
   htname=`echo $f | cut -d/ -f5,6`
   rfname=`echo $f | cut -d/ -f6 | sed 's/.fld//g'`
   echo "<a href=\"$htname\"> $rfname</a>" >> $INDEX
   echo "<strong>" >> $INDEX
   grep "Subject:" $f | head -1 >> $INDEX
   echo "</strong>" >> $INDEX
   echo "<br>" >> $INDEX
 done

  echo "<hr>End of archive" >> $INDEX
  echo "</html>" >> $INDEX
fi 

---- attachment [2]


/****************************************************************************** 
   Reads stdin. Assuming that this has a mailfile format, it breaks the input
   in single messages. A filestem must be given as argument, and single 
   messages will be written as  filestem.1 filestem.2 etc.
   (c) 1997 I.Saviane

******************************************************************************/

#define NMAX 256
/*****************************************************************************/

#include <stdio.h>
#include <stdlib.h>                     /* for system() */

int IsFrom(char *s);                    /* forward declaration */

/*****************************************************************************/

/**************************  MAIN **************************************/

int main(int argc, char *argv[]) {

  FILE *fp;
  char mline[NMAX], mname[NMAX];
  int nmail=0, open;

  if(argc < 2) {
    fprintf(stderr, "splitmail: no input filestem");
    return -1;
  }

  /* dummy file: anything that appears before the first "From "
     separator line is written here and thrown away at the end */
  fp = fopen("/tmp/xx", "w");
  while(fgets(mline, NMAX, stdin) != NULL) {

    open = IsFrom(mline);
    if(open==1) {

      fclose(fp);
      nmail++;
      sprintf(mname, "%s.%d", argv[1], nmail);
      fp = fopen(mname, "w");
      open = 0;
    }
    fprintf(fp, "%s", mline);
  }
  fclose(fp);
  system("rm /tmp/xx");
  return 0;
}


/*****************************************************************************/

int IsFrom(char *s) {

  if(s[0]=='F' && s[1]=='r' && s[2]=='o' && s[3]=='m' && s[4]==' ') {

    return 1;
  } else {

    return 0;
  }
}
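
Ed. Note: for the grep script Ivo mentions, something along the following lines should work. This is only a sketch: the script name grepnews.cgi and the paths are assumptions, your httpd must be configured to execute CGI scripts, and no URL-decoding is done, so it only handles single-word queries.

#!/bin/sh
# grepnews.cgi - hypothetical search front-end for the news archive.
# The search word arrives in QUERY_STRING, e.g. .../grepnews.cgi?modem
echo "Content-type: text/html"
echo
echo "<html><body><h2>Folders matching '$QUERY_STRING'</h2><pre>"
grep -il "$QUERY_STRING" /home/news/public_html/folders/*.fld
echo "</pre></body></html>"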


Displaying File Tree

Date: Tue, 26 Aug 1997 16:40:43 -0400 (EDT)
From: Scott K. Ellis storm@gate.net

A nice tool for displaying a graphic tree of files or directories in your filesystem can be found at your local sunsite mirror under /pub/Linux/utils/file/tree-1.2.tgz. It is also included as the package tree included in the Debian distribution.


Making Changing X video modes easier

Date: Thu, 28 Aug 1997 20:29:59 +0100
From: Jo Whitby pandore@globalnet.co.uk
Hi

In issue 20 of the Linux Gazette there was a letter from Greg Roelofs on changing video modes in X - this was something I had tried, and I had found changing colour depths awkward and didn't know how to start multiple instances of X.

I also found the syntax of the commands difficult to remember, so here's what I did.

First I created 2 files in /usr/local/bin called x8 and x16 for the colour depths that I use, and placed the command in them -

for x8

#!/bin/sh
startx -- :$* -bpp 8 &

and for x16

#!/bin/sh
startx -- :$* -bpp 16 &

then I made them executable -

chmod -c 755 /usr/local/bin/x8
chmod -c 755 /usr/local/bin/x16

Now I simply issue the command x8 or x16 for the first instance of X, and x8 1 or x16 1 for the next, and so on. This I find much easier to remember :-) An addition I would like to make would be to check which X servers are running and increment the numbers automatically, but as I have only been running Linux for around 6 months my script writing is extremely limited; I must invest in a book on the subject.
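
Ed. Note: the automatic numbering can be done by checking for the X server's lock files. A rough sketch, assuming XFree86's convention of leaving /tmp/.X0-lock, /tmp/.X1-lock, ... around for each running server; it takes the colour depth as its argument (e.g. xnext 16):

#!/bin/sh
# xnext - start X on the first free display at the given colour depth
n=0
while [ -f /tmp/.X$n-lock ]; do
  n=`expr $n + 1`
done
startx -- :$n -bpp ${1:-8} &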

Linux is a fantastic OS. Now I've tried it, I could not go back to Windoze, and I hate having to turn my Linux box into a wooden doze box just to run the couple of progs that I can't live without (Quicken 4 and a lottery-checking prog), so if anyone knows of good alternatives to these please let me know. The sooner doze is gone for good the better - then Linux can have the other 511MB of space doze95 is hogging!

PS. Linux Gazette is just brilliant. I've been reading all the back issues and have nearly caught up now - I've only been on the net for 3 months. I hope to be able to contribute something a little more useful to the Gazette in the future, when my knowledge is a little better :-)

keep up the good work.


Tree Program

Date: Mon, 01 Sep 1997 03:28:57 -0500
From: Ian Beth13@mail.utexas.edu

Try this instead of the tree shell-script mentioned earlier:
--------- Cut here --------


#include <stdlib.h>
#include <stdio.h>
#include <string.h>   // for strcmp/strcpy

#include <sys/stat.h>
#include <unistd.h>

#include <sys/types.h>
#include <dirent.h>


// This is cool for ext2.
#define MAXLEN 256
#define maxdepth 4096

struct dnode {
 dnode *sister;
 char name[MAXLEN];
};

const char *look;
const char *l_ascii="|+`-";
const char l_ibm[5]={179,195,192,196,0};

int total;

char map[maxdepth];

void generate_header(int level) {
 int i;
 for (i=0;i<level;i++) printf(" %c ",(map[i]?look[0]:32));
 printf (" %c%c ",(map[level]?look[1]:look[2]),look[3]);
}

dnode* reverselist(dnode *last) {
 dnode *first,*current;
 first=NULL;
 current=last;

 // Put it back in order:
 // Pre: last==current, first==NULL, current points to backwards linked list
 while (current != NULL) {
  last=current->sister;
  current->sister=first;
  first=current;
  current=last;
 }

 return first;
}

void buildtree(int level) {
 dnode *first,*current,*last;
 first=current=last=NULL;
 char *cwd;
 struct stat st;

 if (level>=maxdepth) return;

 // This is LINUX SPECIFIC: (ie it may not work on other platforms)
 cwd=getcwd(NULL,maxdepth);
 if (cwd==NULL) return;

 // Get (backwards) Dirlist:
 DIR *dir;
 dirent *de;

 dir=opendir(cwd);
 if (dir==NULL) return;

 while ((de=readdir(dir))) {
  // use de->d_name for the filename
  if (lstat(de->d_name,&st) != 0) continue; // ie if not success go on.
  if (!S_ISDIR(st.st_mode)) continue; // if not dir go on.
  if (!(strcmp(".",de->d_name) && strcmp("..",de->d_name))) continue; // skip ./ and ../
  current=new dnode;
  current->sister=last;
  strcpy(current->name,de->d_name);
  last=current;
 }

 closedir(dir);

 first=reverselist(last);

 // go through each printing names and subtrees

 while (first != NULL) {
  map[level]=(first->sister != NULL);
  generate_header(level);
  puts(first->name);
  total++;
  // consider recursion here....
  if (chdir (first->name) == 0) {
   buildtree(level+1);
   if (chdir (cwd) != 0) return;
  }
 current=first->sister;
  delete first;
  first=current;
 }
 free (cwd);
}

void tree() {
 char *cwd;
 cwd=getcwd(NULL,maxdepth);
 if (cwd==NULL) return;
 printf("Tree of %s:\n\n",cwd);
 free (cwd);
 total=0;
 buildtree(0);
 printf("\nTotal directories = %d\n",total);
}

void usage() {
 printf("usage: tree {-[agiv]} {dirname}\n\n");
 printf("Tree version 1.0 - Copyright 1997 by Brooke Kjos <beth13@mail.utexas.edu>\n");
 printf("This program is covered by the Gnu General Public License version 2.0\n");
 printf("or later (copyleft). Distribution and use permitted as long as\n");
 printf("source code accompanies all executables and no additional\n");
 printf("restrictions are applied\n");
 printf("\n\n Options:\n\t-a use ascii for drawings\n");
 printf("\t-[ig] use IBM(tm) graphics characters\n");
 printf("\t-v Show version number and exit successfully\n");
}

int main (int argc,char ** argv)  {
 look=l_ascii;
 int i=1;
 if (argc>1) {
  if (argv[1][0]=='-') {
   switch ((argv[1])[1]) {
    case 'i':
    case 'I':
    case 'g':
    case 'G':
    look = l_ibm;
    break;
    case 'a':
    case 'A':
    look = l_ascii;
    break;
    case 'v':
    case 'V':
    usage();
    exit(0);
    default:
    printf ("Unknown option: %s\n\n",argv[1]);
    usage();
    exit(1);
   } // switch
   i=2;
  } // if2
 } // if1
 if (argc > i) {
  char *cwd;
  cwd=getcwd(NULL,maxdepth);
  if (cwd==NULL) {
   printf("Failed to getcwd:\n");
   perror("getcwd");
   exit(1);
  }
  for (;i<argc;i++) {
   if (chdir(argv[i]) == 0) {
    tree();
    if (chdir(cwd) != 0) {
     printf("Failed to chdir to cwd\n");
     exit(1);
    }
   }
   else printf("Failed to chdir to %s\n\n",argv[i]);
  } // for
  free (cwd);
 } else tree();
 return 0;
}

------- Cut Here --------

Call this tree.cc and compile it with g++ -O2 tree.cc -o /usr/local/bin/tree (it is C++, so use g++ rather than gcc).


Managing an Entire Project

Date: Tue, 26 Aug 1997 16:44:06 -0400 (EDT)
From: Scott K. Ellis storm@gate.net

While RCS is useful for managing one or a small set of files, CVS is a wrapper around RCS that allows you to easily keep track of revisions across an entire project.


Finding what you want with find

Date: Tue, 2 Sep 1997 21:53:41 -0500 (CDT)
From: David Nelson dnelson@psa.pencom.com

While find . -type f -exec grep "string" {} \; works, it does not tell you which file the string was found in. Try using find . -type f -exec grep "string" /dev/null {} \; instead.

David /\/elson
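
Ed. Note: the reason this works is that grep only prefixes each match with a filename when it is given more than one file to search, and /dev/null supplies a second (always empty) file. If all you want is the names of the matching files, grep's -l option does that directly:

# prints "filename: matching line" for every hit
find . -type f -exec grep "string" /dev/null {} \;

# prints only the names of the files that contain the string
find . -type f -exec grep -l "string" {} \;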


Minicom kermit help

Date: Wed, 10 Sep 1997 12:21:55 -0400 (EDT)
From: "Donald R. Harter Jr." ah230@traverse.lib.mi.us

With minicom, ckermit was hanging up the phone line after I exited it to return to minicom. I was able to determine a quick fix for this: in the file ckutio.c, comment out (with /* */) line 2119, which contains the call to tthang(); tthang() hangs up the line. I don't know why ckermit thought that it should hang up the line.

Donald Harter Jr.
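
Ed. Note: line numbers shift between ckermit versions, so it is safer to match on the call itself. A sketch, assuming the call appears in ckutio.c exactly as tthang(); (check the result before rebuilding - this comments out every such call):

sed 's|tthang();|/* tthang(); */|' ckutio.c > ckutio.c.new
mv ckutio.c.new ckutio.c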


Postscript printing

Date: Sun, 7 Sep 1997 15:12:17 +0200 (MET DST)
From: Roland Smith mit06@ibm.net

Regarding your question in the Linux Gazette, there is a program that can interpret postscript for different printers. It's called Ghostscript.

The smartest thing to do is to encapsulate it in a shell script and then call that script from /etc/printcap.


----- Ghostscript shell script -------
#!/bin/sh
#
# pslj     This shell script is called as an input filter for the
#          HP LaserJet 5L printer as a PostScript printer
#
# Version: /usr/local/bin/pslj 1.0
#
# Author:  R.F. Smith <rsmit06@ibm.net>
#
# Run GhostScript, which runs quietly at a resolution of 600 dpi,
# outputs for the LaserJet 4, in safe mode, without pausing at page
# breaks, writing and reading from standard input/output
/usr/bin/gs -q -r600 -sDEVICE=ljet4 -dSAFER -dNOPAUSE -sOutputFile=- -
------- Ghostscript shell script ------

You should only have to change the resolution (-r) and device (-sDEVICE) options to something more suitable for your printer. See gs -? for a list of supported devices; I'd suggest you try the cdeskjet or djet500c devices. Do a chmod 755 <scriptname>, and copy it to /usr/local/bin as root.

Next you should add a Postscript printer to your /etc/printcap file. Edit this file as root.

-------- printcap excerpt -----------
ps|HP LaserJet 5L as PostScript:\
        :lp=/dev/lp1:\
        :sd=/var/spool/lp1:\
        :mx#0:\
        :if=/usr/local/bin/pslj:sh
-------- printcap excerpt ------------

This is the definition of a printer called ps. It passes everything it should print through the pslj filter, which converts the postscript to something my Laserjet 5 can use.

To print PostScript, use lpr -Pps filename (change ps to whatever printer name you chose in /etc/printcap).

Hope this helps!

Roland
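
Ed. Note: it's worth testing the filter by hand before editing printcap. A sketch, assuming tiger.ps (the sample file that ships with Ghostscript) is at hand and the printer sits on /dev/lp1:

/usr/local/bin/pslj < tiger.ps > /tmp/tiger.lj   # run the filter manually
cat /tmp/tiger.lj > /dev/lp1                     # as root, bypassing the spooler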


Realaudio without X-windows

Date: Sun, 7 Sep 1997 00:45:58 -0700 (PDT)
From: Toby Reed toby@eskimo.com

This is more of a pointer than a tip, but your readers might want to check out traplayer on sunsite, it lets you play realaudio without starting up an X server on your screen. Kinda useful if you don't like to use memory-hog browsers just to listen to realaudio.

The file is available at sunsite.unc.edu/pub/Linux in the Incoming directory (until it gets moved), and then who knows where. It's called traplayer-0.5.tar.gz.


Connecting to dynamic IP via ethernet

Date: Fri, 12 Sep 1997 13:22:06 +0200
From: August Hoerandl hoerandl@elina.htlw1.ac.at

in LG 21 Denny wrote:

"Hello. I want to connect my Linux box to our ethernet ring here at my company. The problem is that they(we) use dynamic IP adresses, and I don't know how to get an address."

There is a program called bootpc (a BOOTP client for Linux). From the LSM entry (maybe there is a newer version now):

Title:          Linux Bootp Client
Version:        V0.50
Entered-date:   1996-Apr-16
Description:    This is a boot protocol client used to grab the machines
                ip number, set up DNS nameservers and other useful information.
Keywords:       bootp bootpc net util
Author:         ceh@eng.cam.ac.uk (Charles Hawkins)
Maintained-by:  J.S.Peatfield@damtp.cam.ac.uk (Jon Peatfield)
Primary-site:   ftp.damtp.cam.ac.uk:/pub/linux/bootpc/bootpc.v050.tgz
Alternate-site: sunsite.unc.edu:/pub/Linux/system/Network/admin/bootpc.v050.tgz
Platform:       You need a BOOTP server too.
Copying-policy: This code is provided as-is, with no warrenty, share and enjoy.

The package includes a shell script to set up the Ethernet card, send the BOOTP request, receive the answer, and set up everything needed.

I hope this helps

Gustl
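
Ed. Note: the heart of such a script looks roughly like the sketch below. The bootpc invocation and output format shown here are assumptions from memory, and the variable names are illustrative - check the script and documentation that ship with the package for your version's exact usage.

#!/bin/sh
ifconfig eth0 0.0.0.0 up             # interface up, no address yet
bootpc --dev eth0 > /tmp/bootp.out   # broadcast the request (flag assumed)
. /tmp/bootp.out                     # assumes the reply is VAR=value pairs
ifconfig eth0 $IPADDR netmask $NETMASK
route add default gw $GATEWAY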


Running commands from X w/out XTerm

Date: Fri, 26 Sep 1997 18:28:51 -0600
From: "Kenneth R. Kinder" Ken@KenAndTed.com

I often found myself running XTerm just to type a single shell command. After a while, you just wish you could run a single command without even accessing a menu. To solve this problem, I wrote exec. As the program name would imply, the exec program merely prompts (in X11) for a command, and replaces its own process with the shell-oriented command you type in. Exec can also browse files and insert the path in the text box, in case you need a file in your command line. Pretty simple, huh? Exec (of course!) is GPL, and can be downloaded at http://www.KenAndTed.com/software/exec/ -- I would appreciate it if someone would modify my source to do more! =)


Ascii problems with FTP

Date: Wed, 24 Sep 1997 12:42:05 -0400
From: Carl Hohman carl@microserv-canada.com

Andrew, I read your letter to the Linux Gazette in issue 19. I don't know if you have an answer yet, but here's my 2 bits...
If I understand correctly, you are using FTP under DOS to obtain Linux scripts. Now, as you may know, the line terminators in text files are different between Unix systems and DOS (and Apples, for that matter). I suspect that what's happening is this: FTP is smart enough to know about terminator differences between systems involved in an ascii-mode transfer, and performs the appropriate conversions silently and on the fly. This gives you extra ^M's on each line if you download the file in DOS and then simply copy it (or use an NFS mount) to see it from Unix. I suspect that if you use a binary transfer (FTP> image) the file will arrive intact for Linux use if it originates on a Unix server.

Hope this helps.
Carl Hohman
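
Ed. Note: in most command-line FTP clients the mode switch looks like this:

ftp> binary
ftp> get somescript.sh

And if a script has already picked up the extra ^M's, tr will strip them under Linux (015 is the octal code for carriage return):

tr -d '\015' < script.dos > script.unix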


Red Hat Questions

Date: Thu, 18 Sep 1997 14:06:08 -0700
From: James Gilb p27451@am371.geg.mot.com

Signal 11 crashes are often caused by hardware problems. Check out the Sig11 FAQ at: http://www.bitwizard.nl/sig11/

James Gilb


Published in Linux Gazette Issue 22, October 1997






  
Welcome to the Graphics Muse
Set your browser as wide as you'd like now.  I've fixed the Muse to expand to fill the available space!
© 1997 by mjh 
 


muse: 
  1. v; to become absorbed in thought 
  2. n; [ fr. Any of the nine sister goddesses of learning and the arts in Greek Mythology ]: a source of inspiration 
 Welcome to the Graphics Muse! Why a "muse"? Well, except for the sisters aspect, the above definitions are pretty much the way I'd describe my own interest in computer graphics: it keeps me deep in thought and it is a daily source of inspiration. 
[Graphics Mews] [Web Wonderings] [Musings] [Resources]
 
This column is dedicated to the use, creation, distribution, and discussion of computer graphics tools for Linux systems.
 
As expected, two months of material piled up while I was out wandering the far reaches of the US in August.  My travels took me to California for SIGGRAPH, Washington DC for vacation (honest), Huntsville, Alabama for work (the kind that pays the rent), and just last week I was in Dallas for a wedding.  All that plane travel gave me lots of time to ponder just where the Muse has come in the past year and where it should go from here.  Mixed with a good dose of reality from SIGGRAPH, I came up with the topics for this month.

First, there are two new sections: Reader Mail and Web Wonderings. Reader Mail is an extension of Did You Know and Q and A.  I'm getting much more mail now than I did when I first started this column and many of the questions are worthy of passing back to the rest of my readers.  I've also gotten many suggestions for topics.  I wish I had time to cover them all.

Web Wonderings is new but may be temporary.  I know that many people are reading my column as part of learning how to do Web page graphics.  It's hard to deny how important the Web has become or how much more important it will become in the future.  I started reading a bit more on JavaScript to see if the language is sufficient to support a dynamically changing version of my Linux Graphics mini-Howto.  Well, it is.  I'll be working (slowly, no doubt) on converting the LGH to a JavaScript-based set of pages.  My hope is to make it easier to search for tools of certain types.  I can do this with JavaScript, although the database will be pseudo-static, kept as a JavaScript array.  But it should work, and it requires no access to a Web server.

Readers with Netscape 3.x or later browsers should notice a lot more color in this column.  The Netscape 4.x Page Composer makes it pretty easy to add color to text and tables so I make greater use of color now.  Hopefully it will add more than it distracts.  We'll see. I may do a review of Netscape 4.x here or maybe for Linux Journal soon.  There are some vast improvements to this release of Netscape, although the new reader (known as Collabra Discussions) is not one of them.
 
      In this month's column I'll be covering ...

Oh yeah, one other thing:  Yes, I know I spelled "Gandhi" wrong in the logo used in the September 1997 Linux Gazette.  I goofed.  I was more worried about getting the quote correct and didn't pay attention to spelling.  Well, I fixed it and sent a new version to our new editor, Viki.  My apologies to anyone who might have been offended by the misspelling.  Note:  the logo has been updated at the SSC site.


 
Graphics Mews
      Disclaimer: Before I get too far into this I should note that any of the news items I post in this section are just that - news. Either I happened to run across them via some mailing list I was on, via some Usenet newsgroup, or via email from someone. I'm not necessarily endorsing these products (some of which may be commercial), I'm just letting you know I'd heard about them in the past month.
 

VRML 98

The third annual technical symposium focusing upon the research, technology and applications of VRML, the Virtual Reality Modeling Language, will be held Feb 16-19, 1998 in Monterey, California.  VRML 98 is sponsored by ACM SIGGRAPH and ACM SIGCOMM in cooperation with the VRML Consortium. Deadlines for submission are as follows: 
Papers: Mon. 22 Sep
Panels: Fri. 3 Oct
Workshops:
Courses:
Video: Mon. 2 Feb
Contact Information: 
 
VRML 98 Main Web Site: http://ece.uwaterloo.ca/vrml98
Courses: vrml98-courses@ece.uwaterloo.ca
Workshops: vrml98-workshops@ece.uwaterloo.ca
Panels: vrml98-panels@ece.uwaterloo.ca
Papers: vrml98-papers@ece.uwaterloo.ca
Video Submissions: vrml98-video@ece.uwaterloo.ca
Demo Night: vrml98-demos@ece.uwaterloo.ca
 

Iv2Rib

Cow House Productions is pleased to present the first release of Iv2Rib, an Inventor 2.0 (VRML 1.0) to Renderman / BMRT converter. Source (C++) and an Irix 5.3 binary are available at: 

http://www.cowhouse.com/Home/Converters/converters.html 

Additionally, new updates (V0.12, 30-Jul-97) of both Iv2Ray (the Inventor to Rayshade converter) and Iv2POV (the Inventor to POVRAY converter) are also available on the same page, as both source (C++) and binaries for Irix 5.3.

Abuse Source Code Released

Crack dot Com recently released the Abuse source code to the public domain. Abuse was a shareware and retail game released for DOS, MacOS, Linux, Irix, and AIX platforms. 

The source is available at 

  http://games.3dreview.com/abuse/files/abuse_pd.tgz 
  and 
  http://games.3dreview.com/abuse/files/abuse_pd.zip 

If you don't know the first thing about Abuse, see: 

  http://crack.com/games/abuse 
  and 
  http://games.3dreview.com/abuse 

Lastly, if you want to discuss the source (this is a just-in-case thing - it may very well not get used), we put a small newsgroup up at news://addicted.to.crack.com/crack.technical. That is also where we'll probably host a newsgroup about Golgotha DLLs, mods, editing, movies and stuff like that later on. 
Dave Taylor

 
 

Version 0.2.0 of DeltaCine

DeltaCine is a software implemented MPEG (ISO/IEC 11172-1 and 11172-2) decompressor and renderer for GNU/Linux and X-Windows. It is available from ftp://thumper.moretechnology.com/pub/deltacine

This project aims to provide portable C++ source code that implements the system and video layers of the MPEG standard.  This first release will interpret MPEG 1 streams, either 11172-1 or raw 11172-2, and render them to an X-Windows display.  The project emphasizes correctness and source code readability, so the performance suffers. It cannot maintain synchronized playback on a 166MHz Pentium. 

Still, the source code contains many comments about the quality of the implementation and the problems encountered when interpreting the standard.  All of the executing code was written from scratch, though there is an IDCT (Inverse Discrete Cosine Transform) implementation adapted from Tom Lane's IJG project that was used during development. 

This is an ALPHA release, which means that the software comes with no warranties, expressed or implied.  It is being released under the GNU General Public License for the edification of the GNU/Linux user community. 

Limitations: 

  • Requires ix86
  • No playback synchronization.  Movies play as fast as the decoder can render the frames.
  • Requires X-Windows server in 16bpp mode.
Features: 
  • Full decode of Part 1 (System) and Part 2 (Video) specification for ISO/IEC 11172.  Full implementation except for synchronization.
  • Reference quality output as compared to the Stanford implementation.
  • User-mode multi-threading implemented as part of the decoder.

RenderMan Module v0.01 for PERL 5

This module acts as a Perl5 interface to the Blue Moon Rendering Tools (BMRT) RenderMan-compliant client library, written by Larry Gritz: 
http://www.seas.gwu.edu/student/gritz/bmrt.html 

REQUIREMENTS 
This module requires Perl 5, a C compiler, and BMRT. 

EXAMPLES 
Some extra code has been added to the examples directory that should enable you to convert LightWave objects to RIB or to a Perl script using the RenderMan binding.  More useful examples will be provided in future releases. 

Updates will hopefully be uploaded to PAUSE once I am authorized to upload there, and will be posted to my personal home page at: 
http://www.gmlewis.com/ 

AUTHOR 
Glenn M. Lewis | glenn@gmlewis.com 
 

Sven Neumann released two more GIMP scripts for the megaperls script collection. You can find them as usual at: 
http://www-public.rz.uni-duesseldorf.de/~neumanns/gimp/megaperls 

You'll need to patch the waves-plug-in if you want to use the waves-anim script. The patch was posted a while ago on the list but hasn't made its way into any semi-official release yet. It is also available from the web-site mentioned above. 

Ed. Note:  Please note that the current release of the GIMP is a developers only release and not a public release.  If you plan on using it you should be very familiar with software development and C.  A public release is expected sometime before the end of the year. 

Sven Neumann 
<neumanns@uni-duesseldorf.de>

 

t1lib-0.3-beta

t1lib is a library for generating character and string glyphs from Adobe Type 1 fonts under UNIX. t1lib uses most of the code of the X11 rasterizer donated by IBM to the X11 project, but some disadvantages of the rasterizer included in X11 have been eliminated. A special set of functions is also provided for X11 users. Author:      Rainer Menzner (rmz@neuroinformatik.ruhr-uni-bochum.de)

You can get t1lib by anonymous ftp at:
ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/software/t1lib/t1lib-0.3-beta.tar.gz

An overview on t1lib including some screenshots of xglyph can be found at:
http://www.neuroinformatik.ruhr-uni-bochum.de/ini/PEOPLE/rmz/t1lib.html
 

 GTK Needs A Logo!

GTK, the GIMP Toolkit (I think; at least that's what it used to stand for) is looking for a logo. Something that defines the essence of GTK, something that captures its soul and personality. A frozen image of everything that GTK stands for. Or maybe just something cool.

The Prize

The prize for submitting the winning logo is a very cool yourname@gimp.org email alias. That's right: if you win, you can be the envy of your friends with your sparkling @gimp.org email alias.

See http://www.gimp.org/contest.html for more details.
 
 

Announcing MpegTV SDK 1.0 for Unix

MpegTV SDK 1.0 is the first toolkit that allows any X-windows application to support MPEG video without having to include the complex code necessary to decode and play MPEG streams. 

MpegTV SDK 1.0 is currently available for: 

  • Solaris 2.5 SPARC
  • Solaris 2.5 x86
  • IRIX 6.2
  • Linux x86
  • BSD/OS 3.0
MpegTV also announces more good news: MpegTV Player 1.0 for Unix is now free for non-commercial use! 
For more information on MpegTV products and to download MpegTV software, please visit the MpegTV website: 
http://www.mpegtv.com 

Regards, 
Tristan Savatier - President, MpegTV LLC 

Announcing MpegTV Plug-in 1.0 for Unix

MpegTV Plug-in 1.0 is a streaming-capable Netscape plug-in that allows you to play MPEG movies embedded inside HTML documents. 

Unlike other similar Netscape plug-ins (e.g. the Movieplayer plug-in on SGI), MpegTV Plug-in is capable of streaming from the network, i.e. you can play a remote MPEG stream immediately, without having to wait for the MPEG file to be downloaded to your hard disk. 

MpegTV Plug-in 1.0 is currently available for: 

  • Solaris 2.5 SPARC
  • IRIX 6.2
  • Linux x86
  • Solaris 2.5 x86 (coming soon)
  • BSD/OS 3.0      (coming soon)
Get it now at http://www.mpegtv.com/plugin.html
Regards, -- Tristan Savatier (President, MpegTV LLC) 

MpegTV:   http://www.mpegtv.com 
MPEG.ORG: http://www.mpeg.org 

 
 
 

USENIX 1998 Annual Technical Conference

The 1998 USENIX Technical Conference Program Committee seeks original and innovative papers about the applications, architecture, implementation, and performance of modern computing systems. Papers that analyze problem areas and draw important conclusions from practical experience are especially welcome. Some particularly interesting application topics are: 

 ActiveX, Inferno, Java, and other embeddable environments 
 Distributed caching and replication 
 Extensible operating systems 
 Freely distributable software 
 Internet telephony 
 Interoperability of heterogeneous systems 
 Nomadic and wireless computing 
 Privacy and security 
 Quality of service 
 Ubiquitous computing and messaging 

A major focus of this conference is the challenge of technology: What is the effect of commodity hardware on how we build new systems and applications? What is the effect of next-generation hardware? We seek original work describing the effect of hardware technology on software. Examples of relevant hardware include but are not limited to: 

 Cheap, fast personal computers 
 Cheap, large DRAM and disks 
 Flash memory 
 Gigabit networks 
 Wireless networks 
 Cable modems 
 WebTV 
 Personal digital assistants 
 Network computers 

The conference will also feature tutorials, invited talks, BOFs, 
and Vendor Exhibits. 

For more information about this event: 

* Visit the USENIX Web site: 
  http://www.usenix.org/events/no98/index.html 

* Send email to the USENIX mailserver at info@usenix.org.  Your message should contain the line:  "send usenix98 conferences". 

* Or watch comp.org.usenix for full postings 

The USENIX Association brings together the community of engineers, system administrators, scientists, and technicians working on the cutting edge of computing. Its technical conferences are the essential meeting grounds for the presentation and discussion of the most advanced information on new developments in all aspects of advanced computing systems. 

Ra-vec version 2.1b - convert plan drawings to 3D vector format

Ra-vec is a program which can convert plan drawings of buildings into a vector format suitable for the creation of 3D models using the popular modelling package AC3D. It is freely available for Linux from 
http://www.comp.lancs.ac.uk/computing/users/aspinr/ra-vec.html 
 

xfpovray 1.2.4

A new release of the graphical interface to the cool ray-tracer POV-Ray called xfpovray is now available.  It requires the most recent (test) version of the XForms library (0.87), and supports most of the numerous options of POV-Ray.  Hopefully 0.87 will migrate from test release to public release soon. 

This version of xfpovray adds a couple of nice features, such as POV-Ray templates to aid in writing scene files. Binary and source RPMs are also available.  Since XForms does not come as an RPM, you may get a failed dependency error; if you do, just use the --nodeps option. 

You can view an image of the interface and get the RPMs and source code from 

http://cspar.uah.edu/~mallozzir/

There is a link there to the XForms home page if you don't yet have this library installed. 

Bob Mallozzi <mallozzir@cspar.uah.edu>
 

WSCG'98 - Call for Papers and Participation

Just a reminder: 

The Sixth International Conference in Central Europe on Computer Graphics and Visualization 98, in cooperation with EUROGRAPHICS and the IFIP working group 5.10 on Computer Graphics and Virtual Worlds, will be held February 9-13, 1998, in Plzen at the University of West Bohemia, close to Prague, the capital of the Czech Republic. 

Information for authors: http://wscg.zcu.cz select WSCG'98 
Contribution deadline:  September 30, 1997

 
 

ivtools 0.5.7

ivtools contains, among other things, a set of drawing editors written in C++ for Unix/X11.  They extend idraw with networked export/import, multi-frame flipbook editing, and node/graph topology editing.  A new release, 0.5.7, is now available.

Source code at:
http://www.vectaport.com/pub/src/ivtools-0.5.7.tar.gz
ftp://sunsite.unc.edu/pub/Linux/apps/graphics/draw/ivtools-0.5.7.tar.gz

Linux elf binaries at:
http://www.vectaport.com/pub/src/ivtools-0.5.7-LINUXx.tar.gz
ftp://sunsite.unc.edu/pub/Linux/apps/graphics/draw/ivtools-0.5.7-LINUX.tar.gz

Web page at:
http://www.vectaport.com/ivtools/

Vectaport Inc.
http://www.vectaport.com
info@vectaport.com
 

Pixcon & Anitroll 1.04

Pixcon is a 3D rendering package that creates high quality images by using a combination of 11 rendering primitives.  Anitroll is a forward-kinematic, hierarchy-based animation system that has some support for non-kinematic animation (such as flocks of birds and autonomous cameras).  These tools are based upon the Graph library, which is full of those neat rendering and animation algorithms that those 3D FAQs keep mentioning.

Why Pixcon & Anitroll?  Well, systems like Alias, Renderman, 3DS/3DSMAX, Softimage, Lightwave, etc. are too expensive for average users (anywhere from $1000 - $5000 US) and require expensive hardware to get images in a reasonable amount of time.  Conventional freeware systems, such as BMRT, Rayshade, and POV, are too slow (they're raytracers...). Pixcon & Anitroll is FREE, and doesn't take a long time to render a frame (true, it's not real time... but I'm working on it). It also implements some rendering techniques that were presented at SIGGRAPH 96 by Ken Musgrave and were used to generate an animation for SIGGRAPH '95.

The Pixcon & Anitroll Home page is at: http://www.radix.net/~dunbar/index.html

Comments to dunbar@saltmine.radix.net
Available from:  ftp://sunsite.unc.edu/incoming/Linux/pixcon-105.tgz
and will be moved to:
ftp://sunsite.unc.edu/pub/Linux/apps/graphics/rays/pixcon-105.tgz
 

Glide 2.4 ported to Linux

Glide version 2.4 has now been ported to Linux and is available free of charge. This library enables Linux users with 3Dfx Voodoo Graphics based cards such as the Orchid Righteous 3D, Diamond Monster 3D, Canopus Pure 3D, Realvision Flash 3D, and Quantum Obsidian to write 3D applications for the cards. The Voodoo Rush is not yet supported. The library is available only in binary form.

To quote 3Dfx's web page:

Glide is an optimized rasterization library that serves as a software 'micro-layer' to the 3Dfx Voodoo accelerators. With Glide, developers can harness the power of the Voodoo to provide perspective correct, filtered, and MIP mapped textures at real-time frame rates - without having to work directly with hardware registers and memory, enabling faster product development and cleaner code.
As a separate effort, a module for Mesa is also under development to provide an OpenGL like interface for the Voodoo Graphics cards.

For more information on Glide please see:
http://www.3dfx.com/download/sdk/index.html
For download information for Glide see:
http://www.3dfx.com/download/sdk/index.html
For more information on Mesa see:
http://www.ssec.wisc.edu/~brianp/Mesa.html
For an FAQ on 3Dfx on Linux see:
http://www.gamers.org/~bk/xf3D/
Finally, if you need to discuss all this, try the 3Dfx newsgroup:
news://news.3dfx.com/3dfx.glide.linux
 

Did You Know?

Q and A

Q: Let me ask a graphic related question: is there a software which converts GIF/JPEG file to transparent GIF/JPEG file?  Raju Bathija <bathija@sindhu.theory.tifr.res.in>

A: JPEG, to my knowledge, doesn't support transparency.  You have to use GIF (or PNG).  GIF files can have a transparency added by picking the color you want to be transparent.  One of the colors, and only one, can be specified as transparent.  You can use xv to pick the color.  Then you can use the NetPBM tools to convert the image to a transparent GIF.  You would do something like

giftopnm file.gif | ppmtogif -transparent rgb:ff/ff/ff > newfile.gif

Check the man page for ppmtogif for how to specify the color to use.
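
If you aren't sure which color value to feed to -transparent, ppmhist (also part of NetPBM) may help.  Each line of its output shows a color's red/green/blue values and a count of how many pixels use it; assuming the background is the most common color in the image, it will be the line with the biggest count:

giftopnm file.gif | ppmhist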
 

Reader Mail

Chris Bentzel <cbentzel@rhythm.com> wrote:
At the end of your gamma correction discussion in Graphics Muse issue 17, you mention that you were unable to find contact info for Greg Ward. He is at gregl@sgi.com (he is now Greg Ward Larson - he believes in reciprocating on the maiden-married name thing). However, a better link is the Radiance page: a high-end, physically correct ray-tracing/radiosity renderer used mostly for architectural design (and it runs on Linux! Free source!):  http://radsite.lbl.gov/radiance/HOME.html
Jean Francois Martinez <jfm2@club-internet.fr> wrote:
I had just finished reading your article in LJ about Megahedron, and I was reading some of the examples and playing with them.  I looked in mhd/system/smpl_prims and found the following:

coord_system = right_handed;

so you can do this:

picture smokey_train_pic with
        coord_system = left_handed;

Notice that I put it just under the declaration of the top-level object (the one called by do). Of course, if you use this with the examples provided, you will notice that the camera is no longer focusing on the subject.
John P. Pomeroy <pomerojp@ttc2.lu.farmingdale.edu> wrote:
Usually I skip over the Graphics Muse (I'm a bit head, not a graphic artist), but something drew me in this time.  Perhaps it's because I'm investigating the development of a Linux-based Distance Learning platform for use in my networking classes.  Anyway, one of the least expensive resources I've found over time has been the Winnov Videum-AV.  An outstanding card, but near as I can tell, there are no Linux drivers.  I contacted Winnov a while back and they're not interested in Linux at all, but after reading about the efforts of the QuickCam folks I was wondering if you could just mention that the Videum card exists, perhaps simply asking if anyone is working on a driver?  (And, no, I don't own stock in Winnov nor know anyone that does.)  Perhaps some of the programmers out there are working on something, or maybe Winnov will take the hint.  I'm certain that a Videum card on Linux would outperform the same card under NT.  Imagine a streaming video service (either Java-based or using the just-released 60-stream RealVideo Linux server) with a live feed under Linux.  Sure wish the folks at Winnov could!  Anyway, thanks.  The 'Muse has a good balance of technical material and artistic issues.  I'll be reading the 'Muse a lot more often, but first... the back issues!
'Muse:  Well?  Anyone working on a driver for this?

Jim Tom Polk <jtpolk@camalott.com> http://camalott.com/~jtpolk/ wrote:

Reading your column I noticed that you state that you don't know of any animated GIF viewers for Linux.  I use xanim.  I usually use gifmerge to create the image, and then load up the image and step through it with xanim.  I also find it useful to see just how some animations are composed and created.  The version I have installed is XAnim Rev 2.70.6.4 by Mark Podlipec (c) 1991-1997.  I only found it out by accident, when I loaded an animated GIF (I was clicking on an MPEG file and missed it).  You can start, stop, and pause; go forward and backward one frame at a time; and speed up or slow down the entire sequence.  You still have to use another utility to create the GIF, but I use it all the time.  Really enjoy your column.
'Muse: I got a number of replies like this.  I never tried xanim for animated GIFs.  Sure enough, it works.  It just goes to show how much this wonderful tool can do.

Alf Stockton <stockton@acenet.co.za> wrote:

I have a number of JPEGs that I want to add external text to, i.e., comments on photographs I have taken with my QV-10 digital camera.  I don't want the text to appear on the picture; it must appear either next to or below it.  So in other words I want to create a large JPEG consisting of some text and my picture.  Of course it does not necessarily have to be a JPEG, but it must be something that a web browser can display, as I intend uploading it to my ISP.  My first thought was to create an HTML document for each image, and this would work, but now I have a large number of images and I don't want to create an equal number of HTML files.
'Muse: I'm a little confused here.  Do you want the text visible at all?  Or just include the text as unprintable info (like in the header of the image)? If you want the text in the header I'm not sure how to do this.  I'm pretty sure it can be done, but I've never messed with it.

If you want the text visible but not overlapping the original image there are lots of ways to get it done.  I highly recommend the GIMP, even though you may feel it's overkill - once you've learned to use it you'll find it makes life much easier.  However, if you just want a shell script to do it you can try some of the NetPBM tools.  NetPBM is a whole slew of simple command line programs that do image conversion and manipulation.  One of the tools is pnmcat.  To use this you'd take two images and convert them to PNM files.  For GIFs that would be like

giftopnm file1.gif > file1.pnm

Then you use pnmcat like this:

pnmcat -leftright file1.pnm file2.pnm > final-image.pnm

This would place the two images side by side.  You could then convert this back to a GIF file for placing on the Web page.  pnmcat has other options allowing you to stack the images (-topbottom) and to specify how to justify the smaller image if the images are not the same width/height.  There is a man page for pnmcat that comes with NetPBM.

Note that the NetPBM package itself does not include tools for dealing with JPEG images.  However, there are some tools called jpegtoppm and ppmtojpeg available from the JPEG web site (I think).  I'm not positive about that.  I don't use these specific tools for dealing with JPEGs.

If you want, you can always read the JPEG into xv and save it as a PPM/PNM file (these two formats are essentially the same), then use the NetPBM tools.
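Putting all that together, here's a rough sketch of a script that does the whole job in one go.  It assumes the NetPBM program pbmtext (which renders a text string as a bitmap) and the IJG utilities djpeg/cjpeg are installed; option names can vary between versions, so check your man pages:

#!/bin/sh
# caption.sh - rough sketch only: put a text caption under a JPEG.
# Usage: caption.sh photo.jpg "Taken at the lake, June 1997"
djpeg "$1" > /tmp/photo.pnm          # JPEG -> PNM
pbmtext "$2" > /tmp/caption.pbm      # render the caption as a bitmap
# Stack the caption under the photo (padding with white where the
# widths differ) and convert the result back to JPEG.
pnmcat -topbottom -white /tmp/photo.pnm /tmp/caption.pbm | cjpeg > "captioned-$1"
rm -f /tmp/photo.pnm /tmp/caption.pbm

Run something like that in a loop over all your images (for f in *.jpg; do ... done) and you've captioned the whole collection without writing a single extra HTML file.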

Jeff Taylor <jeff@adeno.wistar.upenn.edu> wrote:

1)  You mentioned [in your review of Megahedron in the September issue of Linux Journal] some difficulty in writing the model information to a file for rendering with an alternative renderer.  This is important to me as I would like to use PVMPOV for the final rendering.  It wasn't clear from what you wrote: is it difficult to do, or impossible?
'Muse: Difficult, but not impossible.  I think you can get model information via polygon data (vectors), but you'll have to do the work of getting that out to the file format of interest. I'm no expert, however.  I used it only for a little while, to get modestly familiar with it.  The best thing to do is write to them and ask the same question.  It will get a better answer (one would hope, anyway) and also show that the Linux community is interested in supporting commercial products.
2)  Does the modeller allow 2D images to be printed?  I'm thinking of CAD type 3-angle-view drawings.  I'd like to use it for CAD applications where a model is created and scale drawing can be printed.
'Muse: There isn't a print function for the 2D images, but you can save the images to a file and then print them using some other tool, like xv or the GIMP. The manual has a section on how to save the images.  BTW, I'm assuming you mean the images that have been rendered.  These images can be saved in RAW  or TGA format using functions provided in the SMPL language.

Daniel Weeks <danimal@blueskystudios.com> wrote:

I just want to start off by telling you that you are doing a great job with the Graphics Muse and on the current article in the Linux Journal on Megahedron.  This is where my questions come from.
'Muse: Thanks for the compliments!
First, with Megahedron I noticed that it is a programmatic/procedural language for modeling (interestingly enough, the language itself is not that dissimilar to our cgiStudio language in structure and function {except for that weird commenting style}; in fact I already have a Perl script that translates most of SMPL to cgiStudio :).  The question here is: does Megahedron have any sort of interface beyond the demo mode?  I guess I mean something like (but it doesn't have to be as fully functional or bloated as) SoftImage or Alias|Wavefront.  Second, can Megahedron support NURBS patches and deforming geometry?
'Muse: Megahedron is a programming API - actually a scripting API.  The CD I got (which is the $99 version they sell from their web pages) wasn't a demo, although it had lots of demos on it.  There is no X interface to the language (i.e., no graphical front end/modeler).  I suppose if there were enough interest they'd look into it.  The best thing to do is check their web page and get an email address to ask for it.  There might be a non-Unix graphical front end, but I didn't check on that.  As for NURBS, there wasn't any mention of support for it on the disk I got.  In fact, I don't think I've come across any modellers (or modelling languages) aside from BMRT that have support for NURBS on Linux.  But Linux is just beginning to move into this arena anyway, so it's just a matter of time.
BTW:  for those that don't know it, Blue Sky Studios is the special effects house that is doing, among other things, the special effects for the upcoming Alien Resurrection movie.  Yes, it appears Ripley may live forever.

Hap Nesbitt <hap@handmadesw.com>, of Handmade Software wrote in reply to my review of Image Alchemy:

A very nice review - thanks.  BTW, we do 81 formats now.  The new formats are documented in addendum.pdf.  The 'Muse seems quite ambitious.  Is this all your work or do you get some help?
'Muse: It's all mine, although I've had a couple of people write articles on two separate occasions.  And Larry Gritz offered lots of help when I did the BMRT write-ups.  I still owe the readers an advanced version of that series.
We've found a tool for porting Mac libraries to X.  Our Mac interface is beautiful and we should get it ported sometime in the next 6 months or so.  I'll keep you posted.  BTW, people don't really buy much Image Alchemy; they buy Image Alchemy PS to RIP PostScript files out to large-format inkjet plotters in HP-RTL format.  If you give me your mailing address I'll send you a poster done this way.  I think you might enjoy it.
'Muse: Sounds great.  Thanks for the info Hap!

G. Lee Lewis <GLLewis@ecc.com> wrote:

Your web pages look really nice.
'Muse: Thanks.
Did you use Linux software to create your web pages?
'Muse: Yes.  In fact, that's all I use - Linux.  I don't use MS for anything anymore.  All the software used to create the graphic images on my pages runs on Linux.
Can you say what you used?
'Muse: Mostly the GIMP, a Photoshop clone for Unices.  "GIMP" stands for GNU Image Manipulation Program, but the acronym kinda stinks (IMHO, of course).  It's quite a powerful program, though.  I also use xv quite a bit, along with tools like the NetPBM toolkit (a bunch of little command line programs for doing various image processing tasks), MultiGIF (for creating GIF animations) and Netscape's 4.x Page Composer for creating HTML.  I just started using the latter, and not all my pages were created with it, but eventually I'll probably switch from doing the HTML by hand (through vi) to only using the Page Composer.  For 3D images I use POV-Ray and BMRT.  These require a bit more understanding of the technology than a tool like the GIMP, but then 3D is at a different state of development than 2D tools like the GIMP.
What flavor of Linux do you like, redhat, debian, etc..??
'Muse: Right now two of my three boxes at home run WGS Linux Pro (which is really a Red Hat 3.x distribution) and one runs Slackware (my laptop).  I like the Red Hat 4.2 distribution, but it lacks support for network installs using the PCMCIA ethernet card I have for my laptop.  I plan on upgrading all my systems to the RH 4.2 release by the end of the year.

I've not seen the Debian distribution yet.  Slackware is also quite good.  I liked their "setup" tool for creating packages for distribution because it used a simple tar/gzip/shell script combination that was easy to use and easy to diagnose.  However, it's not a real package management system like RPM.  "Consumers" (not hackers) will probably appreciate RPM more than "setup".

I also use commercial software for Linux when possible.  I run Applixware, which I like very much, and Xi Graphics' AcceleratedX server instead of the XFree86 servers.  The Xi server is much easier to install and supports quite a few more video adapters.  Unfortunately, it doesn't yet support the X Input Extension.  The latest XFree86 servers do, and that's going to become important over the next year with respect to doing graphics.

What do you think of Caldera OpenLinux?
'Muse: I haven't had a chance to look it over.  However, I fully support the commercial distributions.  I'm an avid supporter of getting Linux-based software onto the shelves of software reseller stores like CompUSA or Egghead Software.  Caldera seems the most likely candidate to be able to get that done the quickest.  After that, we'll start to see commercial applications on the shelves too.  And that's something I'd love to see happen.  I did buy the Caldera Network Desktop last year, but due to some hardware limitations decided to go back to the Slackware distributions I had then.

Of all the distributions Caldera probably has a better understanding of what it takes to make a "product" out of Linux - something beyond just packing the binaries and sticking them on a CD.  A successful product will require 3rd party products (ones with full end-user quality, printed documentation and professional support organizations) and strategic alliances to help prevent fragmentation.  Fragmentation is part of what hurt the early PC Unix vendors (like Dell and Everex) and what has plagued Unix workstation vendors for years.

So, in summary, I strongly support the efforts of Caldera, as well as Red Hat, Xi Graphics, and all vendors who strive to productize Linux.

<veliath@jasmine.hclt.com> wrote:

Is there some documentation available on GIMP - please, please say there is and point me towards it.
'Muse: No, not yet.  A couple of books are planned, but nothing has been started officially.  No online documentation exists yet.  It's a major flaw in free software in general which annoys me to no end, but even in commercial organizations the documentation is usually the last thing to get done.

There will be a four-part series on the GIMP in the Linux Journal starting with the November issue.  I wrote this series.  It is very introductory but should help a little.  I also did the cover art for that issue.  Let me know what you think!

You can also grab any Photoshop 3 or Photoshop 4 book that covers the basics for that program.  The Toolbox (the main window with all the little icons in it) is nearly exactly the same in both programs (GIMP and Photoshop).  Layers work the same (with some minor differences in the way the dialog windows look).  I taught myself most of what I know based on "The Photoshop 3 Wow! Book" and a couple of others.





 

Browser Detection with JavaScript

I recently started reading up on the latest features that will be supported in the upcoming releases of the Netscape and MSIE Web browsers through both the C|Net web site known as Builder.com and another site known as Developer.com.  A couple of the more interesting features are Cascading Style Sheets, which you'll often see referred to as CSS, and layers.  CSS will give HTML authors much more precise control over the characteristics of their pages.  Items such as the font family (Arial, Helvetica, and so forth), style (normal, italic, oblique), size, and weight can be specified for any text on the page.  Browsers will attempt to honor these specifications, and if they can't do so they will select appropriate defaults.  CSS handles most of the obvious characteristics of text on a page, plus adds the ability to position text in absolute or relative terms.  You can also clip, overflow, and provide a z-index to the position of the text.  The z-index positioning is useful because it provides a means of accessing text and graphics in layers.  By specifying increasing values of z in a position setting you can effectively layer items on a page.  Builder.com and Developer.com both have examples of these extensions to HTML that are fairly impressive.  There is a table of the new CSS features available at http://www.cnet.com/Content/Builder/Authoring/CSS/table.html.  You will need Netscape 4.x to view these pages.
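To make that a little more concrete, here's a small sketch of what such style rules can look like.  The font properties are straight CSS; the positioning properties come from the CSS positioning draft, so treat the exact names as provisional (the Builder.com table above shows what the 4.x browsers actually honor):

<STYLE TYPE="text/css">
  /* Font characteristics for every H1 on the page */
  H1    { font-family: Arial, Helvetica; font-style: italic;
          font-size: 18pt; font-weight: bold }
  /* Positioned, layered text: larger z-index values sit on top */
  #logo { position: absolute; left: 20px; top: 10px; z-index: 2 }
</STYLE>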

CSS is about to make web pages a whole lot more interesting.

The downside to CSS is that it's new.  Any new technology has a latency period that must pass before the technology is sufficiently distributed to be useful to the general populace.  In other words, the browsers aren't ready yet, or will just be released at the time this goes to print, so adding CSS elements to your pages will pretty much go unnoticed for some time.  I would, however, recommend becoming familiar with them if you plan on doing any serious Web page design in the future.  In the meantime we still have our JavaScript 1.1 and good ol' HTML 3.0.

Ok, enough philosophizing, down to some nitty gritty.

I just updated my GIMP pages to reflect the fact that the 0.54 version is pretty much dead and the 0.99 version is perpetually "about to become 1.0".  What that means is I've dropped most of my info and simply put up a little gallery with some of the images I've created with the GIMP.  Along with the images, including a background image that was created using nothing more than a set of gradients created or modified with the gradient editor in the GIMP, I've added some JavaScript code to spice up my navigation menus and a couple of simple animated GIFs.  It was probably more fun to do than it is impressive.  If you check out these pages you'll find they're a little more attractive with Netscape 4.x, since I'm using a feature for tables that allows me to specify background images for tables, rows and even individual cells.  Netscape 3.x users can still see most of the effects, but a few are lost.

I had added some JavaScript code to the main navigation page of my whole site some time back.  I sent email to my brother, who does NT work at Compaq, and a Mac-using friend asking them to take a look at it and see what they thought.  It turned out MSIE really disliked that code, and the Netscape browser on the Mac didn't handle the image rollovers correctly (image rollovers cause one image to be replaced by another due to some user-initiated action - we'll talk about those in a future Web Wonderings).  Shocking - JavaScript wasn't really cross-platform as was first reported.  Well, it's a new technology too.  The solution is to add code to determine whether the rest of the code should really execute or not.  I needed to add some browser detection code.

That was... a year ago?  I can't remember, it's been so long now.  Well, while scanning the CSS and other info recently I ran across a few JavaScript examples that explained exactly how to do this.  I now take this moment to share it with my readers.  It's pretty basic, so I'll show it first, then explain it.  Note: the following columns might be a little hard to read in windows less than about 660 pixels wide.  Sorry 'bout that.
 
<SCRIPT LANGUAGE="JavaScript1.1">
<!-- // Activate Cloaking Device 
//*************************************** 
// Browser Detection - check which browser 
// we're working with. 
// Based loosely on code from both Tim  
// Wallace and the Javascript section of
// www.developer.com. 
//*************************************** 
browserName = navigator.appName; 
browserVersion = parseInt(navigator.appVersion); 
browserCodeName = navigator.appCodeName; 
browserUserAgent = navigator.userAgent; 
browserPlatform = navigator.platform;

// Test for Netscape browsers 
if ( browserName == "Netscape" && 
     browserVersion >= 4 ) 
   bVer = "n4"; 
if ( browserName == "Netscape" && 
     browserVersion == 3 ) 
   bVer = "n3"; 
if ( browserName == "Netscape" && 
     browserVersion == 2 ) 
   bVer = "n2"; 

// Test for Internet Explorer browsers 
if ( browserName == "Microsoft Internet Explorer" && 
     browserVersion == 2 ) bVer = "e2"; 
if ( browserName == "Microsoft Internet Explorer" && 
     browserVersion == 3 ) bVer = "e3"; 
if ( browserName == "Microsoft Internet Explorer" && 
     browserVersion >= 4 ) bVer = "e4"; 

// Deactivate Cloaking  --> 
</SCRIPT>
The first line tells browsers that a script is about to follow.  The LANGUAGE attribute is supposed to signify the scripting language, but is not required.  If the LANGUAGE attribute is left off, browsers are supposed to assume the scripting language is JavaScript.  The only other language available that I'm aware of currently is VBScript for MSIE.  Browsers that do not understand this HTML element simply ignore it.  The next line starts the script.  All scripts are enclosed in HTML comment structures.  By doing this the script can be hidden from browsers that don't understand them (thus the comment on "cloaking").  Note that scripts can start and stop anywhere in your HTML document.  Most are placed in the <HEAD> block at the top of the page to make debugging a little easier, but that's not required. 

Comments in scripts use the C++ style comment characters, either single lines prefixed with // or multiple lines that start with /* and end with */.  I placed the comments in the example in a purple color for those with browsers that support colored text, just to make them stand out from the real code a little. 

The next five lines grab identification strings from the browser by accessing the navigator object.  The first two, which set the browserName and browserVersion variables, are obvious and what you will use most often to identify browsers in your scripts.  The appCodeName is "Mozilla" for Netscape and may not be set for MSIE.  The userAgent is generally a combination of the appCodeName and the appVersion, although it doesn't have to be.  Often you can grab this string and parse out the information you are really looking for.  The last item, the platform property of the navigator object, was added in JavaScript 1.2.  Be careful - this code tries to access a property that not all browsers can handle!  You may want to embed the browserPlatform assignment inside one of the IF statements below it to be safe.

Now we do some simple tests for the browsers our scripts can support.  Note that the tests are fairly simple - we just test the string values that we grabbed for our browserName and browserVersion variables.  In the first set of tests we check for Netscape browsers.  The second set tests for MSIE browsers.  We could add code inside these tests to do platform-specific things (like special welcome messages for Linux users!), but in practice you'll probably want this particular script to only set a global flag that can be tested later, in other scripts where the real work will be done.  Remember - you can have more than one script in a single HTML page, and each script has access to variables set in other scripts.
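For example, a later script in the same page could test the bVer flag set above before trying anything browser-specific.  This little sketch just sets another global flag; the rollover-handling code it would protect is hypothetical and not shown here:

<SCRIPT LANGUAGE="JavaScript">
<!-- // Hide from old browsers
// Only attempt image rollovers where we know they work.
if ( bVer == "n3" || bVer == "n4" || bVer == "e4" )
   rolloversOK = true;    // tested later by our event handlers
else
   rolloversOK = false;
// -->
</SCRIPT>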
 

Why is it important to test for browser versions?  Simple - JavaScript is a new technology, introduced in Netscape's 2.0 release of their Navigator browser.  Microsoft, despite whining that JavaScript isn't worth supporting, added support for the language in their 3.0 browser.  The problem is that each version, for either browser, supports the language to different extents.  For example, one popular use of the language is "image rollovers".  These allow images to change in the display when the mouse is placed over the image.  Various versions of Netscape from 2.0 handled this just fine.  The Mac version had a bug in 3.0 that would not clear the original image before updating with the new image.  MSIE 2.0 and 3.0 didn't like this bit of JavaScript at all, popping up error windows in protest.  Knowing the browser and platform information can help you design your JavaScript to work reasonably well on any platform. 



 
Musings
 

SIGGRAPH 97

Unfortunately I'm not able to bring you my experiences at SIGGRAPH this month.  On my trip I took notes on my NEC Versa notebook (running Linux, of course).  Unfortunately I left the power supply and power cable in my motel room, and by the time I realized it after I returned, the motel could not find them.  They're probably on some used-computer reseller's shelves now.  Anyway, I just ordered a replacement.  I'll have my SIGGRAPH report for you next month.  Sorry about that.  I am, of course, taking donations to cover the cost of replacement.  <grin>
 
 

Designing Multimedia Applications

I recently picked up a copy of Design Graphics from my local computer bookstore.  This is a monthly magazine with a very high quality layout that covers many areas of computer graphics in great detail.  The magazine is rather pricey, about $9US, but so far has proven to be worth the price.  If you are into Graphic Design and/or User Interface Design it might be worth your time and money to check out this magazine.

The July issue focused on MetaCreations, the company that was created from the merger of MetaTools and Fractal Design.  MetaTools' founders include Kai Krause, a unique designer and software architect and the man responsible for the bold interfaces found in MetaTools products like Soap and GOO.  This issue also included very detailed shots of the interface for Soap.  It was while reading this issue and studying the interface for Soap that I realized something basic: multimedia applications can look like anything you want.  You just have to understand a little about how graphical interfaces work and a lot about creating graphical images.

Graphical interfaces are simply programs which provide easily recognizable displays that permit users to interact with the program.  These interfaces are event driven, meaning they sit in a loop waiting for an event such as a mouse click or movement and then perform some processing based on that event.  There are two common ways to create programs like this.  You can use an application programming interface, often referred to as an API, like Motif or OpenGL.  Or you can use a scripting interface like HTML with Java/JavaScript or VRML.  Which method you choose depends on the application's purpose and target audience.

So, who is the target audience?  My target audience for this column is the multitude of Linux users who want to do something besides run Web servers.  Your target audience will be either Linux/Unix users or anyone with access to a computer, no matter what platform they use.  In the first case you have a choice: you can use either the APIs or HTML/VRML and browser technology.  If you are looking for cross-platform support you will probably go with browser technology.  Note that a third alternative exists - native Java, which runs without the help of a browser - but this is even newer than browser technology.  You'll have about a year to wait till Java can be used easily across platforms.  Browser technology, although a little shaky in some ways, is already here.

In order to use an API for your multimedia application you need to choose a widget set.  A widget set is the part of the API that handles windowing aspects for you.  Motif has a widget set that provides 3D buttons, scrollbars, and menus.  Multimedia applications have higher demands than this, however.  The stock Motif API cannot handle MPEG movies, sound, or even colored bitmaps.  It must be used in conjunction with OpenGL, MpegTV's library, the OSS sound interface and the XPM library to provide a full multimedia development environment.  The advantage of the API method is control - the tools allow the developer to create applications that are much more sophisticated and visually appealing than browser-based solutions.  An API solution, for example, can run in full-screen mode without a window manager frame, thus creating the illusion that it is the only application running on the X server.  In order to get the effects you see in MetaTools' Soap you will need to create 2D and 3D pixmaps to be used in Motif label and button widgets.  If you do this you should turn off the border areas which are used to create Motif's 3D button effects.  You will also need to write special callbacks (routines called based on an event which you specify) to swap the pixmaps quickly in order to give the feeling of motion or animation.
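As a hedged sketch of that idea (this fragment assumes a working Motif application; pixmap1 would be loaded elsewhere, e.g. with the XPM library, and swap_pixmap_cb is a callback you would write yourself):

/* Fragment of a Motif program, not a complete application. */
#include <Xm/PushB.h>

extern void swap_pixmap_cb(Widget w, XtPointer client, XtPointer call);

static Widget make_image_button(Widget parent, Pixmap pixmap1)
{
    Widget btn = XtVaCreateManagedWidget("play",
            xmPushButtonWidgetClass, parent,
            XmNlabelType,       XmPIXMAP,  /* show an image, not text */
            XmNlabelPixmap,     pixmap1,
            XmNshadowThickness, 0,         /* hide Motif's 3D border  */
            NULL);
    /* The callback can swap XmNlabelPixmap quickly to suggest motion. */
    XtAddCallback(btn, XmNactivateCallback, swap_pixmap_cb, NULL);
    return btn;
}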

Even with the use of 3D pixmaps in Motif you still won't have the interactivity you desire in your multimedia application.  To add rotating boxes and other 3D effects with which the user can interact, you will need to embed the OpenGL widget, available from the MesaGL package, into your Motif program.  By creating a number of OpenGL-capable windows you can provide greater 3D interactivity than you can by simply swapping pixmaps in Motif labels and buttons.  The drawback here is that you will be required to write the code which registers events within given areas of the OpenGL widget.  This is not a simple task, but it is not impossible.  Using OpenGL with Motif is a very powerful solution for multimedia applications, but it is not for the faint of heart.

Using browser technology to create a multimedia application is a little different.  First, the browser will take care of the event catching for you.  You simply need to tell it what part of a page accepts events, which events it should watch for and what to do when an event happens.  This is, conceptually, just like using the API method.  In reality, using a browser this way is much simpler because the browser provides a layer of abstraction to simplify the whole process.  You identify what parts of the page accept input via HTML markup using links, anchors, and forms, and then use JavaScript's onEvent-style handlers, such as onClick or onMouseOver, to call an event handler.  Formatting your application is easier using the HTML markup language than trying to design the interface using an API.  You can have non-rectangular regions in imagemaps, for example, that accept user input.  APIs can also have non-rectangular regions, but HTML requires only a single line of code to specify such a region, where an API can take hundreds of lines.

More Musings...  
No other musings - what?  This wasn't enough for you?  <grin>
 


Ok, since we know using APIs can be complex, and because I'm going to run out of room long before I can cover how to use an API to do a multimedia application, let's look at creating an application using browser technology.

Creating web pages is pretty easy.  If you haven't had a chance yet, take a look at Netscape 4.0.  It includes a tool called the Page Composer which allows for WYSIWYG creation of web pages.  This column was created using Page Composer.  Web pages are not enough, of course.  We need graphics, animations and sound.  Not to mention interaction with files on disk.

Graphics, animations and sound can easily be embedded in a web page with links.  Your application will probably need to provide players for any animations or sounds you provide unless you feel really confident users will already have players.   For animations on Linux systems, other than animated GIFs which are supported natively in most browsers these days, you can try xanim.  Your installation process will have to explain how to install the players.  JavaScript does permit you to query what players and plug-ins are available but doesn't appear to give you the ability to automatically launch them without having first registered them with the browser.
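For example, you can do a quick check like this (the plug-ins array is available from Netscape 3.0 on; the plug-in name string here is just an example and varies between systems):

<SCRIPT LANGUAGE="JavaScript1.1">
<!-- // Hide from old browsers
if ( navigator.plugins.length > 0 && navigator.plugins["LiveAudio"] )
   document.write("A sound player is available.");
// -->
</SCRIPT>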

Sound can be added just like the graphics and animations.  You simply have links to the sound files.  Not all Linux systems will have sound available.  You might want to consider writing a plug-in which checks for the sound devices before trying to play sounds and having this plug-in installed for your sound files.  Security issues may prevent a plug-in from opening a device file.  You should check the Netscape plug-in API to find out what files you can and cannot open.

You might be wondering how you can use a browser for a multimedia application on a CD.  Don't forget - both MSIE and Netscape allow you to view HTML documents on the native system.  In Netscape you can just use something like file:/cdrom/start.html to open up the main page of the application.  Any links - sound, graphics, or animations - can be displayed or played when the page is first loaded using JavaScript's onLoad event handler.  Graphics, animations, sound and Java applets do not have to be served via a Web server to be viewed or run by the browser.  And JavaScript is embedded in the HTML page, so it doesn't require a Web server either.  As long as you use relative links (relative to the directory where your application's start page is located) your users won't need access to a Web server to use your HTML-based multimedia application.
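As a sketch (the file names are made up, and the EMBED attributes follow Netscape's LiveAudio conventions), the start page for such a CD might look like:

<!-- start.html: opened in the browser as file:/cdrom/start.html -->
<BODY onLoad="window.alert('Welcome to the gallery!')">
<A HREF="gallery/index.html">Enter the gallery</A>
<!-- embedded background sound; a relative link, so no server needed -->
<EMBED SRC="sounds/theme.au" AUTOSTART=TRUE HIDDEN=TRUE>
</BODY>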

Well, we've covered just about all the things you'll want to do in your program except how to access files.  Security in browsers and with Java has traditionally been rather zealous - the systems were made secure by denying all access to your hard drives.  That's still the case even with JavaScript 1.2.  There are no real file I/O commands in the JavaScript language.  In order to place data in your application you will need to place it all in static arrays embedded in JavaScript code in a page.  Fortunately you can place this data in separate files and link to them when the page is loaded.  To do this you would use the SRC= attribute of the SCRIPT tag.  Netscape 3.0 or later browsers will read this and load the script file as if it were embedded at the SCRIPT tag of the original page.  This will not work for pre-3.0 browsers, some of the beta 4.0 browsers or (apparently) any of the MSIE browsers.
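As a sketch of that arrangement (the file and variable names here are made up for illustration):

<SCRIPT LANGUAGE="JavaScript" SRC="data.js"></SCRIPT>
<SCRIPT LANGUAGE="JavaScript">
<!-- // data.js contains nothing but JavaScript statements, e.g.
//      titles = new Array("Introduction", "Gallery", "Credits");
// Once it has loaded, later scripts can use the arrays directly:
document.write("First section: " + titles[0]);
// -->
</SCRIPT>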

The SRC attribute provides some level of control for maintaining your data files, but it also means your data is in user-readable files on the CD.  If you use Java applets instead, you have the ability to compile this data into an object file, but you still don't have access to your file system.  It may be possible to read data from files using plug-ins in order to perform some interactive operations, but I'm not familiar with the Netscape or MSIE plug-in APIs and suspect they also have some measure of security that may prevent this.  Reading files seems harmless enough, but there are reasons to disallow this practice.  There is a way to get read/write access to the user's filesystem from a JavaScript or Java application - certificates.  This is a new technology and I'm not that familiar with its use yet.  The Official Netscape JavaScript 1.2 Book describes certificates and how to obtain and create them.  I suggest taking a look at this book (at the end of chapter 14) if you are interested in this.

As I reread this article I realize that what is so crystal clear in my mind now is probably still a muddy swamp to my readers.  Don't fret.  I covered a lot of material in a rather short space.  What you should do is first pick a method - API or browsers.  Then pick one part of that method and start reading all you can about it.  Personally, I understand the API methods better, since I'm a programmer by trade.  The browser technology is interesting in that it provides the User Interface (UI) that is filled in by the developer with images and sound.  Abstracting the UI in this manner is the future of applications, but it's still in its early days of development.  In either case you still need an understanding of what each piece of the puzzle does for you.  The API method will give you more control and access to databases without the need for servers (you can embed the database code in the application).  The browser method is easier to prototype and develop but has limited access to the system for security reasons.  Either method can produce stunning effects, if you understand how all the pieces fit together.  And when you look at MetaCreations products, like Soap and GOO, you can see the kinds of effects that are possible.

 
 
 
 
Resources
The following links are just starting points for finding more information about computer graphics and multimedia in general for Linux systems.  If you have some application-specific information for me, I'll add it to my other pages, or you can contact the maintainer of some other web site.  I'll consider adding other general references here, but application- or site-specific information needs to go into one of the following general references and not be listed here.
 
Linux Graphics mini-Howto 
Unix Graphics Utilities 
Linux Multimedia Page 

Some of the mailing lists and newsgroups I keep an eye on and where I get a lot of the information in this column: 

The Gimp User and Gimp Developer Mailing Lists
The IRTC-L discussion list 
comp.graphics.rendering.raytracing 
comp.graphics.rendering.renderman 
comp.graphics.api.opengl 
comp.os.linux.announce 

Future Directions

Next month: Let me know what you'd like to hear about!


Copyright © 1997, Michael J. Hammel
Published in Issue 22 of the Linux Gazette, October 1997




"Linux Gazette...making Linux just a little more fun!"


Linux Benchmarking - Concepts

by André D. Balsa andrewbalsa@usa.net

With corrections and contributions by Uwe F. Mayer mayer@math.vanderbilt.edu and David C. Niemi bench@wauug.erols.com


This is the first article in a series of four on Linux benchmarking to be published in the Linux Gazette.  It deals with the fundamental concepts in computer benchmarking as they apply to the Linux OS.  An example of a classic benchmark, "Whetstone", is analyzed in more detail.

1. Basic concepts and definitions

2. A variety of benchmarks

3. FPU tests: Whetstone and Sons, Ltd.

4. References


1. Basic concepts and definitions

1.1 Benchmark

A benchmark is a documented procedure that will measure the time needed by a computer system to execute a well-defined computing task.  It is assumed that this time is related to the performance of the computer system and that somehow the same procedure can be applied to other systems, so that comparisons can be made between different hardware/software configurations.

1.2 Benchmark results

From the definition of a benchmark, one can easily deduce that there are two basic procedures for benchmarking:

  1. Measuring the time it takes for the system being examined to loop through a fixed number of iterations of a specific piece of code.
  2. Measuring the number of iterations of a specific piece of code executed by the system under examination in a fixed amount of time.

If a single iteration of our test code takes a long time to execute, procedure 1 will be preferred. On the other hand, if the system being tested is able to execute thousands of iterations of our test code per second, procedure 2 should be chosen.

Both procedures 1 and 2 will yield final results in the form "seconds/iteration" or "iterations/second" (these two forms are interchangeable).  One could imagine other algorithms, e.g. self-modifying code or measuring the time needed to reach a steady state of some sort, but this would increase the complexity of the code and produce results that would probably be next to impossible to analyze and compare.
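As an illustration only (this is not code from any standard benchmark), procedure 1 boils down to something like the following C sketch, using the times() function discussed in section 1.6 below; the "test code" here is a trivial placeholder:

#include <stdio.h>
#include <sys/times.h>
#include <unistd.h>

#define ITERATIONS 10000000L   /* big enough to span many clock ticks */

int main(void)
{
    struct tms t;
    clock_t start, stop;
    long hz = sysconf(_SC_CLK_TCK);   /* clock ticks per second */
    volatile double x = 1.0;          /* volatile: keep the loop honest */
    long i;

    start = times(&t);                /* elapsed wall-clock time, in ticks */
    for (i = 0; i < ITERATIONS; i++)
        x = x * 1.000001;             /* placeholder test code */
    stop = times(&t);

    printf("%.0f iterations/second\n",
           (double)ITERATIONS * hz / (stop - start));
    return 0;
}

Procedure 2 simply inverts the loop: run the test code repeatedly until a fixed number of clock ticks has elapsed and count the iterations completed.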

1.3 Index figures

Sometimes, figures obtained from standard benchmarks on a system being tested are compared with the results obtained on a reference machine.  The reference machine's results are called the baseline results.  If we divide the results of the system under examination by the baseline results, we obtain a performance index.  Obviously, the performance index for the reference machine is 1.0.  An index has no units; it is just a relative measurement.

1.4 Performance metrics

The final result of any benchmarking procedure is always a set of numerical results which we can call speed or performance (for that particular aspect of our system effectively tested by the piece of code).

Under certain conditions we can combine results from similar tests or various indices into a single figure, and the term metric will be used to describe the "units" of performance for this benchmarking mix.

1.5 Elapsed wall-clock time vs. CPU time

Time measurements for benchmarking purposes are usually taken by defining a starting time and an ending time, the difference between the two being the elapsed wall-clock time.  Wall-clock means we are not considering just CPU time, but the "real" time usually provided by an internal asynchronous real-time clock source in the computer or an external clock source (your wrist-watch, for example).  Some tests, however, make use of CPU time: the time effectively spent by the CPU of the system being tested in running the specific benchmark, and not other OS routines.

1.6 Resolution and precision

Resolution and precision both measure the information provided by a data point, but should not be confused.

Resolution is the minimum time interval that can be (easily) measured on a given system.  On Linux running on i386 architectures I believe this is 1/100 of a second, provided by the GNU C library function times (see /usr/include/time.h - not very clear, BTW).  Another term used with the same meaning is "granularity".  David C. Niemi has developed an interesting technique to lower granularity to very low (sub-millisecond) levels on Linux systems; I hope he will contribute an explanation of his algorithm in the next article.
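You can get a rough feel for this granularity yourself with a few lines of C (a sketch, not a precise measurement tool): spin until the value returned by times() changes, and report the size of the step.

#include <stdio.h>
#include <sys/times.h>
#include <unistd.h>

int main(void)
{
    struct tms t;
    clock_t t0, t1;
    long hz = sysconf(_SC_CLK_TCK);

    t0 = times(&t);
    do {                       /* busy-wait for the next clock tick */
        t1 = times(&t);
    } while (t1 == t0);

    printf("clock advanced by %ld tick(s): resolution is about %.4f s\n",
           (long)(t1 - t0), (double)(t1 - t0) / hz);
    return 0;
}

On a stock i386 Linux box this should report steps of about 0.01 s, matching the 1/100 second figure above.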

Precision is a measure of the total variability in the results for any given benchmark. Computers are deterministic systems and should always provide the same, identical benchmark results if running under identical conditions. However, since Linux is a multi-tasking, multi-user system, some tasks will be running in the background and will eventually influence the benchmark results.

This "random" error can be expressed as a time measurement (e.g. 20 seconds + or - 0.2 s) or as a percentage of the figure obtained by the benchmark considered (e.g. 20 seconds + or - 1%). Other terms sometimes used to describe variations in results ar e "variance", "noise", or "jitter".

Note that whereas resolution is system dependent, precision is a characteristic of each benchmark.  Ideally, a well-designed benchmark will have a precision smaller than or equal to the resolution of the system being tested.  It is very important to identify the sources of noise for any particular benchmark, since this provides an indication of possibly erroneous results.

1.7 Synthetic benchmark

A program or program suite specifically designed to measure the performance of a subsystem (hardware, software, or a combination of both). Whetstone is an example of a synthetic benchmark.

1.8 Application benchmark

A commonly executed application is chosen and the time to execute a given task with this application is used as a benchmark.  Application benchmarks try to measure the performance of computer systems for some category of real-world computing task.  Measuring the time your Linux box takes to compile the kernel can be considered a sort of application benchmark.

1.9 Relevance

A benchmark or its results are said to be irrelevant when they fail to effectively measure the performance characteristic the benchmark was designed for.  Conversely, benchmark results are said to be relevant when they allow an accurate prediction of real-life performance or meaningful comparisons between different systems.



2. A variety of benchmarks

The performance of a Linux system may be measured by all sorts of different benchmarks:

  1. Kernel compilation performance.
  2. FPU performance.
  3. Integer math performance.
  4. Memory access performance.
  5. Disk I/O performance.
  6. Ethernet I/O performance.
  7. File I/O performance.
  8. Web server performance.
  9. Doom performance.
  10. Quake performance.
  11. X graphics performance.
  12. 3D rendering performance.
  13. SQL server performance.
  14. Real-time performance.
  15. Matrix performance.
  16. Vector performance.
  17. File server (NFS) performance.

Etc...


3. FPU tests: Whetstone and Sons, Ltd.

Floating-point (FP) instructions are among the least used while running Linux.  They probably represent < 0.001% of the instructions executed on an average Linux box, unless one deals with scientific computations.  Besides, if you really want to know how well designed the FPU in your processor is, it's easier to have a look at its data sheet and check how many clock cycles it takes to execute a given FPU instruction.  But there are more benchmarks that measure FPU performance than anything else.  Why?

  1. RISC, pipelining, simultaneous issuing of instructions, speculative execution and various other CPU design tricks make CPU performance, especially FPU performance, difficult to measure directly and simply.  The execution time of an FPU instruction varies depending on the data, and a continuous stream of FPU instructions will execute under special circumstances that make direct predictions of performance impossible in most cases.  Simulations (synthetic benchmarks) are needed.
  2. FPU tests are easier to write than other benchmarks. Just put a bunch of FP instructions together and make a loop: voilà !
  3. The Whetstone benchmark is widely (and freely) available in Basic, C and Fortran versions, in case you don't want to write your own FPU test.
  4. FPU figures look good for marketing purposes.  Here is what Dave Sill, the author of the comp.benchmarks FAQ, has to say about MFLOPS: "Millions of Floating Point Operations Per Second.  Supposedly the rate at which the system can execute floating point instructions.  Varies widely between different benchmarks and different configurations of the same benchmarks.  Popular with marketing types because it sounds like a "hard" value like miles per hour, and represents a simple concept."
  5. If you are going to buy a Cray, you'd better have an excuse for it.
  6. You can't get a data sheet for the Cray (or don't believe the numbers), but still want to know its FP performance.
  7. You want to keep your CPU busy doing all sorts of useless FP calculations, and want to check that the chip gets very hot.
  8. You want to discover the next big bug in the FPU of your processor, and get rich speculating with the manufacturer's shares.

Etc...

3.1 Whetstone history and general features

The original Whetstone benchmark was designed in the 60's by Brian Wichmann at the National Physical Laboratory, in England, as a test for an ALGOL 60 compiler for a hypothetical machine. The compilation system was named after the small town of Whetstone, where it was designed, and the name seems to have stuck to the benchmark itself.

The first practical implementation of the Whetstone benchmark was written by Harold Curnow in FORTRAN in 1972 (Curnow and Wichmann together published a paper on the Whetstone benchmark in The Computer Journal in 1976).  Historically it is the first major synthetic benchmark.  It is designed to measure the execution speed of a variety of FP instructions (+, *, sin, cos, atan, sqrt, log, exp) on scalar and vector data, but it also contains some integer code.  Results are provided in MWIPS (Millions of Whetstone Instructions Per Second).  The meaning of the expression "Whetstone Instructions" is not clear, though, at least after close examination of the C source code.

During the late 80's and early 90's it was recognized that Whetstone would not adequately measure the FP performance of parallel multiprocessor supercomputers (e.g. Cray and other mainframes dedicated to scientific computations). This spawned the development of various modern benchmarks, many of them with names like Fhoostone, as a humorous reference to Whetstone. Whetstone however is still widely used, because it provides a very reasonable metric as a measure of uniprocessor FP performance.

Whetstone has other interesting qualities for Linux users.

3.2 Getting the source and compiling it

Getting the standard C version by Roy Longbottom.

The version of the Whetstone benchmark that we are going to use for this example was slightly modified by Al Aburto and can be downloaded from his excellent FTP site dedicated to benchmarks.  After downloading the file whets.c, you will have to edit the source slightly: a) uncomment the "#define POSIX1" directive (this enables the Linux-compatible timer routine); b) uncomment the "#define DP" directive (since we are only interested in the double-precision results).

Compiling

This benchmark is extremely sensitive to compiler optimization options. Here is the line I used to compile it: cc whets.c -o whets -O2 -fomit-frame-pointer -ffast-math -fforce-addr -fforce-mem -m486 -lm.

Note that some compiler options of some versions of gcc are buggy; most notably, one of -O, -O2, -O3, ... together with -funroll-loops can cause gcc to emit incorrect code on a Linux box.  You can test your gcc with a short test program available at Uwe Mayer's site.  Of course, if your compiler is buggy, then any test results are not written in stone, to say the least (pun intended).  In short, don't use -funroll-loops to compile this benchmark, and try to stick to the optimization options listed above.

3.3 Running Whetstone and gathering results

First runs

Just execute whets. Whetstone will display its results on standard output and also write a whets.res file if you give it the information it requests. Run it a few times to confirm that variations in the results are very small.

With L1, L2 or both L1 and L2 caches disabled

Some motherboards allow you to disable the L1 (internal) or L2 (external) caches through the BIOS configuration menus (take a look at the motherboard's manual; the ASUS P55T2P4 motherboard, for example, allows disabling both caches separately or together). You may want to experiment with these settings and/or main memory (DRAM) timing settings.

Without optimization

You can try to compile whets.c without any special optimization options, just to verify that compiler quality and compiler optimization options do influence benchmark results.

3.4 Examining the source code, the object code and interpreting the results

General program structure

The Whetstone benchmark main loop executes in a few milliseconds on an average modern machine, so its designers decided to provide a calibration procedure that will first execute 1 pass, then 5, then 25 passes, etc., until the calibration takes more than 2 seconds, and then guess a number of passes xtra that will result in an approximate running time of 100 seconds.  It will then execute xtra passes of each of the 8 sections of the main loop, measure the running time for each (for a total running time very near 100 seconds) and calculate a rating in MWIPS, the Whetstone metric.  This is an interesting variation on the two basic procedures described in section 1.
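In outline, the calibration scheme works like the following sketch.  This is an illustration of the idea, not the actual whets.c code; the workload shown is section 8's formula (see below) standing in for the full main loop:

/* calibrate.c - compile with: gcc calibrate.c -o calibrate -lm */
#include <stdio.h>
#include <math.h>
#include <time.h>

static double timed_run(long passes)
{
    volatile double x = 0.75;   /* volatile so the loop isn't optimized away */
    double t1 = 0.50000025;
    clock_t start = clock();
    long i;
    for (i = 0; i < passes; i++)
        x = sqrt(exp(log(x) / t1));
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void)
{
    long passes = 1, xtra;
    double elapsed;

    while ((elapsed = timed_run(passes)) < 2.0)
        passes *= 5;            /* 1, 5, 25, 125, ... passes */
    xtra = (long)(passes * (100.0 / elapsed));
    printf("calibrated: %ld passes for a roughly 100 second run\n", xtra);
    return 0;
}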

Main loop

The main loop consists of 8 sections each containing a mix of various instructions representative of some type of computational task. Each section is itself a very short, very small loop, and has its own timing calculation. The code that gets looped through for section 8 for example is a single line of C code:

x = sqrt(exp(log(x)/t1)); where x = 0.75 and t1 = 0.50000025, both defined as doubles.

Executable code size, library calls

Compiling as specified above with gcc 2.7.2.1, the resulting ELF executable whets is 13,096 bytes long on my system.  It calls libc and of course libm for the trigonometric and transcendental math functions, but these should get compiled to very short executable code sequences, since all modern CPUs have FPUs with these functions wired in.

General comments

Now that we have an FPU performance figure for our machine, the next step is comparing it to other CPUs.  Have you noticed all the data that whets asked you for after you had run it the first time?  Well, Al Aburto has collected Whetstone results for your convenience at his site; you may want to download the data file and have a look at it.  This kind of benchmarking data repository is very important, because it allows comparisons between various different machines.  More on this topic in one of my next articles.

Whetstone is not a Linux-specific test - it's not even an OS-specific test - but it certainly is a good test for the FPU in your Linux box, and it also gives an indication of compiler efficiency for specific kinds of applications that involve FP calculations.

I hope this gave you a taste of what benchmarking is all about.


4. References

Other references for benchmarking terminology:


Copyright © 1997, André D. Balsa
Published in Issue 22 of the Linux Gazette, October 1997




"Linux Gazette...making Linux just a little more fun!"


Word Processing and Text Processing

by Larry Ayers


One of the most common questions posted in the various Linux newsgroups is "Where can I find a good word-processor for Linux?". This question has several interesting ramifications:


Vital For Some...

A notion has become prevalent in the minds of many computer users these days: the idea that a complex word processor is the only tool suitable for creating text on a computer. I've talked with several people who think of an editor as a primitive relic of the bad old DOS days, a type of software which has been superseded by the modern word-processor. There is an element of truth to this, especially in a business environment in which even the simplest memos are distributed in one of several proprietary word-processor formats. But when it is unnecessary to use one of these formats, a good text editor has more power to manipulate text and is faster and more responsive.

The ASCII format, intended to be a universal means of representing and transferring text, does have several limitations.  The fonts used are determined by the terminal's type and capabilities rather than by the application - normally a fixed, monospace font.  In one sense these limitations are virtues, though, as this least-common-denominator approach to representing text assures readability by everyone on all platforms.  This is why ASCII is still the core format of e-mail and usenet messages, though there is a tendency in the large software firms to promote HTML as a replacement.  Unfortunately, HTML can now be written so that it is essentially unreadable by anything other than a modern graphical browser.  Of course, HTML is ASCII-based as well, but it is meant to be interpreted or parsed rather than read directly.

Working with ASCII text directly has many advantages.  The output is compact and easily stored, and separating the final formatting from the actual writing allows the writer to focus on content rather than appearance.  An ASCII document is not dependent on one application; the simplest of editors or even cat can access its content.  There is an interesting parallel, perhaps coincidental, between Unix's use of ASCII and other OS's use of binary formats.  The configuration files in a Linux or any Unix system are generally in plain ASCII format: compact, editable, and easily backed up or transferred.  Many programmers use Linux; source code is written in ASCII format, so perhaps using the format for other forms of text is a natural progression.  The main configuration files for Win95, NT and OS/2 are in binary format, easily corruptible and not easily edited.  Perhaps this is one reason users of these systems tend towards proprietary word-processing formats which, while not necessarily binary, aren't readable by ASCII-based editors or even other word-processors.  But I digress...

There are several methods of producing professional-looking printable documents from ASCII input, the most popular being LaTeX, Lout, and Groff.


Text Formatting with Mark-Up Languages

LaTeX

LaTeX, Leslie Lamport's macro package for the TeX low-level formatting system, is widely used in the academic world. It has become a standard, and has been refined to the point that bugs are rare. Its ability to represent mathematical equations is unparalleled, but this very fact has deterred some potential users. Mentioning LaTeX to people will often elicit a response such as: "Isn't that mainly used by scientists and mathematicians? I have no need to include equations in my writing, so why should I use it?" A full-featured word-processor (such as WordPerfect) also includes an equation editor, but (as with LaTeX) just because a feature exists doesn't mean you have to use it. LaTeX is well-suited to creating a wide variety of documents, from a simple business letter to articles, reports or full-length books. A wealth of documentation is available, including documents bundled with the distribution as well as those available on the internet. A good source is this ftp site, which is a mirror of CTAN, the largest on-line repository of TeX and LaTeX material.

LaTeX is easily installed from any Linux distribution, and in my experience works well "out of the box".  Hardened LaTeX users type the formatting tags directly, but there are several alternative approaches which can expedite the process, especially for novices.  There is quite a learning curve in picking up LaTeX from scratch, but an intermediary interface will allow a beginner to create usable documents immediately.
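For readers who have never seen it, here is roughly what raw LaTeX input looks like - a minimal sketch using the standard article class:

% minimal.tex - process with "latex minimal.tex", then view the
% resulting minimal.dvi with xdvi or print it.
\documentclass{article}
\begin{document}
\section{Introduction}

LaTeX separates writing from formatting: you mark up the
\emph{structure} of your text and let the formatter worry
about its appearance.

\end{document}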

AucTeX is a package for either GNU Emacs or XEmacs which has a multitude of useful features helpful in writing LaTeX documents. Not only does the package provide hot-keys and menu-items for tags and environments, but it also allows easy movement through the document. You can run LaTeX or TeX interactively from Emacs, and even view the resulting output DVI file with xdvi. Emacs provides excellent syntax highlighting for LaTeX files, which greatly improves their readability. In effect AucTeX turns Emacs into a "front-end" for LaTeX. If you don't like the overhead incurred when running Emacs or especially XEmacs, John Davis' Jed and Xjed editors have a very functional LaTeX/TeX mode which is patterned after AucTeX. The console-mode Jed editor does syntax-highlighting of TeX files well without extensive fiddling with config files, which is rare in a console editor.

If you don't use Emacs or its variants there is a Tcl/Tk based front-end for LaTeX available called xtem. It can be set up to use any editor; the September 1996 issue of Linux Journal has a good introductory article on the package. Xtem has one feature which is useful for LaTeX beginners: on-line syntax help-files for the various LaTeX commands. The homepage for the package can be visited if you're interested.

It is fairly easy to produce documents if the default formats included with a TeX installation are used; more knowledge is needed to produce customized formats. Luckily TeX has a large base of users, many of whom have contributed a variety of style-formatting packages, some of which are included in the distribution, while others are freely available from TeX archive sites such as CTAN.

At a further remove from raw LaTeX is the LyX document processor. This program (still under development, but very usable) at first seems to be a WYSIWYG interface for LaTeX, but this isn't quite true. The text you type doesn't have visible LaTeX tagging, but it is formatted to fit the window on your screen which doesn't necessarily reflect the document's appearance when printed or viewed with GV or Ghostscript. In other words, the appearance of the text you type is just a user convenience. There are several things which can be done with a document typed in LyX. You can let LyX handle the entire LaTeX conversion process with a DVI or Postscript file as a result, which is similar to using a word-processor. I don't like to do it this way; one of the reasons I use Linux is because I'm interested in the underlying processes and how they work, and Linux is transparent. If I'm curious as to how something is happening in a Linux session I can satisfy that curiosity to whatever depth I like. Another option LyX offers is more to my taste: LyX can convert the document's format from the LaTeX-derived internal format to standard LaTeX, which is readable and can be loaded into an editor.

Load a LyX-created LaTeX file into an Emacs/AucTeX session (if you have AucTeX set up right it will be called whenever a file with the .tex suffix is loaded), and your document will be displayed with the new LaTeX tags interspersed throughout the text. The syntax-highlighting can make the text easier to read if you have font-locking set up to give a subdued color to the tagging (backslashes (\) and $ signs). This is an effective way to learn something about how LaTeX documents are written. Changes can be made from within the editor and you can let AucTeX call the LaTeX program to format the document, or you can continue with LyX. In effect this is using LyX as a preprocessor for AucTeX. This expands the user's options; if you are having trouble convincing LyX to do what you want, perhaps AucTeX can do it more easily.

Like many Linux software projects LyX is still in a state of flux. The release of beta version 0.12 is imminent; after that release the developers are planning to switch to another GUI toolkit (the current versions use the XForms toolkit). The 0.11.38 version I've been using has been working dependably for me (hint: if it won't compile, give the configure script the switch --disable-nls. This disables the internationalization support).


YODL

YODL (Yet One-Other Document Language) is another way of interacting with LaTeX. This system has a simplified tagging format which isn't hard to learn. The advantage of YODL is that from one set of marked-up source documents, output can be generated in LaTeX, HTML, and Groff man and ms formats. The package is well-documented. I wrote a short introduction to YODL in issue #9 of the Gazette. The current source for the package is this ftp site.
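To give the flavor of the markup, a YODL source file looks roughly like this (a sketch from memory, so check the package's documentation for the exact macro names):

article(A Sample Document)(A. N. Author)(October 1997)
sect(Introduction)
Yodl uses function-like macros, so you write em(emphasized text) or
bf(bold text) and let the converters worry about the target format.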


Lout

About thirteen years ago Jeffrey Kingston (of the University of Sydney, Australia) began to develop a document formatting system which became known as Lout. This system bears quite a bit of resemblance to LaTeX: it uses formatting tags (using the @ symbol rather than \) and its output is Postscript. Mr. Kingston calls Lout a high-level language with some similarities to Algol, and claims that user extensions and modifications are much easier to implement than in LaTeX. The package comes with hundreds of pages of Postscript documentation along with the Lout source files which were used to generate those book-length documents.
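A small Lout document looks roughly like this (a sketch patterned on the bundled examples, so details may vary between versions):

@SysInclude { doc }   # use the standard document layout package
@Doc @Text @Begin
@Display @Heading { A Heading }
Some ordinary text, with @I { italics } and @B { boldface }.
@End @Text

Running lout on such a file writes the Postscript to standard output.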

The Lout system is still maintained and developed, and in my trials seemed to work well, but there are some drawbacks. I'm sure Lout has nowhere near as many users as LaTeX. LaTeX is installed on enough machines that if you e-mail a TeX file to someone (especially someone in academia), chances are the recipient will have access to a machine with TeX installed and will be able to format and print or view it. LaTeX's large user-base has also resulted in a multitude of contributed formatting packages.

Another drawback (for me, at least) is the lack of available front-ends or editor-macro packages for Lout. I don't mind using markup languages if I can use, say, an Emacs mode with key-bindings and highlighting set up for the language. There may be such packages out there for Lout, but I haven't run across them.

Lout does have the advantage of being much more compact than a typical TeX installation. If you have little use for some of the more esoteric aspects of LaTeX, Lout might be just the thing. It can include tables, various types of lists, graphics, footnotes and marginal notes, and equations in a document, and the Postscript output is the equal of what LaTeX generates.

Both RedHat and Debian have Lout packages available, and the source/documentation package is available from the Lout home FTP site.


Groff

Groff is an older system than TeX/LaTeX, dating back to the early days of unix. Often a first-time Linux user will neglect to install the Groff package, only to find that the man command won't work and that the man-pages are therefore inaccessible. Besides its day-to-day invocation by the man command, Groff is used in the publishing industry to produce books, though other formatting systems such as SGML are more common.

Groff is the epitome of the non-user-friendly and cryptic unix command-line tool. There are several man-pages covering Groff's various components, but they seem to assume a level of prior knowledge without any hint as to where that knowledge might be acquired. I found them to be nearly incomprehensible. A search on the internet didn't turn up any introductory documents or tutorials, though there may be some out there. I suspect more complete documentation might be supplied with some of the commercial unix implementations. The original and now-proprietary version is called troff; nroff is its sibling for plain terminal output, and Groff is short for GNU roff.

Groff can generate Postscript, DVI, HP LaserJet4, and ASCII text formats.
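Basic invocation is simple enough, even if the markup isn't. For example, to format a man-page source file for the screen (as the man command does) and then for a Postscript printer:

groff -Tascii -man ls.1 | less
groff -Tps -man ls.1 > ls.ps

The -man switch selects the man-page macro package; -ms and -me select other common macro sets.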

Learning to use Groff on a Linux system might be an uphill battle, though Linux software developers must have learned enough of it at one time or another, as most programs come with Groff-tagged man-page files. Groff's apparent opacity and difficulty make LaTeX look easy in contrast!


A Change in Mind-Set

Processing text with a mark-up language requires a different mode of thought concerning documents. Writing blocks of plain ASCII is convenient, and no thought needs to be given to the marking-up process until the end. A good editor provides so many features for dealing with text that using any word-processor afterwards can feel constrictive. Many users, though, are attracted by the integration of functions in a word processor: one application produces the document, with no intermediary steps.

Though there are projects underway (such as Wurd) which may eventually result in a native Linux word-processor, there may be a reason why this type of application is still rare in the Linux world. Adapting oneself to Linux, or any unix-variant, is an adaptation to what has been called "the Unix philosophy": the practice of using several highly-refined and specific tools to accomplish a task, rather than one tool which tries to do it all. I get the impression that programmers attracted to free software projects prefer working on smaller specialized programs. As an example, look at the plethora of mail- and news-readers available compared to the dearth of all-in-one internet applications. Linux itself is really just the kernel, around which the GNU and other software commonly bundled with it has accumulated in the form of a distribution.

Christopher B. Browne has written an essay titled An Opinionated Rant About Word-Processors which deals with some of the issues discussed in this article; it's available at this site.

The StarOffice suite is an interesting case, one of the few instances of a large software firm (StarDivision) releasing a Linux version of an office productivity suite. The package has been available for some time now, first in several time-limited beta versions and now in a freely available release. It's a large download but it's also available on CDROM from Caldera. You would think that users would be flocking to it if the demand is really that high for such an application suite for Linux. Judging by the relatively sparse usenet postings I've seen, StarOffice hasn't exactly swept the Linux world by storm. I can think of a few possible reasons.


I remember the first time I started up the StarOffice word-processor. It was slow to load on a Pentium 120 with 32 mb. of RAM (and I thought XEmacs was slow!), and once the main window appeared it occurred to me that it just didn't look "at home" on a Linux desktop. All those icons and button-bars! It seemed to work well, but with the lack of English documentation (and not being able to convince it to print anything!) I eventually lost interest in using it. I realized that I prefer my familiar editors, and learning a little LaTeX seemed to be easier than trying to puzzle out the workings of an undocumented suite of programs. This may sound pretty negative, and I don't wish to denigrate the efforts of the StarDivision team responsible for the Linux porting project. If you're a StarOffice user happy with the suite (especially if you speak German and therefore can read the docs) and would like to present a dissenting view, write a piece on it for the Gazette!

Two other commercial word-processors for Linux are Applix and WordPerfect. Applix, available from RedHat, has received favorable reviews from many Linux users.

A company called SDCorp in Utah has ported Corel's WordPerfect 7 to Linux, and a (huge!) demo is available now from both the SDCorp ftp site and Corel's. Unfortunately both FTP servers are unable to resume interrupted downloads (usually indicating an NT server), so the CDROM version, available from the SDCorp website, is probably the way to go if you'd like to try it out. Paying for the demo transforms it into a registered program: a key is e-mailed to you which registers it, but only for the machine it is installed on.

Addendum: I recently had an exchange of e-mail with Brad Caldwell, product manager for the SDCorp WordPerfect port. I complained about the difficulty of downloading the 36 mb. demo, and a couple of days later I was informed that the file had been split into nine parts, and that they were investigating the possibility of changing to an FTP server which supports interrupted downloads. The smaller files are available from this web page.


There exists a curious dichotomous attitude these days in the Linux community. I assume most people involved with Linux would like the operating system to gain more users and perhaps move a little closer to the mainstream. Linux advocates bemoan the relative lack of "productivity apps" for Linux, which would make the OS more acceptable in corporate or business environments. But how many of these advocates would use the applications if they were more common? Often the change of mindset discussed above militates against acceptance of Windows-like programs, with no source code available and limited access to the developers. Linux has strong roots in the GNU and free software movements (not always synonymous) and this background might be a barrier to the development of a thriving commercial software market.


Copyright © 1997, Larry Ayers
Published in Issue 22 of the Linux Gazette, October 1997




GNU Emacs 20.1

by Larry Ayers


Introduction

Richard Stallman and the other members of the GNU Emacs development team are a rather reticent group of programmers. Unlike many other development projects in the free-software world, the Emacs beta program is restricted to a closed group of testers, and news of what progress is being made is scanty. In the past couple of months hints found in various usenet postings seemed to intimate that a new release of GNU Emacs was imminent, so every now and then I began to check the GNU main FTP site on the off-chance that a release had been made.

Early on the morning of September 17 I made a quick check before beginning my day's work, and there it was, a new Emacs 20.1 source archive. As with all Emacs source packages, it was large (over 13 megabytes) so I began the download with NcFtp and left it running.

Building It

There is always a delay between the release of a new version of a software package and the release of a Linux distribution's version, such as a Debian or RedHat binary package. Even if you usually use RPMs or *.deb releases (in many cases it's preferable) a source release of a major team-developed piece of software such as GNU Emacs will usually build easily on a reasonably up-to-date Linux machine. The included installation instructions are clear: just run the configure script, giving your machine-type and preferred installation directory as switches. In my case, this command did the trick:

./configure i586-Debian-linux-gnu --prefix=/mt

The script will generate a Makefile tailored to your machine; run make, then make install, and you're up and running.
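In other words, the entire build (substituting your own machine type and installation prefix) boils down to three commands:

./configure i586-Debian-linux-gnu --prefix=/mt
make
make install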


So What's New?

It's been about a year since the last public GNU Emacs release, so there have been quite a few changes. One of the largest is the incorporation of the MULE (MUltiLingual Emacs) extensions, which give Emacs the capability of displaying extended character sets necessary for languages such as Chinese and Japanese. This won't be of interest to most English-speaking users, but if you're interested the necessary files are in a separate archive at the GNU site.

There are too many changes and updated packages to list here, but a few stand out.

Have you ever been puzzled or annoyed by the peculiar way the Emacs screen scrolls when using the up- or down- arrow keys? It's a jerky scroll, difficult for the eye to follow, which could only be partially alleviated by setting scroll-step to a small value. In 20.1 this has been changed, so that if you set scroll-step to 2 (setq scroll-step 2) the screen actually scrolls up and down smoothly, without the disorienting jerks. This feature alone makes the upgrade worthwhile!

Another Emacs quirk has been addressed with a new variable, scroll-preserve-screen-position. If set to t (which means "yes"), it lets you page down and back up with the cursor returning to its original position once the starting page is shown again. I really like this: with the default behavior you have to find the cursor on the screen and manually move it back to where it was. The variable can be enabled with the line

(setq scroll-preserve-screen-position t)

entered into your ~/.emacs init file.


The Customization Utility

What a labor-saver! Rather than searching for the documentation which deals with altering one of Emacs' default settings, the user is presented with a mouse-enabled screen from which changes can be made, either for the current session or permanently, in which case the changes are recorded in the user's ~/.emacs file. It appears that a tremendous amount of work went into including the customization framework in the LISP files for Emacs' countless modes and add-on packages. A Customize screen can be summoned from the Help menu; the entries are in a cascading hierarchy, allowing an easy choice of the precise category a user might want to tweak. Here's a screenshot of a typical Customization screen:

[Screenshot: a typical Customize screen]


Per Abrahamsen is to be congratulated for writing this useful utility, and for making it effective both for XEmacs and GNU Emacs users.


Musings

Emacs used to be thought of as a hefty, memory-intensive editor which tended to strain a computer's resources. Remember the old mock-acronym, Eight Megabytes And Constantly Swapping? These days it seems that the hardware has caught up with Emacs; today a mid-range machine can run Emacs easily, even with other applications running concurrently. Memory and hard-disk storage have become less expensive, which makes Emacs usable for more people.

Some people are put off by the multiple keystrokes for even the most common commands. It's easy to rebind the keys, though. The function keys are handy, as they aren't in use by other Emacs commands. As examples, I have F1 bound to Kill-Buffer, F2 bound to Ispell-Word (which checks the spelling of the word under the cursor), F3 and F4 put the cursor at the beginning or end of the current file, and F7 is bound to Save-Buffer. Of course, these operations are on the menu-bar, but using the keyboard is quicker. If you are accustomed to a Vi-style editor, the Viper package allows toggling between the familiar Vi commands (which are extraordinarily quick, as most are a single keystroke) and the Emacs command set. This emulation mode has been extensively improved lately, and is well worth using.
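The bindings mentioned above can be set with a few lines like these in your ~/.emacs file (the key choices are just my own arrangement, of course):

(global-set-key [f1] 'kill-buffer)          ; dismiss the current buffer
(global-set-key [f2] 'ispell-word)          ; spell-check the word under the cursor
(global-set-key [f3] 'beginning-of-buffer)  ; jump to the top of the file
(global-set-key [f4] 'end-of-buffer)        ; jump to the end of the file
(global-set-key [f7] 'save-buffer)          ; save the current file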

Even with the exhaustively detailed Info files, the tutorial, etc. I would hesitate to recommend Emacs for a novice Linux user. There is enough to learn just becoming familiar with basic Linux commands without having to learn Emacs as well. I think Nedit would be a more appropriate choice for a new user familiar with Windows, OS/2, or the Macintosh, since its mouse-based operation and menu structure are reminiscent of editors from these operating systems.

Emacs has a way of growing on you; as your knowledge of its traits and capabilities increases, the editor gradually is molded to your preferences and work habits. It is possible to use the editor at a basic level (using just the essential commands), but it's a waste to run a large editor like Emacs without using at least some of its manifold capabilities.


Copyright © 1997, Larry Ayers
Published in Issue 22 of the Linux Gazette, October 1997




A True "Notebook" Computer?

by Larry Ayers


Introduction

Recently I happened across an ingeniously designed add-on LISP package for the GNU Emacs editor. It's called Notes-Mode, and it helps organize and cross-reference notes by subject and date. It was written by John Heidemann. Here's his account of how he happened to write the package:

Briefly, I started keeping notes on-line shortly after I got a portable computer in January, 1994. After a month-and-a-half of notes, I realized that one does not live by grep alone, so I started adding indexing facilities. In June of 1995 some other Ficus-project members started keeping and indexing on-line notes using other home-grown systems. After some discussion, we generalized my notes-mode work and they started using it. Over the next 18 months notes-mode grew. Finally, in April, 1996 I wrote documentation, guaranteeing that innovation on notes-mode will now cease or the documentation will become out of date.


Using Notes-Mode

Here's what one of my smaller notes files looks like:


25-Jul-97 Friday
----------------

* Today
-------
prev: <file:///~/notes/199707/970724#* Today>
next: <file:///~/notes/199707/970728#* Today>

* Prairie Plants
----------------
prev: <file:///~/notes/199707/970724#* Prairie Plants>
next: <none>
So far the only results I've seen in response to the various
desultory efforts I've made to direct-seed prairie plants in the
west prairie:
1: Several rattlesnake-master plants in a spot where we burned a
brush-pile. Two are blooming this summer.
2: One new-england aster near the above. There are probably others
which are small and haven't flowered yet.

* Linux Notes
-------------
prev: <file:///~/notes/199707/970724#* Linux Notes>
next: <file:///~/notes/199708/970804#* Linux Notes>
I noticed today that a new version of e2compress was available, and
I've patched the 2.0.30 kernel source but haven't compiled it yet.
I've been experimenting with the color-syntax-highlighting version
of nedit 4.03 lately; it has a nifty dialog-box interface for
creating and modifying modes. Easier than LISP!

The first entry, Today, contains nothing; it just serves as a link to move from the current notes file to either the previous day's file or the next day's. Any other word preceded by an asterisk and a space will serve as a hyper-link to previous or next entries with the same subject. Type in a new (or previously-used) subject with the asterisk and space, press enter, and the dashed line and space will automatically be entered into the file; this format is what the Perl indexing script uses to identify discrete entries.

While in Emacs with a notes-mode file loaded, several keyboard commands allow you to navigate between successive entries, either by day or by subject, depending on where the cursor is when the keystroke is executed. A handy key-binding for notes-mode is Control-c n, which will initialize a new notes file for the day if the following LISP code is entered into your ~/.emacs file:
(define-key global-map "^Cn" 'notes-index-todays-link). The "^C" part is a literal control character, inserted by typing Control-q Control-c.

When Notes-Mode is installed a subdirectory is created in your home directory called Notes. As you use the mode a subdirectory for each month is created as well as a subdirectory under each month's directory for each week in the month. The individual note files, one for each day the mode is used, are given numerical names; the format of the path and filename can be seen in the above example.

The ability to navigate among your notes is enabled by means of a Perl script called mkall, which is intended to be run daily by cron. Mkall in turn calls other Perl scripts which update the index file with entries for any new notes you may have made. This system works well, making good use of Linux's automation facilities. Once you have it set up you never have to think about it again.
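The crontab entry for this might look something like the following; the path to mkall is hypothetical, so substitute wherever the script ended up on your system:

# re-index the notes files at 4:05 every morning
5 4 * * * /usr/local/lib/notes-mode/mkall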

While this mode is designed for an academic environment in which voluminous notes are taken on a variety of subjects, it can also be useful for anyone who wants to keep track of on-line notes. It could even be used as a means of organizing diary or journal entries. The only disadvantage I've seen is that, though the notes-files are ASCII text readable by any editor, the navigation and hyper-linking features are only available from within Emacs. This is fine if you use Emacs as your main editor but makes the package not too useful for anyone else. XEmacs users are out of luck as well, as the package doesn't work "out-of-the-box" with XEmacs. I imagine a skilled LISP hacker could modify Notes-Mode for XEmacs; I've made some tentative attempts but without success.

Availability

The only source I've seen for this package is from the author's web page, at this URL:
http://gost.isi.edu/~johnh/SOFTWARE/NOTES_MODE/index.html

The documentation for Notes-Mode can be browsed on-line at this site if you'd like to read more before trying it out.


Copyright © 1997, Larry Ayers
Published in Issue 22 of the Linux Gazette, October 1997




Using m4 to write HTML.

By Bob Hepple bhepple@pacific.net.sg


Contents:

1. Some limitations of HTML
2. Using m4
3. Examples of m4 macros
4. m4 gotchas
5. Conclusion
6. Files to download


This page last updated on Thu Sep 18 22:46:54 HKT 1997
$Revision: 1.4 $

1. Some limitations of HTML

It's amazing how easy it is to write simple HTML pages - and the availability of WYSIWYG HTML editors like NETSCAPE GOLD lulls one into a mood of "don't worry, be happy". However, managing multiple, interrelated pages of HTML rapidly gets very, very difficult. I recently had a slightly complex set of pages to put together and it started me thinking - "there has to be an easier way".

I immediately turned to the WWW and looked up all sorts of tools - but quite honestly I was rather disappointed. Mostly, they were what I would call Typing Aids - instead of having to remember arcane incantations like <a href="link">text</a>, you are given a button or a magic keychord like ALT-CTRL-j which remembers the syntax and does all that nasty typing for you.

Linux to the rescue! HTML is built as ordinary text files and therefore the normal Linux text management tools can be used. This includes the revision control tools such as RCS and the text manipulation tools like awk, perl, etc. These offer significant help in version control and managing development by multiple users as well as in automating the process of extracting from a database and displaying the results (the classic "grep |sort |awk" pipeline).
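As a hypothetical illustration of that pipeline, a file of name:role:email lines (the file and its layout are invented for this example) could become an HTML list in one go:

grep ':editor:' staff.txt | sort | awk -F: '{ printf "<LI>%s (%s)\n", $1, $3 }'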

The use of these tools with HTML is documented elsewhere, e.g. see Jim Weinrich's article in Linux Journal Issue 36, April 1997, "Using Perl to Check Web Links" which I'd highly recommend as yet another way to really flex those Linux muscles when writing HTML.

What I will cover here is a little work I've done recently with using m4 in maintaining HTML. The ideas can probably be extended to the more general SGML case very easily.


2. Using m4

I decided to use m4 after looking at various other pre-processors including cpp, the C front-end. While cpp is perhaps a little too C-specific to be very useful with HTML, m4 is a very generic and clean macro expansion program - and it's available under most Unices including Linux.

Instead of editing *.html files, I create *.m4 files with my favourite text editor. These look something like this:

m4_include(stdlib.m4)
_HEADER(`This is my header')
<P>This is some plain text<P>
_HEAD1(`This is a main heading')
<P>This is some more plain text<P>
_TRAILER

The format is simple - just HTML code but you can now include files and add macros rather like in C. I use a convention that my new macros are in capitals and start with "_" to make them stand out from HTML language and to avoid name-space collisions.

The m4 file is then processed as follows to create an .html file e.g.

m4 -P <file.m4 >file.html

This is especially easy if you create a "makefile" to automate this in the usual way. Something like:

.SUFFIXES: .m4 .html
.m4.html:
	m4 -P $*.m4 >$*.html
default: index.html
*.html: stdlib.m4
all: default PROJECT1 PROJECT2
PROJECT1:
	(cd project1; make all)
PROJECT2:
	(cd project2; make all)

The most useful commands in m4 include the following which are very similar to the cpp equivalents (shown in brackets):

m4_include:
includes a common file into your HTML (#include)
m4_define:
defines an m4 variable (#define)
m4_ifdef:
a conditional (#ifdef)

Some other commands which are useful are:

m4_changecom:
change the m4 comment character (normally #)
m4_debugmode:
control error diagnostics
m4_traceon/off:
turn tracing on and off
m4_dnl:
comment
m4_incr, m4_decr:
simple arithmetic
m4_eval:
more general arithmetic
m4_esyscmd:
execute a Linux command and use the output
m4_divert(i):
This is a little complicated, so skip on first reading. It is a way of storing text for output at the end of normal processing - it will come in useful later, when we get to automatic numbering of headings. It sends output from m4 to a temporary file number i. At the end of processing, any text which was diverted is then output, in the order of the file number i. File number -1 is the bit bucket and can be used to comment out chunks of comments. File number 0 is the normal output stream. Thus, for example, you can `m4_divert' text to file 1 and it will only be output at the end.
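A two-line illustration of diversions:

m4_divert(1)This sentence is diverted, and appears last.
m4_divert(0)This sentence appears first.

After processing, the two sentences come out in the opposite order from the source, because the diverted text is only flushed at the end of the run.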


3. Examples of m4 macros

3.1 Sharing HTML elements across several pages

In many "nests" of HTML pages, each page shares elements such as a button bar like this:

[Home] [Next] [Prev] [Index]

This is fairly easy to create in each page - the trouble is that if you make a change in the "standard" button-bar then you have the tedious job of finding each occurrence of it in every file and manually making the changes.

With m4 we can more easily do this by putting the shared elements into an m4_include statement, just like C.

While I'm at it, I might as well also automate the naming of pages, perhaps by putting the following into an include file, say "button_bar.m4":

m4_define(`_BUTTON_BAR', 
	<a href="homepage.html">[Home]</a>
	<a href="$1">[Next]</a>
	<a href="$2">[Prev]</a>
	<a href="indexpage.html">[Index]</a>)

and then in the document itself:

m4_include(`button_bar.m4')
_BUTTON_BAR(`page_after_this.html', 
	`page_before_this.html')

The $1 and $2 parameters in the macro definition are replaced by the strings in the macro call.


3.2 Managing HTML elements that often change

It is very troublesome to have items change in multiple HTML pages. For example, if your email address changes then you will need to change all references to the new address. Instead, with m4 you can do something like this in your stdlib.m4 file:

m4_define(`_EMAIL_ADDRESS', `MyName@foo.bar.com')

and then just put _EMAIL_ADDRESS in your m4 files.

A more substantial example comes from building strings up with multiple components, any of which may change as the page is developed. If, like me, you develop on one machine, test out the page and then upload to another machine with a totally different address then you could use the m4_ifdef command in your stdlib.m4 file (just like the #ifdef command in cpp):

m4_define(`_LOCAL')
.
.
m4_define(`_HOMEPAGE', 
	m4_ifdef(`_LOCAL', `//127.0.0.1/~YourAccount', 
		`http://ISP.com/~YourAccount'))

m4_define(`_PLUG', `<A HREF="http://www.ssc.com/linux/">
	<IMG SRC="_HOMEPAGE/gif/powered.gif" 
	ALT="[Linux Information]"> </A>')

Note the careful use of quotes to prevent the variable _LOCAL from being expanded. _HOMEPAGE takes on different values according to whether the variable _LOCAL is defined or not. This can then ripple through the entire project as you make the pages.

In this example, _PLUG is a macro to advertise Linux. When you are testing your pages, you use the local version of _HOMEPAGE. When you are ready to upload, you can remove or comment out the _LOCAL definition like this:

m4_dnl m4_define(`_LOCAL')

... and then re-make.


3.3 Creating new text styles

Styles built into HTML include things like <EM> for emphasis and <CITE> for citations. With m4 you can define your own, new styles like this:

m4_define(`_MYQUOTE',
	<BLOCKQUOTE><EM>$1</EM></BLOCKQUOTE>)

If, later, you decide you prefer <STRONG> instead of <EM> it is a simple matter to change the definition and then every _MYQUOTE paragraph falls into line with a quick make.

The classic guides to good HTML writing say things like "It is strongly recommended that you employ the logical styles such as <EM>...</EM> rather than the physical styles such as <I>...</I> in your documents." Curiously, the WYSIWYG editors for HTML generate purely physical styles. Using these m4 styles may be a good way to keep on using logical styles.


3.4 Typing and mnemonic aids

I don't depend on WYSIWYG editing (having been brought up on troff) but all the same I'm not averse to using help where it's available. There is a choice (and maybe it's a fine line) to be made between:

<BLOCKQUOTE><PRE><CODE>Some code you want to display.
</CODE></PRE></BLOCKQUOTE>

and:

_CODE(Some code you want to display.)

In this case, you would define _CODE like this:

m4_define(`_CODE',
	 <BLOCKQUOTE><PRE><CODE>$1</CODE></PRE></BLOCKQUOTE>)

Which version you prefer is a matter of taste and convenience although the m4 macro certainly saves some typing and ensures that HTML codes are not interleaved. Another example I like to use (I can never remember the syntax for links) is:

m4_define(`_LINK', <a href="$1">$2</a>)

Then,

<a href="URL_TO_SOMEWHERE">Click here to get to SOMEWHERE </a>

becomes:

_LINK(`URL_TO_SOMEWHERE', `Click here to get to SOMEWHERE')


3.5 Automatic numbering

m4 has a simple arithmetic facility with two operators m4_incr and m4_decr which act as you might expect - this can be used to create automatic numbering, perhaps for headings, e.g.:

m4_define(_CARDINAL,0)

m4_define(_H, `m4_define(`_CARDINAL',
	m4_incr(_CARDINAL))<H2>_CARDINAL.0 $1</H2>')

_H(First Heading)
_H(Second Heading)

This produces:

<H2>1.0 First Heading</H2>
<H2>2.0 Second Heading</H2>


3.6 Automatic date stamping

For simple, datestamping of HTML pages I use the m4_esyscmd command to maintain an automatic timestamp on every page:

This page was last updated on m4_esyscmd(date)

which produces:

This page was last updated on Fri May 9 10:35:03 HKT 1997

Of course, you could also use the date, revision and other facilities of revision control systems like RCS or SCCS, e.g. $Date$.


3.7 Generating Tables of Contents

Using m4 allows you to define commonly repeated phrases and use them consistently - I hate repeating myself because I am lazy and because I make mistakes, so I find this feature absolutely key.

A good example of the power of m4 is in building a table of contents in a big page (like this one). This involves repeating the heading title in the table of contents and then in the text itself, which is tedious and error-prone, especially when you change the titles. There are specialised tools for generating tables of contents from HTML pages but the simple facility provided by m4 is irresistible to me.

3.7.1 Simple to understand TOC

The following example is a fairly simple-minded Table of Contents generator. First, create some useful macros in stdlib.m4:

m4_define(`_LINK_TO_LABEL', <A HREF="#$1">$1</A>)
m4_define(`_SECTION_HEADER', <A NAME="$1"><H2>$1</H2></A>)

Then define all the section headings in a table at the start of the page body:

m4_define(`_DIFFICULTIES', `The difficulties of HTML')
m4_define(`_USING_M4', `Using <EM>m4</EM>')
m4_define(`_SHARING', `Sharing HTML Elements Across Several Pages')

Then build the table:

<UL><P>
	<LI> _LINK_TO_LABEL(_DIFFICULTIES)
	<LI> _LINK_TO_LABEL(_USING_M4)
	<LI> _LINK_TO_LABEL(_SHARING)
</UL>

Finally, write the text:

.
.
_SECTION_HEADER(_DIFFICULTIES)
.
.

The advantages of this approach are that if you change your headings you only need to change them in one place and the table of contents is automatically regenerated; also the links are guaranteed to work.

Hopefully, that simple version was fairly easy to understand.


3.7.2 Simple to use TOC

The Table of Contents generator that I normally use is a bit more complex and will require a little more study, but is much easier to use. It not only builds the Table, but it also automatically numbers the headings on the fly - up to 4 levels of numbering (e.g. section 3.2.1.3 - although this can be easily extended). It is very simple to use as follows:

  1. Where you want the table to appear, call _Start_TOC
  2. At every heading use _H1(`Heading for level 1') or _H2(`Heading for level 2') as appropriate.
  3. After the very last HTML code (probably after </HTML>), call _End_TOC
  4. and that's all!

The code for these macros is a little complex, so hold your breath:

m4_define(_Start_TOC,`<UL><P>m4_divert(-1)
  m4_define(`_H1_num',0)
  m4_define(`_H2_num',0)
  m4_define(`_H3_num',0)
  m4_define(`_H4_num',0)
  m4_divert(1)')

m4_define(_H1, `m4_divert(-1)
  m4_define(`_H1_num',m4_incr(_H1_num))
  m4_define(`_H2_num',0)
  m4_define(`_H3_num',0)
  m4_define(`_H4_num',0)
  m4_define(`_TOC_label',`_H1_num. $1')
  m4_divert(0)<LI><A HREF="#_TOC_label">_TOC_label</A>
  m4_divert(1)<A NAME="_TOC_label">
	<H2>_TOC_label</H2></A>')
.
.
[definitions for _H2, _H3 and _H4 are similar and are 
in the downloadable version of stdlib.m4]
.
.

m4_define(_End_TOC,`m4_divert(0)</UL><P>')

One restriction is that you should not use diversions within your text, unless you preserve the diversion to file 1 used by this TOC generator.


3.8 Simple tables

Besides Tables of Contents, many browsers support tabular information. Here are some funky macros as a short cut to producing these tables. First, an example of their use:

<CENTER>
_Start_Table(BORDER=5)
_Table_Hdr(,Apples, Oranges, Lemons)
_Table_Row(England,100,250,300)
_Table_Row(France,200,500,100)
_Table_Row(Germany,500,50,90)
_Table_Row(Spain,,23,2444)
_Table_Row(Denmark,,,20)
_End_Table
</CENTER>

            Apples   Oranges   Lemons
England        100       250      300
France         200       500      100
Germany        500        50       90
Spain                     23     2444
Denmark                            20

...and now the code. Note that this example utilises m4's ability to recurse:

m4_dnl _Start_Table(Columns,TABLE parameters)
m4_dnl defaults are BORDER=1 CELLPADDING="1" CELLSPACING="1"
m4_dnl WIDTH="n" pixels or "n%" of screen width
m4_define(_Start_Table,`<TABLE $1>')

m4_define(`_Table_Hdr_Item', `<th>$1</th>
  m4_ifelse($#,1,,`_Table_Hdr_Item(m4_shift($@))')')

m4_define(`_Table_Row_Item', `<td>$1</td>
  m4_ifelse($#,1,,`_Table_Row_Item(m4_shift($@))')')

m4_define(`_Table_Hdr',`<tr>_Table_Hdr_Item($@)</tr>')
m4_define(`_Table_Row',`<tr>_Table_Row_Item($@)</tr>')

m4_define(`_End_Table',</TABLE>)


4. m4 gotchas

Unfortunately, m4 is not unremitting sweetness and light - it needs some taming and a little time spent on familiarisation will pay dividends. Definitive documentation is available (for example in emacs' info documentation system) but, without being a complete tutorial, here are a few tips based on my fiddling about with the thing.

4.1 Gotcha 1 - quotes

m4's quotation characters are the grave accent ` which starts the quote, and the acute accent ' which ends it. It may help to put all arguments to macros in quotes, e.g.

_HEAD1(`This is a heading')

The main reason for this is in case there are commas in an argument to a macro - m4 uses commas to separate macro parameters, e.g. _CODE(foo, bar) would print the foo but not the bar. _CODE(`foo, bar') works properly.

This becomes a little complicated when you nest macro calls as in the m4 source code for the examples in this paper - but that is rather an extreme case and normally you would not have to stoop to that level.


4.2 Gotcha 2 - Word swallowing

The worst problem with m4 is that some versions of it "swallow" key words that it recognises, such as "include", "format", "divert", "file", "gnu", "line", "regexp", "shift", "unix", "builtin" and "define". You can protect these words by putting them in m4 quotes, for example:

Smart people `include' Linux in their list
of computer essentials.

The trouble is, this is a royal pain to do - and you're likely to forget which words need protecting.

Another, safer way to protect keywords (my preference) is to invoke m4 with the -P or --prefix-builtins option. Then, all builtin macro names are modified so they all start with the prefix m4_ and ordinary words are left alone. For example, using this option, one should write m4_define instead of define (as shown in the examples in this article).

The only trouble is that not all versions of m4 support this option - notably some PC versions under M$-DOS. Maybe that's just another reason to steer clear of hack code on M$-DOS and stay with Linux!


4.3 Gotcha 3 - Comments

Comments in m4 are introduced with the # character - everything from the # to the end of the line is ignored by m4 and simply passed unchanged to the output. If you want to use # in the HTML page then you would need to quote it like this - `#'. Another option (my preference) is to change the m4 comment character to something exotic like this: m4_changecom(`[[[[') and not have to worry about `#' symbols in your text.

If you want to use comments in the m4 file which do not appear in the final HTML file, then the macro m4_dnl (dnl = Delete to New Line) is for you. This suppresses everything until the next newline.

m4_define(_NEWMACRO, `foo bar') m4_dnl This is a comment

Yet another way to have source code ignored is the m4_divert command. The main purpose of m4_divert is to save text in a temporary buffer for inclusion in the file later on - for example, in building a table of contents or index. However, if you divert to "-1" it just goes to limbo-land. This is useful for getting rid of the whitespace generated by the m4_define command, e.g.:

m4_divert(-1) diversion on
m4_define(this ...)
m4_define(that ...)
m4_divert	diversion turned off


4.4 Gotcha 4 - Debugging

Another tip for when things go wrong is to increase the amount of error diagnostics that m4 emits. The easiest way to do this is to add the following to your m4 file as debugging commands:

m4_debugmode(e)
m4_traceon
.
.
buggy lines
.
.
m4_traceoff


5. Conclusion

"ah ha!", I hear you say. "HTML 3.0 already has an include statement". Yes it has, and it looks like this:

<!--#include file="junk.html" -->

The problem is that the inclusion is done by the web server when the page is served - not all servers support it or have it enabled - and simple file inclusion is in any case only a small part of what m4 provides.

There are several other features of m4 that I have not yet exploited in my HTML ramblings so far, such as regular expressions and doubtless many others. It might be interesting to create a "standard" stdlib.m4 for general use with nice macros for general text processing and HTML functions. By all means download my version of stdlib.m4 as a base for your own hacking. I would be interested in hearing of useful macros and if there is enough interest, maybe a Mini-HOWTO could evolve from this paper.

There are many additional advantages in using Linux to develop HTML pages, far beyond the simple assistance given by the typical Typing Aids and WYSIWYG tools.

Certainly, this little hacker will go on using m4 until HTML catches up - I will then do my last make and drop back to using pure HTML.

I hope you enjoy these little tricks and encourage you to contribute your own. Happy hacking!

6. Files to download

You can get the HTML and the m4 source code for this article here (for the sake of completeness, they're copylefted under GPL 2):

using_m4.html	:this file
using_m4.m4	:m4 source
stdlib.m4	:Include file
makefile



Copyright © 1997, Bob Hepple
Published in Issue 22 of the Linux Gazette, October 1997




An introduction to The Connecticut Free Unix Group

by Lou Rinaldi lou@cfug.org, CFUG Co-Founder


October of 1996 was when Nate Smith and I first began discussing the creation of a local-area unix users' group here in Connecticut, something we felt the area was desperately in need of. We bantered around some initial ideas; some great, some not so great. Finally we decided on creating a group whose focus was on the "free unix" community. CFUG, The Connecticut Free Unix Group, was born in November of 1996. Both of us had very busy schedules, so all of the time we were going to invest in this project came directly from our ever-decreasing periods of leisure activity.

We agreed upon three major goals for CFUG. The first was the wide distribution and implementation of free, unix-like operating systems and software. The second was educating the public about important developments in the evolution of free operating systems. Finally, we strove to provide an open, public forum for debate and discussion about issues related to these topics.

After writing to several major vendors and asking for donations of their surplus stock and/or older software releases, the packages began rolling in. (After all, we wanted to create some sort of incentive for people to come to the first meeting!) We then got started doing some heavy advertising on the newsgroups, in local computer stores and also on local college campuses. Finally, after securing an honored guest speaker for our first meeting (Lar Kaufman, co-author of the seminal reference book "Running Linux"), we were ready to set a date. December 9th, 1996 marked the first official CFUG gathering, which took place at a local public library.

We've held meetings on the second Monday of each month ever since, and are now widely recognized as Connecticut's only organization dedicated to the entire free unix community. We've since lost Nate Smith to the lucrative wiles of Silicon Valley, but we continue to carry on with our original goals. We have close relations with companies such as Caldera Inc., InfoMagic Inc., and Red Hat Software, as well as such non-commercial entities as The FreeBSD Project, Software In The Public Interest (producers of Debian GNU/Linux), The OpenBSD Project and The Free Software Foundation. We were also featured on the front page of the Meriden Record-Journal, a major local newspaper, on May 26th of this year. Our future plans include more guest speakers, as well as trips to events of pertinence throughout New England.

For more information, please check our website - http://www.cfug.org

There is a one-way mailing list for announcements concerning CFUG. You can sign up by emailing cfug-announce-request@cfug.org with "subscribe" as the first line of the message body (without the quotes).


Copyright © 1997, Lou Rinaldi
Published in Issue 22 of the Linux Gazette, October 1997




Review: The Unix-Hater's Handbook

by Andrew Kuchling amk@magnet.com


I've written a review of an old (1994-vintage) book that may be of interest to Linuxers. Even its title will annoy some people, but there actually is material of interest in the book for Linux developers and proponents.

Andrew Kuchling
amk@magnet.com
http://starship.skyport.net/crew/amk/


The UNIX-HATERS Handbook (1994)
by Simson Garfinkel, Daniel Weise, and Steven Strassman.
Foreword by Donald Norman
Anti-Foreword by Dennis Ritchie.

Summary: A sometimes enraging book for a Linux fan, but there are valuable insights lurking here.

In his Anti-Foreword to this book, Dennis Ritchie writes "You claim to seek progress, but you succeed mainly in whining." That's a pretty accurate assessment of this book; it's one long complaint about work lost due to crashes, time wasted finding workarounds for bugs, unclear documentation, and obscure command-line arguments. Similar books could be written about any operating system. Obviously, I don't really agree with this book; I wouldn't be using Linux if I did. However, there is informative material here for people interested in Linux development, so it's worth some attention.

The book describes problems and annoyances with Unix; since it was inspired by a famous mailing list called UNIX-HATERS, there are lots of real-life horror stories, some hilarious and some wrenching. The shortcomings described here obviously exist, but in quite a few cases the problem has been fixed, or rendered irrelevant, by further development. Two examples:

* On the Unix file system: "...since most disk drives can transfer up to 64K bytes in a single burst, advanced file systems store files in contiguous blocks so they can be read and written in a single operation ... All of these features have been built and fielded in commercially offered operating systems. Unix offers none of them." But the ext2 file system, used on most Linux systems, does do this; there's nothing preventing the implementation of better filesystems.

* "Unix offers no built-in system for automatically encrypting files stored on the hard disk." (Do you know of any operating system that has such capability out of the box? Can you imagine the complaints from users who forget their passwords?) Anyway, software has been written to do this, either as an encrypting NFS server (CFS) or as a kernel module (the loopback device).

There are some conclusions that I draw from reading this book:

First, when the book was written in 1994, the free Unixes weren't very well known, so the systems described are mostly commercial ones. Proponents of free software should notice how many of the problems stem from the proprietary nature of most Unix variants at the time of writing. The authors point out various bugs and missing features in shells and utilities, flaws which could be *fixed* if the source code was available.

Better solutions sometimes didn't become popular, because they were owned by companies with no interest in sharing the code. For example, the book praises journalled file systems, such as XXX's Veritas, because they provide faster operation, and are less likely to lose data when the computer crashes. The authors write, "Will journaling become prevalent in the Unix world at large? Probably not. After all, it's nonstandard." More importantly, I think, the file system was proprietary software, and companies tend to either keep the code secret (to preserve their competitive advantage), or charge large fees to license the code (to improve their balance sheets).

The chapter on the X Window System is devastating and accurate; X really is an overcomplicated system, and its division between client and server isn't always optimal. An interesting solution is suggested; let programs extend the graphics server by sending it code. This approach was used by Sun's NeWS system, which used PostScript as the language. NeWS is now quite dead; it was a proprietary system, and was killed off by X, freely available from MIT. (Trivia: NeWS was designed by James Gosling, who is now well-known for designing Java. Sun seems determined not to make the same mistake with Java... we hope.)

Second, many of the problems can be fixed by integrating better tools into the system. The Unix 'find' command has various problems, which are described pretty accurately in chapter 8 (though they seem to be fixed in GNU find...). Someone has also written GNU locate, an easier way to find files. It runs a script nightly to build a database of filenames, and the 'locate' command searches through that database for matching files. You could make this database more than just a list of filenames; add the file's size and creation time, and you can do searches on those fields. One could envision a daemon which kept the database instantly up to date with kernel assistance. The source is available, so the idea only needs an author to implement it...
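If you haven't tried the pair of commands, they work like this (updatedb is normally run nightly from cron rather than by hand):

updatedb              # build or refresh the filename database
locate stdlib.m4      # instantly list matching paths from the database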

Chapter 8 also points out that shell programming is complex and limited; shell scripts depend on subprograms like 'ls' which differ from system to system, making portability a problem, and the quoting rules are elaborate and difficult to apply recursively. This is true, and is probably why few really sizable shell scripts are written today; instead, people use scripting language like Perl or Python, which are more powerful and easier to use.

Most important for Linux partisans, though: not all of the flaws described have been fixed in Linux! For example, Linux still doesn't really allow you to undelete files; it's on the TODO list for ext2, but it hasn't been completed. 'sendmail' really is very buggy; Unix's security model isn't very powerful. (But people are working on new programs that do sendmail's job, and they're coding security features like the immutable attributes, and debating new security schemes.)

For this reason, the book is very valuable as a pointer to things which still need fixing. I'd encourage Linux developers, or people looking for a Linux project, to read this book. Your blood pressure might soar as you read it, but look carefully at each complaint and ask "Is this complaint really a problem? If yes, how could it be fixed, and the system improved? Could I implement that improvement?"


Copyright © 1997, Andrew Kuchling
Published in Issue 22 of the Linux Gazette, October 1997




Linux Gazette Back Page

Copyright © 1997 Specialized Systems Consultants, Inc.
For information regarding copying and distribution of this material see the Copying License.


Contents:

About This Month's Authors
Not Linux


About This Month's Authors


Larry Ayers

Larry Ayers lives on a small farm in northern Missouri, where he is currently engaged in building a timber-frame house for his family. He operates a portable band-saw mill, does general woodworking, plays the fiddle and searches for rare prairie plants, as well as growing shiitake mushrooms. He is also struggling with configuring a Usenet news server for his local ISP.

Jim Dennis

Jim Dennis is the proprietor of Starshine Technical Services. His professional experience includes work in the technical support, quality assurance, and information services (MIS) departments of software companies like Quarterdeck, Symantec/ Peter Norton Group, and McAfee Associates -- as well as positions (field service rep) with smaller VARs. He's been using Linux since version 0.99p10 and is an active participant on an ever-changing list of mailing lists and newsgroups. He's just started collaborating on the 2nd edition of a book on Unix systems administration. Jim is an avid science fiction fan -- and was married at the World Science Fiction Convention in Anaheim.

John M. Fisk

John Fisk is most noteworthy as the former editor of the Linux Gazette. After three years as a General Surgery resident and Research Fellow at the Vanderbilt University Medical Center, John decided to "hang up the stethoscope" and pursue a career in Medical Information Management. He's currently a full time student at the Middle Tennessee State University and hopes to complete a graduate degree in Computer Science before entering a Medical Informatics Fellowship. In his dwindling free time he and his wife Faith enjoy hiking and camping in Tennessee's beautiful Great Smoky Mountains. He has been an avid Linux fan since his first Slackware 2.0.0 installation a year and a half ago.

Michael J. Hammel

Michael J. Hammel is a transient software engineer with a background in everything from data communications to GUI development to Interactive Cable systems--all based in Unix. His interests outside of computers include 5K/10K races, skiing, Thai food and gardening. He suggests if you have any serious interest in finding out more about him, you visit his home pages at http://www.csn.net/~mjhammel. You'll find out more there than you really wanted to know.

Bob Hepple

Bob Hepple has been hacking at Unix since 1981 under a variety of excuses and has somehow been paid for it at least some of the time. It's allowed him to pursue another interest - living in warm, exotic countries including Hong Kong, Australia, Qatar, Saudi Arabia, Lesotho and (presently) Singapore. His initial aversion to the cold was learned in the UK. Ambition - to stop working for the credit card company and taxman and to get a real job - doing this, of course!


Not Linux


Thanks to everyone who contributed to this month's issue!

I'm very excited to edit the Linux Gazette for October.
At my last job, where I fixed computers for a big company, I was talking with a woman about life in general while fixing her computer, and suddenly she blurted: "Oh my God! You're really a computer geek!" She immediately apologized and explained that she didn't mean any offense, even though I had a huge smile on my face and was trying to explain that I appreciated the compliment.

After many experiences like that, working with SSC has been a welcome change. And since Linux Gazette is one of the places where geeks come home to roost, I'm happy to be a part of it.

I just came back from the Grace Hopper Celebration for Women in Computing, which was held in San Jose, California this year. To quote Bill and Ted, it was totally awesome! I got to meet the illustrious Anita Borg, the amazing Ruzena Bajcsy, and the inspiring Fran Allen from IBM, as well as many many many others who came from all over the country, and from dozens of countries around the world. It was the most incredible event that I have ever attended, and I encourage everyone to go to the next one, which will be held in the year 2000.

Margie Richardson will return next month as Editor-In-Chief, and I'll be helping out on the sidelines. I'm really glad that I got the chance to be the Big Cheese for a month. :)
Keep sending those articles to gazette@ssc.com!

Until next month, keep reading and keep hacking!


Viktorie Navratilova
Editor, Linux Gazette gazette@ssc.com




Linux Gazette Issue 22, October 1997, http://www.ssc.com/lg/
This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com