Answers 76

From LXF Wiki


<title>Gnome install</title>

<question>I have an old laptop, previously loaded with Windows 98. I have used the Gnome 2.12 Live distro with the December issue [LXF73]. This has proved ideal and it works very well, except that starting it each time is a slow process. Is there any way this distro can be loaded to the hard disk? I presume this is not possible due to a lack of installing software. The problem with Linux is that it seems to grow more and more complex, whereas this distro seems ideal for people trying to make use of an old low-spec computer. </question>

<answer> You are correct in presuming that this particular CD cannot be used for installation. The disc is essentially a showcase for Gnome 2.12 and is based on an Ubuntu Live CD. The good news is that Ubuntu is also available in an installable version. Ubuntu is an excellent distribution that has come a long way in a very short time. You can download the installation CD from the Ubuntu website, or, if you do not have broadband, you can request a CD copy be sent to you for free. You need the i386 install version for your laptop. The appearance of Ubuntu's Gnome desktop is different, but this is purely down to the theme used (which you can easily change); it works in exactly the same way. It is natural for software to become more complex as new features become available and new hardware makes more things possible. This is particularly true of the `big two' desktop environments, Gnome and KDE. However, there are plenty of lighter alternatives for those who either do not want or are unable to run the latest bells and whistles. Take a look at IceWM, Xfce 4 and Fluxbox, all of which are available via Ubuntu's Synaptic package manager. </answer>

<title>BIOS flop</title>

<question>I have an HP OmniBook 6000 on which I run Mandrake 10.0. When rebooting, the machine freezes. I searched the web and found out there is a fix for the problem: I have to update the BIOS with a certain file from an HP customer care web page. My first problem is that the update is an InstallShield executable file that needs to be run on Windows to create an update floppy, and I only run Linux. That brings me to my second problem: I don't have a floppy drive on this laptop, only a CD/DVD drive. How can I extract the floppy image from this file, and is it possible to make a bootable CD from it? </question>

<answer> Some executable file installers are self-extracting zip files, but this one is not. The only safe way to extract it is to run the program on another computer. This will copy the BIOS update to a floppy disk. Then use the read function of rawwritewin.exe (from the Essentials/Rawwrite directory of the coverdisc) to create a disc image file. Copy this to your laptop. The second part of the problem is remarkably simple, because the original method of making a bootable CD was to embed a floppy disc image in the boot sector of the CD. Assuming your disc image is called bios.img, create a directory called biosupdate and put the image file in it. Then run the following command:

mkisofs -b bios.img -c boot.cat -o biosupdate.iso biosupdate

This will create a bootable CD image. Use Cdrecord or your favourite CD-burning GUI to write this to a CD, which will boot and run the BIOS updater. It is also possible to create the ISO image with K3b, by selecting the disc image file as the boot image. An alternative method is to use the Ultimate Boot CD, available from its website. This is a bootable CD containing over 100 floppy-based diagnostic tools and utilities. It does not contain copyrighted files, such as your BIOS update, but the website contains clear instructions for adding your own images. </answer>

<title>Sharing your home</title>

<question>Having heard so much about PCLinuxOS and its foolproof installation, I thought I'd give it a go. I tried version 89a and it ran sweetly from the CD, so I clicked Install To Hard-Drive. The installation appeared to proceed absolutely fine: no hangups, awkward questions I couldn't answer or anything like that. It finished normally, as far as I could tell. However, on reboot from the hard drive and after I'd logged into KDE, an error message appeared, saying it couldn't start KDE and suggesting that I check my DCOP_SERVER was running. What is my DCOP_SERVER and how do I check if it's running and make it do so if not? Shouldn't it be running anyway if the PCLinuxOS installation is so foolproof? Although I have had to revert to my Mandriva setup for the time being, this isn't quite right any longer as some settings seem to have been altered by PCLinuxOS. </question>

<answer>It appears that you have tried to use the same home directory for your user on both distros. Sharing /home is fine, but using the same home directory in different distros is asking for problems. Although you may have the same user name, the numeric user and group IDs are often different. As the system uses the numeric IDs to determine who owns what, it is likely that your user in PCLinuxOS is not able to create files in the home directory. The DCOP server tries to create sockets in ~/.kde and fails, so KDE thinks the DCOP server is not running and cannot start up. DCOP is the Desktop COmmunication Protocol. It is an inter-process communication system, whereby programs can exchange messages and data. It is fundamental to the working of KDE, which relies heavily on embedding one program in another, such as KMail in Kontact or KPDF in Konqueror when you click a link to a PDF file. The safest approach is to use a different home directory for each distro. You can keep the same username; just change the home directory. For example, you could be `fred' on each distro and have home directories of /home/fred-pclinuxos and /home/fred-mandriva, respectively. To make it easier to access the other distro's home directory, set your user and group IDs to be the same. I found Mandriva gave the first user a UID and GID of 500, whereas PCLinuxOS starts at 501, because the guest user for the Live CD uses 500. The files you need to edit, as root, are /etc/passwd and /etc/group. The line in /etc/passwd should look like this:

username:x:UID:GID:Real Name:/home/username:/bin/bash

and in /etc/group:

groupname:x:GID:member1,member2

Change them so that your PCLinuxOS files have the same UID and GID values as in Mandriva, and reboot. You should also make sure that all files in your home directories have the correct IDs with

chown -R username: /home/username*

</answer>


<title>Remote parenting</title>

<question>I have just built a new PC running SUSE 9.3 for my mum. As she lives 80-odd miles away, can I use Krdc/Krfb to help her if she has a problem? We both have 2MB ADSL via Ethernet routers and static IP addresses. Could you point me to a HOWTO? I have Googled for VNC [Virtual Network Computing programs] but they all seem to be for LAN setup with Windows. </question>

<answer>It is possible to use KDE's Remote Desktop connection over the internet, but your routers will block this by default. If possible, set up and test the Krdc/Krfb connection with a direct Ethernet link between your computers (that is, if you have a laptop, take it to your mum's house). Make sure you set a secure password for the connection. It is possible to set up a connection with no password, which may be acceptable inside a firewalled LAN, but it would be a bad idea, a really bad idea, when exposing your computer to the internet. VNC uses network ports starting at 5900 for display 0, 5901 for display 1 and so on. You will only need display 0, so open up port 5900 on your mum's router and direct it to the LAN IP address of her computer. In the firewall of the router or the computer (or both), block access to port 5900 from any public IP address but your own. This will stop script kiddies trying to crack your password. Now you should be able to connect with Krdc using an address of the form a.b.c.d:0, replacing a.b.c.d with your mum's public IP address. You can usually read this from her router, or from one of the many websites that report your public IP address. </answer>

<title>Lost in space</title>

<question>I have a bunch of directories with some Microsoft Office Word files on a Gentoo system, and I need to use Antiword to change them to text files. I have written a script that does it for a given directory:

for i in `ls *.doc` ; do antiword $i >${i/doc/txt}; done

There are probably some bugs in the line (like going down subdirectories) but I will iron them out. My main problem is that some of the files have a space in their name, such as `file 1.doc'. I end up with errors like:

file `file' does not exist,
cannot convert file `1.doc'

How can I get around this problem? It would also be useful to be able to delete the DOC files once they are successfully converted. </question>

<answer>You need to put quotes around the variables, so bash treats `file 1.doc' as a single file and not as two files (`file' and `1.doc'). They must be double quotes, not singles. Bash interprets the contents of single quotes as literal, whereas it will expand the values of variables within double quotes. You do not need to use `ls', as `*.doc' will match on files in the current directory by itself. It is also best to add `-i 1' to prevent Antiword outputting image data into your text file. Your command then becomes:

for i in *.doc ; do antiword -i 1 "${i}" >"${i/doc/txt}"; done

To recurse through directories, use find:

find . -name '*.doc' | while read i; do
antiword -i 1 "${i}" >"${i/doc/txt}";
done

You could also use find to remove the DOC files afterwards, thus:

find . -name '*.doc' -exec rm "{}" \;

This would remove all DOC files, even if Antiword failed to convert them. To convert the files and remove them after successful conversion, use this:

find . -name '*.doc' | while read i; do
antiword -i 1 "${i}" >"${i/doc/txt}" &&
rm "${i}"; done

Find outputs a list of matching files, one per line, each of which is picked up by read; then Antiword converts the file. The && means that the rm command is only run if the previous command (antiword) completed without error. </answer>

<title>On top of things</title>

<question>I have recently started a small design business and am now hosting a number of sites for my clients on my dedicated Red Hat Enterprise Linux 3 server. As I have numerous access_log files scattered all over the filesystem, what's the easiest way to keep a real-time view of what's going on with the HTTPD web server? If I use top, I can see several HTTPD processes consuming a fair bit of CPU, but I don't know how to associate these processes with a particular website. </question>

<answer>The HTTPD server on RHEL 3 comes pre-packaged with mod_status, which is an Apache module for monitoring how the web server is performing. To enable it, open up /etc/httpd/conf/httpd.conf and uncomment the following lines:

<Location /server-status>
   SetHandler server-status
   Order deny,allow
   Deny from all
   Allow from desktop.ip
</Location>

To obtain a full status report, also uncomment this line:

ExtendedStatus On

After restarting the HTTPD server, you can browse http://server.ip/server-status?refresh=5. This will display an HTML page that refreshes every five seconds, providing you with the following information:

The number of workers (threads) serving requests.
The number of idle workers.
The status of each worker, the number of requests that worker has performed and the total number of bytes served by the worker.
A total number of accesses and byte count served.
The time the server was started/restarted and the time it has been running for.
Averages giving the number of requests per second, the number of bytes served per second and the average number of bytes per request.
The current percentage CPU used by each worker and in total by Apache.
The current hosts and requests being processed.
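For scripted monitoring, mod_status also offers a machine-readable variant of the same report via its documented ?auto query string. A hedged sketch that pulls out one metric (server.ip is the placeholder used above; the ReqPerSec field name comes from mod_status's ?auto output):

```shell
# Fetch the machine-readable status report and print the
# requests-per-second figure; replace server.ip with your server.
curl -s "http://server.ip/server-status?auto" |
  awk -F': ' '/^ReqPerSec/ {print $2}'
```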

(The list above is quoted from the Apache mod_status documentation.) It is also good practice to keep this information restricted to specific hosts, as a lot of information about your HTTPD server is revealed through this module. Leaving the default `Deny from all' and then opening up access with `Allow from desktop.ip' will ensure that only authorised hosts are permitted to view this information. </answer>

<title>Zip, nada</title>

<question>I have recently tried many flavours of Mandrake and others (plus many Live distros) and not one of them will detect my faithful old Zip Plus drive, for which I have an archive of 42 100MB disks. In desperation I decided to install it manually, and looked up the mini HOWTO, which referred me to a website run by David Campbell. Unfortunately, this website does not respond and there is no redirection. I've done several web searches to try to track down the file but without success. Can you please help? The Zip drive shares the parallel port with an Epson printer and is closest to the PC, as recommended by Iomega. The printer is detected and installed correctly. lsmod shows that the imm module is not loaded. I know some might say, "Why not transfer your archive to one DVD?", but think of the work involved. Most of the archive consists of my own engineering programs, which have to be updated from time to time, and this is a straightforward and reliable process on Zip disks, which is why I want to keep it operational. It is interesting to note that Windows XP has no difficulty at all in detecting and installing it. This is one of the things preventing me from making more use of Linux. </question>

<answer>The Zip Plus drive has both parallel and SCSI connectors. The easiest way to connect it is to fit a SCSI PCI card. Even a cheap, sub-£10 card is likely to perform better than parallel, with no need for special drivers. If SCSI is not an option, you will need the imm module. The website you refer to is indeed defunct, but the imm module is now part of the standard kernel source tree, so you don't need to install it. Mandriva (2005 and 2006) has this module included with the kernel, as do the Knoppix 4 Live discs. Just type, as root

modprobe -v imm

This should report the modules loaded: imm plus any it may depend on. If the drive contained a disc when you loaded the module, it will be recognised, usually as /dev/sda. If the drive was empty, the driver will detect when a disc is loaded. If this is your only drive using the SCSI sub-system (USB memory drives use that too) it will appear as /dev/sda. If you have another drive, or need confirmation of the device, type

tail -f /var/log/messages

and insert a disc. You will see something along the lines of

scsi0 : Iomega ZIP Plus drive
scsi : 1 host.
  Vendor: IOMEGA  Model: ZIP 100 PLUS  Rev: J.66
  Type: Direct-Access  ANSI SCSI revision: 02
Detected scsi removable disk sda at scsi0, channel 0, id 6, lun 0
SCSI device sda: hdwr sector= 512 bytes. Sectors= 196608 [96 MB] [0.1 GB]
sda: Write Protect is off
 sda: sda1

In this case the drive is /dev/sda with a single partition at /dev/sda1. </answer>

<title>Keeping tabs</title>

<question>I am a Windows sysadmin at a marketing company. The company employs a number of developers who work on in-house marketing campaign software that runs on Red Hat Enterprise Linux. The other Windows administrator with Linux experience who used to manage these servers resigned, and the development team took over the administration of all Linux servers. After a conversation with my ex-colleague I've realised that the development team, with their "let's get the job done" attitude, have often changed execute rights on root-only applications and allowed some restricted directories to be accessed by everyone. Now that I have to start involving myself with these servers, is it easy to audit all these changes? </question>

<answer>The only way to pick up every system modification is to revise your disaster recovery procedures and bring up a replica of the production system from bare metal and a clean copy of the operating system. That is also something upper management may be keen to support. As a starting point, however, RPM can help you determine which files on an installation have been modified. Running rpm -Va will show all files, in all installed RPM packages, that have been modified since installation. It is normal for some configuration files to have changed, but watch out for files and directories that report any of the following failures:

M The permissions (mode) have been changed.
5 The MD5 checksum has changed: the file's contents differ.
U/G The user or group ownership of the file has changed.
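To narrow the report down to just these failure types, you can filter on the flags column of rpm -Va's output. A hedged sketch (the first column is rpm's standard verify flag string, such as `.M......' or `S.5....T'):

```shell
# Show only verify failures involving mode (M) or ownership (U/G);
# rpm prints one flag string per modified file in the first column.
rpm -Va 2>/dev/null | awk '$1 ~ /[MUG]/'
```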

RPM is flexible enough to allow permissions and ownership to be set back to the original. For example, to recover `M, U & G' failures reported for a particular package, run

# rpm --setperms <package>
# rpm --setugids <package>

</answer>


<title>FoxPro databases</title>

<question>I am a recent convert to Linux and have managed to find alternatives to most of my Microsoft programs. One thing I still haven't managed to do is read or update FoxPro .dbf files. I have a legacy database system that uses them and in Windows I would connect through an ODBC connection. Is there a way to do it in Linux? I am running Ubuntu and would prefer to use Python or PHP. </question>

<answer>There are a few ways to access Xbase databases, as created by [the programming language] FoxPro. Rekall is probably the most complete. This is a database front-end, available in commercial and GPL variants. Access to Xbase databases is through Rekall's XBSQL library, and the GPL version is available from its website. Rekall can be scripted with Python. Another option is Knoda. Once again, this is a database front-end that connects to various database servers. Alternatively, you could use XBSQL and the Xbase library directly to build your own PHP- or Python-based front-end. However, a better long-term solution may be to use Rekall or XBSQL to export all your data to a MySQL or PostgreSQL database. Both of these database servers are well supported, have a wide variety of web or GUI front-ends available, and allow command line or script access. XBSQL is available for Ubuntu through the Universe repository. Select this repository in Synaptic and install libxbsql-bin to give yourself SQL command line access to your FoxPro databases. </answer>

<title>Booted out</title>

<question>I've been trying to get Mandriva to run from the installation CDs that came with the Linux Format Mandriva Special, but alas, all the bad press I've heard about Linux has been proved true. It doesn't work and there's little help available in deciding even where to begin when it doesn't. There were problems installing Mandriva, with a fatal error when I tried to get it to initialise from the first CD. When it finally loaded it went through all the (incredibly slow) process of installing. Now it won't boot, hasn't booted once. The routine gets to `Initialising Cryptographic API' then hangs. I have tried booting from the CD again and using the rescue option but am getting fatal errors again. No amount of internet searching gives me any pointers to where the problem is or what to do in any sort of plain language. Now that I have this useless system on my laptop's hard drive, how do I get rid of it if the system won't even run? I've just lost 10 gig of space to a dud system! </question>

<answer>I am sorry you are having so much trouble with what is usually a straightforward installation. Despite what you have heard, Linux does work and there is help available, but nothing is perfect and some people have difficulties. Mandriva should not be "incredibly slow" to install. I installed Mandriva 2006 and Windows XP on to a laptop last week and the installation times were within five minutes of each other, despite Mandriva installing a lot more software. This, and your other errors, indicates that there may be a problem with your hard disk. Not necessarily a fault, but possibly an incompatibility between the controller and the default Mandriva drivers. Fortunately, this is usually easy to deal with. When the Mandriva disc boots, press F1 at the splash screen. This gives a boot prompt where you can make changes to the way it starts up. Some laptops require you to type

linux noapic
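If noapic alone does not help, a couple of other parameters are worth trying at the same boot prompt. These are standard kernel boot options of that era; which one helps, if any, varies from machine to machine:

```
linux acpi=off     (turn off ACPI power management completely)
linux ide=nodma    (rule out DMA problems with the disk controller)
```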

If you had told me which laptop you are using, I might have been able to give a more specific recommendation: try following up on the LXF forums. If you want to remove Mandriva Linux and reclaim the disk space for Windows, you can either use something like Partition Magic on Windows to remove the Linux partitions and resize the Windows partition to fill the whole disk, or you can do it from the Mandriva installer. First you will need to remove the Linux bootloader (Mandriva 2006 uses Grub, but some earlier versions used Lilo). Boot from the CD, type rescue at the boot prompt and select the option to restore the Windows bootloader. Now reboot to start the installer and proceed to where you are given choices for partitioning. Select Custom Partitioning, delete all but the Windows partition, then select the Windows partition, click Resize and drag the slider to the far right. When the process is finished, click Done and you will see a warning about creating a root partition. Ignore this and reboot; eject the CD, let Scandisk do its stuff and Windows will start. </answer>

<title>Slowcoach spotting</title>

<question>I work in the IT department of a small hospital. More and more, we have PCs going out into our wards and doctors' areas, all of which have internet access. Some time ago, I installed Squid and DansGuardian and they're working really well. The thing is, our network really isn't very fast: the main hospital still runs on 10Mbps Ethernet and some of the cabling infrastructure is over 15 years old. Sometimes our network slows down to a crawl, and I think it's because someone out there is downloading a lot of large files (some of the medical PDFs can be huge). Can you recommend any software to monitor the network for me and show me any hosts that are using up all the bandwidth? </question>

<answer> Ntop is a free, portable traffic monitoring tool, and should be your first port of call. Designed to be the network equivalent of top, it collects network metrics and can report on network traffic by interface, protocol and host. Or try MRTG, a daemon that generates a visual representation of SNMP variables changing over time and has traditionally been used to graph bandwidth utilisation in and out of an interface. You may have to install an appropriate SNMP daemon if the monitored interface is on a Linux host, while most routers and managed switches have SNMP capabilities that can be enabled. MRTG becomes very resource-intensive when polling a large number of devices as, by default, it generates all image files every five minutes. However, you can use rrdtool to store the data collected by the polling engine and a third-party CGI script such as 14all.cgi to generate reports only on demand. Finally, Ethereal, a free utility for sniffing, filtering and decoding network traffic, is invaluable for thorough traffic investigations but would be overkill for your everyday monitoring of network usage and capacity planning. </answer>

<title>Screen blanks</title>

<question>I just have a quick question regarding A Quick Reference To: Screen in LXF72 [Answers]. Within the article you talk about splitting the display; this is also covered in the man pages for screen. However, every time I try this command it just freezes the session, and the only way I have found to get around this problem is to shut the session down, which means I am unable to re-connect to the session. I am currently running Fedora Core 4 with Screen version 4.00.02. </question>

<answer>Screen's keybindings are case sensitive. The command to split a screen is Ctrl+a S, with a capital S. Ctrl+a s, with a lower-case s, sends a control-s (xoff) to the terminal. This is the command to stop any output. You can do the same in a normal terminal with Ctrl+s. Ctrl+s effectively freezes the terminal, which is exactly what you are seeing. Now that you know the difference it should not happen again, but if you do forget to press the Shift key, Ctrl+a q sends a control-q (xon) to resume the terminal's output. If you want to use the split function regularly, it may be better to bind it to an easier key combination. Add these lines to the file .screenrc in your home directory:

bind ^S xoff
bind s split

Now `Ctrl+a s' will split the screen and `Ctrl+a Ctrl+s' will send an xoff to pause the terminal. </answer>


<question>I've set up a number of FTP accounts restricted to their respective directories. On our dedicated server running RHEL I managed to do this by setting the users to chroot(). These accounts are used by our clients, who upload spreadsheets and other data that is then downloaded and processed by our management consultants. This was very popular, as originally all information was exchanged over email. The consultants have now made my task more challenging by refusing to log in to each of their clients' FTP accounts, insisting that it should be easy to set up the FTP server in a way that lets them log in with one username and password and see all their clients' directories as subfolders. I have to be careful not to allow one consultant to see information pertaining to another consultant's clients. Can you help? </question>

<answer> Assuming that you are using the stock vsftpd server bundled with RHEL 3 and 4, a little reconfiguration of how the accounts are created can take you a long way. For a consultant called John Doe, an account without a login shell can be created as follows:

# useradd -d /home/jdoe -s /sbin/nologin jdoe

John's clients can now have their home directories created under /home/jdoe. To allow the consultant to descend into and manage files within the clients' home directories, the accounts can be created with `jdoe' as the default group and full group permissions assigned thus:

 # useradd -g jdoe -d /home/jdoe/client1 -s /sbin/nologin client1
 # chmod g=rwx /home/jdoe/client1/

The FTP server will not be able to transfer the client into his home directory unless execute permissions are set on all the parent directories:

 # chmod g+x /home/jdoe
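The chroot() restriction itself is a vsftpd configuration setting rather than a command. A hedged fragment for /etc/vsftpd/vsftpd.conf (chroot_local_user is a standard vsftpd directive; restart the service after editing):

```
# Confine every local user to their home directory on login
chroot_local_user=YES
```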

Any further FTP users you create will also need the same chroot() configuration. </answer>