Answers 98

From LXF Wiki

<title>Buggy BIOS</title>

<question>I bought what I thought was the greatest value-for-money PC, and I still think it is; one thing bothers me though. When I start up the PC and select Ubuntu from Grub, a message is printed telling me that there is an MP-BIOS 8254 bug and some timer is not connected. Also, almost all the DVDs from your magazine fail to start up without a noapic option on the boot command line. Booting LXFDVD95 shows the text

MP-BIOS bug: 8254 timer not connected to IO_APIC
Kernel panic - not syncing: IO_APIC + timer doesn't work!

I searched on Google and in Ubuntu's help, but all I could find was a bare-bones description that my timer doesn't work. I am guessing it has something to do with my NVIDIA 7300LE (I knew that cheap things turn out to be expensive in the end), but what exactly? Another fact that might help: everything I have tried in 3D is very fragile and buggy on my machine. Do I need to buy a better video card? </question>

<answer>This is not caused by your video card, but it may well be the cause of your video problems. The APIC (Advanced Programmable Interrupt Controller) handles timing and interrupts for various components on your motherboard, including disk controllers and video card slots. It is common for computers to have APIC implementations that break the specifications; many manufacturers consider "it works with Windows" to be an acceptable alternative to following the standards. You have already discovered that you need to append the keyword noapic to the boot parameters with live CDs, but you also need to do this when booting from your hard disk. Before you do that, check the manufacturer's website for a BIOS update; it may be that this has already been fixed in later firmware. If not, you need to alter the boot menu to always use noapic when booting. Ubuntu doesn't include a configuration program for the boot process, so you will have to edit the configuration file manually. Press Alt-F2 and type

sudo gedit /boot/grub/menu.lst

This will open the boot menu configuration file in an editor. Most of the lines start with a #; these are comments that you can ignore. Go to the first line starting with title: this is the first option on the boot menu. Find the first kernel line below it and add noapic to the end of that line, making sure there is a space between the previous last word and noapic, then save the file. When you reboot, the BIOS error message should not appear and your 3D graphics should be more stable. You may notice other improvements too, because buggy APIC firmware can cause all sorts of problems, from poor disk drive performance to clocks running at the wrong speed. </answer>

<title>Where's my wireless?</title>

<question>I previously used Linux in 1996/1997 to run a Unix application on a laptop, as Linux was free and a Sparcbook cost £10K. I recently thought it would be great to use it again and so installed it on my home Dell XPS m1210. I looked around the web and it looked like Slackware 10 was the best for my machine. I have now successfully installed it and use Lilo to dual boot between Windows Vista and Linux. Except I cannot get the wireless to work, or more precisely: I have no idea how to get the wireless to work! I look on the web and there are solutions out there, but it all seems like a foreign language; I have forgotten so much over the last 10 years, I feel like a complete novice. My wireless card is: Intel PRO/Wireless 3945ABG Network Connection </question>

<answer>There is a driver for this wireless card: ipw3945, an official driver project created by Intel. However, it requires a fairly recent kernel, 2.6.13 at least. Slackware 10 is more than three years old, much older than this driver, and uses a 2.4 kernel. In order to use modern hardware reliably, you need a distro, and particularly a kernel, that is at least as new as the hardware. If you want to stick with Slackware, use the newly released 12.0, which is the first Slackware release to default to a 2.6 series kernel, something your wireless card needs; packages for this card built for Slackware 12.0 are also available. Alternatively, you could install any distro that carries ipw3945 packages in its software repositories. Ubuntu from LXFDVD94 would be a good choice, because the ipw3945 drivers are included in the default installation, so it should "just work". Fedora Core 7 (on LXFDVD95) also has ipw3945 available, but in this case you need to add the ATrpms repository to the package manager before you can install the packages; the ATrpms website has details on how to add the repository. The ATrpms site contains a lot more than wireless drivers; there are plenty of packages of all descriptions, so it is well worth adding to your list of repositories. </answer>

<title>That syncing feeling</title>

<question>I would like to switch to Linux, but I fear that syncing with my PDA with Microsoft Outlook may not work. Also I have Money for PPC on my PDA and that is important to me so does GnuCash provide a similar feature? </question>

<answer>SynCE is a framework that allows Linux software to synchronise with Windows Pocket PC devices. It works, with varying degrees of user-friendliness and success, with various programs. One of the easiest to sync with is the KDE PIM suite of KMail, Kontact, KAddressBook and KOrganizer. To do this you need the synce-kde package, which comes with most distros, although not all of them install it by default; run your package manager and install it if it is not already marked as installed. Then you will be able to sync mail and contacts. Of course, this means you will need to be running a system based on the KDE desktop, such as Mandriva, Kubuntu, PCLinuxOS or SUSE. These have all been on Linux Format cover discs recently, and you can find links to them, and many other distros, on the web. Syncing your financial records is another matter. GnuCash is able to import standard QIF accounts files, but not export them. However, KMyMoney does offer QIF import and export, so you should be able to import files from your PDA and then transfer them back after making modifications. Unless you have some formal bookkeeping training or accountancy experience, you'll probably find KMyMoney easier to learn than GnuCash. KMyMoney is also a KDE program and should be available with any of the previously mentioned distros. </answer>

<title>URL rewriting</title>

<question>I am writing a website that uses .png images throughout; most of these images use transparency. It works great on all the latest browsers but (as expected) not IE6. To compensate for this I've created .gif versions of each of the graphics (as well as a custom style sheet) which should load in place of the .png images if the user is still on IE6. To achieve this I want to use mod_rewrite and .htaccess to make it transparent, so that images/png/image1.png is rewritten as images/gif/image1.gif. This is my .htaccess file

RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} "MSIE 6"
RewriteRule /images/png/([A-Za-z0-9])+\.png$ /
RewriteCond %{HTTP_USER_AGENT} "MSIE 6"
RewriteRule css/style.css css/iestyle.css

The CSS rewrite works perfectly but the image replacement (png to gif) doesn't. </question>

<answer>You have the right idea in using mod_rewrite to change the URLs. It is falling over because you are using + to join strings, but mod_rewrite works with regular expressions, where + is a pattern-matching character, not an operator. With regular expressions you don't need to join strings; instead you use parentheses to mark the parts you want unchanged and $1, $2... to include them in the destination, as you have done, and everything else is either literal text or regular expression characters. So to replace the last occurrence of foo in a string with bar, you would use

RewriteRule (.*)foo(.*) $1bar$2

In your case, you want to change anything starting with images/png and ending in .png, replacing both occurrences of png with gif. You can do this by replacing your first RewriteRule line with one of

RewriteRule /images/png/(.*)\.png$ /images/gif/$1\.gif
RewriteRule /(.*)/png/(.*)\.png$ /$1/gif/$2\.gif
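If you want to check a pattern before uploading it, the second rule's regex can be exercised locally with sed, whose extended expressions behave much the same way here (the example URL below is made up):

```shell
# Exercise the second rewrite pattern with sed -E: the greedy groups
# capture the directory name and the image name respectively
url="/assets/png/logo.png"
printf '%s\n' "$url" | sed -E 's|/(.*)/png/(.*)\.png$|/\1/gif/\2.gif|'
# prints /assets/gif/logo.gif
```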

The first is easier to read, but the second will work with images in other directories too. </answer>

<title>NTFS running repairs</title>

<question>I have an external hard drive, NTFS-formatted. I need to defragment it but I don't want to lose the data on it. Can you defragment NTFS on Linux? I run Ubuntu Feisty Fawn on an old PC2800 computer. </question>

<answer>The short answer is no, not really. Why is the drive using NTFS in the first place? If it contains a bootable Windows installation, any attempt you make to defragment it in Linux will most likely render it unbootable in Windows. But if it does contain Windows, why not use that to defragment the drive? Windows can be useful for more than playing games. If the drive is used purely for data, then you can greatly reduce fragmentation by copying all the data off, reformatting the drive and copying the data back. This requires an NTFS driver with full write support: either the commercial Paragon NTFS for Linux application that we reviewed last month, or NTFS-3G, which is included in the Ubuntu repositories. You'll also need the ntfsprogs package, so fire up Synaptic and install both of those. Now you can do the whole job by opening a terminal, changing to a directory with enough space to hold the contents of the NTFS drive and running

tar cf ntfs.tar /mnt/ntfs && umount /mnt/ntfs && mkntfs /dev/sda1 && mount /dev/sda1 /mnt/ntfs -t ntfs-3g && tar xf ntfs.tar -C /mnt/ntfs

This is all on one line. The two tar commands and mkntfs take a while, so chaining the commands together like this means you don't need to babysit the machine, yet each command will only be executed if the previous one was successful (you don't want to reformat the drive if copying the data failed). This example assumes your drive is at /dev/sda1 and mounted on /mnt/ntfs; make sure you change these to the correct paths for your machine before you run it. If you are short of space to save the contents, you can create a compressed archive instead, but this will take longer, particularly when copying from the drive. Do this with

tar czf ntfs.tar.gz /mnt/ntfs && umount /mnt/ntfs && mkntfs /dev/sda1 && mount /dev/sda1 /mnt/ntfs -t ntfs-3g && tar xzf ntfs.tar.gz -C /mnt/ntfs
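The safety of this approach rests on how && chains commands: each step runs only if the one before it exited successfully. A harmless sketch (no formatting involved; the step function is a throwaway helper invented for the demonstration) shows the behaviour:

```shell
# Each command in an && chain runs only if the previous one succeeded;
# step() just prints its name and returns the requested exit status
step() { echo "ran $1"; return "$2"; }
out=$(step one 0 && step two 1 && step three 0; true)
printf '%s\n' "$out"
# prints "ran one" and "ran two"; "ran three" never appears, because
# step two returned a non-zero (failure) status
```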

If you are using NTFS so that the drive is readable in Windows (why else would you use it?) and you will only use it with your own Windows computers, a better solution would be to format the disk as ext2 and install an ext2 filesystem driver on your Windows computer(s). Then you no longer have to worry about disk fragmentation, and you will get better disk performance in Linux. The above commands will do this if you replace mkntfs with mke2fs and remove

-t ntfs-3g

from the mount command. </answer>

<title>Lightweight distro needed</title>

<question>I am looking for an OS suitable for an AMD K-6/200. I thought NetBSD might be a good choice, but it turns out the basic install results in an OS that is command line only; XFree86 (not Xorg) needs to be set up separately. I'm disabled and the extra effort is a problem for me. Is there an 'easy' version, like PC-BSD or DesktopBSD are easy versions of FreeBSD? I had tried DSL on a P2/400 machine and didn't care for it, but I just discovered DSL-N, which has a real word processor! How much performance do you end up gaining if you install Gnome or KDE on either NetBSD or DSL-N? Fedora Core, with Gnome, runs infuriatingly slowly on the P2/400. </question>

<answer>A K-6/200 is slow by current standards, so you'll need a lightweight distro to get reasonable performance. Most importantly, you will need a lightweight window manager, which definitely excludes Gnome and KDE. Something using Fluxbox, Xfce or IceWM would be far more suitable. As you need word processing, Xfce may be a good choice, as it uses the GTK toolkit, as does AbiWord. With limited resources, choosing a set of applications that use the same toolkit and libraries will help your system run more efficiently. Speaking of resources, one of the best improvements for any Linux system that needs more speed is more memory. Spending a few pounds/dollars/euros/pieces-of-eight on extra RAM generally gives a greater improvement than spending a similar amount on a faster processor. There are a number of distros designed for lightweight systems; you have already discovered DSL and DSL-N, but you could also consider Puppy Linux. DSL is limited by the stipulation that its ISO image should never exceed 50MB, while Puppy Linux is nearly twice that size. This means it includes a lot more, such as the AbiWord word processor and accompanying office software, SeaMonkey (the new name for the Mozilla suite) for web and mail, and plenty more besides. The main drawback of Puppy is that the hard disk installation process is rather convoluted, as it is mainly designed as a live CD system. You could also run it from the CD, using your hard disk only for storage of data and settings. Another alternative, although a little heavier, is Zenwalk. If you have the amount of RAM that was typically fitted to 200MHz machines when they were new, you will definitely need more to use Zenwalk, but it will give you more features than the smaller distros. Running any OS on a K-6/200 is going to be a compromise between features and performance, but it is definitely possible: doubly so if you add some extra RAM. </answer>

<title>A job for Ubuntu</title>

<question>When I try to run or install Ubuntu, I get the following message after the splash screen comes up

unable to access tty; job control turned off

and am returned to a terminal prompt. Ubuntu apparently is trying to access my floppy drive for some reason because the floppy drive turns on until I get the error message. </question>

<answer>It appears that this error is caused by the kernel being unable to find your boot drive, so the floppy drive light comes on because it is trying every device listed in the BIOS. As there are a couple of reported causes of this problem, there's more than one possible solution. One is to boot from the install disc and edit the fstab of your installed system. If your root partition is on /dev/sda1, the commands you need are

sudo -i
mount /dev/sda1 /mnt
gedit /mnt/etc/fstab

You should see the line that mounts your root partition in fstab; it will look something like

# /dev/sda1
UUID=71f72f22-0a14-45b7-9057-f7b0bd9d819c /  ext3 defaults....

The UUID (Universally Unique Identifier) should enable Ubuntu to find the root partition even if your device nodes change (such as when you add another disk), but it can cause problems here. Change the UUID=xyz string back to the device node and your system should boot again. The fstab line should now look like:

# /dev/sda1
/dev/sda1 / ext3 defaults....
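The edit itself is a one-field substitution, so it can also be done with sed. Here is a harmless sketch against a throwaway copy of the line (the UUID is the one from the example above, and /dev/sda1 is an assumption; never change the real /etc/fstab without a backup):

```shell
# Swap the UUID=... field for a plain device node, working on a
# throwaway copy of the fstab line rather than the real file
printf 'UUID=71f72f22-0a14-45b7-9057-f7b0bd9d819c / ext3 defaults 0 1\n' > /tmp/fstab.demo
sed -i 's|^UUID=[^ ]*|/dev/sda1|' /tmp/fstab.demo
cat /tmp/fstab.demo
# prints: /dev/sda1 / ext3 defaults 0 1
```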

The other solution is more extreme, so only try it if the fstab trick fails. You need to open your computer and disconnect any extra hard and CD/DVD drives, leaving only your boot drive and the drive holding the DVD from which you installed (turn off the computer first!). Disconnect the floppy drive too; removing the power cables from the unneeded devices should be sufficient. Your system should now boot. Then add the piix module to the ramdisk image that Ubuntu loads when it boots, with these terminal commands

echo piix | sudo tee -a /etc/initramfs-tools/modules
sudo update-initramfs -u
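A note on appending to this root-owned file: a plain `sudo echo piix >> file` would fail, because the >> redirection is performed by your own, non-root shell before sudo even starts; piping into `sudo tee -a` does the append as root instead. The effect of tee -a is easy to see with an ordinary file:

```shell
# tee -a appends its standard input to the named file (and echoes it
# to stdout); redirect stdout to /dev/null to keep the demo quiet
rm -f /tmp/modules.demo
printf 'piix\n' | tee -a /tmp/modules.demo >/dev/null
cat /tmp/modules.demo
# prints: piix
```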

You should now be able to shut down, reconnect the devices and start up. This bug appears to affect a small number of Ubuntu users, and only those with multiple drives fitted. It has also been reported that when the problem is caused by a floppy drive, it can be circumvented by leaving a disk in the drive, but we were unable to verify this and it sounds like a kludge anyway. </answer>

<title>Backup services</title>

<question>In reaction to LXF94's online backup Roundup, I would like to present a small but hopefully solvable problem. I use IBackup, because I can have backups from my own PC (Ubuntu) and from my wife's Windows PC. She can manage her backups without any intervention from me. The problem is that often, during the backup on my PC, which is performed by cron, the connection drops. When that happens the stunnel I created collapses, which is devastating for the backup, and I end up with a backup that was only partially copied to the IBackup server. Is there a way to recover from such a disconnect, or even to actively reconnect, without losing what you are doing? The IBackup server does not allow setting the time and date stamps of the copied files, causing the files all to have the time and date of copying. For that reason I copy tarred files and lose the rsync ability. This might be incentive enough to switch providers; however, I will have to copy the files from my wife's PC too. With IBackup she has her own connection and URL. </question>

<answer>If you are using rsync, restarting the backup should be no trouble, because rsync will simply pick up where it left off. The server may be using the time of copying as the timestamp because of your rsync options: you need to call rsync with the --times option to preserve timestamps. The --archive option combines several options useful for backups, including --times. This should remove the need to copy tar archives to the server, and therefore mean that you are copying individual files in the same form that they exist on your original machine, which makes restarting a backup easier. I tried one of the services from the Roundup myself after reading the article (I was using Strongspace at the time) and switched to it completely. Backing up multiple machines is easy, as you can do more or less what you want with the available space, so you can create a directory for each machine's backup. The service uses SSH for rsync transfers, so there's no need for stunnel, and you can use Duplicity to encrypt the data for storage. An alternative approach is to back up everything to a local disk, then sync that to the remote server. This has the advantage that your first line of backups is local, making restoration faster, but it does mean that the backup computer has to be switched on whenever any computer needs to make a backup. </answer>

<title>Booting a DVD</title>

<question>I want to install LXFDVD94 on an older PC, dual booting with Windows 98SE. This computer is a seven-year-old Athlon 600 on an MSI motherboard with 128MB RAM, two hard drives, one DVD-ROM drive and a CD-RW drive. The BIOS of this older PC hasn't an option to boot from a DVD-ROM drive. The boot sequence allows me to use a CD-ROM drive as the first device, and I am comfortable with changing the boot sequence. The forums told me to install Windows first if dual booting is required (it is). I used Partition Magic V5 to set up both FAT and Linux partitions. I believe that Linux uses a different file format to FAT, but I tried using a Windows start-up floppy to 'set up' or 'install' your DVD and failed. Would this work if your disc had been a CD-ROM? The floppy disk from Red Hat 6.1 allowed me to start running the Red Hat CD, but it demanded the Red Hat CD and wouldn't settle for LXFDVD94. I tried the Red Hat CD, which worked, but I aborted the install because I would prefer (K)ubuntu. Do I need a Linux boot floppy with DVD drivers on it to get your LXFDVD94 installed? </question>

<answer>As far as your BIOS is concerned, booting from CD and DVD are the same; a DVD is seen as a large CD-ROM. Older Linux distros used a boot floppy to kickstart the CD installer, because a lot of hardware did not support booting from CDs at the time. Your vintage hardware should support booting from optical discs, whether CD or DVD, so as long as you set your BIOS to boot from CD, you ought to have no problems. But this is dependent on BIOS idiosyncrasies; some older BIOSes get confused when more than one optical drive is fitted. If you set the BIOS to boot from CD and still cannot boot the DVD, try disconnecting the cables from your CD-RW drive so you have only the one optical drive. It is rare to need a boot floppy to install from CD or DVD nowadays but, just in case, we have provided one on the DVD. Smart Boot Manager, in the Essentials/SBM directory of the DVD, is a bootable floppy image that will transfer the boot process to an optical or hard disk. Run RAWRITE.EXE in Windows, put a blank floppy in the drive and select sbootmgr.dsk as the source. By booting from this disk, you will be able to boot from your DVD. The different filesystem formats of Linux and Windows are irrelevant at this point, as all the data is coming from the DVD, which uses another type of filesystem (the same as used by CDs). Using Windows partitioning tools to create Linux partitions is known to cause difficulties. Use Partition Magic to delete the partitions you created for Linux, including the swap partition, leaving unallocated space. Then tell the Ubuntu installer to use the free space on the drive (free space in this context means unpartitioned space, not unused space within existing partitions). Your PC may show its age in the RAM department: 128MB is not a lot by today's standards, and a modern desktop like KDE in Kubuntu will run slowly. The version of Ubuntu on the LXF DVD includes the lightweight Xfce desktop, used in Xubuntu, as well as the more resource-hungry Gnome and KDE heavyweights. </answer>

<title>New disk, old problem</title>

<question>I found that I needed a bigger hard disk, so I plugged one in as hdb, partitioned it as I wanted, copied the filesystems from the old drive (hda), and tried to boot from the new one. Unfortunately, this operation turned out to be unsuccessful. I made and copied partitions for /, /boot, /usr and /home, among others. I also made a swap partition. /boot is primary partition 1, marked bootable. I wrote an MBR, using lilo -M /dev/hdb1. I mounted the new /boot and / partitions, edited the new copy of /etc/lilo.conf (now in /mnt/hdb5), and ran lilo -C /mnt/hdb5/etc/lilo.conf -b /dev/hdb1, which appeared to work. When I try to boot from the new drive, I get through Lilo's boot choice screen, and a fair amount of other stuff, ending with:

initrd finished
Freeing unused kernel memory
Warning: Unable to open an input console

After that, only the reset button on the box will make anything happen. This is "Mandrakelinux release 10.2 (Limited Edition 2005) for i586" </question>

<answer>This is not a problem with the bootloader; once the kernel has loaded, the bootloader's job is done. This error looks like a missing file in /dev, probably /dev/console. Although the dynamic device filesystems, like udev and its predecessor devfs, create your device nodes in /dev automatically, there are some nodes that are needed before devfs/udev start up. I suspect that you omitted the contents of /dev when making a copy of your root partition, either by not including it in the copy command, or by excluding all other filesystems when copying (you didn't mention how you copied the filesystems, but cp, rsync and tar all have options to exclude other filesystems). The contents of your original /dev directory are now hidden, because a new, dynamic /dev has been mounted on top of them but, as you will see, they are still accessible.

 mkdir /mnt/tmp
 mount --bind / /mnt/tmp

will make your whole root filesystem available from /mnt/tmp, without any of the other filesystems that are mounted at various points. So /mnt/tmp/home will be empty while /mnt/tmp/dev will contain a few device files. Copy these to dev on your new root partition and your boot error should disappear. The easiest way to ensure your new root filesystem contains exactly the same files as your current one is

rsync -a --delete /mnt/tmp/ /mnt/newroot/

</answer>

<title>Admin through browser</title>

<question>We have a web hosting account that provides PHP and MySQL with an Apache server. We have FTP access for uploading files but no shell account, which makes setting up SQL databases and the like rather tricky. We are not able to install any extra software on this server. We could move to somewhere with shell access, but the accountant likes the price we pay here. Is there a way of gaining administrative access through a web browser to do what we need? </question>

<answer>While changing to a host that allows SSH access would give you more flexibility, there are solutions that remain attractive even if you cannot get a command prompt. Foremost is phpMyAdmin. As you have probably guessed from the name, this is a MySQL administration program written in PHP; it only needs to be installed as a set of files in your web space, after suitable securing and configuration. Most web hosts only allow access to the database from local IPs, so the scripts must be running on the web host, not your own machine. Download and unpack one of the tarballs from the phpMyAdmin website (they differ only in the languages included and the archiving method used). The traditional way of setting up phpMyAdmin was to create a suitable configuration file, using the included sample as a basis, but there is now a setup script that you can run once you have copied the files to your web server. Before you do, make sure the directory is secure. Anyone with access to your server's phpMyAdmin directory can read or change any of your databases, so protect it with a .htaccess file (or other means) so that only passworded accounts can connect. If possible, put it in a section of your web space that is accessible via HTTPS, as you will be transferring passwords when you run the setup script. Create a config directory in the phpMyAdmin directory and copy the whole directory to your web space (including the .htaccess file). Go to https://www.your.webspace/phpmyadmindir/setup.php and fill out the boxes with your MySQL login details. Now you can go to https://www.your.webspace/phpmyadmindir/ and see a list of your databases. Select one and you'll see the tables in it. From here you can browse, query and modify your SQL tables to your heart's content. If you use PostgreSQL instead of MySQL, there is an equivalent program called phpPgAdmin. SQL databases are not all you can administer via a web interface.
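As a concrete illustration, a minimal .htaccess for password-protecting the directory with HTTP Basic authentication might look like this. The realm name and the path to the password file are assumptions (use a path outside your public web space), and your host must permit .htaccess authentication overrides:

```
# Hypothetical .htaccess for the phpMyAdmin directory: require a
# valid username and password before any script here can be reached
AuthType Basic
AuthName "Database administration"
AuthUserFile /home/yoursite/private/.htpasswd
Require valid-user
```

The password file itself is created with Apache's htpasswd utility, for example `htpasswd -c /home/yoursite/private/.htpasswd admin`.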
Webmin lets you change just about anything you are allowed to change on a *nix box, and it is by no means limited to servers. The disadvantage of Webmin in your situation is that it must be installed and run by root, because it uses its own built-in server rather than running through the likes of Apache. Contact your web host about this: they may have installed either Webmin or its limited cousin Usermin. If they haven't, they may be willing to, as it would benefit all their customers. They may also install phpMyAdmin for you, so you don't need to take up your own space and bandwidth allowances with it. </answer>