Answers 68

From LXF Wiki

<title>Fedora bore</title>

<question>I just installed Fedora Core 3 on a used PC. Since I already had Linux installed on another system, I planned on mounting the /var/spool/up2date folder from the first machine on the second so I won't have to download update files twice. But after looking at the Red Hat manual, man page and the HOWTO page, I'm still unsuccessful. Here's what I did. On the server I put in the /etc/exports file the line

/var/spool/up2date 192.168.1.12(ro,sync,no_root_squash)

and in the /etc/hosts.allow file

all:192.168.1.12

On the client side I put the following in /etc/fstab:

192.168.1.11:/var/spool/up2date /server/var nfs soft 0 0

I tried creating the directory as /server and then as /server/var, without luck. The error message that I had was, `Failed server is down'. I tried again after disabling the firewall on the server, and that time I had an RPC timeout error. I did notice that on the server, rpc.mountd and portmap are running but not rpc.nfsd. Could you show me how to properly configure NFS on both machines so I can share the folder, and how to properly configure the firewall? </question>

<answer>Your /etc/exports file is correct, so you should be able to do

/etc/rc.d/init.d/nfs start

If that does not work, verify that you have the packages for an NFS server installed. You can review /var/log/messages to establish exactly why the NFS server failed to start, though with Fedora Core 3 it should just work out of the box. You can verify which RPC services are running with rpcinfo -p, which needs to list nfsd before you can mount the export from the remote system. To answer your question about the firewall: if the system is on your internal network, it is safe to leave the firewall down, assuming you trust everything within your network. For a system that's outwardly accessible, you will have a separate outside interface through which you can limit connectivity, while leaving the inside one open. It can be difficult to permit NFS through a firewall, but by using rpcinfo you can get a good idea of which ports need to be opened for NFS to function. I would strongly recommend against exposing NFS to the internet, as it is an unencrypted protocol and transports your data in plain text.</answer>
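
To make that concrete, here is a rough sketch of the checks on both machines, assuming the server's address is 192.168.1.11 and the mount point is /server/var as in your fstab line (adjust both to match your setup):

# on the server: start NFS and confirm the export is live
/etc/rc.d/init.d/nfs start
rpcinfo -p            # nfsd and mountd should now be listed
exportfs -v           # shows exactly what is being exported

# on the client: check the export is visible, then mount it by hand
showmount -e 192.168.1.11
mkdir -p /server/var
mount -t nfs 192.168.1.11:/var/spool/up2date /server/var

Once a manual mount works, the fstab entry should behave too.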

<title>Does Linux do 64-bit?</title>

<question>I have the latest AMD64 processor, and every version of Linux I have tried with it so far gives the message `out of sync' after the welcome screen. I presume that is because Linux is a 32-bit operating system and is incompatible with the 64-bit machine. Is that true, especially of Sun Java Desktop System 2 on the February 2005 coverdisc [LXF63]? The Solaris system on the website mentions 64 but when I proceed with it, it only shows x86. FreeBSD has a download option for AMD64, but the only option of payment is a credit card (which I don't possess). Could you please help, as I'm desperate to have Linux running and am so fed up with Windows XP Pro that I feel like chucking the whole thing out the window. Do I have to go to the extent of purchasing a second x86 machine (which I presume 64-bit isn't) and installing Win98 on it, which will at least make boot disks? </question>

<answer>AMD64 processors are backwards-compatible with x86 binaries, so you can run a standard x86 Linux distribution on them. You can also download AMD64 versions of distributions such as Debian, SUSE and others and run 64-bit binaries on the system. Linux runs happily on both 32-bit and 64-bit processors, although be aware that a 64-bit distribution needs some extra compatibility libraries in order to run 32-bit binaries. 'Out of sync' sounds like a video problem, so you may want to try to force a text-mode install, or specify a video resolution at boot time; distributions display help screens when they boot to show you how to do this. I've had success with SUSE for AMD64, as well as Debian, so I would be interested to hear what progress you make.</answer>
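
As an illustration only (the exact options differ between distributions, so check the installer's own help screens), a Fedora/Red Hat-style installer accepts lines like these at its boot: prompt:

boot: linux text
boot: linux resolution=1024x768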

<title>Dual to the death</title>

<question>Before I start, my system specs are: a Shuttle XPC, Athlon 2400XP, 1024MB of RAM, DVD/CD-RW, a 120GB hard drive and a Leadtek 6600GT graphics card with a 19-inch Sony monitor. Oh, and a BT Voyager 105 USB ADSL modem. I recently used your cover DVD (LXF65, April 2005) to set up a Simply MEPIS Live CD, and was forced to install MEPIS on to the same hard disk as Windows XP, following the instructions given, using QParted. I split it into 105GB for XP, 14GB for MEPIS, 1GB for swap, and 4GB-ish for home. Yes, I know there are risks, but to be honest it all seemed to go quite well... However, instead of offering me the dual boot option that it should have, my PC was automatically booting into MEPIS with a 2.6 and 2.4 option. This in itself isn't such a big deal, and I could still access all the document and graphic files on the Windows partition. But when I tried to change the Active partition using QParted in Linux and boot with XP, I received an `NTLOADER' or `NTFSLDR' (or something similar) error, and a message asking me to press a key to reboot -- this effectively put the machine into a perpetual loop. Suffice to say that I can no longer boot into my XP OS using this hard disk. Even trying to reinstall Windows proved fruitless because it kept wanting to reformat the drive. Luckily I have a slightly older XP installation on another hard disk (30GB), and swapping the drives has allowed me to get internet access to find help. Using Norton SystemWorks and XP's own CHKDSK facility I've at least managed to get the original 120GB drive recognised as an E: drive now, and I do have full access to it. My problem now, however, is that I have my E: drive hooked up as a slave to my DVD/CD-RW, and my C: drive is acting as my main drive. My E: drive has just about everything I now need, while the C: drive is relatively old and out of date, so how do I swap them back again? I have tried to swap the two hard disks, but E: is no longer recognised as a System/Boot drive and thus XP goes through to the Windows XP logo and just hangs or restarts over and over. Ideally, I would like to have a dual boot OS option so I can learn and `play' with Linux without losing my XP partition. </question>

<answer>Wow, what a journey: I commend your persistence! You should be able to select the operating system you want to boot (Linux 2.4, 2.6 or Windows XP) from a boot loader such as Grub or LILO. The two loaders hand control over to Windows in quite different ways, so once you have figured out which one you are using, you may want to review the dual-boot documentation at www.tldp.org. Booting a Windows XP install from Grub can be done with a simple

rootnoverify (hd0,0)
chainloader +1
boot

at the command line. You may also be able to boot from a Windows boot disk and run fdisk /mbr to reinstall the Windows boot loader on to the disk. I would always advocate making sure you have good backups of your data before installing Linux or repartitioning your drives. It's rare for things to go wrong, but you can be sure that when they do, they really go wrong. As you've got access to the disk and the NTFS filesystem is intact, you should be able to recover the system using Windows tools and then rebuild the Linux boot loader configuration to dual boot the pair. Once you have the boot loader working happily, you can physically swap the disks so that the drive that is currently E: under Windows becomes C: again.</answer>
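
For reference, the same commands in a Grub menu.lst/grub.conf stanza look like this, assuming XP lives on the first partition of the first disk (adjust (hd0,0) if it doesn't):

title Windows XP
    # makeactive re-flags the partition as bootable, which matters here
    # because the active partition was changed with QParted
    rootnoverify (hd0,0)
    makeactive
    chainloader +1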

<title>Modem installation</title>

<question>I've used SUSE Linux 9.2 for about five years, with a U.S. Robotics 56k modem. I've just installed Homecall broadband from Homecall.co.uk. Installing on Windows went without any trouble, but how do I install it on SUSE Linux? Homecall's help desk directed me to http://speedtouch.com/support.htm for a driver, which I downloaded and unzipped. That gave me KQD6_3.012 and ZZZL3.012. So are these files drivers, and how do I load them? My modem is a Thomson SpeedTouch 330.</question>

<answer>Detailed documentation on using the SpeedTouch modem with SUSE 9.2 can be found at www.linux-usb.org/SpeedTouch/suse. Yes, the two files you downloaded are the correct files for use with your system, but you need to move them to the appropriate location, as per the HOWTO. Reader John Gregory has also sent in this advice about configuring SpeedTouch modems: "The easiest route to take is with SpeedTouchConf. http://speedtouchconf.sourceforge.net will give chapter and verse on what to download, where to get it and how to install it. I have used it with a SpeedTouch 330, WinXP and three flavours of SUSE (2.4 and 2.6 kernels). At the start of the connection process a driver file is loaded into the modem by the OS. This only happens once unless you reboot or hot-unplug the modem. The driver file is the same as used for Windows but must be the correct version for the modem." Thanks John!</answer>

<title>Stuck in root</title>

<question>I have a couple of pen drives for transferring files between my Windows laptop and my Mandrake 10.1 PC. I can get read access to the pen drive as a user but, whatever I do, I cannot get write access to the removable drive as a user, only as root. I cannot change the group to my sharing group or my user, and the owner remains root. Even logged in as root I am told that I do not have enough permissions to change the group/ownership of /mnt/removable. When I change permissions it is still inaccessible when I return to user, and ownership is returned to root. I have also been unable to transfer it to a sharing group. I have tried running the partitioner and allowing write access to all users but this also fails. I understand that if I install SUSE 9.2 I will get full read/write access, but is there a less dramatic answer?</question>

<answer>You can allow users to mount and write to devices by modifying the /etc/fstab file. A typical fstab entry that permits user mounting of a device is:

/dev/sdb1  /mnt/usb-key  ext3  defaults,user  0  0

The user option allows the device to be mounted by a non-root user, who can then write data to the USB device. Remember to umount the device before unplugging it from the machine, otherwise you risk data corruption from files not having been written out completely.</answer>
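
Since your pen drives are shared with a Windows laptop they are almost certainly FAT-formatted rather than ext3, so an entry closer to your case might look like this (assuming the drive appears as /dev/sda1; check the end of dmesg after plugging it in):

/dev/sda1  /mnt/removable  vfat  user,noauto,umask=000  0  0

The umask=000 option makes the mounted filesystem writable by everyone; use uid= and gid= options instead if you want to restrict it to one user or group.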

<title>You're too kind...</title>

<question>I have been wondering for a while if there is a faster alternative to KDE or Gnome for my Fedora Core 2 machine (Athlon 1.2GHz, 256MB). So I decided to try out the IceWM window manager, because I wanted something that was vaguely similar to what I was already using, with a task bar, launcher button and menu. It wasn't all plain sailing, but I did eventually get there. The first stumbling block was getting IceWM to start at all. Eventually I found I had to put an .xsession file in my home directory (and make sure it's executable). Here is what I use, so other readers don't have to struggle like I did:

# run profile to set $PATH and other env vars correctly
. $HOME/.bash_profile

# setup touchpad and the external mouse
xset m 7 2
xinput set-ptr-feedback 0 7 1.9 1

# run initial programs
uxterm &

# start icewm, and run xterm if it crashes (just to be safe)
exec icewm-session || exec xterm -fg red

The next problem was to figure out how to get any extra items on the launcher menu. I found that you can do this by editing files in the .icewm directory (a sub-directory of home). The main file to edit is menu (obvious really, when you think about it). The nice thing about this is you don't have to reboot your machine to get the menus active; they are immediately there when the file is saved. My one and only gripe, which is stopping me from using IceWM all the time, is that because it's so quick my mouse is too fast to control easily. My guess is that there is some kind of parameter to change in the xorg.conf file or in the nvidia-config file. Could you tell me what needs changing to get my mouse under control?</question>

<answer>Thanks for sharing your discovery with LXF. IceWM is a great little window manager, although I prefer to use Sawfish as it seems a little more solid in general use. Most login tools such as gdm and kdm will allow you to select a window manager, so you won't need to edit your .xsession file manually to change to the one you want. For anyone looking for a slightly less minimalist window manager, Enlightenment DR0.17 is looking pretty crazy these days (check out www.enlightenment.org). If you've got the time to compile the dozen or so support libraries for it, it's a really slick system, although it is still in development. A quick machine is recommended for Enlightenment, but if you turn off many of the bells and whistles it will work happily on older boxes. Now to your question. You can set the mouse speed using xset, for example:

xset m 5 1

You can also set a `Resolution' value in the Mouse section of your xorg.conf to adjust the speed of the mouse. In both cases it's a matter of trial and error, so tune your settings as you go. Of course, you will probably only want to change one at a time, otherwise you'll drive yourself crazy!</answer>
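
For example, a mouse InputDevice section in xorg.conf with a Resolution entry might look like the following; the protocol and device path here are only placeholders, so keep whatever your existing configuration already uses:

Section "InputDevice"
    Identifier "Mouse0"
    Driver     "mouse"
    Option     "Protocol"   "IMPS/2"
    Option     "Device"     "/dev/input/mice"
    Option     "Resolution" "800"
EndSection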

<title>What a turn-off</title>

<question>I have installed SUSE 9.2 and have a problem with switching off my Acer laptop. After I select Turnoff from the KDE menu everything goes fine until I receive the message, `The system will be halted immediately'. After that the system reboots instead of switching off. I have the same situation when trying to use the command line by typing in poweroff as root user. What's surprising is that the problem doesn't exist when I'm using the battery. I didn't have this problem with the Knoppix 3.6 Live CD or Yoper 2.1.0-4 either. If I try the boot options

apm=off acpi=off

it's the same story. When the computer is shutting down at the end of the process I again get the message, `The system will be halted immediately', but straight afterwards it reboots anyway. Giving the line

apm=off acpi=off

makes the machine hang instead.

I get:

   `The system will be halted immediately Master Resource Control:
    runlevel 0 has been reached Skipped services in runlevel 0:/'.

I installed SUSE in safe mode as well. When I tried to turn off the computer this message appeared:

   `The system will be halted immediately Master Resource Control:
    runlevel 0 has been reached Skipped services in runlevel 0: stty:  
    standard input: unable to perform all requested operations'.

I think I've tried everything, including upgrading the BIOS (which supports ACPI) and installing different kernels. SUSE is an excellent distro, but with this kind of problem I wouldn't be keen to stick with it. </question>

<answer>Please don't give up on it yet! I think we can help. Disabling both APM and ACPI is probably not a good idea, since it will disable all power management features. Your laptop is probably using ACPI rather than APM for power management, though it depends on the age of the machine, and ACPI in Linux has its fair share of bugs and problems with certain BIOSes. I've known ACPI to allocate IRQs of 191 to NICs and other crazy stuff, which doesn't make the system very stable. Occasionally tweaking BIOS options will help, but as it works with Knoppix and not with SUSE, I'd err on the side of caution and avoid breaking anything that isn't actually broken. I would suggest instead that you review the boot logs from your system with dmesg and inspect what the kernel finds with respect to your ACPI system. It's quite possible that it worked under Linux 2.4 and broke in 2.6 kernels. SUSE has kernel updates available now for its 9.2 release, so you may want to give one of those a go and see if you have the same problem. Another place to try is www.linux-laptops.net, where you can find out how other people have installed Linux on to the same laptop. It's worth remembering that many distributions patch their kernels, so if something works with Fedora Core 3 or Mandrake, it doesn't mean it will work with SUSE. You can always post a bug report with SUSE and find out if they have a workaround or a fix for it.</answer>
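
To pull the relevant lines out of the boot log, something along these lines is enough (on SUSE the full boot output is also kept in /var/log/boot.msg):

dmesg | grep -i acpi
dmesg | grep -i apm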

<title>Back to the source</title>

<question>I'm 12 years old and have a reasonable computer in my room, which dual boots Mandrake 10.1 and XP. But the only computer with access to the internet is the family XP machine (I'm not allowed to install Linux on it or boot from a Live CD). My computer is unlikely to have the net for a while, and the two machines are a long way away from each other, so sadly there is no network or shared connection. Now, if I try to download the source of a program and save it to a USB drive it sort of works, but when I come to extract it on Mandrake it says, `Error this is not a .bz file' (or whatever type it was; the same happens for RPMs), however I try to extract it, whether in a terminal or with Ark. I think Windows mucks it up when I download it but I'm not sure. Please help, as there is only a limited amount on your DVDs.</question>

<answer>If you have the full disc set for Mandrake 10.1, there is loads of software available which you can install and play with. We try to include popular software on the coverdisc each month, but apologise that 4.5GB of data every four weeks isn't enough for you...! If you want to install software on Mandrake, I would suggest starting out by downloading Mandrake RPMs and installing them on the system, as anything that is bz2-compressed is probably source code and can take some effort to compile. If you want lots of Linux apps, www.linuxemporium.co.uk is a great place to get Linux software cheaply. Trying a few different Linux distributions is a fun way to get open source experience without giving yourself too many headaches. </answer>
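
If you do carry a Mandrake RPM home on the USB drive, installing it as root is a one-liner; the package name below is just a placeholder, and urpmi has the advantage that it can pull any dependencies from your installation CDs:

urpmi ./some-package.rpm
# or, using the plain rpm tool:
rpm -Uvh some-package.rpm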

<title>Safety checks</title>

<question>I have a standard Enterprise Linux 3 server with no control panels and no hardware firewall. I only run an HTTP web server that requires MySQL and Sendmail for outgoing mail via PHP. There is only one user on the server: me. I have not touched the default install apart from disabling VSFTP. All data is backed up daily. What I'd like to know is where the system is vulnerable, where any attack is likely to target and which areas of my system I should be monitoring closely. This is how I see things from a security point of view:

Shell login is via SSH, and FTP is only available via SFTP. PAM, the authentication system, ensures that root cannot log in directly (I think).
Portsentry stops port scans.
The standard daily cron job shows me the logwatch activity and I take note of all the error logs such as failed attempts to log on (I'd really like to see a list of successful logons), plus disk space used.
Red Hat up2date keeps my software fully patched.
I don't have a firewall and see no reason for one, or iptables, although I could be very wrong on this one.</question>

<answer>You've asked a host of really great questions; I'll try to answer them all briefly. To see what else on your system besides just HTTPd and Sendmail is `exposed', we need to look at what makes up what I call your network profile. To see this for yourself, run netstat -antp to see which TCP services/ports are bound (and in your case exposed), as well as which binaries are associated with each:

# netstat -antp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address       Foreign Address     State        PID/Program name
tcp        0      0 0.0.0.0:10000       0.0.0.0:*           LISTEN       1018/perl
tcp        0      0 0.0.0.0:110         0.0.0.0:*           LISTEN       16577/xinetd
tcp        0      0 0.0.0.0:143         0.0.0.0:*           LISTEN       16577/xinetd
tcp        0      0 0.0.0.0:111         0.0.0.0:*           LISTEN       1809/portmap
tcp        0      0 0.0.0.0:80          0.0.0.0:*           LISTEN       918/httpd
tcp        0      0 0.0.0.0:21          0.0.0.0:*           LISTEN       875/vsftpd
tcp        0      0 0.0.0.0:22          0.0.0.0:*           LISTEN       16351/sshd
tcp        0      0 0.0.0.0:25          0.0.0.0:*           LISTEN       18632/sendmail: acc
tcp        0      0 0.0.0.0:443         0.0.0.0:*           LISTEN       918/httpd
tcp        0     48 69.20.9.105:22      64.39.0.38:32910    ESTABLISHED  19647/0
As you can see, there are around eight or nine different daemons binding to ports on a stock system. Now compare this with a remote portscan of your server using a tool like nmap (eg nmap -sS <IP>) to see what the world sees as your network profile. Remember to turn off Portsentry on your server first, or it will block you when you try the portscan! Your first layer of security is the network and/or iptables, so yes, I would look again at your decision not to have it. Iptables can stop bad traffic before it ever reaches a listening service, and it is excellent at preventing malformed or invalid packets from reaching your server; if a vulnerability were discovered in the Linux networking stack, you could be exposed without it. One big problem in this day of web forums, blogs and other cool web apps is security bugs in non-vendor-supplied application-layer packages such as phpBB and vBulletin. These apps offer themselves out via your daemons and expose you to all kinds of bugs that are not fixed by anything you have set up on your system. In fact, this is probably the most successful `flank attack' that we see these days with web hosting customers: attackers exploit some weak code in phpBB or vBulletin, which gets them local access as the apache user, and then they're free to upload and launch local exploits or brute-force attack tools to try for escalated privileges (ie root access). All I can say is that if you choose a package like phpBB or vBulletin, you should track its bugs and patches very closely. The SFTP/SSH root restriction is actually set in the /etc/ssh/sshd_config file, but you're right that root logins can also be controlled at the PAM layer. Portsentry is a good outer-layer warning and lockdown system that has saved many an insecure box. Try to combine manually going over your logwatch emails with tools such as chkrootkit, regular netstat checks and md5sum baseline comparisons. Red Hat's up2date is an essential tool in an enterprise server environment: if you're away for a few days the server can patch itself against most big vulnerabilities until you can get to manual patching.</answer>
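
As a minimal sketch of the kind of iptables policy that matches your profile (the ports listed are assumptions based on the services you describe, so adjust them to taste):

# default-deny, then allow loopback, replies and the services you actually run
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT     # SSH and SFTP
iptables -A INPUT -p tcp --dport 80 -j ACCEPT     # HTTP
iptables -A INPUT -p tcp --dport 443 -j ACCEPT    # HTTPS
service iptables save                             # Red Hat: keep the rules across reboots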

<title>Mail servers</title>

<question>I want to set up an IMAP mail server so that my wife, children and I can log on to any of our three PCs and see the same email, whether the PC is running Linux or Windows. I have a SUSE server set up for file and print serving that the three PCs connect to. I run a hosted website, which handles my main mail, and I also get mail through my ISP. What I want is for all mail to be dumped on to my server so that it can be read via IMAP, and to remain there unless deleted. The problem I have is understanding how I link Sendmail/Postfix, Courier IMAP and Fetchmail together. I understand what each bit does, just not which type of mail service I need to run: SMTP, POP or IMAP?</question>

<answer>Exim topped LXF66's mail server Roundup, but based on your needs I would recommend Postfix. Each user can then collect their mail from their home directory with their mail client. IMAP is a good choice if your family will be moving from PC to PC, as everything is always stored on the server, and because the server is local to the clients it will be just as quick as POP. You can also easily tie a webmail tool such as OpenWebMail or IMP into IMAP, which will give you mail access through a browser if necessary. Fetchmail can be configured to inject mail through your local mail transfer agent, which I'd suggest should be Postfix, and deliver it to each user. If there are separate mail accounts, each can be delivered to a different user; if there is a single combined account, such as many ISPs provide, specific messages can be filtered to the right user's mailbox.</answer>
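
A minimal ~/.fetchmailrc sketch for the ISP side of this, assuming a hypothetical POP3 account that should end up with the local user `alice' (Fetchmail hands each message to Postfix on localhost, which files it into the IMAP mail store):

set daemon 600                        # poll every ten minutes
poll pop.example-isp.co.uk protocol pop3
    user "isp-login" with pass "secret" is alice here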

<title>Identity parade</title>

<question>As part of my day-to-day work, I need to back up MySQL databases from various servers (both local to the office and external). I have created cron jobs for each server to be backed up at night; some servers have only one or two databases, but others have hundreds. My cron jobs are simple bash scripts. They take the name of the database being backed up and append the date and time to create a unique filename, then use mysqldump to retrieve the data:

mydatabase="mydatabase `date`.sql"
filename=${mydatabase// /_}
mysqldump -h mydbaseserver1.co.uk -u username -ppassword mydatabase > /var/backups/sqlbackup/mydbaseserver1/$filename

This all works perfectly. If a database fails to be backed up, the cron job sends the error report to me on email. However, the email does not tell me which database failed. As each cron job contains backups for more than one database, I could get a confusing email like this:

  `mysqldump: Got error: 2013:
   Lost connection to MySQL
   server during query when
   retrieving data from server.
   mysqldump: Got error: 2013:
   Lost connection to MySQL
   server during query when
   retrieving data from server.
   mysqldump: Got error: 2013:
   Lost connection to MySQL
   server during query when
   retrieving data from server'.

The subject of the message lets me know which cron job was running, but apart from going into the directory and looking through the files for backups that haven't been created properly (far too laborious and time-consuming!) there is no way to identify which databases failed. It does not really seem practical to create single cron jobs for every database, as there are hundreds of databases. Is there a way that the error email I get from the cron job can include the names of the databases whose backups have failed?</question>

<answer>Yes, it is possible to configure mysqldump to give more verbose output using the -v or --verbose switch. This should report the status of individual databases' successes and failures, and the mysqldump output will be emailed to you by cron. As an extra level of intelligence you could set up a local mail filter on your workstation, or procmail on the server, to look for a keyword synonymous with a failed database backup and only bring the message to your attention if a failure has occurred. To have cron mail you the results, add a MAILTO=username line to the top of /etc/crontab or an individual user's crontab. An alternative to mysqldump is mysqlhotcopy. Many administrators prefer it because of its supposedly superior locking and better reliability. You can find information on mysqlhotcopy from MySQL directly at http://dev.mysql.com/doc/mysql/en/mysqlhotcopy.html.</answer>
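
Building on your existing script, one approach (a sketch rather than a drop-in replacement, and the database names here are invented) is to loop over the databases and report each failure by name, relying on mysqldump's exit status; cron will mail anything the job prints:

#!/bin/bash
# hypothetical list of databases on mydbaseserver1
for db in customers orders stock; do
    filename="${db}_$(date +%F_%H%M).sql"
    if ! mysqldump -v -h mydbaseserver1.co.uk -u username -ppassword "$db" \
         > "/var/backups/sqlbackup/mydbaseserver1/$filename"; then
        echo "Backup of $db FAILED"
    fi
done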

<title>Impenetrable!</title>

<question>I have set up an OpenVPN server on one of my internal machines (a Linux machine) and have a problem talking to it from the outside world. I've tried everything, but I cannot get a connection to the damn server! I have no problem connecting to the VPN with the same configuration from an internal IP address, but as soon as I try to connect from outside my LAN, via my WAN interface, I have difficulties. My LAN is connected to the net by a Zoom ADSL X3 modem, router and firewall. I have made sure to allow UDP port 1194 forwarding to the local IP of the server (using the Virtual Server options). The Linux server does not have a firewall. Even when I run the server in a DMZ (totally open on the web) configuration it fails! That leads me to believe it is the VPN configuration that's messing up somewhere. The other concern I have is that the router operates automatic DHCP for the LAN, and I wonder if this could be the problem. The thing is, I don't know how to assign fixed IPs on this router. I have spent days trying to sort this out and have completely lost hope. </question>

<answer>The first step in this process is to use a tool such as tcpdump on the Linux box to see if it even receives packets coming from outside the network. If the box is open to the internet and it doesn't receive any packets, it must be an issue with the router that you have in place. As you can connect internally, I would suggest that OpenVPN is working and correctly configured, although it is worth checking that OpenVPN is listening on all the IP addresses it needs to for new VPN traffic. As the router is basically NATing the connection through, it shouldn't make any difference. You really need to get down to the most basic configuration, send some packets and see if they come through. It may be that your ISP is not permitting UDP traffic on that port, in which case you will have to call its technical support to confirm it. Many ISPs block IPsec for home users; however, OpenVPN is obscure enough that you'd think they wouldn't care about it.</answer>
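
On the server itself, something like this will show whether the forwarded packets are arriving at all (eth0 is an assumption; use whichever interface faces the router):

tcpdump -ni eth0 udp port 1194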

<title>Apache redirecting</title>

<question>I was first introduced to Linux a couple of years ago when I started using Plesk. Over time I wanted to do more and more things that Plesk wasn't geared to handle, not at all because it's a bad product but because I have some customers with really weird and diverse requirements. What I'm doing now is setting up another server without Plesk and trying to do all the things Plesk was doing manually. I've learned loads by doing this so far, but there are some things I still need to address. At the moment I'm focusing on Apache, and my question is simple but I can't find an easy answer. I'd like to be able to point a certain directory on a customer's site (call it http://domain.com/secure) to an entirely different website, which is hosted at their premises for their own internal policy reasons. The domain they are using is http://secure.domain.com. The secure.domain.com site is a new host with the appropriate DNS pointing to it. All the links in their site now point to the new location, but they are concerned about people who have bookmarked the old page. Obviously I can't set up secure.domain.com in my Apache config, as I don't run it. The client said they could do the redirect on their web page but don't want to take the traffic and overhead. In all honesty I don't know enough about it to discuss it with them properly.</question>

<answer>I'm convinced that there would be almost no extra load on Apache by having a web page do the redirecting as your customer suggested, but if they really want Apache to do the work at a lower level, it's dead easy to do. Try adding the following to the virtual host configuration block on your server:

Redirect permanent /secure http://secure.domain.com

There are several other options for this type of redirect, such as temp, seeother and gone. The Apache documentation has a good explanation of the differences between them, but essentially it comes down to the HTTP status code returned by Apache. With each of these, Apache on your server will give the browser the new URL and will not stay involved in the connection, which should suit your customer's policy. Your customer may also want to catch traffic arriving via this redirect and tell those visitors that the link has changed and that they should update their bookmarks.</answer>
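
In context, the customer's virtual host block might end up looking something like this; the ServerName and DocumentRoot are only illustrative:

<VirtualHost *:80>
    ServerName   domain.com
    DocumentRoot /var/www/domain.com
    Redirect permanent /secure http://secure.domain.com/
</VirtualHost>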

<title>Wi-Fi Scot-spot</title>

<question>ADSL is hitting even rural areas of northern Scotland these days, hence my need for a suitable Linux-compatible wireless modem/router. I have identified a number of potential devices without knowing their Linux compatibility, which apparently depends upon certain chipsets:

Netgear DG834GT complete with PCMCIA transceiver.
Belkin F5D7632UK4.
Linksys wireless 4-port ADSL Gateway, WAG54-UK.
3com Wireless ADSL modem/router complete with PCMCIA transceiver, 3CRWE754G72-AGBUN.
D-Link DSL-904 Wireless ADSL modem/router with 802.11g PCMCIA card.

In the May 2005 issue of the magazine there is a review of the OvisLink Multimedia VPN router and server [Reviews, LXF66], and you gave it an overall rating of 8 out of 10. However, the device does not seem to have a built-in ADSL modem. So your advice on the devices that I've listed here, or any others that are suitable, would be much appreciated. </question>

<answer>The sure-fire way to get a DSL modem and router that works with Linux is to find one that will terminate your PPPoE or PPPoA session and hand off plain Ethernet to your network. Even if the router doesn't terminate PPPoE itself but simply bridges it on to Ethernet, your Linux system can handle PPPoE out of the box very easily. You're right that the OvisLink doesn't have an ADSL modem built in. A device such as the Zoom X6 (www.zoom.com/products/adsl_overview.html) will do everything you need, and provides both wired and wireless Ethernet access to the network. The D-Link DSL-904 on your list is also a good choice, but check that the PCMCIA card will work with Linux before you buy it. A quick search on Google will locate the appropriate Linux kernel configuration required to make it work.</answer>