Answers 84


<title>Player's frustration</title>

<question>I have installed the Pipepanic game from the latest Linux Format DVD [HotPicks, LXF82] into my home directory and can run it by typing ./pipepanic in a console with

cd pipepanic-0.1.3-source
./pipepanic

However, I can't work out how to add this as an item to the K menu using the Edit K Menu section of KDE Control Centre. I don't know what command to put in the Command box. If I put /home/marrea/pipepanic-0.1.3-source/pipepanic in the box, and then go back and click on the Pipepanic entry I've added to the K menu, all that happens is that the little hourglass with Pipepanic on it goes round and round in the Kicker bar and a gear wheel icon bounces up and down for 30 seconds or so, and then they both disappear. Is it because I have installed Pipepanic in my home directory? </question>

<answer>It is failing because you are not in the pipepanic directory when running it from KDE. The program needs to be run from its own directory in order to find the files it needs. You can fix this by setting the Work Path to /home/marrea/pipepanic-0.1.3-source/ in KDE's menu editor. This effectively performs the same cd you typed in the shell. You may also need to specify the full path in the Command box. The safest way to make sure both of these are correct is to use the file selector icons to the right of the boxes. If you tick the Run In Terminal box, you will see any output from the program and, hopefully, get a clue as to where it goes wrong; that's how I saw it was failing to find a file. You will need to add something like ; sleep 5 to the end of the command, to keep the terminal window open for a few seconds after it exits. For example,

/home/marrea/pipepanic-0.1.3-source/pipepanic;sleep 5 
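For reference, the menu editor stores these settings in a .desktop file, with the Command and Work Path boxes mapping to the Exec and Path keys. A sketch of the relevant entries (the file's exact location varies with your KDE version, so treat this as illustrative):

[Desktop Entry]
Type=Application
Name=Pipepanic
Exec=/home/marrea/pipepanic-0.1.3-source/pipepanic
Path=/home/marrea/pipepanic-0.1.3-source/
Terminal=true

Set Terminal=false again once the game is working.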

</answer>

<title>That rsyncing feeling</title>

<question>I've been trying to set up an rsync script to back up the important contents of my home directory to a USB drive, and I'm having great difficulty whipping it into shape. Particularly confusing is how to use --exclude-from and (even more confusing) --include-from. I'm on an Ubuntu 6.06 system, with rsync 2.6.6. Here's an outline of what I want to happen. First, all of the non-hidden files, directories and their subdirectories etc in my home directory /home/dcoldric are to be backed up, except that for the directory /home/dcoldric/MyDownloads, I don't want any subdirectories to be included, just non-directory files. Another exception is that there is a very limited number of non-hidden subdirectories, such as /home/dcoldric/cxoffice/, that I do not want to back up. All of the hidden files and directories are to be ignored, except for a few. For example, I do want to back up /home/dcoldric/.netbeans and subdirectories, as well as .bashrc and .bash_aliases. Finally, I'd like the directory structure of the backup to mimic that of the original (except for the ignored directories). I have tried just about everything I can think of, to no avail. My latest variant looks like:

rsync -a --delete --safe-links --exclude-from=/home/dcoldric/bin/backupExcludes /home/dcoldric/ /media/USB/backup/dcoldric

where the backupExcludes file currently looks like this:

- /*
+ /dcoldric/
+ /dcoldric/.Creator/
+ /dcoldric/.java/
+ /dcoldric/.mozilla/
+ /dcoldric/.mozilla-thunderbird/
+ /dcoldric/.netbeans/
+ /dcoldric/.bashrc
+ /dcoldric/.bash_aliases
+ /dcoldric/MyDownloads/
- /dcoldric/MyDownloads/*/
- /dcoldric/.*
- /dcoldric/cxoffice
- /dcoldric/jdk*
- /dcoldric/sun
- /dcoldric/SUNW*

However, it appears to do nothing. </question>

<answer>The rsync command copies everything by default, so the --exclude option tells it what to skip. It may be clearer to think of --include as meaning "do not exclude". The exclude-from file you have written is actually a filter file. Filtering provides finer control, but there is no --filter-from variant; the correct way to use a filter file is with the option

--filter="merge myfilterfile"

Your current filter file does not work because it starts with - /*, which excludes everything. So when you say it does nothing, you and the program are both quite correct: that is just what you told it to do. The first match counts, so move - /* to the end. When a filter path starts with a /, it is matched relative to the root of the transfer, which here is /home/dcoldric. So you need to remove /dcoldric from the start of each path, otherwise you are trying to match /home/dcoldric/dcoldric/.mozilla and so on. Although it doesn't affect your current filters, you should be aware that

+ /foo/bar/
- /*

will match nothing. Because /* excludes everything in the base directory, the contents of foo are never checked, so foo/bar is not found. You need to force rsync to scan foo with

+ /foo/
+ /foo/bar/
- /foo/*
- /*

A working filter file would be

+ /.netbeans/
+ /.bashrc
+ /.bash_aliases
- /MyDownloads/*/
- /.*
- /cxoffice
- /jdk*
- /sun
- /SUNW*

Call this with

rsync -a --delete --safe-links --filter="merge ~dcoldric/bin/backupFilter" ~dcoldric/ /media/USB/backup/dcoldric/
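Before trusting it with real backups, it is worth previewing what rsync will do. Adding -n (--dry-run) and -v lists the files that would be transferred without writing anything to the destination:

rsync -avn --delete --safe-links --filter="merge ~dcoldric/bin/backupFilter" ~dcoldric/ /media/USB/backup/dcoldric/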

Note the trailing / on the source: without it, rsync would create an extra dcoldric directory inside the destination. </answer>

<title>Which Fedora?</title>

<question>I bought Fedora Core 4 and the Linux Format Special magazine in 2005 and now I want to install Fedora on VMware (in a Windows XP host). In VMware there are a few alternatives for Red Hat, such as `Red Hat Linux' and `Red Hat Enterprise Linux 2, 3 & 4'. I guess I can rule out the plain `Red Hat Linux' alternative, but for this version of Fedora, which one of the others should I choose? It could be important, since VMware's own VMware Tools greatly enhance the handling of a guest operating system's screen, mouse and pointer. Still, this facility hasn't yet worked for me on any other Linux distro that I've tried, and I cannot find any relevant information in the magazine. </question>

<answer>Almost every variant of Linux that I have tried to install on VMware (and I have tried a lot) has installed successfully, even if the specific distribution is not listed. Most of the time I use `Other Linux 2.6.x kernel', but for Fedora Core I choose the plain `Red Hat Linux' option. This causes no problem with installing VMware Tools as described on page 142 of the VMware Workstation manual (which you can download from www.vmware.com/support/pubs/ws_pubs.html). The steps are:

1 Remove any mounted CD/DVD discs.
2 Select VM > Install VMware Tools from the VMware menu.
3 Open the CD-ROM drive in the guest operating system.
4 Double-click on the VMware-Tools RPM file.
5 Give the root password when prompted.
6 Run vmware-config-tools.pl from a root terminal.
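If you prefer a terminal to the point-and-click steps above, the middle of the process boils down to a few commands run as root. This is a sketch only: the mount point is an assumption (some distros use /media/cdrom) and the exact RPM filename varies by Tools release:

mount /dev/cdrom /mnt/cdrom
rpm -Uvh /mnt/cdrom/VMwareTools-*.rpm
vmware-config-tools.pl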

You may need GCC installed for the final stage, in case vmware-config-tools.pl has to compile a module for your kernel. This is necessary when the installer has no pre-built module for your kernel, as is the case with Fedora Core 5. </answer>

<title>Securing Apache</title>

<question>I've just built an Apache web server to host some websites externally. Can you give me some general security tips? </question>

<answer>Aside from securing the pages via HTTP authentication or SSL where applicable, there are a number of things you can do in the httpd.conf file, as the default configuration can provide a potential attacker with specific information to help them target their attack. Firstly, make absolutely sure the ServerTokens directive is set to Prod. At its default value it will reveal the version of Apache you are using, the modules you have loaded and potentially your operating system. While security through obscurity is no substitute for keeping your software up to date, there is no need to give away more information than necessary. To see what your server is currently giving away, try executing

curl -I http://yourwebserver

Also make sure ServerSignature is set to Off: this stops Apache appending a signature line, and with it any server details, to its error pages. Do you want your users to have their own web-accessible folders? No? Then disable the userdir module. Similarly, are you using CGI? If not, remove the cgi-bin alias from the config. One other thing to be wary of is the Apache manual, which is sometimes aliased by default. Make sure directory indexes are forbidden, by removing Indexes from (or adding -Indexes to) the Options line in your <Directory> directives. If you are running PHP, ensure the expose_php directive in your php.ini file is set to Off. If other people are publishing content to your web server, you may also want to make sure that they cannot override these settings with a .htaccess file. Within the root <Directory> directive, set the AllowOverride directive to None, AuthConfig or another limited value; do not set it to All.
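Pulling those directives together, a minimal hardening sketch for httpd.conf might look like this. It is illustrative rather than a drop-in configuration, so adapt it to your own layout:

ServerTokens Prod
ServerSignature Off
<Directory />
    Options -Indexes
    AllowOverride None
</Directory>

</answer>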

<title>Serial terminals</title>

<question>I maintain some ancient industrial hardware, and have some simple test software I wrote many years ago in Quick Basic; I monitor the test results using HyperTerminal, set up to emulate a DEC VT100 on COM1 (9,600 baud). I also use a Thurlby LA160 logic analyser and a Velleman PC oscilloscope, all running under Windows. Can you tell me how I obtain a similar VT100 terminal display on Linux? Do I need to master Wine to run the Thurlby and Velleman software under Linux, and what about Quick Basic (compiled) programs? My current system is a dual-boot Windows ME and SUSE 10.0 machine. </question>

<answer>The Linux serial ports are numbered from /dev/ttyS0, which is equivalent to COM1. You may also have a link from /dev/modem to /dev/ttyS0. The usual replacement for HyperTerminal is Minicom, which is available with most distros, including SUSE 10.0. Minicom has a VT100 emulation mode, so it should do exactly what you want. The SUSE package does not set up global defaults, so you'll have to run

minicom -s

as root first. You also need to be a member of the uucp group in order to write to the serial device. You can set this in YaST > Security And Users > User Management, but you have to log out of KDE and back in for the change to take effect. It's likely that you'll need to use Wine to run any proprietary software, but Wine presents /dev/ttyS0 as COM1, so the programs will still be able to access your hardware. Your Quick Basic software will also require Wine to run as is, but it may be easier in the long run to port it to something like Gambas, a Linux equivalent of Visual Basic, or to a language that runs on both platforms, such as Python.
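Incidentally, if you prefer the command line to YaST, the uucp group change mentioned above is a one-liner from a root terminal (substitute your own login for yourname); you still need to log out and back in afterwards:

usermod -a -G uucp yourname

</answer>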

<title>Space invader</title>

<question>I need to add more swap space to my Linux machine but I don't have any unpartitioned disk space. Is there anything I can do? </question>

<answer>GNU/Linux is more flexible than most other operating systems in many respects, including swap space. First off, work out how much additional swap space you need. For argument's sake, let's say you want another 1GB of swap. Next, identify a partition on your system that has at least that amount of space free and won't be needing it any time soon. When I built my system, for example, I gave it a 4GB /opt partition of which only 1.5GB was used. Then it's time to create the file you are going to use for swap. To do this you use the dd command, which takes various arguments including a block size and a count. To create a file 1GB in size, use the following command:

dd if=/dev/zero of=/opt/swapfile bs=1G count=1

This command will write a 1GB file at /opt/swapfile. The if switch specifies the input source while the of switch specifies the output file. Next up you need to format it as a swap file:

mkswap /opt/swapfile
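One small precaution: a swap file can end up holding fragments of whatever was in memory, so it is good practice to make it readable by root only (recent versions of mkswap and swapon will warn you if it isn't):

chmod 600 /opt/swapfile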

Once this has been set up as a swap file you need to activate it by executing

swapon /opt/swapfile

You should be able to see it active on the system by executing cat /proc/swaps or simply free at the command line. To enable the swap during the boot process, add it to your /etc/fstab file:

/opt/swapfile   none   swap   defaults   0 0

</answer>

<title>Creditcard ADSL</title>

<question>I bought the Linux Format issue with Mepis on disc [LXF79] because of its superior hardware detection, which is where my installations always go wrong. Unfortunately, the network connection with the internet just does not work. Do you have any clue to how I can get my ADSL connection working with Linux? I have a Xircom Creditcard Ethernet 10/100 + Modem 56. In the connection settings I found that the address type was assigned by DHCP. I tried copying the other settings, including the IP address, subnet mask, standard gateway and the DNS, but it did not seem to do much good. </question>

<answer>The network side of this card is handled by the xirc2ps-cs module. This is included with Mepis. First, check whether the card has been detected. Open a terminal and type

su -
#give root password when prompted
lsmod | grep xirc

If you get no output, the module is not loaded, so type

modprobe xirc2ps-cs

No output from this command means everything is as it should be. Now run

ifconfig

to see a list of your network interfaces. There should be two: lo and your network interface, which I expect will be eth0. Now start the Mepis OS Centre from the KDE menu, go to the Network section and pick your network interface. Select Use DHCP For IP and also select DHCP under the DNS tab. Now your network should be configured automatically. If the card does not start automatically when you boot, you should type

echo "xirc2ps-cs" >>/etc/modules

This adds the module's name to the list that the system loads automatically at boot.
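As a quick sanity check that DHCP itself is working, you can also request a lease by hand from the same root terminal. This assumes the interface came up as eth0 and that the dhclient program is installed, which it usually is on Debian-based distros such as Mepis:

dhclient eth0

</answer>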

<title>Thingamajig.rpm</title>

<question>I have two questions, both concerning forgotten program names. The first follows on from a major deleting `oops!' I had recently. (Computers don't do what you want them to do, they do what you tell them to do, and I really need better protection from my own fallibility.) I once read, possibly in your own pages, about an undelete daemon. Any file delete command was intercepted (it must redefine the system unlink call or something) and converted to move the file to a trash folder. Then instead of immediately emptying the trash, as most people do, you find you can't. It persists until the trash totals a predefined size, or free space starts to fall below another threshold. The other issue is that I'm a web developer, and need to test on a wide variety of browsers. I have heard of a GTK and KHTML browser project, which would be very useful if only I could remember its name. I don't want to install Konqueror because it depends on just about all of KDE's bloat. I don't need a full-featured browser. Just something lightweight will be fine. </question>

<answer>The trash can program you are thinking of may be Delsafe, from http://delsafe.cjb.net. This works much as you describe, replacing library calls to move deleted and overwritten files to a trash can instead of deleting them. Multiple deletions or overwrites of the same filename are timestamped, and an undel program is provided to recover the files. Another possibility is libtrash, from http://pages.stern.nyu.edu/~marriaga/software/libtrash, which offers similar features. I suspect the KHTML project you are thinking of is Gtk+ WebCore (http://gtk-webcore.sourceforge.net). This is at an early stage and may not be representative enough for your needs. I would suggest that to properly test pages in Konqueror, you need Konqueror itself, especially if your pages use JavaScript. This isn't so bad, because you do not need to install most of KDE to use Konqueror. All you need is the kdelibs package and Konqueror itself. Most distros now split the KDE packages, so you can install just Konqueror instead of the whole of kdebase (as used to be the case). </answer>

<title>In the black</title>

<question>My question to you is about DansGuardian blacklist files. As an unknown number of websites are registered on a weekly basis, the need arises for a sysadmin to keep their blacklist files up to date. Not many of us have the budget to do this on a regular basis. Is it possible to have my blacklist files automatically updated by using spiders and crawlers? If this is possible, how can I achieve this, and what is the potential harm or gain to my setup? Perhaps you could tell me what is the minimum recommended spec for running DansGuardian comfortably. PS: What would it take to set up a Linux user group for Nigeria? </question>

<answer>The first point to bear in mind is that DansGuardian does not work purely on blacklists. It is a content filter, so its main work is done by checking the content of pages. However, it helps to keep your phraselists up to date as well, as site creators try to work around existing filtering restrictions. You can get updated phraselists from http://contentfilter.futuragts.com/phraselists. Using a spider to generate your own URL lists would be hugely expensive in terms of bandwidth, as you would be checking sites you would never visit, and DansGuardian would still rely on your phraselists. It is possible to download updated URL blacklists, and although some of these are commercial, others are free. The commercial lists are often amalgamations of free lists; you're just paying someone to do the work for you. There are a number of scripts on the DansGuardian website (in the Extras & Add-Ons section) that will download and install updated blacklists for you, and you can also get them from the Squidguard site at www.squidguard.org/blacklist. The required specs depend on your usage. For a home network, the requirements are minimal. The main burden on the system seems to be loading the rules at startup, so a decent amount of memory is more important than a fast processor. This also depends on what else is running on the computer. As for starting a user group, all you need is a few people to meet with and a place to meet, or a website and mailing list if your group will only exist in cyberspace. There are no formal requirements, just a number of people with a shared interest in Linux. Some groups have formal meetings, with demonstrations by members; others just get together in a pub to chat about Linux and other matters of interest. You might find the articles at http://en.tldp.org/HOWTO/User-Group-HOWTO.html and http://linuxmafia.com/faq/Linux_PR/newlug.html useful.
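One last DansGuardian tip: whichever update script you choose, running it from cron makes the updates automatic. A hypothetical weekly crontab entry follows; the script name and path are assumptions, so use whatever you actually installed:

0 4 * * 0 /usr/local/sbin/update-blacklists

</answer>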

<title>Load balancing</title>

<question>My company has a number of web servers that we use for intranet/ internet hosting. We want to load balance the traffic but don't want to either buy a load balancer or use round robin DNS. Can I do it with Linux? </question>

<answer>Yes! For a while GNU/Linux has benefited from the Linux Virtual Server project (www.linuxvirtualserver.org), whose code, ipvs, has been included in recent kernel releases. If you are using a kernel older than 2.4.28 you may need to patch and recompile your kernel source, though. You can tell whether ipvs is enabled with

cat /proc/net/ip_vs

If that file does not exist, try to load the module by executing

modprobe ip_vs

Assuming the module loads or has been compiled into the kernel, you are ready to go! There are three choices when it comes to the implementation of LVS within your network: direct routing, tunnelling or NAT (Network Address Translation). NAT is by far the easiest to configure but may require an extra layer of networking. Direct routing is the fastest and will work in a flat network, but can cause configuration issues with the receiving web server. Assuming you are going to use NAT, your new load balancer will need two network cards: one within the network in which your web servers are located, the other in a DMZ (demilitarized zone) or external network, in short the network your HTTP requests arrive on. Let's assume your external network is 10.1.0.0 and your web server network is 192.168.1.0. Assign the machine unused addresses, such as 10.1.0.1 and 192.168.1.1, then configure the routing table on each web server to use it as the default gateway:

route add -net 0.0.0.0 netmask 0.0.0.0 gw 192.168.1.1

At this point you need to configure how LVS will forward traffic to each machine. There are a number of load-balancing algorithms, including round robin, least-connection scheduling and destination hashing scheduling. To find out how each works, check out the LVS website. For now, we are going to set up round-robin load balancing. This simply sends traffic to each web server in turn, but the configuration of the other algorithms is much the same. In order to manipulate the ipvs/LVS table you need to use the ipvsadm binary. This is already installed on most modern Linux distributions (it was released in July 2003), but you may need to compile it yourself if you are using something older. The first step is to set up the VIP, or virtual IP address: the address your requests will be received on. We will assume it is the address you allocated to the server earlier in the 10.1.0.0 network:

/sbin/ipvsadm -A -t 10.1.0.1:http -s rr

Now add your web servers to the VIP (insert your own IP addresses):

/sbin/ipvsadm -a -t 10.1.0.1:http -r 192.168.1.10:http -m -w 1
/sbin/ipvsadm -a -t 10.1.0.1:http -r 192.168.1.11:http -m -w 1
/sbin/ipvsadm -a -t 10.1.0.1:http -r 192.168.1.12:http -m -w 1

This adds all three web servers to the VIP with a weight of 1 (see the -w switch). If you have a server you want to receive more traffic, simply increase its weight on a per-server basis. If you want it to take no traffic at all, set its weight to 0.
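You can inspect the resulting table at any time; ipvsadm's listing mode shows the virtual service, its real servers, their weights and the current connection counts:

/sbin/ipvsadm -L -n

</answer>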

<title>Bad option</title>

<question>I tried to mount one of my extra disks the other day and got the following error message:

mount: wrong fs type, bad option, bad superblock on /dev/hda1, or too many mounted file systems

When I tried to scan the disk with fsck I got this message:

fsck.ext3: No such file or directory while trying to open /dev/hda1

The superblock could not be read or does not describe a correct ext2 filesystem. I'm going to replace the disk but would like to recover the data. Is there any way to do it? </question>

<answer>Luckily, yes! The ext2 and ext3 filesystems have backup superblocks stored at regular intervals throughout the disk; you simply need to find out where they are and pass one to fsck when you repair the filesystem. Their position depends on the size of the partition. The easiest way to locate them is to rerun mke2fs with the -n switch, which causes mke2fs to do nothing but tell you what it would do. (The -n is vital: without it, mke2fs would create a new filesystem and destroy your data.)

mke2fs -n /dev/hda1
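If the filesystem's metadata is still partly readable, dumpe2fs offers another way to list the same locations:

dumpe2fs /dev/hda1 | grep -i superblock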

The output of either command includes a list of the locations where superblocks are stored throughout the filesystem. Using that information, you can instruct fsck to repair the filesystem using one of the backup superblocks.

fsck -b 8193 /dev/hda1

where 8193 is one of the backup superblock locations reported by the previous command. Once repaired, you should be able to mount the filesystem as usual. </answer>

<title>Dual-head hell</title>

<question>I use a dual-boot XP and Ubuntu machine at work, which has two monitors. I have two monitors at home and have set up a replica of my workstation. Everything works wonderfully, mostly thanks to the great article you guys did about dual head some months back ­ except that on my home machine it is desperately slow. At work I have a dual-head graphics card thanks to it being PCIe, so I have applied the Nvidia drivers and it all works great with hardware acceleration. The problem I have at home is that I have an AGP card and a PCI card providing the two video sources. They have different chipsets, and one uses the legacy Nvidia driver set and the other the new Nvidia driver set. I originally thought this would be quite straightforward to resolve, thinking I would just install both sets of drivers and then specify which one to use in the X Windows config file. Unfortunately both sets are referred to as `nvidia' which means I have to use a combination of the official drivers for one adapter and the standard open one for the other. Needless to say my desktop is now slow and cumbersome. I need a way to install both drivers and then refer to them within my xorg.conf file so that I can use the right driver for the right adapter and my desktop speeds up. My graphics cards are a GeForce FX 5200 (AGP) and a Riva TNT2 Model 64 Pro (PCI). </question>

<answer>These are not different drivers but versions of the same one, and it is not possible to have two different versions of the same module loaded into the kernel at the same time. This leaves you with a number of alternatives. You could do as you have already tried and use the nv driver for one card, but this is very slow. You could install an older version of the Nvidia driver, one that is compatible with the TNT2. Either 1.0.6629 or 1.0.7167 should be suitable here; they are the latest versions that work with legacy cards yet still support the FX 5200. This should work for now, but the older Nvidia drivers have a problem with the latest kernels, so a kernel update could break things later. Or you could look for a cheap non-Nvidia card for the second display, or a newer Nvidia card that uses the latest drivers. The simplest solution, though, would seem to be the one you have already used at work. The FX 5200 is a dual-head card. All you need is a DVI-to-VGA adapter (mine came with one), unless you are using a monitor with DVI input. This would enable you to set things up exactly as you did on your work computer. In that case, you could change your xorg.conf file to contain

 Section "Device"
 Identifier "NVIDIA Corporation NV34 [GeForce FX 5200] (rev a1)-0"
 VendorName "NVIDIA"
 Driver "nvidia"
 BusID "PCI:1:00:0"
 Screen 0

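For the second head you would add a matching Device section for the same card, selecting the second output with Screen 1, plus the corresponding Monitor, Screen and ServerLayout sections. A sketch along the same lines (the Identifier is just a label, so choose anything memorable):

Section "Device"
    Identifier "NVIDIA Corporation NV34 [GeForce FX 5200] (rev a1)-1"
    VendorName "NVIDIA"
    Driver "nvidia"
    BusID "PCI:1:00:0"
    Screen 1
EndSection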
</answer>