Answers 83

From LXF Wiki


<title>Netless Fedora</title>

<question>I have installed Fedora Core 5, and all seems fine – with one exception. When I try to run Add/Remove Programs or Package Updater, I am prompted for the root password. Then I get the error messages `Unable to retrieve software information' or `Unable to retrieve update information'. The only deviation from the default installation is that I do not use the Logical Volume Manager; instead I manually partitioned the hard disk into /boot (100MB), / (38GB) and swap (1024MB). Also, this PC is not connected to the internet. I have tried installing both from CD and from your supplied DVD, but I still get the same errors. </question>

<answer>The lack of an internet connection is the reason for these messages. Both programs try to read information from online software repositories to do their jobs. In the case of the Software Updater, this is unavoidable: by their very nature, updates are newer than the packages on the installation media, so it is not possible to use this feature without an internet connection. To prevent the error from Add/Remove Programs, you need to edit the repository files to disable all online sources and add one for the DVD. You need to be root to do this. Load /etc/yum.repos.d/fedora-core.repo into your favourite text editor, find the section starting [core] and comment out the baseurl and mirrorlist lines by placing a # at the start of each. Then add a new line reading

baseurl=file:///media/disk
This creates a new repository at /media/disk, where the DVD is mounted. You then have to edit the other .repo files and change any occurrences of enabled=1 to enabled=0. Now the only repository that is enabled is the one for the DVD, and running Add/Remove Programs should let you install software from the disc.
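
For reference, after the edit the start of the [core] section should have roughly this shape (only a sketch – keep your file's original commented-out lines and any other settings, such as gpgcheck, exactly as they are):

[core]
name=Fedora Core $releasever - $basearch
#baseurl=...the original baseurl line, commented out
#mirrorlist=...the original mirrorlist line, commented out
baseurl=file:///media/disk
enabled=1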

</answer>

<title>Snail mail</title>

<question>I have been experiencing delays in sending mail through my Qmail-enabled mail server. I have tried to make things go faster, but to no avail. Could you give me a list of things to check that might be causing the delays? </question>

<answer>The most common cause of such delays is DNS lookups. First and foremost, make sure that the server's hostname is resolvable, and that a PTR record exists for the IP address the server sends mail out on. You can also speed up lookups by running your own local caching name server, or disable DNS lookups altogether. If you run qmail-smtpd through the tcpserver wrapper, add the -H flag to its options so that it doesn't look up the remote host name in DNS or set the $TCPREMOTEHOST environment variable (to avoid loops, this option is mandatory for any server you run on TCP port 53). If you're not using tcpserver, you're probably running qmail-smtpd through inetd/xinetd; in that case add the -Rt0 flags under server_args in your inetd/xinetd configuration file. This stops ident requests being made when an SMTP connection is established; when those lookups hang, the symptom is a delay between the TCP connection being established and the banner being displayed. On a related issue, if your queue is constantly filled to the brim, you might want to create the file /var/qmail/control/queuelifetime and set it to something lower than the default of 604800 seconds (seven days), which is how long soft-bouncing messages are retried; one to two days is more reasonable. Together, these steps should noticeably reduce the delays.
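
As a rough sketch (the exact run script and paths vary between qmail installations), the relevant pieces might look like this:

# excerpt from a qmail-smtpd run script using tcpserver:
# -H skips the reverse DNS lookup, -R skips the ident lookup
tcpserver -v -H -R -l 0 0 smtp /var/qmail/bin/qmail-smtpd 2>&1

# retry soft-bounced mail for two days (172800 seconds)
# instead of the default seven (604800)
echo 172800 > /var/qmail/control/queuelifetime

</answer>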

<title>Disaster recovery</title>

<question>A friend of mine has a small law firm (six users on Windows XP Pro, one Linux proxy/mail server). They often do `bad things' to their machines, and he calls me to fix them. Usually this means a Windows reinstallation after backing up every document they have saved here and there. So I am trying to find and set up a totally automated, Linux-based disaster-recovery solution (something like Ghost or G4L) that will back up the whole disk once I have installed everything, and every night automatically back up every workstation – so, when the `bad thing' happens, all they should have to do is boot from the network or from a CD and have their system recovered from the image files on a local backup server. </question>

<answer>There are two separate issues here. The first is a complete backup that can be restored from a CD or over the network for a full reinstall in the case of a total disaster. The second is regular backups of data. For the first task, you can't really go wrong with Partition Image (www.partimage.org). This is a Linux program with a client-server option: you run the server on your Linux box and use a Live CD on each Windows computer to create an image of its hard disk for later recovery. You would need a Live CD distro that can also restore the disk from an image file on the server. RIP (Recovery Is Possible) is good for this (www.tux.org/pub/people/kent-robotti/looplinux/rip). The documentation gives detailed instructions on modifying the CD image to suit your needs, so you could add a short shell script and call it from /etc/rc.d/rc.local to automate a full system restore when booting from the CD. For the nightly incremental backups, BackupPC (http://backuppc.sourceforge.net) is a good option. This runs on the Linux server and requires no special software on the Windows PCs, as it accesses them via Samba. All you need to do on the PCs is set up shares so BackupPC can get at the files. All the work is done from the Linux box, so a simple Cron task will run the nightly backups. BackupPC has a web interface, so users don't need to learn any arcane commands to recover files from the backups. This program is particularly good when backing up a number of similar PCs, because it stores a single copy of any file that exists on multiple computers; combined with compression, this significantly reduces the space needed to back up a network.
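
As an illustration of the sort of thing you could call from RIP's /etc/rc.d/rc.local (the server, device and image names here are examples only, and partimaged must be running on the backup box):

#!/bin/sh
# connect to the partimaged server on the backup machine and restore
# the workstation's Windows partition from a stored image, in batch mode
partimage -s backupserver -b restore /dev/hda1 workstation1.partimg.gz

</answer>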

<title>Desktop capture</title>

<question>I am trying to find an application that will record whatever I do on my machine so that I can make a small movie of what I am doing. Can you recommend any software for me? </question>

<answer>There are a number of solutions to this, depending on what you want to do with the movie. If you want to publish your video on the web, Vnc2swf may be the best choice. This records a VNC session as a Flash animation. You'll need VNC installed (or TightVNC, from www.tightvnc.com). VNC is designed for running a remote desktop, but you can also run it on just one computer. Start a VNC session with

vncserver -depth 16 -geometry 800x600

and you will see a line like:

New `X' desktop is yourhostname:N

The last part is the hostname and display number. If your computer is not networked, you can use localhost. Now start recording the session with

vnc2swf -startrecording -geometry 800x600 -depth 16 -framerate 5 demo.swf yourhostname:N.0

Make sure the geometry, depth, hostname and display match the VNC server you just started. The .0 at the end is compulsory. A new window will open containing the VNC desktop session and anything you do in here will be recorded to demo.swf. End the recording by closing the window. The program will output some suitable HTML for viewing the Flash animation in a web browser, which you can redirect to a file if you wish. This size and frame rate are suitable for web use, but to display a local demo directly on a monitor or projector you may wish to increase both. To generate a movie file, you can use Vncrec. This works in a similar way to Vnc2swf, but creates a file in its own format, which you can convert to AVI or MPEG with transcode.

vncrec -record demo.vnc
transcode -x vnc --use_rgb -y xvid -k --dvd_access_delay 5 -f 10 -i demo.vnc -o demo.avi

As before, the geometry used here must match that with which the server was started. The -f option sets the frame rate of the video. The resultant file can be played with any video player, such as MPlayer or Xine. Whichever recording software you choose, if you want a program to be running at the start of the recording, start it from ~/.vnc/xstartup:

ooimpress sample.pps
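
A complete ~/.vnc/xstartup for a recording session can be as simple as this (the window manager line is only an example – use whichever one you prefer):

#!/bin/sh
# start a lightweight window manager, then the presentation to record
twm &
ooimpress sample.pps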

An alternative approach is Istanbul, from http://live.gnome.org/Istanbul. This is a Gnome program, but works with other desktops. It puts an icon in the panel: click it to start recording and click it again to stop. The result is saved as ~/desktop-recording.ogg, as a Theora video. This can be limiting compared with the alternatives, but it is quick and easy to set up. </answer>

<title>Disowned!</title>

<question>I recently switched distros from Xandros to Fedora Core 5. I transferred 3GB of data only to find that all the files in my home directory have root as both file owner and file group. Is there a script I can use to change all the permissions to my user name? </question>

<answer>If you've copied your home directory (which will look something like /home/dave) from one machine to another, the easiest way to restore ownership is to run a recursive chown on it as root, setting the correct user and group. This is safe to do on a home directory, because it usually contains only files and directories belonging to the user whose home directory it is.

chown -R macdaddy:macdaddy /home/macdaddy
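
Replace macdaddy with your own username and group; if you are unsure what they are, the id command lists them, producing output along these lines (the numbers will differ on your system):

id
uid=500(macdaddy) gid=500(macdaddy) groups=500(macdaddy)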

If you have files and directories owned by several different users and groups, you will need something more selective. Say user `dave' has numerous files and directories throughout /var/www/html and you want to change their ownership to user and group `bigmac': you could simply run chown -R over those directories, but that risks changing the ownership of files you didn't want to touch. In this case, use the find command to do the search and replace on the ownership, so that anything not owned by dave is left as it is:

find /var/www/html -user dave -group dave -exec chown bigmac:bigmac {} \;

This will find any files and directories within /var/www/html belonging to user and group dave, then change their ownership to bigmac. The {} is replaced with each file or directory matching the -user and -group criteria, and the backslash in \; hides the ; from the shell so that find sees it as the end of the -exec argument list. So, assuming you have a standard home directory, the easiest way to change ownership in one go is the chown -R command. Keep in mind that this approach is not suitable for every location on the filesystem, though! </answer>

<title>Lost Thunderbird</title>

<question>If I click on an email address in KDE, I get this error: `KDEinit can not start /usr/share/application/thunderbird/thunderbird'. Thunderbird is installed at /opt/thunderbird. I used to run SUSE, but now I run Gentoo, and when I transferred the /home directories I must have brought something over that KDEinit uses, but I can't work out what. Could you please tell me how to change this so that KDEinit looks in the right place? </question>

<answer>It looks like KDE is looking in the wrong place for Thunderbird. As with most KDE options, you change this in the KDE Control Centre. As with most KDE options, finding the right place in the KDE Control Centre can be tricky – there are so many options, and they are not always where you expect to find them. The Control Centre does have a search option, which usually helps – but not in this case (at least not with KDE 3.5.3). The option you want is in KDE Components > Component Chooser > Email Client. Select the Use A Different Email Client radio button, then click on the small icon to the right of the text box to open the application selector. By choosing the program from here, you ensure that you have the correct path. This will open Thunderbird, but without the recipient address or any other data. To fix that, add the following to the string used to start Thunderbird:

 -compose "mailto:%t?subject=%s&body=%B"

Hover the mouse over the text box to see the available options. </answer>

<title>Twisted NICs</title>

<question>I've just installed Fedora Core 5 and am a bit confused as to which of my network interface cards is the one in use; I have two, and on the previous install eth0 was used as the default. Here is the output from ifconfig:

eth0      Link encap:Ethernet  HWaddr 00:30:18:58:4A:A3
          inet addr:192.168.1.101  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

eth1      Link encap:Ethernet  HWaddr 00:50:BA:B3:B1:A5
          inet addr:192.168.1.152  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::250:baff:feb3:b1a5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:258479 errors:0 dropped:0 overruns:0 frame:0
          TX packets:264885 errors:0 dropped:0 overruns:13 carrier:0

While the network is working, it looks to me as if all traffic is going through eth1. Could you shed some light on this? </question>

<answer>Are you using DHCP on both network interface cards? If so, here's what's probably happening:

1 The first NIC is detected and the module loaded.

2 Its interface is brought up and DHCP used to assign it an IP address, as well as to set up DNS and routing.

3 The second NIC is detected and the module loaded.

4 Its interface is brought up and DHCP used to assign it an IP address, as well as to set up DNS and routing.

Step 4 resets the routing table's default gateway, overriding the route set in step 2. You can check this by running

route -n
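
The output will look something like this (the addresses and interfaces here are only an illustration):

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 eth1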

The line that shows a destination of 0.0.0.0 will end with whichever interface is used as default. Is there a reason why you use two NICs? If so, you probably need to set up the default route manually. Otherwise, disable the `activate device when computer starts' option for one of the NICs in the network configuration program. </answer>

<title>Self browsing?</title>

<question>I run a hosting service with over 100 domains configured on it. Our server seems to have been sluggish for the last few days. I did some preliminary tests (using netstat) and noticed that there were a lot of connections from my server on TCP port 80 to my server on the ephemeral ports. From the output that I got I understand that I have connections originating from Apache on port 80 to the various other ports on my server. But why? How can my server be browsing my own websites? I run Apache 2 on Red Hat Enterprise Linux 3. </question>

<answer>This certainly sounds like one of your more recently added websites is responsible for the new behaviour. From the netstat output you describe, some code on one of the sites is opening connections back to your own web server (a script fetching pages from one of the hosted domains, for example). By looking at what your Apache server is doing, you can compare the number of connections netstat reports with what Apache is serving out. Enable this by placing the following in the Apache configuration file (/etc/httpd/conf/httpd.conf):

ExtendedStatus on
<Location /server-status>
SetHandler server-status
</Location>

If you browse to www.domain.com/server-status?refresh=5, you will get a five-second update of your server's status. Pay particular attention to CPU usage (CPU) and the number of seconds since the beginning of the most recent request (SS). Also, by correlating the number of connections from the netstat output with the number of connections on a particular virtual host, you will quickly find the culprit! </answer>

<title>Sendmail won't</title>

<question>I am in the process of setting up a server to host our laboratory management software (which I wrote in PHP, owing no small amount to the LXF tutorials!). I've gone for a Kubuntu server install with Sendmail – this is where the problem starts. I've run through the basic Sendmail configuration, leaving things alone as I understand almost nothing about Sendmail's config! It will send mail to people on the local network (eg john@localnet.co.uk) but nothing gets to the outside world (eg john@hotmail.com). I'd love some advice – or maybe a tutorial on setting up a mail server? </question>

<answer>Sendmail is not the ideal choice for you. While it is undoubtedly a powerful mail server, it is also difficult to configure. Postfix or Exim would be a better choice; both are available as packages through the Ubuntu repositories (Postfix is the default mail server and is on the Ubuntu installation CDs). These servers have heavily commented, plain-text configuration files that make learning to configure them much simpler than battling your way through Sendmail's dense configuration options. Whichever server you choose, you should consider using Webmin to configure it. As well as presenting the options through a friendly web front-end, it makes it harder to misconfigure the server in a way that could lose mail or compromise security. You can still read or fine-tune the config files by hand if you wish, so Webmin helps you learn the configuration options rather than hiding them. Whichever server you end up running, the logfiles should give a reason for the failure. Run

tail -f /path/to/logfile

and try to send a mail to the outside world. You should see an error message relating to the failure. This could be anything from a DNS failure (although that would be unlikely if other internet activities work) to your ISP blocking outgoing SMTP traffic. Many ISPs do this as an anti-spam measure, either redirecting all SMTP traffic to their own mail server or blocking it entirely. If this is the case, you need to set up your mail server to use your ISP's server as a `smarthost'. This means that all mail not for your local network is sent via that server. To do this with Sendmail, put the following in sendmail.cf:

DSmail.isp.com

replacing mail.isp.com with your ISP's SMTP server. In Postfix, the line is

relayhost = mail.isp.com
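
With Postfix, for example, applying the change is a matter of adding that line to the main configuration file and reloading (a sketch, assuming the usual /etc/postfix/main.cf location):

# as root: append the smarthost setting, then tell Postfix to re-read it
echo 'relayhost = mail.isp.com' >> /etc/postfix/main.cf
postfix reload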

If you use Webmin, this is the first option in the Sendmail module and the fourth in the Postfix module. </answer>

<title>Damn Grubby</title>

<question>I have a PC box with multiple partitions and a few Linux distros installed to have a play with before I settle down to make one of them my favourite. I'm using XOSL as boot manager, and it works happily with a number of distros – and even things from Redmond! But it is seriously flummoxed by the Damn Small Linux distro from LXF80. The DSL hard disk installation script offers no choice over where the Lilo or Grub boot manager writes its stuff to – it always goes straight into the master boot record of the hard disk (the very same spot occupied by XOSL!). So when I restore XOSL, it finds all the other OSes again, but not DSL. Or the PC boots only to DSL. They don't play nicely together. For the benefit of a beginner, could you please give a suitable guide to setting up the bootable bits (either Lilo or Grub will do) on to the partition that DSL is installed on, so that XOSL can find it and start it? </question>

<answer>Installing Grub to a partition instead of the MBR is easy, so it's a shame that DSL does not offer this option. For the sake of this example, we will assume that DSL is installed on /dev/hda5. Boot into DSL, open a root terminal and run grub. This will put you in the Grub shell, where you type

root (hd0,4)
setup (hd0,4)
quit

Grub counts from zero, so the first disk, fifth partition (hda5 in Linux terms) is hd0,4. Now you have a bootloader for DSL installed to the partition and you can tell XOSL to boot from this partition. When XOSL boots DSL, you will get the Grub menu ­ which may be a little pointless as you have already chosen which OS to boot. You can get rid of it by editing /boot/grub/menu.lst and changing the timeout line from 15 to 0. If you want to be able to choose from the options DSL offers in its Grub menu, set the timeout to a low, but non-zero value. </answer>

<title>Slow narrowband</title>

<question>I am struggling to find a reasonably priced broadband provider that deals with Linux. I looked for a high-speed dial-up service for Linux, without success. So I examined my download statistics under various distros. Fedora Core 5 comes bottom, with a peak of 1.8kB/s and an average of about 0.7kB/s. Fedora 4 and SUSE achieve about 3kB/s maximum, with an average download speed of about 1.5kB/s. Knoppix 4 is a little better. Xandros 3 gets about 4kB/s, averaging about 2kB/s. The best is Mandriva 10.1 (using Mozilla and Epiphany), which peaks at about 13kB/s and averages about 6kB/s. These have been tried with a variety of connections and at a variety of times, but the results were pretty consistent – they all do badly at about 7pm and 10am, and all seem to do best on Sunday morning. I am using a 56k external serial modem. Any ideas on getting my average speed up to double figures? PS Any ideas on networking two Linux boxes using different distros? </question>

<answer>There are two UK ISPs that specialise in Linux users: UKLinux.net and UK Free Software Network (www.ukfsn.org). Both of these provide ADSL as well as dialup. Your speed problems do seem a little strange, but they're difficult to get to grips with as you have provided so little information – not even the make of your modem. It would be interesting to compare the modem configurations set by each of the distros. Using a browser to measure download speeds is not the most reliable test, as there are too many variables affecting it, including your ISP's proxy server. A better test would be to try downloading a file with wget. Try this command with each of the distros:

wget ftp://ftp.mirrorservice.org/sites/ftp.kde.org/pub/kde/stable/3.5.2/src/kdeaddons-3.5.2.tar.bz2

Any file on a good UK-based FTP server should give a reasonable test. You are not going to see double figures with a 56k modem unless you are downloading compressible data, such as Usenet postings or web pages (but not their images). The best you can hope for with already-compressed data, such as the above file or images, is around 7kB/s, and files like this give the truest indication of your connection quality. The times you mention are interesting: 7pm is a peak time for internet usage in the UK (the web or Emmerdale: you decide) while Sunday morning sees quite low usage. It would also be worth asking BT to test your line; even if it reports nothing wrong, the act of testing it makes a difference in many cases. If you choose an ISP for its Linux support, you are entitled to expect that support. I would suggest that you open a dialup account with UKFSN (it's pay as you go) and ask both ISPs for help with your connection speeds. The one that is most helpful should be the one to get your broadband business. As for your question about networking two computers with different distros, it's just the same as networking two computers running the same distro. While the configuration tools may vary, most distros are very similar at heart, and NFS, HTTP, Samba or whatever else you want to network with work the same on all of them.
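
For instance, sharing a directory between the two machines with NFS needs only a couple of steps, whichever distros are involved (a sketch – the hostnames, addresses and paths are examples, and both machines need the NFS packages installed):

# on the machine exporting the files: add a line like this to
# /etc/exports, then restart or reload the NFS server
/home/shared 192.168.1.0/24(ro,sync)

# on the other machine: mount the share
mount -t nfs server:/home/shared /mnt/shared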

</answer>

<title>Mail server Evolves</title>

<question>I am trying to fix an authentication issue with my mail server, and the only way I have been able to test it is by setting it up in Evolution. Is there any way I could test it without having to set up an account in Evolution? </question>

<answer>One of the best ways to test a range of different services, including SMTP AUTH, is to use Telnet. Now, I would never recommend plain Telnet for logging in to a machine, but for testing services it is invaluable. To troubleshoot your problem, we want to connect to the mail server on port 25 and authenticate using Base64-encoded strings (you can read about the encoding at http://en.wikipedia.org/wiki/Base64). First, a few helpful strings and their Base64 encodings, generated with www.dillfrog.com/tools/base-64_encode:

`VXNlcm5hbWU6' decodes to `Username:'
`UGFzc3dvcmQ6' decodes to `Password:'
`dGVzdF9seGZAcmV6ZC5jby51aw==' decodes to `test_lxf@rezd.co.uk'
`Zm9vYmFy' decodes to `foobar'
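
You don't have to use a website for the encoding; if the coreutils base64 tool is installed, you can generate the same strings locally (printf is used rather than echo so that no trailing newline is encoded):

printf 'test_lxf@rezd.co.uk' | base64
printf 'foobar' | base64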

The following lines show the dialogue used to test that the server is authenticating; some of the strings are the Base64 encodings detailed above. First, telnet to the mail server's domain name or IP address (ie mail.rezd.co.uk or 10.0.0.1) on port 25:

telnet 10.0.0.1 25

The server will answer with an SMTP banner:

Trying 10.0.0.1...
Connected to mail.rezd.co.uk (10.0.0.1).
Escape character is `^]'.
220 mail.rezd.co.uk ESMTP

Issue the EHLO command:

EHLO other.domain.rezd.org.uk

Next, the server tells us what it supports. This can vary from mail server to mail server.

250-mail.rezd.co.uk Hello other.domain.rezd.org.uk [192.168.0.1], pleased to meet you
250-ENHANCEDSTATUSCODES
250-PIPELINING
250-8BITMIME
250-AUTH DIGEST-MD5 CRAM-MD5 LOGIN PLAIN
250 HELP

Authenticate to the mail server with

AUTH LOGIN

It sends out the username prompt:

334 VXNlcm5hbWU6

Now we send the name of a user that we are going to authenticate with, eg test_lxf@rezd.co.uk:

dGVzdF9seGZAcmV6ZC5jby51aw==

Next it asks for the password:

334 UGFzc3dvcmQ6

And we supply it:

Zm9vYmFy

And finally it says yes, so we know that authentication is working:

235 2.0.0 OK Authenticated

If we get the following, we know that there is an issue with the authentication in some way:

535 5.7.0 authentication failed

This is sufficient to test authentication; if we wanted to test sending mail we could continue the SMTP dialogue. </answer>