Answers 114

<title>Local network resolution</title>

<question>I have a desktop running Fedora 9 and a laptop running Ubuntu 8.04, as well as my son’s and wife’s machines running XP. These are all connected via a D-Link router. I connect my desktop and laptop via SSH but use the local IP number to do this. The issue for me is that all machines use DHCP to get their IP addresses, so they change regularly. What I’d like to do is use computer names for each machine and then resolve these to the actual IP addresses. To do this, I believe I need to set up a DNS server locally. I really don’t want to use static IP addresses. </question>

<answer>Most routers also act as DNS servers, so yours is probably already doing this. Some also have the facility to specify the IP address and hostname given to particular computers. The normal approach with DHCP servers is to pick an unused address from the pool of available addresses each time a request is received, but it’s sometimes possible to specify which address a particular computer should receive. The computer is identified by the MAC (Media Access Control) address of its network card. If your router allows this, you can specify the MAC address of each computer along with the preferred IP address and hostname. When you’ve done this, access by hostname should just work. The MAC address is six pairs of hexadecimal digits (such as 01:23:45:67:89:AB) and can be found in the network properties, or by running ifconfig in a terminal. On Windows, run ipconfig /all in a command prompt. If your router can’t handle this, you can use dnsmasq (www.thekelleys.org.uk/dnsmasq): a useful, lightweight DNS and DHCP server that would suit your needs (I use it on my home network). This will take care of everything for you, but you’ll need to set up the machine running dnsmasq to use a static IP address. Disable the DHCP server in your router and put the following in /etc/dnsmasq.d/local:

log-facility=/var/log/dnsmasq.log
domain=example.com
dhcp-range=192.168.1.128,192.168.1.192
dhcp-option=option:router,192.168.1.1
dhcp-host=00:1A:92:81:CB:FE,192.168.1.3,hostname

The first line sets up logging, which you may need if things don’t work as expected first time round. The next line contains the domain of your local network and the third line contains the range of addresses to be allocated by DHCP, followed by a line giving the router address that all hosts will need to know to be able to contact the internet. The final line is repeated once for each computer, and contains the MAC address of that computer, the IP address to be allocated to it and the hostname to be given to it. The IP address is outside the dhcp-range specified previously, to prevent it being handed out to another computer. Make sure /etc/resolv.conf on this computer contains the address of at least one DNS server. If your ISP changes DNS addresses from time to time, it may be best to put the router’s address in here and let the router and ISP sort the correct addresses out over DHCP. If you do not want to use your ISP’s DNS servers, put the ones you want in resolv.conf. Start dnsmasq, or restart it if it was already running when you edited the configuration file. Then reconnect each of your other computers and they should be given the hostnames and IP addresses you want. More importantly, you should be able to contact each of them using the hostnames, so you don’t have to remember the numbers anymore.
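As a rough sketch of those last two steps, assuming the router sits at 192.168.1.1 (substitute your own address) and that your system uses SysV init scripts:

# point this machine's resolver at the router (or your preferred DNS servers)
echo "nameserver 192.168.1.1" > /etc/resolv.conf
# restart dnsmasq so the new configuration is read
/etc/init.d/dnsmasq restart

Run these as root; the init script path shown is a common default, but your distro may use a different command to restart services. </answer>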

<title>That rsyncing feeling</title>

<question>I’m highly interested in running backups – ideally decentralised – regularly. To do that, I’ve found the nice utility called rsync. Here’s the command I run to get a local copy of my whole home directory in another folder:

rsync -avz --delete-after /home/white/ /home/white/Backup/

However, I would like to filter out specific files (typically source files and not data files) so that I come up with a subset of the filesystem. In that way, I would be able to copy the most sensitive files on to a remote hard drive and a local USB pen drive. </question>

<answer>There are a couple of arguments to rsync that will do what you want. First, though, is /home/white/Backup on a separate filesystem to /home/white? If so, you should add the -x or --one-file-system option, otherwise you’ll find yourself trying to back up the backup directory on to itself, which will quickly fill it; if Backup is on the same filesystem, exclude it explicitly with --exclude /Backup instead. The -x option is also useful when backing up the root partition, to stop it trying to back up virtual filesystems such as /dev, /sys and /proc. To exclude particular files or directories, you can use the --exclude option:

rsync -avxz --exclude '*.c' --exclude '*.h' --exclude .thumbnails ...

Note that the first two exclude patterns are quoted to stop the shell interpreting the * characters, while the third excludes an entire directory. Your command line is going to get very long if you have a lot of files to exclude, but you can put the patterns (no need for quotes this time) in a file, one per line, and then call:

rsync -avxz --exclude-from ~/myexcludes ...

The exclude options are fine for filtering out simple patterns or single directories, but what if you want more sophisticated filtering? The --filter option provides this and it’s comprehensive enough to have its own section in the man page, which you should read carefully before trusting the security of your data to the rules you create. However, you can simplify the use of filters by putting the exclusion and inclusion rules in a file called .rsync-filter and adding -F to rsync’s options. This argument tells rsync to look for .rsync-filter files in every directory it visits, applying the rules it finds to that directory and its children. The format of .rsync-filter would be:

exclude *.c
exclude *.h
exclude .thumbnails

You can use include rules as well as exclude rules. Each file is tested against the rules in order until one matches, and it is then included or excluded according to that rule (subsequent rules are not checked). Files that match no rules are included. This can be used to include some files that would otherwise match an exclude rule, by placing an include rule before it. You can also use this to back up only specified directories with something like this:

include /mail/***
include /documents/***
include /photos/***
exclude *

The leading / anchors the pattern at the directory you are backing up, not the root filesystem, while the trailing /*** matches both the directory itself and everything inside it (without it, only an empty directory would be transferred). If these rules are in /home/white/.rsync-filter, the first one matches /home/white/mail and its contents. This should be enough to get you started, but you must read the man page to be sure you understand what you are doing. Remember that the second rule of backups is that you must verify them after making them (the first rule is to do them in the first place!).
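To pull it all together, a backup run using such a filter file might look like this sketch; the destination, a USB stick mounted at /mnt/usbkey, is only an example path:

# -a archive mode, -v verbose, -x stay on one filesystem, -F read .rsync-filter files
rsync -avxF --delete-after /home/white/ /mnt/usbkey/backup/

Check the contents of /mnt/usbkey/backup afterwards before you trust it. </answer>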

<title>SUSE and MadWifi</title>

<question>Can you tell me which version of the MadWifi drivers works best with SUSE Linux? Also, is there a free application that will report a numerical value for the frequency at which I’m exchanging bits with my router? By frequency, I mean the number of cycles per second (in the 2.4GHz band), not data transfer rate in kilobytes per second. </question>

<answer>As a general point, it’s usually best to use software from your distro’s repositories whenever possible. This software has been tested to work with that distro, both by the developers before release and the users after. Any problems that do show up can be reported and dealt with through the distro’s bug-tracking system, usually very promptly. OpenSUSE 11.0 has a prerelease version of MadWifi 0.9.4 in its repositories, so you should try this first. If this gives you problems, you could try compiling 0.9.4 from source (full details are on the MadWifi website at http://madwifi.org). The main reason for doing this has nothing to do with the distro you’re using; it’s only necessary if there’s been a change relating to the hardware that you use. There’s not much development on the MadWifi driver now, because most of the team’s effort is directed at the new ath5k driver that’s included with recent kernels. As this improves, gaining support and better performance for more cards, the need for a separate driver package will reduce and eventually disappear. This is the way things generally work with Linux; once an open source driver proves itself, it’s usually incorporated into the kernel. Many computers work perfectly now with no external drivers at all, and the proportion will increase as the kernel is able to handle more hardware directly. You already have software that will report the frequency used by your card and router, and any others in range. The wireless-tools package, which you should have already installed on your machine, does this and a lot more. Any of these commands will give the information you want, in a different context in each case.

iwconfig ath0
iwlist ath0 scan
iwlist ath0 frequency

The first gives details about the connection between your computer and the access point, the second gives a list of all visible wireless access points, and the last shows the frequencies available on your card, along with the one currently in use. These are administrator commands, so you need to run su in a terminal to become root before running them.
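If you only want the number itself, you can filter the output; a quick sketch, assuming the ‘Current Frequency’ line that iwlist normally prints:

# show only the frequency currently in use
iwlist ath0 frequency | grep -i 'current frequency'

Replace ath0 with your interface name if it differs. </answer>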

<title>A faster network please</title>

<question>I’m thinking about upgrading my home network to Gigabit Ethernet. All my computers have Gigabit network cards, but my router has a four-port 10/100 Ethernet switch built in. If I connect my router to a Gigabit switch and connect all my computers to the Gigabit switch, will I achieve Gigabit speeds, or do I need a Gigabit router? </question>

<answer>The only traffic that would need to go through your router if you did this would be the traffic that goes to or from the internet. The only reason all traffic goes through your router device now is that it also contains the network switch, but the data doesn’t actually pass through the router part of the device unless it needs to reach the Big Bad Web. Any two computers directly connected to the switch would communicate at the maximum possible speed, which would be Gigabit if both had Gigabit network cards. Your router is a 100Mbit device and would be connected to the Gigabit switch, but this would not affect the speeds between other devices. Unlike a hub, which operates at the speed of the slowest device connected to it, data flowing through a switch only goes between the two devices involved in the transfer and is unaffected by anything else on the switch. The same goes if you had a computer with a 10/100 network card connected to the switch: only transfers involving this device would drop to 100Mbit speeds. You can find Gigabit broadband routers, but they’re more expensive and the only difference is that the built-in switch is Gigabit. If your router needs to be replaced, one of these might be worthwhile, but otherwise add a Gigabit switch and you’ll get the same performance and more network ports. </answer>

<title>Aspiring Linux user</title>

<question>I’ve just got an Acer One. Should it have any security on it, such as AVG? Comet, where I bought it, said I have to have McAfee or one of the others. I do hope you can help me. </question>

<answer>You don’t need antivirus, anti-spyware and anti-trojan software like you do on Windows, but there are some steps you should take to improve security. The most important is to make sure that your wireless connection is secure, so if you have your own wireless router at home, make sure you have enabled WPA encryption. The alternative is WEP, an older and easily cracked encryption protocol, but still better than running an open connection. Because Linux software is open source, there’s no opportunity to hide malware within the code, since someone will always find it. Stick to software installed through the Acer’s own package manager, which will have been verified by Linpus Linux’s own developers. Viruses are unheard of on Linux, so you don’t need to worry on that front. The salesperson you spoke to obviously has no idea that the Aspire One is not running Windows, or they wouldn’t have suggested McAfee, which isn’t much use for systems running Linux. There is an antivirus program for Linux, ClamAV (www.clamav.net), but it’s most useful on computers that share files with Windows systems, because it detects Windows viruses too. There is a good article on The Register about tweaking your Aspire One, which you may find helpful: www.reghardware.co.uk/2008/09/05/ten_aspire_one_tips.
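If you do end up passing files to Windows machines and want to scan them, a minimal ClamAV sketch would be (the ~/Shared path is only an example):

# update the virus signature database first
freshclam
# recursively scan a folder of files destined for Windows machines
clamscan -r ~/Shared

Both commands come with the standard ClamAV package. </answer>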

<title>Filesystem fears</title>

<question>I’ve read that it’s better to use an ext2 or FAT filesystem on a USB key, because they make them last longer than if you used a journalled system such as ext3 or ReiserFS (or even NTFS). This got me thinking: if you have a USB key that you use to exchange files between work and home, it would presumably be read and written to twice a day. On the basis of the working year being about 240 days, it would last 104 years with a life expectancy of 100,000 reads and writes. I don’t quite understand the ins and outs of journalled file systems, but I believe they automatically check the disk and then verify the file when it’s written, which would make a read one ‘use’ but a write three ‘uses’, so they would reduce the life of this over-worked USB key to a measly 52 years. As these things can be bought for a few pounds these days, is it a false economy to be so careful with your USB key? </question>

<answer>There’s more to a filesystem than just the files – there’s also metadata, such as file permissions and time stamps. Then there are directory indices to consider. When writing to a file, all of these have to be updated. So if you copy a directory containing 10 files to a disk, that means 11 directory entries to be updated. With a FAT filesystem, the file allocation table that gives it its name is stored at a single location, so every action on the disk involves reading or writing this location, and that’s what causes the wear. If a device is mounted with the sync option, there can be many writes to this location for each file that is updated. One kernel ‘feature’ once caused this to be written to for every 4KB of data written, which resulted in my (expensive at the time) 1GB device failing in a very short time when I was writing 700MB Knoppix images to it. Add to this the journal, which contains records of every transaction, and you can see that parts of the filesystem are worked very hard. Yes, the devices are cheap enough now, but their contents may not be. To use your example of transporting data between work and home, what happens if you take some important files home to work on for an urgent deadline, spend hours working on them and then find the USB stick doesn’t work when you get into the office the next day? The most likely point of failure is the file allocation table, so even files you weren’t working on will no longer be available without the use of some recovery software and a fair bit of time. It’s also possible, but by no means certain, that cheaper devices may fail sooner because of the likely lower standard of quality assurance. The key point is that these devices are cheap and cheerful and should not be assumed to last forever. Note that these comments are aimed at USB flash memory devices. The flash memory SSDs (solid state disks) used by the likes of the Asus Eee PC are completely different, incorporating wear levelling so that specific parts of the memory are not disproportionately hammered.
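If you want to cut down the number of writes the allocation table takes in everyday use, avoid the sync option when mounting the stick by hand; a small sketch, assuming the device appears as /dev/sdb1 (yours may differ):

# async batches writes; noatime avoids a write for every file read
mount -o async,noatime /dev/sdb1 /mnt/usb

Just remember to unmount it cleanly before pulling it out, because writes may still be buffered. </answer>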

<title>Old distro, old problem</title>

<question>I have a PC that I’ve put together using bits and pieces of old machines that I’ve scrounged from friends and relatives. It has a 13.5GB hard drive, Pentium III 600 processor and 384MB RAM. I’m trying to run Ubuntu 7.04: having made the Live CD and the necessary change to my BIOS, my machine will actually start up and get me to the Ubuntu start-up menu with its choice of boot methods. However, whichever method I choose, I eventually get a screen message that says:

BusyBox v1.1.3 (Debian 1:1.1.3-3ubuntu3) Built-in shell (ash)
Enter 'help' for a list of built-in commands.
/bin/sh: can't access tty: job control turned off
(initramfs)

Can you please tell me what’s wrong and, if possible, what I can do to fix the problem to get Ubuntu started properly? </question>

<answer>This is a known problem with a couple of older releases. There were various workarounds and fixes floating around at the time, but they’re no longer necessary. The problem is caused by an incompatibility between your hardware and this particular version, but there have been three further releases of Ubuntu since then, all of which should avoid the problem. I suggest you try again with Ubuntu 8.10, which is on this month’s cover DVD, and you’ll find this problem is no longer there. </answer>

<title>Email authentication</title>

<question>I am trying to use email with Ubuntu 8.04 without success. My service provider is tiscali.co.uk and I have no problem using Outlook Express with Windows XP. I’ve tried using Evolution, Thunderbird and Opera email without success. I see an error message: ‘The server responded:[AUTH] invalid user or password’. I’ve tried connecting via a USB SpeedTouch 330 modem and a D-Link DSL-320T modem. I understand there may be a problem with Ubuntu 8.04 when trying to connect via a ‘.co.uk’ provider. Can you help? </question>

<answer>There’s no reason why Ubuntu would not connect to a UK domain, or any other. The first step of connecting to any domain is to resolve that domain name to an IP address, which is clearly happening, or you would get a different error. Your mail program is definitely connecting to your ISP’s mail server, because the error message you quote was received from the mail server. The problem is simply one of settings. When you connect to a mail server, you have to authenticate yourself with your username and password, and this is being rejected by the server. The fact that three different programs generate the same error leads to the conclusion that the details you’re giving are incorrect. For Tiscali, your username is your full email address (not just the part before the @ as with some providers). The servers to use are pop.tiscali.co.uk for incoming and smtp.tiscali.co.uk for outgoing mail. You can check all of these details by looking at the account settings you currently use in Outlook Express. The one thing you cannot read from Outlook is your password; you must type this exactly as set up with your ISP, remembering that it’s case-sensitive. Tiscali has some useful information on how to set up Thunderbird to work with its service at http://tinyurl.com/tiscalimail. The guide shows the Windows version of Thunderbird, but the steps are exactly the same on Linux, except that Account Settings in the first step is found under the Edit menu in the latest Linux version of Thunderbird.
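If you want to rule out the mail clients entirely, you can test the username and password against the POP3 server by hand with telnet; connect first, then type the USER and PASS lines at the server’s +OK greeting, substituting your own details:

telnet pop.tiscali.co.uk 110
USER yourname@tiscali.co.uk
PASS yourpassword

A +OK reply to both commands means the credentials are accepted; an -ERR reply means the server is rejecting them, exactly as the error message suggests. Type QUIT to close the session. </answer>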

<title>ODF oops</title>

<question>I have a huge number of OOo files with meaningless filenames. I need to sort out those that have been created or modified in the last month, but all the files have the same timestamps. I hoped that Konqueror’s Info List View would help, as it does for Exif info in JPEG files, but it doesn’t give any columns apart from file name, even though the metadata is present on the mouseover tooltip. </question>

<answer>If these are in Open Document Format, the process is surprisingly easy. ODF files are Zip archives containing several files that comprise the document and its metadata. Even if you rename the ODF file, the timestamps of the files within it remain unchanged. Because of this, it’s possible to extract a file from each ODF archive and set the archive’s timestamp to match that file. A short shell loop will update all the files in a given directory:

for f in *.ods *.odt
do
  unzip -o "$f" content.xml && touch -r content.xml "$f" && rm -f content.xml
done

This loops through each ODS and ODT file, extracting its content.xml. If the extraction succeeds, touch uses the extracted file as the timestamp reference to set the modification date of the original document, and content.xml is then removed.
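Once the timestamps are correct, picking out the documents touched in the last month is a one-liner; a quick sketch using find, with a 30-day window as an example:

find . -maxdepth 1 \( -name '*.ods' -o -name '*.odt' \) -mtime -30

You could equally sort by date in Konqueror’s detailed list view now that the timestamps mean something. </answer>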

<title>Disappearing display</title>

<question>First off, I’ve only been a Linux user for approximately two years, but I’ve been very impressed with the various distros available based on the users’ needs. I’ve successfully installed Mandriva One 2008, PCLinuxOS 2007 and Ubuntu 7.10 on a Dell desktop computer. However, I’ve had only limited or no success with Ubuntu Hardy Heron and OpenSUSE 11.0 on either the Dell or my main machine, an HP MCE desktop. I’ve been trying to load OpenSUSE 11.0 from the live disk included on the September 2008 issue [LXF109]. After initial boot, I’m given the choice to select OpenSUSE and do so. The image begins to load normally, only to switch to a black screen with a small information window stating ‘OUT OF RANGE 46.4kHz/44Hz’. The machine then freezes and I’m forced to shut it down via the power-on button. </question>

<answer>This is a warning message from your monitor. It means the computer has sent a signal that is outside of the monitor’s frequency range, so the monitor has disabled the display. This is preferable to the system used in days gone by, when monitors would fry their electronics upon receiving a frequency that was too high. The OpenSUSE installer is still running at this point; you just can’t see it. This is usually caused by the installer misidentifying your monitor, but the solution is simple. When you see the first menu screen (the one where you chose OpenSUSE), press F3, then choose the lowest video mode that works. If you get the same problem with all video modes, try the text-based install. It is the same installer, but with a more basic interface that you navigate using the Arrow, Tab, Space and Enter keys. The video resolution chosen here is used only for the installer – the graphical system that will be installed is more intelligent and will probably detect and configure your monitor correctly. Even if it doesn’t, there’s an option to do this by giving it details of your monitor. Usually the model name and number is sufficient, but at worst you’ll have to pick a safe resolution like 800x600 @ 60Hz. Once the system is installed and running, you can try different display settings in the hardware section of YaST. </answer>

<title>Recovering lost photos</title>

<question>I think my camera might have crashed and rebooted when I was trying to delete a photo. I can view the photos on the screen on the camera, but when I try to download I get I/O errors for some of them. I can copy an image off the card using:

dd if=/dev/sdc of=this-is-annoying.img

When I use this image to recreate the files on a partition on a hard drive I get the same I/O errors, so I figure that the files are still there and can be read but something is preventing them from being recognised properly when I try to open them. </question>

<answer>You did the right thing in creating an image of the card rather than trying to recover from it directly. When a filesystem is damaged like this, the worst thing you can do is write to it in any way. With some filesystems, even reading a file updates its metadata, causing a write. What about a solution? TestDisk is a useful suite that includes a program called PhotoRec, which can recover all sorts of lost files from many types of filesystem. TestDisk is available from www.cgsecurity.org/wiki/TestDisk, but check your distro’s package manager first. When it’s installed, run photorec from a root terminal. If run with no arguments, it will search for any partitions containing filesystems it recognises and ask you to select one to scan. You’ve made a copy of the disk’s data with dd, so you can use this instead, although it’s wise to keep a spare, untouched copy of this file: your recovery attempts could affect the copy you work with, and your memory card may be in too fragile a state to generate another. Start photorec with

photorec this-is-annoying.img

When asked for the partition type, select Intel/PC if you copied the whole disk (sdc) with dd, or None if you copied only the filesystem (sdc1). This is the type of partition table, not the contents of the partitions on the disk, so anything usable on a PC is likely to be Intel/PC. The main exception to this is something that has no partitions, as opposed to a single partition filling the whole device, such as a floppy disk. You’ve copied the whole disk, so you’ll have two options on the next screen: one for the partition (assuming it has a single partition) and one for the whole disk. Try the partition first; if this doesn’t recover all your files, run PhotoRec over the whole disk. PhotoRec can generate a lot of files with meaningless names, so save its output in a separate directory when asked. PhotoRec can take a while to scan the image file, and even longer when run directly on a memory card, so leave it alone for a while. Then you’ll find your recovery directory full of strangely named files. The file allocation table was messed up, so the names of the files are gone, but this isn’t a big deal with digital camera files, because their names aren’t that useful to start with and you’ll still have the Exif data. You’ll also find old files here, because deleting a file removes it from the index but leaves its contents on the disk, and quite possibly a few duplicates too.
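The whole recovery run, assuming you keep a spare copy of the image and collect the output in its own directory (the names here are only examples), might look like this:

# keep an untouched spare copy in case a recovery attempt damages the image
cp this-is-annoying.img this-is-annoying.spare.img
# collect PhotoRec's output in its own directory
mkdir recovered && cd recovered
photorec ../this-is-annoying.img

When PhotoRec asks where to save the recovered files, accept a directory under this one and let it run. </answer>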