Answers 94


<title>Taking the plunge</title>

<question>I'm new to Linux, and I have decided to completely wipe Windows XP from my laptop and just have Linux. I am dual-booting XP and Ubuntu; could you please tell me how to remove Windows and just have Ubuntu? How would I expand the Linux partitions to take over the space where Windows XP used to be? As I am a bit of a newbie, would it be easier just to totally format the drive and reinstall Linux? </question>

<answer>To answer your last question first, reinstalling Ubuntu from scratch and taking the option to use the whole disk would indeed be an easy way to do this, but you'd lose your existing setup and data. Removing the Windows partition and allocating the space to Linux would leave your existing Ubuntu setup intact, and you'd learn more about how Linux works in the process.

Removing Windows is easy. The first step is to delete the Windows partition (usually hda1) using the Gnome Partition Editor, available from the System > Administration menu. If this isn't available, install GParted from the Synaptic package manager. The Windows partition is usually easy to identify, because its filesystem is NTFS (or possibly FAT), and Linux doesn't use these filesystems. Next, click on the unallocated space this leaves and press the New button to create a new Linux partition of type ext3 (the default settings should be correct for this). Now, with the new partition still highlighted, go to the menus and select Partition > Format To > Ext3 (see screenshot, right). Press Apply to make these changes.

The next step is to remove the Windows entry from the boot menu. Open a terminal and type

sudo -i
gedit /boot/grub/menu.lst

to load the boot menu into an editor. Towards the end of the file you'll find a line starting `title Windows'. Delete from this line down to the next blank line and save the file. Your boot menu is now Windows-free.

Adding the space you've just freed up is somewhat less straightforward. Linux partitions can only be resized by moving the end position, yet the space you've freed up lies before the beginning of the Linux partitions, because the Windows partition was the first on the disk. Fortunately, Linux allows you to use multiple partitions, so in this case we can use the space previously taken by Windows as your home directory (an advantage of this approach is that if you reinstall or switch to a different distro, you can keep your personal files, because they're on their own partition). You tell the system to use this partition for home by adding a line to the file /etc/fstab (the filesystem table). In the terminal you've just used, type

gedit /etc/fstab

Add the following line and save the file:

/dev/hda1 /home ext3 defaults 0 0
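You can sanity-check the new entry before rebooting. Here is a quick sketch, run against a throwaway copy of the file rather than the real /etc/fstab (the /tmp path and the sample root line are illustrative):

```shell
# Work on a disposable copy rather than the real /etc/fstab
cat > /tmp/fstab.test <<'EOF'
/dev/hda2 / ext3 defaults 0 1
/dev/hda1 /home ext3 defaults 0 0
EOF

# Print the device and filesystem type that will be mounted on /home
awk '$2 == "/home" { print $1, $3 }' /tmp/fstab.test
```

If this prints `/dev/hda1 ext3', the new line is well-formed.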

Before you reboot, which will activate the new home partition, you need to copy your existing files across. Still in the terminal, type

mkdir /mnt/tmp
mount /dev/hda1 /mnt/tmp
mv /home/* /mnt/tmp/
reboot

This mounts the new partition somewhere temporary, moves the contents of your home directory over to it, then reboots the computer so that the partition is mounted as /home from now on. After this, there will be no sign of Windows at the boot menu, and when Ubuntu comes up, the space previously used by Windows will be available for storing your own files. </answer>

<title>A pesky kids proxy</title>

<question>I've been running a Squid (and SquidGuard) web proxy on my Fedora Core 6 box ever since I read about it in the very first Hardcore Linux tutorial, in LXF75. I've set up SquidGuard blocking rules to protect my children from undesirable content. What this means is that on their (Windows XP) machine, I set the internet to route through my proxy server (192.168.100.100:8080), and all is well. What concerns me is that my eldest is becoming quite savvy and it won't take him long to realise that if he unticks the box marked Use Proxy Server and switches to a direct connection to the internet, he'll get unfiltered access. Can I force all traffic to go through my (always-on) FC6 machine, perhaps by setting up port forwarding on the router (to which only I have the password), so that all web traffic has to go through the proxy server and if he switches to a `direct' connection he will get no internet? If so, how? I've tried redirecting port 80 and 8080 to the IP of my PC but that doesn't seem to work. </question>

<answer>By "the internet" I take it you mean the world wide web, which is all that Squid normally handles. However, you can force all internet traffic to go through your FC6 box, and then through SquidGuard, in three steps. First (exactly how you do this depends on your router), configure your router so that it only allows your FC6 box to connect to the internet. The port forwarding you set up only affects incoming connections, so remove that. Secondly, you need to set your FC6 box up as a gateway, so that all internet traffic (not just web traffic) goes through it. Edit the file /etc/sysctl.conf, as root, and change the line

net.ipv4.ip_forward = 0

to end in 1 instead of 0, then run

service network restart

You should now reconfigure your children's computer to use the IP address of your FC6 box as its network gateway. Because you have disabled their access via the router, this is now the only way they can connect to the net.

That still leaves the problem of your children removing any proxy setting, so now we use a feature of Squid called transparent proxying. This forces all web requests passing through the machine (and the previous steps ensure they all do) to go through Squid's proxy and hence through SquidGuard. Edit the Squid configuration file (usually /etc/squid/squid.conf) and find the line(s) starting `http_port'. This probably reads http_port 8080 in your file. Change it to

 http_port 80 transparent

The 80 sets it to work on the standard HTTP port. The transparent option makes Squid intercept and handle all requests, regardless of whether the browser is configured to use a proxy server or not. You should either remove the old proxy settings from the browsers or add a line to handle requests to the old 8080 port:

 http_port 8080 transparent

There is an alternative way of handling this. You can leave http_port set to 8080 and use an Iptables rule to forward all port 80 requests from addresses that you want to proxy to port 8080. This is more complex but it gives more flexibility, such as allowing some machines to bypass the proxy altogether. There are details on this on the Squid website at www.squid-cache.org. You could also use Iptables, or one of the many front-ends such as Firestarter, to block outgoing traffic to all but the common ports (such as HTTP, HTTPS, POP3, SMTP and FTP). This will prevent your children from using a remote proxy that works on another port. You could possibly do this on the router; however, implementing it on the FC6 box would allow you to block them but still have unrestricted internet access for yourself. </answer>

<title>Bringing order to chaos</title>

<question>I have a photo collection that has got out of hand: several gigabytes' worth. I need to organise them so I can get a good backup. Do you know of a program that will rename a file based on the EXIF date of the image and change the Modified Date of the file to the same EXIF date? My last attempt at a backup before I wiped my PC managed to set all the file dates to when the DVD was burned. Also, I've managed to get myself several duplicate images spread across my entire collection (yep, I really messed up), each with different filenames. Any idea how I could sort them (maybe with EXIF data again) without having to look at a few thousand photos? If it helps, I'm using Fedora Core 6 64-bit and I'm not scared of the command line. </question>

<answer>There are several programs capable of working with EXIF data. My favourite is ExifTool (www.sno.phy.queensu.ca/~phil/exiftool). ExifTool can read and manipulate just about any EXIF information, including extracting the Date/Time Original or Create Date EXIF tags. You can use this information to rename the files or change their timestamps. For example:

find -name '*.jpg' | while read PIC; do
DATE=$(exiftool -p '$DateTimeOriginal' "$PIC" |
sed 's/[: ]//g')
touch -t $(echo $DATE | sed 's/\(..$\)/\.\1/') "$PIC"
mv -i "$PIC" $(dirname "$PIC")/$DATE.jpg
done
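Before running the loop over your real files, you can check the two sed steps with a sample tag value (the date below is made up):

```shell
# A typical value printed by exiftool -p '$DateTimeOriginal'
SAMPLE='2006:11:05 14:30:59'

# Strip the colons and spaces, as the loop does
DATE=$(echo "$SAMPLE" | sed 's/[: ]//g')
echo "$DATE"                             # 20061105143059

# Insert a dot before the seconds, giving the format touch -t expects
echo "$DATE" | sed 's/\(..$\)/\.\1/'     # 200611051430.59
```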

The first line finds all *.jpg files in the current directory and below. The next extracts the Date/Time Original tag from each file (you may need to use Create Date instead, depending on your camera) and removes the spaces and colons. The next line sets the file's timestamp to this date; the horrible-looking sed regular expression is necessary to insert a dot before the final two characters, because the touch command expects the seconds to be separated from the rest of the time string like this. The final command renames the file, using the -i option to mv in case two files have the same timestamp. This will stop any files being overwritten.

It's also possible to do this with most digital photo management software without going anywhere near a command line: DigiKam, KPhotoAlbum, F-Spot and GThumb all have options for manipulating files based on the EXIF data. The disadvantage of using these programs for this is that they generally only work on a single directory at a time, whereas the above shell commands convert all JPEG files in a directory and all of its sub-directories. If you have several gigabytes of photos in the same directory, your collection is more out of hand than renaming some files will fix!

The solution to your duplicates problem is a program called fdupes (available from http://netdial.caribe.net/~adrian2/fdupes.html or as an RPM for FC6). This compares the contents of files, so it will find duplicates even if they have different names and timestamps.

fdupes --recurse ~/photos

will list all duplicate files in your photos directory. There are also options that you can use to delete the duplicates:

fdupes --recurse --omitfirst --sameline ~/photos | xargs rm

Be careful of any option that automatically deletes files. Run without deletion first so you can see what's going to happen. </answer>

<title>ISO frustration</title>

<question>I was reading LXF91, and I wanted to use the Jigdo system to create an ISO for Fedora [Essential Linux Info]. The magazine could be clearer with the instructions for this. For example, how to run the command mkiso and what to do if you have the DVD mounted with noexec. Do I need to copy the filesystem over to a local filesystem and possibly move jigdo-file? I'm not sure if this has been done before. Anyway, I encountered the following command line errors:

  sudo ./mkiso
  Creating FC-6-i386-livecd-1.iso
  general: Image file: /home/user//FC-6-i386-livecd-1.iso
  general: Jigdo: /home/user//FC-6-i386-livecd-1.iso.jigdo
  general: Template: jigdo/FC-6-i386-livecd-1.template
  Skipping object `../..//.mozilla/firefox/i4faho56.default/lock' (No such file or directory)
  Found 0 of the 5 files required by the template
  Will not create image or temporary file - try again with different input files
  general: [exit(1)]
  ISO image written to /home/user//FC-6-i386-livecd-1.iso
  Verifying MD5 checksums...
  md5sum: FC-6-i386-livecd-1.iso: No such file or directory
  FC-6-i386-livecd-1.iso: FAILED open or read
  md5sum: WARNING: 1 of 1 listed file could not be read
  Verification failed, or you do not have the md5sum program installed.
  In the latter case, you probably have nothing to worry about.

Please let me know what to do with the incorrect md5sum error, and if I can get around the verification check to get the ISO created. </question>

<answer>Firstly, you don't need to run mkiso as root, because the only place it needs write access is your home directory. Secondly, you should run it from the place where you want to create the images, or provide the destination directory as an argument to the script. If you cd to the Fedora directory of the DVD and run mkiso, as you have, it will still work and write the ISO image(s) to your home directory, but it won't be able to generate its cache files. This is less important in this case, because you're creating only one ISO image, but it makes a huge difference to the time taken to write multiple ISO images. If the DVD is mounted with noexec, you can still run the script with sh, because you're then running sh from your hard disk and the script is simply a data file that it loads:

  sh /path/to/dvd/Distros/Fedora/mkiso

To get to your specific problem, the clue is in the line:

   `Found 0 of the 5 files required by the template'

For some reason, Jigdo couldn't find any of the files it needed. I suspect this is due to your use of sudo to run mkiso, resulting in the script getting confused about where it was run from and not knowing where to look for the files (the search path is relative to the script's location). The other possibility is that your DVD has a read error, although this is unlikely, as all five files failed; such a damaged disc would probably not work at all. Skipping the MD5 check is like fixing low oil pressure in your car by disconnecting the warning light! As you can see from the output, the reason the ISO file failed the MD5 check is that there was no file to check. Incidentally, you can also run mkiso -h to see the usage options. </answer>

<title>Firefox failing</title>

<question>I cannot get Firefox to `see' the modem connection that I've painstakingly set up. I'm fairly sure that it's working correctly, as running pon from the command line causes the modem to dial out, and poff makes it hang up. However, activating Firefox from the desktop is the problem. The Ethernet connection to broadband works fine, but disabling it and making the modem the default connection brings up the `server not found' screen. The modem is a Rockwell IQ148 and I'm using Ubuntu Dapper 6.06. I'm trying to set up the computer for my partner, who doesn't have broadband but has been gradually converted from XP by using my machine. </question>

<answer>This is almost certainly a general problem with your internet connection and not specifically related to Firefox. It sounds like your system is still trying to use the Ethernet connection. Type this in a terminal:

route -n

The line we're interested in is the last one, beginning `0.0.0.0', as this is the default route for all non-local connections. I suspect it looks something like this:

    0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth0
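If you'd rather pull the interface name out of that output programmatically, awk can do it. Here is a sketch, run against a captured copy of the output (the addresses are illustrative):

```shell
# Sample `route -n' output saved to a file
cat > /tmp/route.test <<'EOF'
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth0
EOF

# The default route has destination and genmask both 0.0.0.0;
# print the interface (last field) of that line
awk '$1 == "0.0.0.0" && $3 == "0.0.0.0" { print $NF }' /tmp/route.test
```

Here it prints eth0, which would confirm that the default route is still on the Ethernet interface.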

The gateway address in the second column will probably be different, but if the line ends in `eth0' (or anything but `ppp0') this is the cause of your troubles. You need to make sure the eth0 settings are purged from your system, especially if you'll no longer be using Ethernet with your broadband provider, by selecting the interface in the Network Settings window and pressing the Delete button (the middle of the three obscure-looking buttons at the top-right of the window).

Another possibility is that your modem connection hasn't completed. The fact that the modem dials out doesn't guarantee that a connection is made. Try running

sudo plog

after an apparently successful modem connection. This will show you the last few lines of the connection log: it should be obvious if anything has gone awry here. You can also check the status of your network connections with

/sbin/ifconfig -a

If the eth0 interface appears, it shouldn't be marked `UP' nor have an `inet addr' entry. Equally, ppp0 should be marked `UP' and have a valid address. It's also possible that you're connected to your ISP but not able to look up internet domain names. Run these two commands in a terminal:

ping -c 5 www.google.com
ping -c 5 216.239.59.104

The first attempts to contact Google by name, the second bypasses the DNS lookup and goes directly to its IP address. If only the latter works, your DNS information hasn't been correctly set up. You'll need to contact your dial-up ISP and get the addresses of the DNS servers, then put them into the file /etc/resolv.conf. It should look something like:

nameserver 1.2.3.4
nameserver 1.2.4.5

You can either edit the file directly or use the DNS tab of the Network Settings tool. It's possible you still have your broadband ISP's name servers in here. These should be removed. If you're still having problems after all this, post some more information, including the output from the above commands, in the Help section of our forums at www.linuxformat.co.uk. </answer>

<title>Joomla joy</title>

<question>I'm a very old greybeard who is the owner of the last two years' issues of Linux Format and Ubuntu 6.10. However, I've got one very big problem that I have never been able to understand: I cannot install programs from the enclosed DVD. I've read `How to install from source code' in the magazine several times [Essential Linux Info] and also own the following books: The Official Ubuntu Book, Beginning Ubuntu Linux and Ubuntu Unleashed, without understanding what I'm doing wrong. Is it possible that you could help me, and write exactly what I should type in the terminal when I want to install, say, Joomla 1.0.11 from one of your DVDs? </question>

<answer>You have picked an unusual example, because Joomla is a web application that needs to be installed to a directory accessible by your web server. Also, it's written in PHP, a scripting language, so it doesn't need compiling. However, here are the steps to install it. You'll first need a web server installed. The standard one is Apache, so run the Synaptic package manager and make sure that apache2 and libapache2-mod-php5 are installed. Test Apache by typing `http://localhost' into your browser. If you don't get an error, you can proceed to installing Joomla itself. Open a terminal and type

sudo mkdir /var/www/joomla
sudo tar -xjf joomla-1.0.11.tar.bz2 -C /var/www/joomla

The first command creates a directory into which to install Joomla. The second command unpacks the Joomla archive into that directory. This presumes that joomla-1.0.11.tar.bz2 is in your current directory; otherwise you'll have to give the full path to the archive. Now load `http://localhost/joomla/' into your browser and you'll be taken through the installation and setup process.

As a general rule, for all packages supplied as a tar archive, the initial steps should always be the same: unpack the archive and inspect the contents for a file containing installation instructions, usually called README, INSTALL or something equally obvious. The `How to install from source code' instructions in the magazine apply to packages using the standard source code installation process, which applies to the vast majority of packages but by no means to all of them, as Joomla so ably demonstrates. </answer>

<title>Shell mail</title>

<question>I'd like to use the shell for my email. Can you tell me how this can be set up? I currently use Ubuntu 6.10. </question>

<answer>Do you mean you want to run a mail client within your shell, or do you want to be able to send mails from shell scripts? There are several terminal-based mail programs, the most popular of which is Mutt (www.mutt.org). Mutt is included in Ubuntu's main repository, so you can install it from Synaptic. If you want to send emails from a Bash script, the mailx command is the simplest solution, and is probably already installed on your system. This program mails whatever it receives on standard input to a specified address. For example:

echo "Hello World" | mail -s "Obvious example" me@example.com

The subject of the mail is given with -s (use quotes if it contains spaces), and everything received on standard input forms the body of the mail, so it's good for mailing program output. </answer>

<title>Racing uncertainty</title>

<question>I have Ubuntu 7.04 Feisty Fawn installed on a Sony VAIO VGN-FJ250P laptop. I'm satisfied with almost all aspects of this distro, with just one or two niggling problems. The one I've been putting the most effort into recently regards OpenGL. It seems not to work on this Linux system. I know that it's supported by the Intel video chipset, because I dual boot with Windows XP Pro, and OpenGL applications run fine there. One of the affected applications is Planet Penguin Racer, which ran fine on the Live CD but doesn't run when installed on the hard drive. Attempting to run it from the menu produces nothing, while attempting to run it from the command line in a terminal produces the following error message:

   *** ppracer error: Couldn't initialize video: Couldn't find matching GLX visual (Success)
   Segmentation fault (core dumped)

</question>

<answer>The good news is that OpenGL works on your hardware with the Live CD, so the hardware is supported and the software is present on the CD. This is a configuration problem, almost certainly in xorg.conf, caused by the installer not setting up your graphics card correctly. Boot from the Live CD, mount one of your hard disk partitions or a USB pen drive, and copy /etc/X11/xorg.conf to it. Now boot from your hard disk and compare its copy of xorg.conf with the one you just saved. The most likely cause is that your hard disk version of the file is either using the wrong driver (the Driver line in the Device section of the file) or that the GLX module isn't being loaded. Before you make any changes to this file, save a backup copy: you don't want to make things worse. The correct driver for your hardware should be i810, although using whatever is in the Live CD version of the file will work. The GLX module is loaded by including this line in the Module section of xorg.conf:

Load "glx"
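For reference, the relevant parts of xorg.conf look something like this. It is only a sketch: the Identifier string is illustrative, and your file will contain other entries in both sections:

```
Section "Module"
    Load "glx"
EndSection

Section "Device"
    Identifier "Intel Graphics"
    Driver     "i810"
EndSection
```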

If both of these are set correctly and OpenGL doesn't work, you could work through the two files looking for differences and trying to identify which one is the cause. Or you could simply replace the installed file with the Live CD version, knowing it will work. </answer>

<title>Secure relaying</title>

<question>We currently have an email server running Postfix, and users either use Outlook Express or the web-based SquirrelMail (running on the server). This works fine at the moment, and only clients on the internal network can relay email to the outside world. We recently appointed someone who needs access `on the road' via a smartphone. This is fine, as we've got IMAP open externally for his folders, and he can use our Postfix SMTP server to send email, but only to local recipients (to prevent us being a spam relay). We'd ideally like for said person to be able to send email to anywhere. What part of Postfix would I go about changing to allow only him to relay email to other domains and from outside of $mynetworks, without affecting the current rules allowed by webmail or internal clients? </question>

<answer>The answer lies with SMTP authentication, which will allow users to authenticate themselves before sending mail. Postfix can be configured to relay only mail from authenticated users. Postfix uses Cyrus-SASL for authentication, so make sure this is installed and that the saslauthd service is started when you boot. To configure Postfix to use Cyrus-SASL, edit /etc/postfix/main.cf and make sure that mydomain, myhostname and mynetworks are correctly set. Now add the following lines to the end of the file:

 smtpd_sasl_auth_enable = yes
 smtpd_sasl_security_options = noanonymous
 smtpd_sasl_local_domain = $myhostname
 broken_sasl_auth_clients = yes
 smtpd_recipient_restrictions = permit_sasl_authenticated, permit_mynetworks, check_relay_domains
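With authentication enabled, you can test a login by hand by telnetting to port 25 and issuing an AUTH PLAIN command. The credential string it expects is just base64 over the NUL-separated username and password, which you can generate in the shell; the user/password pair below is, of course, made up:

```shell
# AUTH PLAIN takes base64("\0username\0password")
printf '\0user\0password' | base64     # AHVzZXIAcGFzc3dvcmQ=
```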

The broken_sasl_auth_clients line is optional; it is required with some versions of Outlook Express and Microsoft Exchange, so if your user is only using his smartphone, try without it. Restart Postfix, or force it to reload its configuration, and any valid user on your system should be able to use your SMTP server from anywhere, provided they set their mail program to use SMTP authentication. Users on your network will still be able to send mail without altering their mailer configuration.

There is a detailed HOWTO on this subject at http://postfix.state-of-mind.de/patrick.koetter/smtpauth. It also covers using TLS to encrypt communication between your user and the server. This should be considered essential; otherwise your users could be sending passwords as clear text. You can also use SASL for authentication from inside your network. For example, you could configure Postfix on a school network so that all users can send mail within the network but only teachers can send mail outside. </answer>

<title>Stretch out</title>

<question>I've just bought a new LCD monitor as a replacement for my old CRT. My Nvidia video card has two outputs, so is it possible to connect both monitors to the card and expand my KDE desktop to fill both of them? I'm using Gentoo 2006.1 with an Nvidia FX5200 video card. </question>

<answer>The answer is yes. There is a standard way of combining two screens as a single X display, called Xinerama. This is normally used with two graphics cards, but the Nvidia drivers contain a feature called TwinView that lets you display two screens from one card, one on each monitor output, while remaining compatible with Xinerama.

Enabling TwinView (and Xinerama) is simple, assuming you're using the Nvidia drivers and not the open source nv driver. First emerge nvidia-drivers and make sure X is running on the Nvidia drivers: the most obvious indication is the Nvidia logo that pops up when X starts. Next run nvidia-settings from a root terminal. This is a separate package on Gentoo, so you'll need to emerge it if you haven't already. Select X Server Display Configuration from the list on the left and you should see both displays, although one may be marked disabled. If one display isn't available, click on the Detect Displays button, select each display in turn and pick the correct resolution. It's best if both use the same resolution, but any Xinerama-aware window manager can handle different-sized displays. Now set the Position for each screen. You can do this with absolute positioning for maximum control, but it's usually best to set one display to Right Of the other and the opposite for the other display. Click on the Save To X Configuration File button, log out and restart X.

You should now have a desktop that spans two monitors, but it may need some tweaking. Make sure all applications are built with Xinerama support. If you don't already have xinerama in your USE flags, edit /etc/make.conf and add it, then rebuild all affected packages with:

emerge --update --deep --newuse --ask world

This may take a while, but when it is finished you can restart KDE and begin tuning it to suit your tastes. For example, the desktop can have a single, wide wallpaper or different ones for each monitor. Or the Kicker panel can be on a single monitor or stretched across both. The Multiple Monitors section of the Desktop settings allows you to set how windows behave and which is the default display for opening new windows. Pick your LCD monitor here.

A useful feature is the Advanced > Special Window Settings menu option available when right-clicking a window's title bar. This lets you override default window manager behaviour for specific windows or applications. It's useful with a single display but even more so with dual displays, especially as it can force specific windows or applications to open in a particular position. For example, I get Gimp to open its toolbox on one display while opening the images on the other, so I can use a full-screen window to edit an image without obscuring the toolbox. </answer>