Answers 96

From LXF Wiki


<title>Remote printing</title>

<question>I'm a teacher in a school and when I started to take care of the computers in the teachers' room they all ran Windows. Now I'm preparing to install Ubuntu on one of them, but the job is difficult because of one `minor' detail – the printer! They have a PC running Windows Server, connected to a switch. The server runs exclusively to serve the printer. I'm running Ubuntu Feisty Fawn and I can't print. Ubuntu detects the printer, a Samsung CLP-500, and I have installed the drivers, but nothing prints. Do I have to use Samba? </question>


<answer>According to the OpenPrinting database, this printer needs the SpliX driver. While this driver works well with some Samsung lasers (it's great with my mono laser), it is only reported as working `partially' with the CLP-500. This appears to be because it is limited to 600dpi printing. Samsung also provides a Linux driver for download on its website (the full URL is ridiculously long). SpliX is included with the current Ubuntu, so it's just a matter of installing it via Synaptic and then picking the right driver in the printer configuration tool. CUPS can talk to Windows printers – it uses the Samba client libraries, so you need Samba installed, but you do not have to configure it yourself. Ubuntu installs Samba by default, so there's nothing you need to do in this respect. All you need to do is install the SpliX package from Synaptic, then run New Printer in System > Administration > Printers and select the correct printer when asked. </answer>

<title>gHamachi gHelp</title>

<question>I'm trying to set up gHamachi using the tutorial in your June edition [LXF93]. I followed the instructions on page 86 as far as stage 3, where I clicked on Yes and entered the root password. Then a message appeared: `TAP/TUN NOT FOUND' I'm using Ubuntu 6.06. What could be the problem? </question>

<answer>This error is caused by gHamachi being unable to find the tuncfg program, which comes with Hamachi, so you need to install Hamachi too. This is a fairly uninformative error – TUN/TAP isn't found because the program used to look for it isn't there. It seems gHamachi treats any error from running tuncfg the same, even if it fails because tuncfg is not installed. You'll need some extra tools to install Hamachi, so go into Synaptic and install the build-essential package, which provides everything you need for installing packages from outside Synaptic. Now download the Linux version of Hamachi from its website. Assuming you saved it to your Desktop directory, open a terminal and type:

cd Desktop
tar -xf hamachi-
cd hamachi-
sudo make install
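
Since gHamachi's error message is really about tuncfg being absent, it's worth confirming the install actually put it on your path before relaunching the GUI. A quick check (a sketch, not from the original article):

```shell
# Report whether the tuncfg helper that gHamachi needs is present
if command -v tuncfg >/dev/null 2>&1; then
  echo "tuncfg found: Hamachi is installed"
else
  echo "tuncfg missing: install Hamachi first"
fi
```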

Now you should be able to run gHamachi and proceed with the tutorial. </answer>

<title>Visiting Vista</title>

<question>After problems with Vista, a friend has asked me to put Linux on their PC. My PC is running Fedora Core 6, and I've set up a shared drive in Vista so I can pull off the files that my friend needs saved. But I need help getting the files off. I can access the shared drive but when I go to open up the folders to get the files, Linux comes up with a message that it can't read the folders on the Vista PC. Can you access a shared drive in Vista and pull files off it with Linux? I have no problems accessing a shared drive on XP, 2000 or 98 from Linux. </question>

<answer>You can admit to owning a Vista PC yourself – we'll still try to help, so there's no need to blame it on a `friend'... The best way to do this is to use the shell to mount the drive, then you should see clear errors when it fails. Do this as root:

mkdir -p /mnt/windows
mount -t cifs //PCNAME/C /mnt/windows -o user=USERNAME

replacing PCNAME with the network name of the Windows computer and USERNAME with the name of the admin user on that computer. After giving the user's password, the C drive should be mounted (assuming that's the drive you're trying to share). Do not try turning off password-protected sharing in the Windows control panel – it actually makes things more difficult, not easier as you might expect. You also need to turn on Public Folder Sharing in the Network And Sharing section of the Windows control panel. Even with these settings, you'll still be unable to enter and copy some directories. Vista has protected directories inside the user directories, such as USERNAME\PrintHood. However, you should have no difficulties copying your friend's documents and other data files now. Because you've mounted his shared drive, you can use any file manager you like to do the copying. You haven't said whether you're trying to do this with a direct cable link or over the internet. It should work the same, apart from the speed, but bear in mind that the data won't be encrypted in transit. You may also need to open port 139 in his firewall or router to make a connection over the internet. This also allows anyone else to attempt a connection, so use a good password and close the port as soon as the job is done. If possible, take your computer to his house (or his to yours) and use a local Ethernet connection. Alternatively, you could use the Windows backup program to back up the data to a file or DVD and copy that over to your Fedora Core 6 system. Windows backup files are zip archives that can be unpacked with the Linux unzip command, which is installed on Fedora Core 6. </answer>

<title>Many monitors</title>

<question>I'm attempting to set up the proprietary Nvidia driver for single, dual and twin view, and after much searching, I've finally managed by creating the xorg.conf files directly (as the Nvidia GUI keeps complaining about overlapping meta modes and reporting wrong refresh rates). But though I now have the three xorg.conf files ready and working – one for each view that I need (dual, twin and single) – I can't seem to find any information on how to integrate these in a single environment where I can switch between them. I need to be able to switch between these three types of view on the fly, ideally with a keyboard combination. As it is, I manually stop the X server, swap the xorg.conf file and restart X. I'd guess that I need to merge my three different xorg.conf files into one, but how? And how do I tie restarting the X server with an alternative view to a keyboard press (or any functionality, be it menu, file or whatever – as long as it's one-click or as near to as possible)? I'm using KDE on Fedora Core 6 and would appreciate some guidance on this, but please be gentle – so far I've only been on the Linux wagon for a week. </question>

<answer> You can combine the various portions of the separate xorg.conf files into one, providing you give them different names. The Monitor sections can just be put one after the other, but you'll need to make sure that each of your Screen sections has a different name, with a separate section for each of the layouts. Most of the other entries in xorg.conf are the same for all; things like keyboard, mouse and font settings. Then you create a separate ServerLayout section for each layout, with a different name, so you'd have something like:

Section "ServerLayout"
  Identifier "SingleScreen"
  Screen      0 "SingleScreen" 0 0
  InputDevice "Mouse0" "CorePointer"
  InputDevice "Keyboard0" "CoreKeyboard"
EndSection

Section "ServerLayout"
  Identifier "TwinScreen"
  Screen      0 "TwinScreen" 0 0
  InputDevice "Mouse0" "CorePointer"
  InputDevice "Keyboard0" "CoreKeyboard"
EndSection

The first ServerLayout is the default, or you can specify it with:

Section "ServerFlags"
  DefaultServerLayout "SingleScreen"
EndSection

Now X will start up in single mode by default but can be started in twin mode with:

startx -- -layout TwinScreen

The `--' means `end of startx options, pass anything else along to the server'. In order to bind this switch to a hotkey, you need a short shell script. Save this script somewhere in your path, say as /usr/local/bin/restartx:

#!/bin/sh
# Leave the graphical runlevel if we're in it, otherwise just kill
# the running X server, then restart X with the requested layout
if [ "$(runlevel | cut -c3)" = "5" ]; then
  sudo /sbin/telinit 3
else
  sudo killall X
fi
sleep 2
startx -- -layout $1

and make it executable with chmod +x /usr/local/bin/restartx. As some of the script needs to run as root, you'll also have to edit /etc/sudoers, as root, and add this line:

yourusername ALL = NOPASSWD: /usr/bin/killall X, /sbin/telinit 3

Now you can switch layouts with:

nohup /usr/local/bin/restartx newlayoutname

The nohup is necessary or the script will be killed when the desktop closes. As you're using KDE, you can bind any commands you want to hotkeys in the Regional & Accessibility/Input Actions section of the Control Centre, so set up one to switch to each layout in your xorg.conf file. Finally, you'll probably want KDE to remember your open applications after switching. To do this, go to Control Centre > KDE Components > Session Manager and select Restore Manually Saved Session. This adds another option to enable you to save your session and you can get the script to do this automatically by inserting this as the second line:

dcop ksmserver ksmserver saveCurrentSession

This is the only KDE-specific part of this exercise, and you'll find that the rest will work with any desktop. </answer>

<title>Samba lockout</title>

<question>I'm using a Fedora Core 1 system and thought of upgrading to Core 6. Before doing this I loaded Core 6 onto a separate machine to see how it was configured off the disk. I found that sendmail was set up to deliver mail but I couldn't deliver mail to the box from outside the box. On Google I found that the distro was shipped with the ability to receive mail from external sources turned off. Why? I also set up some shares in Samba and still have the following problem: if I set up a directory ­ say, /backup ­ with the same permissions and ownership as /var, I can connect to it from another machine and share the contents, create and update as well as remove. If I change the entry from /backup to /var then I'm not able to connect to the directory. I guess I have another pre-shipped parameter to change but which one? What I want to do is set up the share to access /var/www/html in order to play with HTML and PHP files. All this works fine on the Core 1 system and didn't require changes. I will get to Core 6 sometime but not until I've solved these and other issues in a standalone system. Just one other point. When I've performed upgrades from Core 1 to Core 5 or 6 the process takes hours so I thought it would be easier and quicker to do a new install and copy the relevant config files and data, but now I'm not so sure. </question>

<answer> It looks like you opted for security when installing Fedora Core 6. As such, it's been set up to deliver only local mail, which you were able to change easily enough, and to prevent sensitive directories being shared. While it is possible to alter this so that /var can be shared, you really should reconsider. Blocking the sharing of /var is done for a good reason – a lot of sensitive information is stored in /var and it's easy to render a system unbootable with a modicum of malice, incompetence or plain carelessness. The question shouldn't be `how can I share /var?' but `do I need to share all of /var?' – to which the answer is no. If you want to access /var/www/html remotely, then share only /var/www/html. In doing this, you'll avoid the potential risks associated with sharing /var/log or /var/lib but still be able to do what you want. There are also alternatives to using Samba. If both computers run Linux, you could use NFS to mount /var/www/html on the remote computer. If you're using KDE on the editing computer, you could avoid using any form of remote mounting or directory sharing by using KDE's FISH implementation. This uses SSH to communicate with the remote computer, so putting fish://hostname/var/www/html into Konqueror's (or Krusader's) location bar will load the directory's contents into a file manager window, from where you can load files into a KDE-aware editor. Going from Fedora Core 1 to Fedora Core 6 is a huge step. Many key components will have changed, so an update is likely to consume more time than the hours required by the package manager once you have to fix other problems. A fresh install is the best approach, but making a jump of a few years in major components is likely to result in differences in the way things work, as you have discovered. </answer>

<title>BT wireless</title>

<question>I have recently switched to BT broadband and I'm trying to connect to the BT Home Hub using Wi-Fi. I have installed the Intel PRO/Wireless 3945ABG drivers and iwconfig shows the network interface as up, but KNetworkManager won't connect to the hub. I've set the encryption system to Open System and entered a 40/104-bit hex key. The network manager hangs at 28% and then re-prompts for the WEP key. The BT Home Hub docs say that the encryption is 128-bit. Any pointers as to how to connect to the hub would be greatly appreciated. Here's the output from iwconfig:

eth2  IEEE 802.11g  ESSID:c
      Mode:Managed  Frequency:2.412 GHz  Access Point: 00:14:7F:BE:0D:9D
      Bit Rate:54 Mb/s  Tx-Power:15 dBm
      Retry limit:15  RTS thr:off  Fragment thr:
      Encryption key:xxxx-xxxx-xx  Security
      Power Management:off
      Link Quality=77/100  Signal level=-57 dBm  Noise level=-58 dBm
      Rx invalid nwid:0  Rx invalid crypt:65  Rx invalid frag:0
      Tx excessive retries:0  Invalid misc:126  Missed beacon:0

</question>

<answer>The iwconfig output looks good except for the encryption key, which is too long for 64-bit and too short for 128-bit, so this is probably an encryption problem. The first thing to do is turn off encryption on both the Home Hub and your computer. Wireless encryption is good generally, but it gets in the way when you're trying to configure your connection. It's easiest to configure an unencrypted connection first and then apply encryption once the connection is working. You turn off encryption for the Home Hub in its web administration page. The manual will tell you the address to type into your browser, and the default password, to access this. While you're there it's a good chance to change the password if you haven't already. Your iwconfig output indicates that this should work with no problem. Once you've verified it works by connecting to an external web page (try the Linux Format site – Mike likes to see the hit count go up) you can turn WEP encryption back on. WEP uses so-called 64-bit or 128-bit encryption. `So called' because 24 bits aren't available for you to change, which is where the 40-bit and 104-bit figures come from. The 128-bit key should therefore be entered as a 26-character hexadecimal string, usually broken up with dashes to make it more readable. If you can't get this to work with KNetworkManager, try running iwconfig directly from a terminal, as root. This may provide you with some useful error messages. The commands that you need are:

ifconfig eth2 up
iwconfig eth2 key open XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XX
iwconfig eth2 essid "BTHomeHub-8AF2"
dhcpcd eth2
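
A common slip here is a key of the wrong length: once any dashes are stripped, a 64-bit WEP key is 10 hex digits and a 128-bit key is 26. A quick sanity check (a sketch; the key shown is a made-up example):

```shell
# Count the hex digits in a WEP key after stripping the dashes
KEY="0123-4567-89ab-cdef-0123-4567-89"   # example 128-bit key
STRIPPED=$(echo "$KEY" | tr -d '-')
echo "${#STRIPPED} hex digits"   # prints "26 hex digits" for this example
```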

Once you have it working through the terminal, you can plug the details into KNetworkManager, or turn off NetworkManager in Yast and use the standard Yast network configuration instead. Searching the internet for information on this brought up far more problems than success stories. The consensus seems to be that this isn't a particularly good wireless hub (even though it's styled to look like a smart Apple accessory), and that a wireless access point/router from one of the standard networking companies would actually be a much better bet. But given that this unit comes free with your connection, it's probably worth spending at least some time trying to get it working acceptably. </answer>

<title>Shuffling NICs</title>

<question>How do you get Ethernet NIC cards to remember their names between reboots on a SUSE distro? I'm running SUSE Enterprise 9 on my Linux router/firewall, which has three NICs installed: one for the external internet port, one for our internal network and one for our DMZ, which carries all of our externally accessible resources such as web, mail and FTP servers. In most respects this installation operates beautifully. The problem is that the Ethernet device names seem to a) get randomly allocated on reboot (so that what was `eth0' last time the system rebooted often becomes `eth1' on the next reboot), and b) any persistent names assigned to these devices such as `nic1' or `nic2' are frequently ignored (even though PERSISTENT_NAME="nic1/2/3" is defined in the device files in /etc/sysconfig/network/ifcfg-eth-*). The upshot of this is that I almost always have to run ifconfig when I restart the router and patch the device IDs in the iptables definitions to suit the current (pretty much random) device configuration. This is a problem because the router rarely recovers from any outage condition without intervention. I have attached the config file of the DMZ NIC in /etc/sysconfig/network/ifcfg-eth-id-00:02:96:00:3f:8e. This card usually comes up as `eth2' and has (theoretically) been assigned the persistent name `nic2' for the purpose of our iptables firewall definitions. When the system boots, it occasionally notices that the device should be called `nic2' but, more often than not, it ignores the PERSISTENT_NAME definition. Unfortunately, I don't have enough LAN cards to try this in another box (with a different distro) and I can't afford to take the server down for the time I may need to resolve the issue. </question>

<answer>This is odd – your config file looks correct, and works with SUSE here. The fact that it works occasionally indicates that no fundamental piece of software is missing. Have you upgraded this system so it now uses udev? That could be forcing the names in spite of your settings in /etc/sysconfig/network. If so, the easiest and cleanest way to fix this is to use udev naming rules. Create the file /etc/udev/rules.d/10-network.rules, as root, and add these rules:

SUBSYSTEM=="net", ATTRS{address}=="00:02:96:00:3f:aa", NAME:="nic0"
SUBSYSTEM=="net", ATTRS{address}=="00:02:96:00:3f:bb", NAME:="nic1"
SUBSYSTEM=="net", ATTRS{address}=="00:02:96:00:3f:8e", NAME:="nic2"

replacing the strings after ATTRS{address} with the MAC addresses of your three cards. While the SUSE system had a problem with re-using the standard names, udev does not, as this renaming is done before any names are applied, so you could use eth0/1/2 here if you wished. You may find you already have a file in /etc/udev/rules.d containing persistent net naming rules, in which case you should edit that file to add the above assignments. An alternative approach is to use the nameif command to rename the interfaces. This must be done before the interfaces are brought up. Create the file /etc/mactab containing a list of interface names and MAC addresses, like this:

nic0 aa:bb:cc:dd:ee:ff #internal
nic1 00:11:22:33:44:55 #external
nic2 66:77:88:99:00:aa #dmz

The nameif command will read this file and rename the interfaces accordingly. This should be considered only if you're not using udev, as udev rules provide the best way to handle persistent naming of network interfaces, and just about anything else. </answer>

<title>Which kernel?</title>

<question>I am trying to set up my system to use my Belkin USB wireless stick with ndiswrapper. The notes tell me I need a certain kernel as a minimum. I'm a new user, so can you tell me how I find this information? Also, can you give me any advice on setting up this item? </question>

<answer>There are various GUI tools that will tell you which kernel you're running: the KDE Control Centre shows it as `release' on the startup page, or you can use your distro's package manager to find the version of the kernel package (some distros call it `linux'). The simplest way is to open a terminal and type one of:

uname --kernel-release
uname -r
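
Since the wireless notes specify a minimum kernel, you can also compare the running release against that minimum in a small script. A sketch (2.6.16 is an arbitrary example minimum, not a figure from the Belkin notes):

```shell
# Compare the running kernel release against a required minimum version
REQUIRED="2.6.16"                   # example minimum; substitute the one you need
CURRENT=$(uname -r | cut -d- -f1)   # strip any distro suffix from the release
# sort -V orders version strings; if REQUIRED sorts first (or equal), we pass
if [ "$(printf '%s\n' "$REQUIRED" "$CURRENT" | sort -V | head -n1)" = "$REQUIRED" ]; then
  echo "kernel $CURRENT is new enough"
else
  echo "kernel $CURRENT is too old"
fi
```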

You may not need to use ndiswrapper as some Belkin wireless devices have native support. In this case run:

sudo lsusb

in a terminal to find out more about your device. Then search Google or your distro's forums for information on this device. You may also find details online of which driver would be best for you to use. If there's no native driver for your device you'll have to use ndiswrapper. The most important point to remember when doing this is to use the driver that came with the device. Manufacturers have a habit of changing the internals of devices while leaving the model number the same, so a driver for an apparently identical device may be useless. If your distro (you don't mention what you're using) has a tool for configuring wireless devices, use this rather than trying to set it up manually. Some, such as SUSE's Yast, will also set up ndiswrapper for you. </answer>

<title>Apache homes</title>

<question>I want to set up Apache so that users have personal websites in their home directories, with /homes/user/website serving as the site root. I know I can do this using the userdir module. However, the problem is that users mount their home directories from a Windows box. As such, when they drop files into this folder, it does not give Apache any permissions to read the files they put in. How can I set this up so anything the user drops into their public folder is readable by the Apache user automatically? I've seen mention of something called mod_rewrite but this doesn't seem to be the answer. Neither do I want the users to have to change permissions (too low-level for them!) or run some script every couple of hours to check their permissions! Is there an Apache module that can do something like this? </question>

<answer>mod_rewrite is a very powerful tool, but the wrong one for this job, as it rewrites requested URLs based on regular expressions. You were right with your first choice of the userdir module. Your problem boils down to making sure the HTML and other files that users drop into their web space are readable by the server without making the whole user directory world-readable, which is easily done with some carefully chosen ownerships and permissions. With the default Apache userdir configuration, http://hostname/~username/ is mapped to /home/username/public_html/. The first step is to make sure that the user directories are readable by their owners only:

chmod 711 /home/*

Then the public_html directories need to be readable by the group under which Apache runs. This is usually `apache', but some distros run the server as `nobody' – check the Group directive in the httpd.conf file. Then run:

chgrp apache /home/*/public_html
chmod 750 /home/*/public_html
chmod g+s /home/*/public_html

Now the users' directories can only be read by the users themselves (chmod 711), while the public_html directories belong to the `apache' group and can be read (but not written) by members of that group. The third command makes the directory setgid, so any files created in there will automatically belong to the apache group instead of the user's normal group. Ownership of the files stays with the user. If you want to use a different directory for the users' files instead of public_html, edit the relevant part of your Apache configuration. This can vary from one distro to another, but one of your config files will contain the line:

UserDir public_html

Change this to wherever you want the HTML files to be kept in each user's home directory. </answer>

<title>The Big Question</title>

<question>My son has a PlayStation Portable. I'd like to convert some DVDs and other video files to MPEG4 so he can watch them on long journeys. I'm sure Transcode or Mencoder should be able to do this, but their man pages are full of jargon. Is there an easy way to convert videos for the PSP? </question>

<answer> Yes there is! When converting from DVD, the easiest program is normally dvd::rip, a graphical front-end to Transcode, MPlayer and the like. However, it can't handle the variant of MPEG4 that the PSP uses, so you need FFmpeg, another command line program but one with less confusing options than Transcode or Mencoder. A GUI for FFmpeg, called Vive, can be found online. It only comes as source code but is very easy to install, so long as you have the compiler toolkit installed. Download the latest tarball from the site, currently 2.0.0-beta1, and install it with:

tar xf vive-2.0.0-beta1.tar.gz
cd vive-2.0.0-beta1
su -c "make install"

Give the root password when asked. Ubuntu users should replace the last command with the following, using their own password:

sudo make install

Vive should now be in your KDE or Gnome menu, or you can run it from the command line with vive. Vive uses presets to collect settings for types of output. There's a sample settings file that's not installed by default; install it with

mkdir ~/.vive
cp /usr/share/doc/vive/examples/preferences ~/.vive

This file contains a preset for iPod/PSP videos, but doesn't generate PSP-specific files, nor does it handle widescreen videos. Add this to the preferences file:

comment=Encoded by Vive

For widescreen videos, copy the block, alter the name to, say, PSPwide, and make the aspect, width and height values 16:9, 368 and 208. When you run Vive, you can select either a DVD title or a file to encode ­ press Load to have Vive read the list of titles from the DVD. Then choose the output file and a preset to use. You can also alter the values for video and audio encoding from the defaults of the chosen presets. Video files must be saved in the /MP_ROOT/100MNV01 directory on the memory stick and be named M4V00001.MP4, M4V00002.MP4 and so on. The Vive GUI can only convert one file at a time, but the program can be run from the command line for batch processing. To convert all the AVI files in a directory, try

for FILE in *.avi; do
  vive -p PSP -i "$FILE" -o "${FILE/.avi/.mp4}"
done
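
The ${FILE/.avi/.mp4} part is Bash's pattern-substitution parameter expansion, which swaps the extension in the output filename. A quick illustration (the filename is a made-up example):

```shell
# Bash pattern substitution: replace the first match of .avi with .mp4
FILE="holiday.avi"
echo "${FILE/.avi/.mp4}"   # prints holiday.mp4
```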