Answers 95

From LXF Wiki

<title>Remote confusion</title>

<question>I got into Linux many years ago after installing Red Hat 5.1 on my Amiga 4000. While I managed to get to grips with it fairly well, I have never succeeded in getting a remote X session to work. I can log in via SSH and use the shell, but I really want to access my remote machine with X. My remote machine runs MythTV on Kubuntu, and the one I want to access it from is running Gentoo. I only want to access the desktop for simple administration tasks (not viewing MythTV), so it shouldn't be impossible, but I've got so confused as to which is considered client and server or remote and host that I'm lost! I'm using AMD64 and some don't seem to like it. </question>

<answer>I too started using Linux on an Amiga 4000 (with Red Hat 4.5); things were nowhere near as easy back then as they are now. Remote X access is relatively straightforward, and useful with MythTV because the mythtv-setup program can run on a remote back-end but opens an X window. The client/server thing can be confusing with X if you are used to the web model of considering the remote machine to be the server and your desktop computer the client. The X server is the program responsible for creating the X display, so it runs on the local machine. The clients are the programs running on that display. So your Gentoo desktop is the server and the programs on the MythTV box are the clients. Running an X program on a remote server over SSH is straightforward and works with the default SSH settings in Gentoo and Kubuntu. SSH into your Kubuntu machine from your Gentoo box with the -Y option. You can then run X programs and make them open their windows on your Gentoo desktop. For example, doing

[user@gentoo]$ ssh -Y kubuntu
user@kubuntu's password:
[user@kubuntu]$ mythtv-setup

will run the mythtv-setup program from the Kubuntu box on your Gentoo desktop. You may occasionally find that you cannot log out of the SSH session after running an X program. This can be caused by the program having started other processes that are still running; for example, KMail opens a couple of communication sockets. Run ps in another SSH session to identify these, then kill them and you will get your prompt back. The other applications you refer to are probably desktop-sharing programs, which mirror or open an X desktop on a remote machine. These require X to be at least installed on the remote computer, and in the case of programs that mirror it, the desktop must be running. As you are using KDE, the simplest of these is KDE's own krfb and krdc. The former is a server, run on the remote computer and configured in the KDE Control Centre. The latter is run on the local box to show the other computer's desktop in a window. Both are installed by default in Kubuntu; you will need to emerge kde-base/krdc on your Gentoo system. VNC works differently by opening a desktop screen specifically for the remote display, separate from any local desktop screen running. </answer>

<title>CentOS conversion</title>

<question>According to the CentOS website at www.centos.org, CentOS "aims to be 100% binary-compatible" with "a prominent North American enterprise Linux vendor." That got me thinking. Can you point Yum on an honest-to-goodness install of Red Hat to the CentOS repositories? I've noticed when upgrading my CentOS box that a lot of the packages still have the Red Hat name (such as patch_for_foo-RHEL-6.3.2). So it would seem that this could be a way to keep a server up to date after your Red Hat service runs out. I know it would not be the ideal way to do things, but would it work? </question>

<answer>This would seem to be possible, according to reports from the CentOS forums, provided you are using equivalent versions, such as going from RHEL 5 to CentOS 5. You have the choice of either using the CentOS repositories instead of the Red Hat ones or converting your installation from Red Hat Enterprise Linux to CentOS. Before you do anything else, you should make sure you are no longer registered with Red Hat Network. Put this in your Yum configuration to add the CentOS repositories:

[centos5-base]
name=CentOS-5-Base
mirrorlist=http://mirrorlist.centos.org/?release=5&arch=$basearch&repo=os
gpgcheck=1
enabled=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5
[centos5-updates]
name=CentOS-5-Updates
mirrorlist=http://mirrorlist.centos.org/?release=5&arch=$basearch&repo=updates
gpgcheck=1
enabled=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5
[centos5-plus]
name=CentOS-5-Plus
mirrorlist=http://mirrorlist.centos.org/?release=5&arch=$basearch&repo=centosplus
gpgcheck=1
enabled=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5

Disable your RHEL repositories by changing the enabled=1 line to enabled=0 for each of them. Those settings have gpgcheck turned on, so each package is verified against the CentOS GPG keys before installing. You can install these keys with

rpm --import http://isoredirect.centos.org/centos/5/os/i386/RPM-GPG-KEY-CentOS-5
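
Going back to the earlier step of disabling the RHEL repositories: rather than editing each file by hand, the enabled flag can be flipped with a short loop. This is only a sketch; it assumes the Red Hat repository definitions live as redhat*.repo files under /etc/yum.repos.d, so check the actual filenames on your system and back the directory up first.

```shell
# disable_repos: set enabled=0 in every redhat*.repo file in the given
# directory. The redhat*.repo pattern is an assumption - adjust it to
# match the files actually present on your system.
disable_repos() {
  for repo in "$1"/redhat*.repo
  do
    [ -f "$repo" ] || continue            # the glob may match nothing
    sed -i 's/^enabled=1/enabled=0/' "$repo"
  done
}

# On a real system, after backing the directory up:
#   disable_repos /etc/yum.repos.d
```

Taking the directory as an argument means you can try the function on a copy of the files before touching the live configuration.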

If you want to switch over to CentOS completely, you need to install two small packages from the CentOS repositories: either centos-release-5-0.0.el5.centos.2.x86_64.rpm and centos-release-notes-5.0.0-2.x86_64.rpm or centos-release-5-0.0.el5.centos.2.i386.rpm and centos-release-notes-5.0.0-2.i386.rpm, depending on whether you are running a 64-bit or 32-bit system. You should also make sure that you remove any Red Hat *-release-* packages. You may get conflict warnings from Yum because you still have the RHEL versions of most packages installed. The best long-term solution to this is to install the CentOS packages, turning your system into a pure CentOS one. As you no longer have a RHEL support subscription, there is no benefit in keeping the Red Hat-branded packages installed, and moving over to a pure CentOS system will make it easier if you need support from the CentOS community. </answer>

<title>Shrinking OGGs</title>

<question>Is it possible to reduce the bit rate of OGG files? I encoded at 458kbps, and they are taking up too much disk space. </question>

<answer>The Ogg Vorbis specification includes the ability to reduce the bit rate of a file without re-encoding, but the current software does not do this, so you need to uncompress and recompress each file. This does mean that there will be some loss of quality compared with encoding at the lower setting to start with, although it is likely to be minimal when coming down from such a high bit rate. Where you still have the original sources, re-encoding from scratch is the best option. Otherwise, this will decode and re-encode a single file:

oggdec oldfile.ogg -o - | oggenc -q N -o newfile.ogg -

Use whatever quality setting you want for N. Replace the -q with -b if you prefer to specify the average bit rate instead of the quality level. You can convert all files in a directory with

mkdir -p smalleroggs
for FILE in *.ogg
do
  if oggdec "$FILE" -o - | oggenc -q N -o "smalleroggs/$FILE" -
  then
    vorbiscomment -l "$FILE" | vorbiscomment -w -c /dev/stdin "smalleroggs/$FILE"
  fi
done

This re-encodes each file and copies the tags from the old file to the new one. If you want to recurse into a directory structure, you will need the find command to locate all *.ogg files. This version also overwrites the original files with the re-encoded versions, so use with care.

find . -name '*.ogg' | while read FILE
do
  NEWFILE=${FILE/.ogg/_new.ogg}
  if oggdec "$FILE" -o - | oggenc -q N -o "$NEWFILE" -
  then
    vorbiscomment -l "$FILE" | vorbiscomment -w -c /dev/stdin "$NEWFILE"
    mv -f "$NEWFILE" "$FILE"
  fi
done
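
One caveat with a plain `while read' loop: it can mangle filenames containing leading spaces or backslashes. Here is a more robust sketch under the same assumptions (oggdec, oggenc and vorbiscomment installed; replace N with your chosen quality); it uses null-separated names, which needs bash and GNU find:

```shell
# newname: derive the temporary output name from an .ogg filename.
newname() {
  printf '%s\n' "${1%.ogg}_new.ogg"
}

# Re-encode every .ogg under the given directory, copying the tags across
# and overwriting the originals - so test on copies first.
reencode_all() {
  find "$1" -name '*.ogg' -print0 | while IFS= read -r -d '' FILE
  do
    NEWFILE=$(newname "$FILE")
    if oggdec "$FILE" -o - | oggenc -q N -o "$NEWFILE" -
    then
      vorbiscomment -l "$FILE" | vorbiscomment -w -c /dev/stdin "$NEWFILE"
      mv -f "$NEWFILE" "$FILE"
    fi
  done
}
```

The -print0/read -d '' pairing passes each filename as a single null-terminated record, so no whitespace in the name can split it.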

</answer>

<title>Now serving: PHP</title>

<question>I'm having trouble with browsing .php files on my Linux (Mandriva 2007 Free) machine. It keeps trying to open them with KWrite instead of just running them. As I'm currently trying to teach myself PHP, when I'm running an HTML file that calls a PHP process I really don't want to look at the code; I want the PHP to just, well, run. </question>

<answer>You don't run PHP files from a file manager. PHP is a server-side scripting language, so you need to load the PHP page from a web server into your browser. Locally, they are just text files, and your file manager will perform whatever action it is configured to do on text files, which in your case means loading them into KWrite. This means you need to run your own web server, which is nowhere near as scary as it sounds. Fire up the Mandriva Control Center, go into the software installation section, type `mod_php' into the Search box and select apache-mod_php-5 for installation. This will also install various other packages that you need to serve PHP files. When the installation is complete, go into the System section of the Control Center and select the System Services item. Ensure that httpd (the Apache process) is set to start on boot and, if it is not running now, start it. Point your browser at http://localhost and you should see the Apache test page, or maybe just an `It works!' page, confirming that you now have a working web server. Now all you need to do is put your PHP files in the web server's DocumentRoot, the directory where it looks for files to serve. Mandriva defaults to using /var/www/html for this, so save the following as /var/www/html/test.php:

<?php
phpinfo();
?>

Load http://localhost/test.php into your browser and you should see some information about the server and the system running it. If so, Apache is not only installed, it is set up to serve PHP pages and you can continue learning the language. Good luck! You may run into permissions problems editing files as your normal user for inclusion in the DocumentRoot directory. This can be solved by adding your user to the Apache group and setting the directory to be writable by members of that group, by typing this in a root terminal:

gpasswd -a yourusername apache
chgrp apache /var/www/html
chmod g+w /var/www/html
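
You can check the result of those commands from a shell; note that the group list for your own account will only show the change after you have logged back in.

```shell
# List the groups the current user belongs to - 'apache' should appear
# here once you have logged out and back in after the gpasswd command.
id -nG

# Check the directory's group and write bit, if it exists on this system
# (expect group 'apache' and g+w after the chgrp/chmod above).
if [ -d /var/www/html ]; then
  ls -ld /var/www/html
fi
```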

You will need to log out and back in again for this to take effect. </answer>

<title>Synced storage</title>

<question>I have just installed a Buffalo LS-250 LinkStation [a networked storage device] on my home network (me running Kubuntu Dapper and three Windows XP machines). I have no problems at all copying files to and from my Dapper laptop and it was very easy to set up. But! What I would like to do is to sync my laptop with the LinkStation, and I'm not sure how to do it. I've successfully set up Unison between my laptop and one of the Windows XP machines, but I don't know if this is possible with the LinkStation. I've looked at rsync, but that too seems to need a software installation on both the laptop and the LinkStation. A straightforward command line copy would do me, so that I could write a script to copy only new files each way, but rsync now seems to be the default for that. Also, on the XP machines I can open and edit files on the LinkStation, but Samba only lets me open a copy on the Dapper laptop. Can this be changed? </question>

<answer>You actually have two Linux computers on your network, because the LinkStations run Linux too. There is an active community at http://linkstationwiki.net with plenty of information on the various LinkStation models, including your LinkStation Pro. Of most interest to you will be the replacement firmware project. FreeLink replaces the standard firmware with a Debian variant. This is more extreme than OpenLink but gives more flexibility, although you currently lose the web interface. OpenLink is based on the stock firmware but adds some software. The most interesting of these are SSH and rsync. However, the LS-LG that you have is a new model, and OpenLink did not support this at the time of writing, although that may have changed by the time you read this. If you don't wish to mess with your firmware, there is a much simpler solution. If you mount the device using Samba you can use rsync without installing anything on the remote machine as you are effectively syncing two local directories.

rsync -avx ~/myfiles/ /mnt/buffalo/myfiles/

You should be able to work with files directly on the device over SMB. As you use KDE, you should try the KIO slave route first, opening a file as smb://name/path/to/file. Try to browse the files in Konqueror and open them in your editor. If this fails, it is probably down to the share permissions and Samba setup. If you run the programs from a shell, you should be able to gain more information from the error message printed there. For example:

 kwrite smb://name/path/to/file

</answer>

<title>Blank SUSE</title>

<question>I hope there is a simple answer to this simple hardware-related question. Every time that I try to load SUSE 10.2 with my new 19-inch flat-screen monitor, I get the message `not supported'. How do I get over this? The computer works fine with an old 14-inch CRT monitor. </question>

<answer>A hardware issue that doesn't involve proprietary driver woes? Makes a change! Right, is this a single message right in the middle of your screen with nothing else displayed? If so, it is a message from your monitor telling you that the computer is sending a signal that is out of its normal range. It usually means the computer is trying to display too high a resolution or at too high a frequency. This is caused by the installer incorrectly recognising the monitor, so its idea of what the monitor can handle differs from what the monitor actually supports. There is a simple answer, as this affects only the installer, and that is to force the installer to use a lower resolution. Press the F3 key at the boot menu screen to select a different resolution. Work your way up the menu (lower resolutions are towards the top of the list) until you find a setting that works. As a last resort, you can install in text mode. This is less attractive and takes getting used to, but you end up with an identical installation. This problem affects only the installation; once it is complete, you will be able to choose suitable video settings to ensure you have a graphical desktop. It may well detect your monitor correctly at this stage. </answer>

<title>Out of sight</title>

<question>I want to access a computer, running without a monitor, via remote desktop connection (krdc). Because the remote machine boots without a monitor, X.org drops back to VGA (640x480). Is there any way I can force X.org to use a higher resolution? I do not want to use X forwarding, I need to view the whole desktop. The computer is running Debian Etch and I have attached my xorg.conf. </question>

<answer>This drop in resolution is caused by your X.org configuration. Here is the offending part of your xorg.conf:

Section "Monitor"
 Identifier "BenQ T701"
 Option      "DPMS"
EndSection

As you can see, once it is extracted from the whole file, the Monitor section gives no details about the monitor's capabilities and limitations. This is becoming a standard approach and generally works well with modern monitors that support EDID (Extended Display Identification Data). This is where the software queries the monitor and gets back the information needed to set up a suitable display. Since it is possible to damage a monitor by sending it a signal at too high a frequency or resolution (although most monitors have protection against that sort of thing these days), X.org falls back to a safe 640x480x8-bit display if it gets no response to its EDID query. The solution is quite simple: add the information on horizontal and vertical frequencies that X.org needs, and it will stop trying to query the nonexistent monitor. You need to add HorizSync and VertRefresh lines to the Monitor section above. If you ever connect a monitor to that computer, you will find the values in the monitor's manual. If you are never, ever going to connect a monitor to this system, you can use any reasonable figures (something like HorizSync 30-81 and VertRefresh 56-76 would suit many 1,280x1,024 LCDs); otherwise get them from the monitor's manual to make sure it works when you want to use it. After restarting X, you should find it opens a display at the resolution given in xorg.conf, 1,280x1,024. </answer>

<title>Where is Windows?</title>

<question>I have just installed Mandriva 2005 from your Special edition [Linux Format Special #1]. This is the second time I've done this. The first time I could read my Windows hard drives but this time I can't. I appear to be locked out. How can I get access to these disks as I did last time? The previous installation was on another hard drive, which I don't have any more. </question>

<answer>The solution to this depends on two things: the type of filesystem you are using on your Windows partition and what you mean by "locked out". If you had full read and write access to the Windows partition before, it is most likely using the FAT32 filesystem. In that case, if you mean you are able to mount the partition but not write to it, or descend into directories, this is a simple permissions problem. Fire up the Mandriva Control Center, go into the Mount Points section and select Create, Delete And Resize Hard Disk Partitions. Select your Windows partition, go into Expert mode and press the Options button. The box in the middle of the Options window will probably contain `defaults'. Tick the box labelled Umask=0, followed by OK and Done. You now need to remount the partition to apply the new settings. You could do this by rebooting, but this is Linux, not Windows, so open a terminal and type

 su -c "mount /mnt/windows -o remount"

replacing /mnt/windows with wherever your Windows partition appears. Give the root password and you can now read and write to your Windows partition. The reason for this is the umask=0 that you added to the partition's mount options. The Windows FAT32 filesystem doesn't have any file permissions of its own. This option tells the system to treat all files and directories as readable and writable by everyone. If your Windows partition uses the NTFS filesystem, the situation is more difficult. While read access for this filesystem has been around for a while, full read/write access has only recently become really usable. Read access can be enabled by following the steps outlined above, but replace the remount command with

 su
 umount /mnt/windows
 chmod 777 /mnt/windows
 mount /mnt/windows

You should now be able to read from, but not write to, your Windows partition. While it is theoretically possible to enable write support with this distribution, it is rather limited and more trouble than it is worth. Mandriva 2005 is generally considered to be rather old now, and in the intervening time things have moved on a lot in this area. I recommend you upgrade to the latest release: Mandriva 2007 Spring was on last month's DVD. </answer>

<title>On permissions</title>

<question>I had a Squid box working fine, but a power spike took out the boot sector of the disk. I have reinstalled (and taken the time to upgrade to Debian Etch). My problem is that the winbind_privileged folder is dynamically created at boot time. When it is created, the permissions are wrong. I need them to be root:proxy so that the proxy server can use the AD [Active Directory] authentication. How can I go about fixing this problem? </question>

<answer>The only reliable solution to this appears to be the slightly kludgy one: to allow the directory to be created and then change the group ownership. Using your favourite text editor, as root, edit the file /etc/rc.local and add the following before the final exit line:

if [ -d /path/to/winbind_privileged ]
then
   chgrp proxy /path/to/winbind_privileged
fi

Use the correct path for the winbind_privileged directory, of course. This script is run right at the end of the boot process. If you need to issue this command sooner, say immediately after Squid starts, you need to create a separate script. Put these lines into /etc/init.d/fixsquid:

#!/bin/sh
if [ -d /path/to/winbind_privileged ]
then
   chgrp proxy /path/to/winbind_privileged
fi

Once again, use the correct path for the winbind_privileged directory. Now make it executable and set it to run just after Squid starts by running these commands as root:

chmod +x /etc/init.d/fixsquid
ln -s ../init.d/fixsquid /etc/rc2.d/S35fixsquid

Init scripts are run in alphanumeric order, and Squid is run from S30squid, so this runs it soon after that (the S means a startup script; names that begin with K are run on shutdown to Kill the process). </answer>

<title>Wi-Fi woes... again</title>

<question>I have a somewhat similar problem to Andrew Wood in LXF90 [Wi-Fi Woes, Answers], but with a D-Link DWL-G122 Rev C USB dongle. I tried NdisWrapper on another D-Link DWL-G122 Rev B, and it works with the Windows drivers that come with it. However, with Rev C, I just can't get it working. Is there any way to determine which is the exact INF file to be used? Google tells me that the Rev C is using the Ralink RT73 chipset. How can I confirm it locally, with Linux (Mepis)? Is this RT73 the same as any of the RT2x00 chipsets? </question>

<answer>Sadly, this is an all too common problem. Manufacturers will change the internals of a product while leaving the outward appearance and name the same. This does not affect Windows users as long as they use the driver disc supplied with the device. You need to take the same approach with NdisWrapper: use the INF file from the disc that came with the device, probably rt73.inf. You can identify the device with the lsusb command. This will give you two hexadecimal numbers for the manufacturer and product IDs. For example, my D-Link shows

   `Bus 001 Device 005: ID 2001:3700 D-Link Corp. DWL-122 802.11b'

where 2001 and 3700 are the manufacturer and product IDs respectively. With these numbers you can find out more information at http://qbik.ch/usb/devices. The RT73 and RT25xx are different chipsets, but the RT2x00 project supports the RT73 too, as well as the RT61, yet another variation. There is also a standalone RT73 package from the RT2x00 site at http://rt2x00.serialmonkey.com. This is marked as a legacy package, but it is probably easier to install, so give it a try first. Download the rt73-CVS tarball, unpack it and follow the instructions in the README file. If this does not work for you, try the new RT2x00 driver set, which pulls the RT2400, RT2500, RT2700, RT61 and RT73 drivers into a single package. Being so new, it has to be downloaded from the project's Git repository; there is a link to full instructions on doing this on the project's downloads page. Another option is the Linux driver for the RT73 available from Ralink's website, currently at www.ralink.com.tw/data/RT73_Linux_STA_Drv1.0.3.6.tar.gz. The archive contains full installation instructions. You will need to compile the driver from the source code in the tarball, which means you will need your kernel source package and GCC installed, from the standard Mepis packages. You will also need to install a firmware file from www.ralinktech.com.tw/data/RT71W_Firmware_V1.8.zip. The situation should become a lot clearer as the RT2x00 driver package matures. I am no fan of NdisWrapper, but it is easy to see why people use it when it so often appears to `just work'. </answer>

<title>Updating Debian offline</title>

<question>We have a number of computers running Debian that do not have full internet access. Some are not networked at all. What is the best way to keep these up to date? Currently we copy updated Deb files to a CD and install them manually on each computer, but there must be a better way. We thought about a local Debian mirror, but that would consume a lot of bandwidth to keep up to date and still wouldn't help with the non-networked systems. </question>

<answer>The answer lies in a useful package called APTonCD. This creates a repository on a CD (or DVD) that you can use to install or update non-networked PCs. APTonCD is also a useful backup and replication tool, because you can use it to create CDs or a DVD containing all the packages currently installed on a computer, then use those discs to reinstall that computer or install the same set of packages on another machine. If you use Ubuntu Feisty you can install APTonCD via Synaptic, but it is not in the standard Debian Etch repositories, so get it from http://aptoncd.sourceforge.net and install it on each of your computers with

dpkg -i aptoncd_0.1~rc-0ubuntu1_all.deb
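
Incidentally, each disc you add later leaves a `deb cdrom:' entry in /etc/apt/sources.list, and old entries are easier to comment out with sed than to edit by hand. A sketch; the function takes the file path as an argument so you can try it on a copy before touching the real sources.list:

```shell
# comment_old_cdroms: comment out every 'deb cdrom:' line in a
# sources.list-style file, ready for the next APTonCD disc to add
# a fresh entry. On a real system, run it on a backup copy first,
# then on /etc/apt/sources.list.
comment_old_cdroms() {
  sed -i 's/^deb cdrom:/# &/' "$1"
}
```

The `&' in the replacement re-inserts the matched text, so the original line survives intact behind the comment marker.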

The easiest way to use this is to have one internet-connected computer that you keep up to date and use this to build CDs or a DVD to update the others. First run the program on your internet-connected computer and click on Create APTonCD. It'll scan your system for all packages in /var/cache/apt/archives, which is all the packages you've installed unless you've cleaned out this directory. You're then presented with the full list of packages, all selected. Remove any you don't want from the list (you may wish to do this to ensure it all fits on a single disc) and add extra packages. APTonCD will add the dependencies of any package you add, unless you tell it to not do this. APTonCD can burn to CDs or DVDs and will create as many discs as are needed to hold the files. Press OK and APTonCD will create one or more ISO images ready to burn to disc with your favourite CD/DVD burning app. The program will offer to burn the disc as soon as it has finished writing the ISO image(s). Once you have written the images to a CD or DVD, put it in one of your non-networked computers, run APTonCD and select the Restore tab. The first two options deal with restoring a system from the CD, which may be of interest at some time but isn't what you are looking for in your question. The third Restore option adds the disc as a repository, which can then be used by apt-get, Synaptic or other package management tool to update the computer. If you look in /etc/apt/sources.list, or select Settings > Repositories in Synaptic, you will see that your new CD has been added to the available software sources. Run the Update Manager and you can see and apply any updates to this system. It is a good idea to clean out your sources.list the next time you create and add a disc from APTonCD, otherwise you'll end up with several CD entries in here, one for each time you update. </answer>