Answers 97

<title>DVB to DVD</title>

<question>I've got my DVB-T stick working but my wife still won't look at a computer screen; is there some way I can convert files saved from the stream into something that can be played on our DVD player through the television? </question>

<answer> DVB and DVDs use two variants of the same video format, MPEG2: DVB uses MPEG2-TS (Transport Stream) while DVDs use MPEG2-PS (Program Stream). The main difference is that Transport Stream is designed for use over an unreliable connection, such as a radio transmission, so it carries more redundancy and error correction, resulting in files that are around 30% larger. Converting from MPEG2-TS to MPEG2-PS is simple and fast because only the container and error-correction data are involved; the video itself doesn't need to be re-encoded. There are a number of programs you can use to turn a DVB MPEG into a DVD. One of the simplest, albeit rather slow, is tovid (http://tovid.wikia.com); its todisc command takes a list of video files in almost any format and converts them to a DVD ISO image. If you want a GUI for this, a couple of programs that you may find useful are dvdstyler (www.dvdstyler.de) and qdvdauthor (http://qdvdauthor.sourceforge.net). However, if you only want to create a DVD from a single MPEG2 file, these are overkill; a shell script will do the job more quickly:

#!/bin/sh
# Extract the audio and video streams from the recording given as the first argument
mplayer -dumpfile title.audio -dumpaudio "$1"
mplayer -dumpfile title.video -dumpvideo "$1"
# Remultiplex them into a DVD-compliant program stream (-f 8)
mplex -f 8 -o title.mpg title.audio title.video
# Build the DVD filesystem described in title.xml, then wrap it in an ISO image
dvdauthor -x title.xml
mkisofs -dvd-video -o title.iso dvd

Where title.xml contains:

<dvdauthor dest="dvd">
<vmgm /><titleset><titles>
<pgc><vob file="title.mpg" /></pgc>
</titles></titleset>
</dvdauthor>

This separates the audio and video streams, then recombines them with the data necessary for DVD authoring, but without the DVB extras, before creating a DVD file structure and writing it to an ISO image. Before writing the ISO image to a DVD, you can test it with:

mplayer -dvd-device title.iso dvd://1
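
If the image plays correctly, you can then burn it to disc. One common way, assuming your burner appears as /dev/dvd and you have the dvd+rw-tools package installed, is:

growisofs -dvd-compat -Z /dev/dvd=title.iso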

You will need mplayer, mjpegtools and dvdauthor installed to do this; all of them should be in your distro's repositories and most are probably already installed. Alternatively, if you use MythTV to record and watch the programmes, install the MythArchive plugin, which does DVD exports. It can combine several programmes onto a single disc, re-encoding if necessary to fit more on one disc (that takes a lot longer, but it's worth it if you are going to do this regularly and don't want to become overwhelmed with lots of discs), and offers a choice of menu styles and layouts. This is what I use most of the time. </answer>

<title>Butt-ugly buttons</title>

<question>I'm new to Linux, having got rid of Windows XP, and am now using PCLinuxOS 2007 on my Fujitsu Siemens Amilo xi 1546. When using Firefox, the radio buttons on web pages are ugly and not as smooth or round as in Internet Explorer. Is there a fix to make them look better? I searched the net and found something about putting two radio button images in the /Firefox/res folder and adding some code to /Firefox/res/forms.css, but the links to the code and images are dead because the thread I found them on is old. Being a newbie to Linux, can you make it simple? </question>

<answer>The default Firefox widgets do have a rough appearance. The fix you mention is probably the one by Osmo Salomaa, which you can download from http://users.tkk.fi/~otsaloma/art/firefox-form-widgets.tar.gz. But to save you the trouble, and in case the archive has gone AWOL again by the time you read this, we have included the file in the Magazine/Answers directory of this month's Linux Format DVD. To install it, exit Firefox, copy firefox-form-widgets.tar.gz from the DVD to your home directory, then open a terminal and type:

tar xf firefox-form-widgets.tar.gz
cd firefox-form-widgets
su
cat res/forms-extra.css >>/usr/lib/firefox-2.0.0.3/res/forms.css
cp -a res/form-widgets /usr/lib/firefox-2.0.0.3/res/
exit
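
The paths above include the Firefox version number, so if your PCLinuxOS install carries something other than 2.0.0.3, adjust the two /usr/lib paths to match; you can check which directory is present with:

ls -d /usr/lib/firefox*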

You will need to be root to modify system files, which is handled by the su command. The exit command switches you straight back to a normal user, as it's unwise to remain as root for any longer than is absolutely necessary. Incidentally, Ubuntu users get a graphical installer for these widgets, courtesy of one of their forum users. Find this at http://ubuntuforums.org/showthread.php?t=369596 </answer>

<title>Critical updating</title>

<question>I am the IT manager for a small company that provides web services to international branches, VPN solutions and other services, all on CentOS, as well as internal services such as Samba and CUPS. Patching Linux servers is a relative unknown to me, but I have to do it now. The paralysis brought on by fear of breakages can't continue: it will result in a less secure system. I've read book after book, article after article. They all seem to gloss over this topic with a catch-all "back up your data". Which data? It's not as simple as tarring up a home directory when it comes to enterprise services; they're all over the OS, with libraries that other services depend upon. What if an update breaks something? How do I roll back? I understand that the major server distributions spend a great deal of time making sure that their repositories are self-consistent, but there are things that never make it to the distros: certain CRMs, for example, third-party webmail solutions and so on. Anything more than one package with similar functionality could feasibly mean that I end up chasing dependencies by hand if something goes wrong. The ideal solution is, of course, to apply the patch to a test environment first. In truth though, how many people have a mirror of every live service available all the time? A failover box may be available, but I'd rather not change the one thing I know should work if everything else fails. Virtualisation seems to be the way to go: virtualise your environments, take a snapshot, apply the patch, roll back the entire operating system if something goes wrong. This seems a little inelegant though, like changing your car when you run out of petrol. </question>

<answer>The car analogy seems a little strange: rolling back to a snapshot only undoes the changes made since the snapshot was taken; it is like an undo function, but to a fixed point in time rather than a single operation. With critical production servers, you really do need to test everything on a separate system before applying it to the live servers. You are thinking along the right lines with virtualisation, but use it for the test environments rather than the live servers. That way you can effectively have test versions of all of your systems on one or two machines. This has a number of distinct advantages. First, you can use a single box with a number of virtual machines on it, which requires no more resources than a single box running any one of those servers, with the obvious exception of disk space. When you want to update a particular system, load the virtual machine, apply and test the updates, and replicate them on the production server when you're completely satisfied that they work reliably. If there's a problem, revert the snapshot and try again; all the while your production server carries on reliably doing its job. Another advantage of testing on a separate system first applies when you're installing from source: you don't need to compile on the production system, so you don't need a full compiler toolchain on that box. This reduces the number of packages installed on the production server and so improves its security. You can use checkinstall (http://checkinstall.izto.org) to build RPM packages of the program for installation on the production systems.
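
As a rough sketch of that checkinstall workflow (the package name and version here are only examples), build on the test virtual machine as usual but replace the final make install with checkinstall, then copy the resulting RPM to the production server and install it there:

./configure && make
checkinstall -R --pkgname=mywebmail --pkgversion=1.2
rpm -Uvh mywebmail-1.2-1.i386.rpm

</answer>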

<title>Raid Aid</title>

<question>We've set up an Apache Tomcat server with two 500 GB drives using software RAID 1. I made a few changes to some files, restarted the server to test them and found the changes I had made to the files were gone. Some files I had deleted had also reappeared. I checked my mail and had received errors from mdadm.

A DegradedArray event had been detected on md device /dev/md0.
The /proc/mdstat file currently contains the following:
Personalities : [raid1]
md1 : active raid1 sda2[0] sdb2[1]
     1959808 blocks [2/2] [UU]
md0 : active raid1 sda1[0]
     486424000 blocks [2/1] [U_]
unused devices: <none>

I'm making a backup of all the important information, but if possible I'd like to salvage the server, since the setup was very specific and time consuming. I'm new to the world of Linux administration, and unsure where to start. </question>

<answer>The contents of /proc/mdstat indicate that a drive has failed on the md0 array (/dev/sdb1?). Your machine will continue to function with a degraded array, but with slightly reduced performance and no safeguard against another disk failure. There are a number of tools available to test the disk, but the safest option is to replace it and rebuild your arrays. This will also mean replacing /dev/sdb2 of course, so the other array will have to be rebuilt too. Fortunately, this is a simple task and largely automatic, but it can take a while. You can also continue to use the computer after replacing the faulty disk while the arrays are being rebuilt, but this will result in noticeably reduced disk performance. It is easiest if you can add the new disk before removing the old one as this means you can rebuild md0 first, then switch md1 to the new disk at your convenience. Assuming your new disk is added as /dev/sdc, connect it up and reboot. Then partition the disk as you did for sda and sdb, setting the partition types to Linux Raid Autodetect. Now run these commands as root, to remove the faulty disk from the array and add the new one:

mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md0 --add /dev/sdc1
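
Incidentally, a quick way to partition the new disk identically to the old ones (do it before running the mdadm commands above, and double-check that /dev/sdc really is the new, empty disk) is to copy the partition table across:

sfdisk -d /dev/sda | sfdisk /dev/sdc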

When the new disk is added to the array, the RAID driver will synchronise it with the existing disk. This can take a while; monitor the contents of /proc/mdstat to follow the progress. When the process is complete you'll have both your arrays working correctly, but using three disks, one of suspect reliability, so repeat the above commands for md1, sdb2 and sdc2 to transfer the other array to the new disk (see the example below). Now you can power down and remove the faulty disk when it suits you, as it is no longer in use. Needless to say, as with any critical disk operation, you should ensure your data is backed up before you do any of this. You can check the old disk with smartmontools (http://smartmontools.sourceforge.net), which is probably available in your distro's repositories, or check the manufacturer's website: most of them provide a diagnostic tool that runs from a bootable floppy disk, which you will need if the disk is to be returned under warranty. If the computer has no floppy drive, most of the diagnostic programs can be run from the Ultimate Boot CD (www.ultimatebootcd.com).
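
Moving the second array is just a repeat of the same steps with the md1 partition names, and you can watch the resync as it happens:

mdadm /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2
mdadm /dev/md1 --add /dev/sdc2
watch cat /proc/mdstat

</answer>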

<title>Starting over</title>

<question>A few months ago, having installed Mandriva on my system, I replaced it with SUSE 10.2 from your LXFDVD91. I have two internal hard disks, both split into two partitions. Windows 2000 shows them as C and D on the 0 disk and F and G on the 1 disk. SUSE is installed on drive F. I also have an external disk, which is drive J. When I originally installed Linux, I unfortunately had drive J switched on, and unless this is switched on at start-up, cursoring down the available items on the start menu is not possible. What I wish to do is uninstall Linux from the system completely and start again with another, larger, secondary hard disk. However, nowhere in your tutorial pages or the Help tabs in Linux software does there appear to be any means whereby I can go back to my simple Windows 2000 and HD 0. I suspect that if I wipe the Linux software from HD 1 (drive F in Windows), which I am tempted to do at the moment, there will be no menu appearing at switch-on, so I am completely baffled. I'm hoping that you could please tell me how to uninstall Linux completely, so that I am able to go back to where I was before installing Mandriva. Given the choice at switch-on between Win2000 and XP, and not having to have the external drive switched on all the time, I would be very grateful to be able to sleep soundly again. </question>

<answer>The boot menu would probably start to work if you left it for a while; it appears that the Grub bootloader is trying to read the missing disk and should time out eventually. You do not need to reinstall to fix this, only edit the bootloader configuration, which you should be able to do from YaST. Boot with the external drive connected, then unmount and disconnect or power off the drive. Run YaST, go to System > Boot Loader > Boot Loader Installation and select Propose New Configuration from the popup menu at the bottom right of the window. This should scan your disks (which no longer include the external drive) and set up a new menu for your Windows and SUSE installations. Go into the Section Management tab to make sure everything is as you wish and then click Finish. If you really want to remove Linux from these disks, select Restore MBR of Hard Disk from the same popup menu, which will replace the bootloader code with whatever was there before you installed SUSE. If that was the Windows bootloader, fine; but if you went straight from Mandriva to SUSE, this will restore the Mandriva boot code, which you don't want. In that case, boot your Windows CD in rescue mode and run fixmbr, which will wipe the Linux bootloader code and replace it with the Windows bootloader. Alternatively, you could simply fit the new, larger secondary disk (which would probably stop the current boot menu working anyway, without doing any of the above), boot straight from the SUSE install disc, install it and let it set up a new boot menu for you, making sure you leave the external drive disconnected this time. SUSE, like all current Linux distros, is quite capable of detecting the external drive when you connect it after installing the operating system. </answer>

<title>Safe surfing</title>

<question>I've been toying with getting into Linux for a couple of months now. I tried downloading a distro, but struggled with the amount of technical jargon involved. I've loaded Ubuntu 7.04 and I love it. I'm still struggling to get my head around the fact that it is free, and so is the load of other software that came with it, but I'm sure I'll get used to this. As I'm new to this, I need to double-check that what I am doing is safe and that I'm not opening my PC up to external hackers. Are there steps that I should be taking to put in a firewall and virus-checking software? I've installed Ubuntu 7.04 as a dual boot with Windows XP Home edition. On XP I have F-Secure 2007, a combined firewall and virus checker. I connect to the internet using an external modem-router via an ethernet cable. </question>

<answer>Viruses are not a real problem with Linux, although it is good to be prepared. The most popular anti-virus program for Linux is ClamAV (www.clamav.net), which is included with Ubuntu and can be installed with the Synaptic package manager. ClamAV detects Windows viruses as well as any targeting Linux, which, combined with the plugins available for most mail programs, means you can also use it to make sure no nasty attachments reach your Windows setup via your Linux mailer. Firewalling is handled differently on Linux than on Windows. The lack of spyware, and the virtual impossibility of embedding it in open source software, means that a Linux firewall concentrates on keeping out intruders. The netfilter software is built into the kernel, so the various firewall programs you see provide more or less easy ways of setting up, testing and applying the filtering rules. There are several packages in the Ubuntu repositories that are well worth looking at, including Firewall Builder (www.fwbuilder.org), Guarddog (www.simonzone.com/software/guarddog) and Shoreline Firewall (www.shorewall.net). The first is a GTK program that fits in well with the default GNOME desktop, while Guarddog is a KDE program; they offer similar features but with a different approach. Shoreline Firewall is a script-based program that is definitely harder to set up the first time but provides more flexibility. Any of these is capable of protecting your system, so try them and see which you like best. You should also reduce the chances of intruders even reaching your firewall. Your router is the first line of defence, so turn off any port forwarding services you do not need. You should also disable any unnecessary services in Ubuntu's System > Services window, although be careful about what you disable here: some services are needed for normal operation of the computer. If unsure, turn off services individually and keep track of what you have done so you can turn them back on if you experience problems. Although Linux is inherently more secure than Windows, this should not be relied on; Linux programs can have security holes too. These are usually fixed promptly, so keep your system up to date. The four steps of blocking at the router, disabling unnecessary services, running a firewall and keeping your software updated will mean you can use the Internet with confidence.
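
Once ClamAV is installed, a quick way to see it in action is to update the signature database (if the freshclam service isn't already doing that for you) and scan a directory from a terminal; the directory here is only an example:

sudo freshclam
clamscan -r -i ~/Downloads

</answer>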

<title>A swap for Vista</title>

<question>I was messing about with trying to create a swap partition on an old flash mp3 player, and I accidentally made a swap file on my Vista partition. I have not switched the swap file on, and used the mkswap command to make the swap file. My Vista still works, although I have to boot in through the recovery partition, so I am guessing it has only affected the start of the drive. Is there any way to reverse the mkswap command to enable me to fix the Vista partition? I have checked in GParted, and it reports the drive as a swap drive. Fdisk shows the partition as NTFS, which it should be, but there's no * under the boot heading. Does that mean that if I can restore the Vista boot info to the disk, it should work? </question>

<answer>If Vista still works, the partition must be OK. It looks like you have changed the partition type setting, probably to Linux Swap, and cleared the boot flag. This means that Windows cannot recognise the partition and that the bootloader does not think it can boot from here. Use whatever partition editor you prefer to set the partition type to NTFS (07) and set the bootable flag for it. I find cfdisk easy for this, and it is on just about every live CD I have ever tried. Boot from a live disc, open a root terminal and run cfdisk with:

cfdisk /dev/hda

Once cfdisk starts, select the partition, press t to set the type and choose NTFS from the list of alternatives, then press b to make it bootable. Finally, press W (a capital W) to write the changes to the disk. You can also do this with a graphical editor like gparted or qtparted, but I find cfdisk faster for this. You don't even need to wait for a desktop to load if your favourite live disc has an option to boot straight to a shell prompt (Knoppix users, for instance, can type knoppix 2 at the boot prompt).
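
If you prefer plain fdisk, the same change is only a few keystrokes. This sketch assumes the Vista partition is the first one on the disk, so check the layout with the p command first:

fdisk /dev/hda
t    (change a partition's type: give the partition number, then 7 for HPFS/NTFS)
a    (toggle the bootable flag on the same partition)
w    (write the changes and exit)

</answer>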

<title>Can USB Samba?</title>

<question>I have set up a small server running Debian Etch, mainly to use as a fileserver but also eventually for some web-based stuff. I have a USB hard drive that I want to use as shared storage via Samba. My problem is that no matter what I do the drive is always mounting as root. If I set the mount point permissions to 777, user=guest and group=users and then mount it as a normal user, the permissions stay the same but user and group both revert to root. So I still can't write to the drive. If I mount as user root I have no problems accessing locally but in either situation Samba then won't let me write either. Someone suggested this was maybe a udev issue and that I needed to play with that so that the permissions are altered when it mounts. I'm not up on udev so don't know where to start. The drive is sda with partitions sda1 and sda2. </question>

<answer>Udev only handles creation of the device node (/dev/sda1 or whatever), not the mounting, so this is unlikely to be at fault. It is possible that udev is creating the node with restrictive permissions, but that would only stop users mounting the device (not root); it wouldn't affect the mounted filesystem. The user mount option doesn't take the name of a user; it simply allows any user to mount that filesystem, and it doesn't affect the permissions of the filesystem either. The solution to your problem depends on the type of filesystem you are using. If this is a Linux-type filesystem that supports user permissions, setting the ownership and permissions of the mount point should suffice, but you have to do this after the filesystem has been mounted, otherwise you only affect the mount point, not the mounted filesystem. With Windows filesystems, particularly FAT32, you can add the option umask=002 to /etc/fstab to make all files user- and group-readable and writeable, then use the uid and gid options to set ownership of all files in the filesystem. You can use numeric values here or user and group names, eg:

/dev/sda1 /mnt/somewhere vfat umask=002,uid=guest,gid=users 0 0 
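
If the drive holds a Linux filesystem such as ext3 instead, the equivalent fix (assuming the same guest user, users group and mount point as above) is to set the ownership and permissions once the filesystem is mounted:

mount /dev/sda1 /mnt/somewhere
chown -R guest:users /mnt/somewhere
chmod -R 775 /mnt/somewhere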

</answer>

<title>Insecure FTP</title>

<question>I have an external server that acts as an FTP server for the company personnel and also as an anonymous FTP server for our clients. It had been a good little server until recently, when we found that it was being abused by folks other than our clients and causing network bandwidth issues for us. We don't have issues if the server is running after hours, as that doesn't affect the staff who use the same network during the day, so I put the service in a cron job and have it stop in the morning and restart when the office closes down for the day. This solved the network bandwidth issues, but then caused another problem: the staff who need to update the files on the FTP server need to be able to do so during the working hours of the office. I need to have it running for the local staff but not accessible from the outside. I have some thoughts, but they involve modifying either the hosts.allow/hosts.deny files or using some kind of xinetd trick. I'm not sure what would be a good solution for this. The server is running CentOS 4.5, using vsftpd running as a standalone daemon. The machine only has a single network card and IP address and is only visible via that address. </question>

<answer>While it is possible to do what you want and only make the public server available out of office hours, this is not the ideal solution. It is reasonable to assume that those abusing your server are not always putting legal material there, which could lead to legal action against you or the loss of your Internet connection. Remember, this is your server and you are responsible for the content available from it. Providing anonymous upload and download capability is asking for trouble. If you must do this, keep the upload area separate from the downloads, so people cannot download material that has been anonymously uploaded; it has to be moved over by someone with a login account. A better solution is to disable anonymous uploads altogether and provide your clients with their own FTP accounts. If you really want to continue offering unrestricted anonymous access out of office hours, you can use the hosts.allow and hosts.deny files in /etc. You enable this by putting

tcp_wrappers=YES

in /etc/vsftpd.conf. Then ensure your local network has access by adding this line to /etc/hosts.allow:

vsftpd: 192.168.1.

Note the trailing "." on the address, which matches the whole subnet; change the address to match your network. Now put these two lines into /etc/cron.d/vsftpd:

0 18 * * 1-5 root sed -i '/^vsftpd/d' /etc/hosts.deny
0 8 * * 1-5 root echo "vsftpd: ALL" >>/etc/hosts.deny

and force cron to reload with

killall -HUP crond

This modifies hosts.deny to deny all addresses except those specified in hosts.allow between 0800 and 1800, Monday to Friday, and clears the block at other times.
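
If you decide to go the other way and disable anonymous uploads instead, the relevant vsftpd.conf settings would look something like this (a sketch; check them against your existing configuration before restarting vsftpd):

anonymous_enable=YES
anon_upload_enable=NO
local_enable=YES
write_enable=YES

</answer>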

<title>Setting up a server</title>

<question>I'm setting up a web/mail server. I'm a novice, so what would you recommend as a user-friendly, secure Linux OS and web/mail server software with an easy interface for sharing Windows files? Preferably something with a large user manual; it will be used in a small office. </question>

<answer> Oh dear, all I can say with any certainty is that whatever I recommend will produce howls of disagreement from those who favour other distributions. While the popular distributions are general purpose and all suitable for desktop, workstation or server use, there are a number of smaller distributions specifically designed for the sort of thing you want to do. One such distro is ClarkConnect (www.clarkconnect.com). While ClarkConnect is best known as an Internet gateway, a means of connecting a network to the Internet with suitable content filters and access controls, it can also be used as an intranet server. As you admit to being a novice and will be using this in a commercial environment, I strongly suggest that you consider taking one of the paid-for versions, although you could install a free version first to try it out. ClarkConnect provides Community, Office and Enterprise versions. The Community Edition is completely free, while the other two have 30-day trial periods; the paid-for editions offer extra features and, most importantly, support. ClarkConnect needs a keyboard and monitor attached to the server for installation and basic initial configuration; after that, everything is done via its web interface. Administration is done over a secure SSL connection using a non-standard port. You need to know the IP address of the ClarkConnect server, which you can see after logging in with the root password; then you can use any web browser on the network to connect to https://ip-address:81. A Linux installation is as secure as you make it. Using a server-oriented distro with a minimal number of packages helps, but it is still up to you to ensure that software is kept up to date (use the Software Install menu in the administration interface) and that you set up a firewall. The firewall is included and set up from your browser. Even if you are not using this as an Internet gateway, it is wise to protect the services and data on the box with its own firewall, in addition to whatever you have on your Internet gateway or router. It is also possible to use a general purpose distro for this task, by opting to install server instead of desktop packages when you install it. Most distros include the excellent Webmin program, which can be used to administer servers with a web browser, but for a separate server machine, especially when your experience is limited, a purpose-built distro is probably the best choice. While ClarkConnect doesn't come with a printed manual, there is plenty of documentation on the website, both in the detailed user guide and the various HowTos. There are also user forums for peer group help. </answer>