Linux Posts

Linux

The Stupid Simple Guide to Setting Up Your Own DNS Server

I’m a developer, first and foremost. I like writing code. To me, maintaining servers, configuring things, troubleshooting network issues and the like are things I do to support my primary interest and job as a developer. I’m not ignorant of these things, but all things considered they’re not my favorite things to do. One thing I will admit I’ve been ignorant of over the years is DNS. Oh sure, I know at a high level how it works. I even know a bit about the different record types. I knew enough to have my own domain name, configured using GoDaddy’s DNS servers to point to my server. But actually running my own name server? That’s something I’d never done and, for some reason, had an unnatural fear of. Well, no more. I’m now running my very own shiny new name server and, actually, it wasn’t really as difficult as I thought. And because this was a learning experience for me, I figured I’d walk you through what I did as well.

Picking a Server

There are two big players in the “DNS server software” space: BIND and djbdns. BIND is the 900-pound gorilla that has been around forever and ever, and is insanely difficult to configure. djbdns is from the same guy who wrote qmail - I’ll let you be the judge of that. But after researching and actually attempting to install both of these, I eventually gave up. Both just came across as being too complex for a simple name server handling a couple of domains, and the documentation for both was equally complex. That’s when someone on Twitter pointed me to MaraDNS. I looked it over and was surprised to find good, readable and simple documentation that made it look easy to install. So I decided to give it a whirl. Here’s what I did. Note that this install is for a Gentoo system; yours will be different if you’re using something else.

Installing and Configuring MaraDNS

First step is to install it:

emerge maradns

And let Portage do its thing. Once it’s installed, you really only have to worry about a few files. In /etc/mararc, you need to check that you’re binding to the right interfaces. In my config, I bound it to the loopback and to the main interface:

ipv4_bind_addresses = "x.x.x.x, 127.0.0.1"

After that, you tell it to be authoritative, and which domains you want to serve records for:

csv2 = {}
csv2["robpeck.com."] = "zones/robpeck.com"

Note the period at the end of the domain name - it’s important. Each entry in the csv2 array should map to a zone file. I put mine in the “zones” subdirectory (which, in Gentoo, lives under /etc/maradns):

mkdir -p /etc/maradns/zones

Then, with your favorite editor (which should be vi :P), you create your zone file. The one for robpeck.com (partially) looks like this:

robpeck.com. NS ns1.epsilonthree.com.
robpeck.com. NS ns2.epsilonthree.com.
robpeck.com. +3600 A x.x.x.x
robpeck.com. +3600 MX 0 robpeck.com.
www.robpeck.com. +3600 CNAME robpeck.com.

So what are we doing here? Well, here it helps to know something about the different types of DNS records. I’m not going to cover all the different types of records - this is a good list of common ones and Wikipedia has a full list. The important ones you need to know are NS (name server), A (the main address record), MX (mail server records), and CNAME (alias). The “+3600” sets a TTL on the records of one hour (3,600 seconds); by default, the server sends one day (86,400 seconds).
Here, I’m telling the server what the name servers are (strictly speaking, this isn’t required, but I added it all the same) and that the main address for people requesting “robpeck.com” is this IP address. I’m also saying that people who request “www.robpeck.com” should get the IP address for “robpeck.com.” I also add an MX record that points to robpeck.com with 0 as the priority (the first, and only, server). That’s it! Restart MaraDNS:

/etc/init.d/maradns restart

And you can test it out:

dig @localhost robpeck.com A

You should get a big long printout, but what you want to see is these two lines:

QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 0
robpeck.com. 3600 IN A x.x.x.x

Assuming the above is the correct address, congratulations: your DNS server is now resolving properly locally.

Delegating your Domain

The next step is delegating your domain to your own server. I’m not going to cover this in too much detail because how it happens depends on the registrar. In general, it’s a two-step process:

1. Register your name server’s IP address to a name. At NameCheap, when you’re in the domain screen this is done under Advanced Options > Nameserver Registration. Under GoDaddy, this is under the “Hosts” section of the domain information screen. You need to add at least two “nsX.domain.com” entries, but they can both point to the same IP.
2. Delegate your domain to the names you just created. At NameCheap, you go to General > Domain Name Server Setup and choose Specify Custom DNS Servers, then enter the two (or more) “nsX.domain.com” names you just created. I can’t remember how I did this in GoDaddy, but I remember it was pretty apparent.

That’s it! They say it takes 24-48 hours, but I started seeing requests hit the new name server within about an hour. Of course, since I wasn’t actually changing IP addresses, there was no real downtime. As of now, all my domains are being served off my own nameserver. It’s kind of a neat feeling of accomplishment, knowing you’re not relying on someone else’s DNS setup - they’re just providing you a name. This makes transferring domains and adding new records much easier. And seeing as I’m currently in the process of transferring all my domains away from GoDaddy, it will ease the transition.
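Once the delegation has gone through, it’s worth checking that the rest of the world agrees with your server. Here’s a quick sanity check with dig, using Google’s public resolver as the outside observer (substitute your own domain and name server names):

```
# Who does the world think is authoritative for the domain?
dig @8.8.8.8 robpeck.com NS +short

# Does your name server's answer match a normal recursive lookup?
dig @ns1.epsilonthree.com robpeck.com A +short
dig robpeck.com A +short
```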
Read More
Ramblings

RIP Dennis Ritchie

Unlike Steve Jobs, unless you’re in the tech industry, there’s a pretty fair chance you’ve never heard of Dennis Ritchie.
Read More
Linux

Do Version Numbers Matter?

The recent announcement by Linus Torvalds that the next release of Linux will be 3.0 has provoked rather furious discussion around the Internets about whether or not the incrementing of the version number is warranted. Linus himself has said that “absolutely nothing” has changed. “It will get released close enough to the 20-year mark, which is excuse enough for me, although honestly, the real reason is just that I can no longer comfortably count as high as 40.”

This got me to thinking about the nature of version numbers. Once upon a time (when versions were driven more by engineers and convention, and less by marketing), a version number meant something: major, minor, revision. A major new release that modified significant portions of the code from the previous release incremented the major version number. Version numbers less than 1.0 were beta releases. Linux has been at 2.x since 1996, and at 2.6.x since 2003. Mac OS has been at 10.x since 2001 (even though the current version of OS X is significantly different from the original release in 2001).

Meanwhile, Google Chrome has blasted through 11 major “versions” in three years. Mozilla is planning to release versions 5, 6, and 7 of Firefox this year. You can’t tell me that they are going to change major parts of Firefox three times this year. In this case, version numbers are purely being driven by marketing: they need to “catch up” to Chrome and Internet Explorer.

But we live in a different world now, one where, arguably, version numbers are becoming less and less important. The growth of “app stores,” I think, is desensitizing the average user to version numbers. While apps in the app store still have versions, I couldn’t tell you what “version” any of the apps on my iPhone are (other than the OS), and I bet you can’t either. I couldn’t tell you the version of any of the apps I’ve installed from the Mac App Store. I just know that, when I see the number on the icon, I need to do updates. The updates happen, and I get a new version with whatever new features are there (or, in the case of the Twitter app, whatever features have been removed). Then there are web apps, which are versionless. What version of Gmail do you use? You don’t. You use Gmail. Sure, there’s probably a revision number or something in the background, but the user has no clue what version they’re using. And they don’t need to, because there’s no action they need to take.

So versions are numbered in a wide variety of ways depending on the product, and the growth of broadband, “app stores,” web apps, and automatic updates is making them matter less and less. So why does it matter if Linus ups Linux to 3.0? Ultimately, it’s just a number.
Read More
Linux

BASH Quickie: Backing Up MySQL Databases

In some ways, after years of doing programming and scripting, I’m now sort of rediscovering the power of the shell. Tonight, I was working on my server and remembered that I needed to start backing up my MySQL databases (which you do also … right?). So instead of writing a script to do that, with a little research, I was able to come up with a way to:

1. Dump each database to a separate SQL file, with a timestamp.
2. bzip the file.
3. Keep 5 days worth of backups for each database, rotating the oldest backup off.

Here’s what I came up with:

cd /backup/mysql; for i in $(mysql -BNe 'show databases' -u root -p<password>); do mysqldump -u root -p<password> $i | bzip2 > $i-`date +"%Y%m%d"`.sql.bz2; rm -rf $i-`date -d "-5 day" +"%Y%m%d"`.sql.bz2; done > /dev/null 2>&1

Shoved that in my crontab. Works great. Linux rocks.
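If you’d rather not squint at a one-liner in your crontab, here’s the same idea expanded into a commented script. It’s a sketch of the approach above, not a drop-in replacement: adjust the backup directory, credentials, and retention to taste, and consider putting the password in ~/.my.cnf instead of on the command line.

```
#!/bin/bash
# Nightly MySQL backup: one bzipped dump per database, keep 5 days' worth.

BACKUP_DIR=/backup/mysql   # where dumps are stored
MYSQL_USER=root            # password assumed to live in ~/.my.cnf
KEEP_DAYS=5

cd "$BACKUP_DIR" || exit 1
TODAY=$(date +%Y%m%d)
OLD=$(date -d "-$KEEP_DAYS day" +%Y%m%d)

for db in $(mysql -BNe 'show databases' -u "$MYSQL_USER"); do
    # Dump and compress today's copy
    mysqldump -u "$MYSQL_USER" "$db" | bzip2 > "$db-$TODAY.sql.bz2"

    # Rotate off the copy from KEEP_DAYS days ago, if it exists
    rm -f "$db-$OLD.sql.bz2"
done
```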
Read More
Apache

Automatically Provisioning Polycom Phones

The goals of this project were twofold:

1. To completely eliminate the need for me to touch the phone to provision it. I want to be able to create a profile for it in the database, then simply plug the phone in and let it do the rest. And…
2. To eliminate per-phone physical configuration files stored on the server. The configuration files should be generated on the fly when the phone requests them.

So the flow of what happens is this: I create a profile for the phone in the database, then plug the phone in.

1. The phone boots initially and receives the server from DHCP option 66.
2. A script on the server hands out the correct provisioning path for that model of phone, and the phone reboots with the new provisioning information.
3. The phone boots with the new provisioning information, begins downloading the updated SIP application and BootROM, and reboots.
4. The phone boots again and connects to Asterisk.

At this rate, provisioning a phone for a new employee is simply me entering the new extension and MAC address into an admin screen, and giving them the phone. It’s pretty neat.

**Note:** there are some areas where this is intentionally vague, as I’ve tried to avoid revealing too much about our private corporate administrative structure. If something here doesn’t make sense or you’re curious, post a comment. I’ll answer as best I can.

Creating the initial configs

I used the standard download of firmware and configs from Polycom to seed a base directory. This directory, on my server, is /www/asterisk/prov/polycom_ipXXX, where XXX is the phone model. Right now we deploy the IP-330, IP-331 and IP-4000. While the IP-330 and IP-331 can currently use the same firmware and configs, since the IP-330 has been discontinued they will probably diverge sometime in the not too distant future. With the base configs in place, this is where mod_rewrite comes into play. I added the following rewrite rules to the Apache configs:

RewriteEngine on
RewriteRule ^/000000000000\.cfg /index.php
RewriteRule /prov/[^/]+/([^/]+)-phone\.cfg /provision.php?mac=$1 [L]
RewriteRule /prov/polycom_[^/]+/[^/]+-directory\.xml /prov/polycom_directory.php
RewriteCond %{THE_REQUEST} ^PUT*
RewriteRule /prov/[^/]+/([^/]+)\.log /prov/polycom_log.php?file=$1

To understand what these do, you need to take apart the anatomy of a Polycom boot request. It requests the following files in this order: whichever bootrom.ld image it’s using, [mac-address].cfg if it exists or 000000000000.cfg otherwise, the sip.ld image, [mac-address]-phone.cfg, [mac-address]-web.cfg, and [mac-address]-directory.xml. So, we’re going to rewrite some of these requests to our scripts instead.
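Once the rules are in place, you can sanity-check the routing from the command line before a phone ever boots. A quick sketch - the MAC address here is made up, and the host and paths assume the layout described above:

```
# Should be answered by provision.php (via the rewrite), not by a static file:
curl -v http://server.com/prov/polycom_ip330/0004f2aabbcc-phone.cfg

# Should be answered by polycom_directory.php:
curl -v http://server.com/prov/polycom_ip330/0004f2aabbcc-directory.xml
```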
Generating configs on the fly

We’re going to skip the first rewrite rule (we’ll talk about that one in a little bit, since it has to do with plug-in auto provisioning). The one we’re concerned with is the next one, which rewrites [mac-address]-phone.cfg requests to our provisioning script. So each request to that file is actually rewritten to provision.php?mac=[mac-address]. Now, in the database, we’re keeping track of what kind of phone it is (an IP-330, IP-331 or IP-4000), so when a request hits the script, we look up what kind of phone we’re dealing with based on the MAC address, and use the variables from the database to fill in a template file containing exactly what that phone needs to configure itself.

For example, the base template file for the IP-330 looks something like this:

<sip>
  <userinfo>
    <server
<?php foreach($phone as $key => $p) { ?>
      voIpProt.server.<?php echo $key+1 ?>.address="<?php echo $p["host"] ?>"
      voIpProt.server.<?php echo $key+1 ?>.expires="3600"
      voIpProt.server.<?php echo $key+1 ?>.transport="UDPOnly"
<?php } ?>
    />
    <reg
<?php foreach($phone as $key => $p) { ?>
      reg.<?php echo $key+1 ?>.displayName="<?php echo $p["first_name"] ?> <?php echo $p["last_name"] ?>"
      reg.<?php echo $key+1 ?>.address="<?php echo $p["name"] ?>"
      reg.<?php echo $key+1 ?>.type="private"
      reg.<?php echo $key+1 ?>.auth.password="<?php echo $p["secret"] ?>"
      reg.<?php echo $key+1 ?>.auth.userId="<?php echo $p["name"] ?>"
      reg.<?php echo $key+1 ?>.label="<?php echo $p["first_name"] ?> <?php echo $p["last_name"] ?>"
      reg.<?php echo $key+1 ?>.server.1.register="1"
      reg.<?php echo $key+1 ?>.server.1.address="<?php echo $p["host"] ?>"
      reg.<?php echo $key+1 ?>.server.1.port="5060"
      reg.<?php echo $key+1 ?>.server.1.expires="3600"
      reg.<?php echo $key+1 ?>.server.1.transport="UDPOnly"
<?php } ?>
    />
  </userinfo>
  <tcpIpApp>
    <sntp
      tcpIpApp.sntp.address="pool.ntp.org"
      tcpIpApp.sntp.gmtOffset="<?php echo $tz ?>"
    />
  </tcpIpApp>
</sip>

The script outputs this when the phone requests it. Voila. Magic configuration from the database. There’s a little bit more to it than this. A lot of the settings custom to the company and shared among the various phones are in a master dealnews.cfg file, which is included with each phone (it was added to the 000000000000.cfg file). Now, on to the next rule.

Generating the company directory

Polycom phones support directories. There’s a way to get this to work with LDAP, but I haven’t tackled that yet. So, for now, we generate those dynamically as well when the phone requests any of its *-directory.xml files. This one’s pretty easy since 1) we don’t allow the endpoints to customize their directories (yet), and 2) every phone has the same directory. So all of those requests go to a script that outputs the XML structure for the directory:

<directory>
  <item_list>
<?php if(!empty($extensions)) { foreach($extensions as $key => $ext) { ?>
    <item>
      <fn><?php echo $ext["first_name"]?></fn>
      <ln><?php echo $ext["last_name"]?></ln>
      <ct><?php echo $ext["mailbox"]?></ct>
    </item>
<?php } ?>
<? } ?>
  </item_list>
</directory>

We do this for both the 000000000000-directory.xml and the [mac-address]-directory.xml file because one is requested at initial boot (the 000000000000-directory.xml file is intended to be a “seed” directory), whereas subsequent requests are for the MAC address specific file.

Getting the log files

Polycom phones log, and occasionally the logs are useful for debugging purposes. The phones, by default, will try to upload these logs (using PUT requests if you’re provisioning via HTTP like we are). But having the phone fill up a directory full of logs is ungainly. Wouldn’t it be better to parse them into the database, where they can be easily queried? And because the log files have standardized names ([mac-address]-boot/app/flash.log), we know what phone they came from. Well, that’s what the last two rewrite lines do. We rewrite those PUT requests to a PHP script and parse the data off stdin, adding it to the database. A little warning about this: even at low settings Polycom phones are chatty with their logs. You may want to have some kind of cleaning script to remove log entries over X days old.
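The table layout for those parsed logs isn’t shown here, so treat this as a sketch: assuming the log lines land in a phone_logs table with a created timestamp column (both names are mine, not from the setup above), a nightly cron entry like this keeps the table from growing without bound:

```
# Crontab entry: every night at 02:30, drop phone log rows older than 30 days.
# The "asterisk" database, "phone_logs" table and "created" column are hypothetical names.
30 2 * * * mysql -u root -e "DELETE FROM asterisk.phone_logs WHERE created < NOW() - INTERVAL 30 DAY"
```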
Passing the initial config via DHCP

At this point, we have a working magic configuration. Phones, once configured, fetch dynamically-generated configuration files that are guaranteed to be as up-to-date as possible. Their directories are generated out of the same database, and log files are added back to the same database. It all works well! … except that it still requires me to touch the phone. I’m still required to punch the provisioning directory into the keypad to get it going. That sucks. But there’s a way around that too! By default, Polycom phones out of the box look for a provisioning server on DHCP option 66. If they don’t find this, they will proceed to boot the default profile that ships with the phone. It’s worth noting that, if you don’t pass it in the form of a fully-qualified URL, it will default to TFTP. But you can pass any format you could otherwise add on the phone itself.

if substring(hardware, 1, 3) = 00:04:f2 {
    option tftp-server-name "http://server.com";
}

In this case, what we’ve done is look for a MAC address in Polycom’s space (00:04:f2) and pass it option 66 with our boot server. But we’re passing the same thing no matter what kind of phone it is! How can we tell them apart, especially since, at this point, we don’t know the MAC address? The first rewrite rule handles part of this for us. When the phone receives the server from option 66 and requests 000000000000.cfg from the root directory, we instead forward it on to our index.php file, which handles the initial configuration. Our script looks at the HTTP_USER_AGENT, which tells us what kind of phone we’re dealing with (they’ll contain strings such as “SPIP_330”, “SPIP_331” or “SSIP_4000”). Using that, we selectively give it an initial configuration that tells it the RIGHT place to look.

<?php
ob_start();
if(stristr($_SERVER['HTTP_USER_AGENT'], "SPIP_330")) {
    include "devices/polycom_ip330_initial.php";
}
if(stristr($_SERVER['HTTP_USER_AGENT'], "SPIP_331")) {
    include "devices/polycom_ip331_initial.php";
}
if(stristr($_SERVER['HTTP_USER_AGENT'], "SSIP_4000")) {
    include "devices/polycom_ip4000_initial.php";
}
$contents = ob_get_contents();
ob_end_clean();
echo $contents;
?>

These files all contain a variation of my previous auto-provisioning configuration, which tells the phone the proper directory to look in for its phone-specific configuration. Now, all you do is plug the phone in, and everything else just happens. A phone admin’s dream.

Keeping things up to date

By default, the phones won’t check to see if there’s a new config or updated firmware until you tell them to. But this also means that some things, especially directory changes, won’t get picked up with any regularity. A quick change to the configs makes it possible to schedule the phones to look for changes at a certain time:

<provisioning
  prov.polling.enabled="1"
  prov.polling.mode="abs"
  prov.polling.period="86400"
  prov.polling.time="01:00"
/>

This causes the phones to look for new configs at 1AM each morning and do whatever they have to with them.

Conclusions

The reason all this is possible is that Polycom’s files are 1) easily manipulated XML, as opposed to the binary configurations used by other manufacturers, and 2) distributed, so that you only need to actually send what you need set, and the phone can get the rest from the defaults. In practice this all works very well, and it cut the time it used to take me to configure a phone from 5-10 minutes to about 30 seconds - basically, as long as it takes me to get the phone off the shelf and punch the MAC address into the admin GUI I wrote. I don’t even need to take it out of the box!
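As a closing tip: because the whole flow happens over plain HTTP, you can smoke-test the user-agent branching described above without a phone in hand. This is just a sketch - the real phones send a longer User-Agent string, but index.php only checks for the model substring:

```
# Pretend to be an IP-331 requesting the bootstrap config; the response should be
# the output of devices/polycom_ip331_initial.php rather than a static file.
curl -s -A "SPIP_331" http://server.com/000000000000.cfg

# And an IP-4000:
curl -s -A "SSIP_4000" http://server.com/000000000000.cfg
```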
Read More
Apache

Google Chrome, Mac OS X and Self-Signed SSL Certificates

I’ve been using Google Chrome as my primary browser for the last few months. Sorry, Firefox, but with all the stuff I need for work installed, you’re so slow as to be unusable - up to and including having to force-quit at the end of the day. Chrome starts and stops quickly. But that’s not the purpose of this entry. The purpose is how to live with self-signed SSL certificates and Google Chrome.

Let’s say you have a server with a self-signed SSL certificate. Every time you hit a page, you get a nasty error message. You ignore it once and it’s fine for that browsing session, but when you restart, it’s back. Unlike Firefox, there’s no easy way to say “yes, I know what I’m doing, ignore this.” This is an oversight I wish Chromium would correct, but until they do, we have to hack our way around it.

Caveat: these instructions are written for Mac OS X. PC instructions will be slightly different, as PCs don’t have a keychain, and Google Chrome (unlike Firefox) uses the system keychain.

So here’s how to get Google Chrome to play nicely with your self-signed SSL certificate:

1. On your web server, copy the crt file (in my case, server.crt) over to your Macintosh. I scp’d it to my Desktop for ease of work. (**These directions have been updated.** Thanks to Josh below for pointing out a slightly easier way.)
2. In the address bar, click the little lock with the X. This will bring up a small information screen. Click the button that says “Certificate Information.”
3. Click and drag the certificate image to your desktop. It looks like a little certificate.
4. Double-click it. This will bring up the Keychain Access utility. Enter your password to unlock it.
5. Be sure you add the certificate to the System keychain, not the login keychain. Click “Always Trust,” even though this doesn’t seem to do anything.
6. After it has been added, double-click it. You may have to authenticate again.
7. Expand the “Trust” section. Set “When using this certificate” to “Always Trust.”

That’s it! Close Keychain Access and restart Chrome, and your self-signed certificate should now be recognized by the browser. This is one thing I hope Google/Chromium fixes soon, as it should not be this difficult. Self-signed SSL certificates are used **a lot** in the business world, and there should be an easier way for someone who knows what they are doing to ignore this error than copying certificates around and manually adding them to the system keychain.
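If you’d rather skip the clicking entirely, the same thing can be done from Terminal with the security command. A sketch, assuming the certificate you copied over is sitting at ~/Desktop/server.crt:

```
# Add the self-signed cert to the System keychain and mark it as trusted.
sudo security add-trusted-cert -d -r trustRoot \
  -k /Library/Keychains/System.keychain ~/Desktop/server.crt
```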
Read More
Apache

MySQL-based Apache HTTP Authentication for Trac and Subversion

In working on a side project with a few friendly developers, we decided to set up a Subversion repository and a Trac bug and issue tracker. Both of these, in normal setups, rely on HTTP authentication. So, since we already had an authentication database as part of the project, my natural first thought was to find a way to authenticate both of them against our existing MySQL authentication database rather than rely on Apache passwd files that would have to be updated separately. Surprisingly, this was more difficult than it sounded.

My first thought was to try mod_auth_mysql. However, from the front page, it looks as if this project has not been updated since 2005 and is likely not being actively maintained. Nonetheless, I gave it a shot and, surprisingly, got it mostly working against Apache 2.2.14. Notice I said “mostly.” It would authenticate about 50% of the time, while filling the Apache error logs with fun things like:

[Sat Feb 13 11:11:27 2010] [error] [client -.-.-.-] MySQL ERROR: Lost connection to MySQL server at 'reading initial communication packet', system error: 0
[Sat Feb 13 11:11:28 2010] [notice] child pid 19074 exit signal Segmentation fault (11)
[Sat Feb 13 11:34:14 2010] [error] [client -.-.-.-] MySQL ERROR: Lost connection to MySQL server during query:
[Sat Feb 13 11:34:15 2010] [error] [client -.-.-.-] MySQL ERROR: MySQL server has gone away:

Rather than tear into this and try to figure out why a 5-year-old auth module isn’t working against far newer code, and with very little to actually go on, I just concluded that it wasn’t compatible and looked for a different solution. That’s when I came across mod_authnz_external. If you’re not familiar with this module, it allows you to authenticate against a program or script running on your system, and therefore against anything you want - a script talking to a database, PAM system logins, LDAP, pretty much anything you have access to. All you have to do is write the glue code.

In pipe mode, mod_authnz_external uses the pwauth format, where it passes the username and password to the script on stdin, each on its own line, and uses exit codes to tell Apache whether or not the login was valid. Knowing that, it’s pretty easy to write a little script to read the username/password, run a query, and return the result:

#!/usr/bin/php
<?php
include "secure_prepend.php";
include "database.php";

$fp = fopen("php://stdin", "r");
$username = stream_get_line($fp, 1024, "\n");
$password = stream_get_line($fp, 1024, "\n");

$sql = "select user_id from users where username='%s' and password='%s' and disabled=0";
$sql = sprintf($sql, $db->escape_string($username), $db->escape_string($password));
$user = $db->get_row($sql);
if(!empty($user)) {
    exit(0);
}
exit(1);
?>

Then, you just hook this into your Apache config for Trac or Subversion:

AddExternalAuth auth /path/to/authenticator/script
SetExternalAuthMethod auth pipe

<Location />
    DAV svn
    SVNPath /path/to/svn
    AuthName "SVN"
    AuthType Basic
    AuthBasicProvider external
    AuthExternal auth
    require valid-user
</Location>

Restart, and it should all be working. Some may argue that the true “right” way to do this is LDAP. But with just three of us, LDAP is overkill, especially when we already have the rest of the database stuff in place. The big advantage to this, even over mod_auth_mysql, is the amount of processing you can do on login: you can run any number of queries in your authenticator script rather than just one.
You can update with last login or last commit date, for instance. Or you can join tables for group checking; say you want someone to have access to Trac, but not Subversion. You can do that with this.
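Because the script just reads two lines from stdin and signals the result via its exit code, you can also test it from a shell before wiring it into Apache. A quick check with made-up credentials:

```
# Should print 0 for a valid login, 1 for an invalid one.
printf 'someuser\nsomepassword\n' | /path/to/authenticator/script
echo $?
```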
Read More
Linux

Diffing files via FTP

I ran into a situation today where I needed to diff files on a remote server against the ones on a local server, and the only way I had to connect to the remote server was FTP. So I wrote a quick and dirty little script to diff files over FTP. It’s stupid simple - it downloads the file and runs diff on it against a local file, outputting the result. It’s great for finding changes on a webhost that cripples real developers by only offering FTP. It’s also a great companion to ftpsync, which apes some of the functionality of rsync, again on crippled webhosts. The command format is:

ftpdiff <local file> <username:password@host:/path/to/file>
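The original script isn’t reproduced here, but the idea is simple enough that a rough equivalent fits in a few lines of shell. This sketch uses curl to fetch the remote copy and diffs it against the local file, with the same argument format as above:

```
#!/bin/bash
# Rough ftpdiff equivalent: ./ftpdiff <local file> <user:pass@host:/path/to/file>
LOCAL="$1"
REMOTE="$2"

CREDS="${REMOTE%%@*}"      # user:pass
HOSTPATH="${REMOTE#*@}"    # host:/path/to/file
HOST="${HOSTPATH%%:*}"
RPATH="${HOSTPATH#*:}"

TMP=$(mktemp)
curl -s -u "$CREDS" "ftp://$HOST$RPATH" -o "$TMP" || exit 1
diff -u "$TMP" "$LOCAL"
rm -f "$TMP"
```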
Read More
Linux

ngrep and memcache

You can use the Linux command ngrep to “watch” what is going into and coming out of memcache. ngrep is an amazingly useful tool for troubleshooting a wide array of network issues; I previously have used it extensively for troubleshooting SIP errors. In this case, I’m using it to be sure memcache sessions in PHP are actually working.

codelemur ~ # ngrep -d lo port 11211
interface: lo (127.0.0.0/255.0.0.0)
filter: (ip) and ( port 11211 )
####
T 127.0.0.1:60912 -> 127.0.0.1:11211 [AP]
get a804f5517468d4696c60da7eaf8a7179..
##
T 127.0.0.1:11211 -> 127.0.0.1:60912 [AP]
VALUE a804f5517468d4696c60da7eaf8a7179 0 16..test|s:4:"test";..END..
##
T 127.0.0.1:60912 -> 127.0.0.1:11211 [AP]
set a804f5517468d4696c60da7eaf8a7179 0 1440 16..test|s:4:"test";..
#
T 127.0.0.1:11211 -> 127.0.0.1:60912 [AP]
STORED..

It doesn’t help too much if you have multiple memcache servers (which is kinda the point of memcache), and since it’s raw data you can’t inspect the packets if they’re compressed, but in a testing environment, it’s a great way to be sure all things are kosher.
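If your test box does talk to more than one memcached instance, ngrep will happily watch them all at once - you just widen the BPF filter. A sketch, assuming two hypothetical servers on the standard port and traffic leaving via eth0:

```
# Watch memcache traffic to two servers at once; -W byline makes the
# protocol text easier to read than the default dotted output.
ngrep -d eth0 -W byline '' '(host 10.0.0.5 or host 10.0.0.6) and port 11211'
```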
Read More
Linux

Ubuntu 8.04: My Thoughts

Every so often I get the urge to check out desktop Linux - just to see how things have progressed and whether or not it is in a usable state yet. For the last few times, the distro of choice has been Ubuntu, as that seems to be the new de facto starting point for a desktop distro. Before beginning this review, let me first say that desktop distros have come a long way over the last few years, and Ubuntu is by far the most usable of the ones I’ve seen. Ubuntu itself has come a long way and, for someone who is willing to compromise on some points and spend some time tweaking things, is quite usable. Having said that, it still has a ways to go before reaching Windows. And it’s not even in the same league as Mac OS X.

First, a little about my test rig: an AMD Athlon64 3700+ with 2 gigabytes of memory, two 250gb SATA hard drives (one for Windows, one for whatever OS I’m testing at the time), and dual GeForce 7600 GSs running three 19” Samsung LCDs. Not your standard setup, mind you, but not ultra advanced and bleeding edge, either.

The installation: the installation is much the same as previous releases of Ubuntu: load up the live CD and, from within the live environment, launch the installer. The installer itself asks fewer questions than the Windows XP installer, yet seems to be able to do more. And it doesn’t require endless reboots to get everything working. My installation proceeded mostly okay (since Windows resides on sda, I installed Ubuntu on sdb), except that after I installed and rebooted … nothing. It kept booting into Windows. I reinstalled again just to be sure I didn’t blitz through the boot record screen, but sure enough, writing to the MBR on sda doesn’t work when you have two SATA drives and you’re installing Ubuntu on sdb. This has been a bug for at least the last two times I’ve tried to install Ubuntu. I can fix it with grub commands and properly write a boot record to sda, but for the purposes of testing (and because I’m lazy and wanted to play with it) I just plugged sdb directly in and removed sda. So I’m up and running. This is something that would befuddle a lot of folks - to be fair, I’ve had problems with Windows in the past too - but it seems like it would be an easy fix.

So I have Ubuntu installed now. Yay. Next step is to get my three LCDs working. This is where we run into what I think is the biggest hindrance to desktop Linux: X. If I plug three monitors into two video cards on a Mac, it’s going to turn on all three monitors and allow me to drag things between them all effortlessly (one big desktop). If I plug them into Windows, I’ll need to download the drivers, but after that, no problems. Not so in X, though in fairness it is likely more due to the intransigence of Nvidia when it comes to providing open source support. First, if you want to do anything, you have to download a “Restricted” driver. This is Ubuntu-speak for “we didn’t want to compromise our oh-so-precious ‘free’ principles in the name of usability” (in case you can’t tell, I have very little patience for zealotry). In Ubuntu 8.04, the Restricted Drivers Manager has been poorly renamed to Hardware Drivers. That doesn’t make a lot of sense, since a driver for hardware may or may not be restricted.

So, I download and install the Nvidia drivers. Next, fire up the nvidia-settings utility to fix the X config. I was running this from the shell, but I later discovered that it puts a nice menu item in Administration for you.
It sees all my cards and, using this, I am able to configure everything. You have multiple options for ways to do three monitors, but only one works: Xinerama. You could do three separate X screens, but you can’t move windows between them. You could do Twinview on one screen and a separate X screen but, again, you couldn’t move windows between the dual screen and the third monitor, the windows on the Twinview screen don’t maximize and minimize properly, and the login screen sits right in the middle of the two monitors so that it’s very difficult to see what you’re typing when you log in. Only Xinerama lets you move windows between the three monitors, allows them to maximize properly, and has the login on a single screen. This was about an hour of changing settings and restarting X before I got it right. The downside? It still isn’t supported in Compiz, which is a real bummer because compositing window managers were one of the things I was really looking forward to using. Anybody know if Compiz accepts bounties? Because I really want this feature. So no Compiz. Oh well.

Next, get my other hardware working. I have a Logitech MX1000 Laser (greatest mouse ever, by the way), and I like to map the buttons to do various things (most notably, I use the “cruise” buttons to go back and forth on web pages). In order to get this to work:

sudo apt-get install xserver-xorg-input-evdev
cat /proc/bus/input/devices (find the Logitech USB Receiver)
sudo cp /etc/X11/xorg.conf /etc/X11/xorg.conf.bak
sudo gedit /etc/X11/xorg.conf

Changes:

Section "InputDevice"
    Identifier "Configured Mouse"
    Driver "evdev"
    Option "CorePointer"
    Option "Name" "Logitech USB Receiver" # this should be the name of the device found above
EndSection

sudo apt-get install xvkbd xbindkeys
gedit ~/.xbindkeysrc

Changes:

/usr/bin/xvkbd -xsendevent -text "\[Alt_L]\[Left]"
    m:0x0 + b:12
/usr/bin/xvkbd -xsendevent -text "\[Alt_L]\[Right]"
    m:0x0 + b:11

After restarting (yes, again) I have working buttons. Yay. The volume control on my Microsoft Natural Ergo 4000 works now; it seems like this required some hacking last time around. Yay.

Now to install some developer tools so I can get to work. I love Synaptic; I wish Mac OS X had real package management the way Linux does - it’s one of the things Linux really has going for it, though I generally prefer Gentoo’s Portage. So I install Eclipse. Huge package, and I was getting really crappy download speeds, so I let it run all night and went to bed. The next day found Eclipse installed and ready to go. Installed PHP, SVN, Apache. So I now have the tools to work.

My conclusions: I like Linux. I really do. I want to see Linux succeed on the desktop. And Ubuntu has gone further, faster than any other Linux distro. It is now by far the most fit and ready to use of any desktop Linux distro. I have a usable system now, and, theoretically, there is nothing stopping me from using my machine for most of my daily work. Having said that, there is a lot to be said for style. First of all, it’s ugly as sin. The Gnome UI, while much improved, is still terrible when compared to Windows and OS X. Also, who thought brown was a good color for a UI? Second, the names of some of the tools are unintuitive: “Hardware Drivers,” “SCIM Input Method Detection,” “Authorizations,” and others need more intuitive names, and once you use any of them, the layout is not really intuitive either.
The initial screen layout with a menu at the top and a taskbar at the bottom is also not really all that usable, though it can be corrected by removing the top panel. I’m using it now (typing this in Drivel) so it is usable, but it still can’t displace my Mac for ease of use.
Read More