OA - Ubuntu
A blog about Ubuntu, mobile GIS and archaeology

UNR Jaunty (French) Mini 9

Jun 25, 2009 by Joseph Reeves

Oh, what a job I was asked to do: install Kubuntu 9.04 on a French Dell Mini 9; I could barely contain my excitement. I'm not a Kubuntu user - I much prefer Gnome over KDE - I speak only the most rudimentary French (although with a little help I recently explained to a contractor how I wanted our new French office cabled), and using an azerty keyboard makes my head and hands hurt.

Chris has recently gone through installing Kubuntu on a Mini 9 in another blog post. Tony's comments on that post, such as the problems connecting to wireless access points and the intrusiveness of KDE Wallet, weren't things I wanted to reproduce myself on a tiny little screen. It's that kind of annoyance that made me ditch Kubuntu on my proper-sized laptop some time back; there was no way I wanted to see that in miniature. And yes, I'll admit now, I've never installed and used KDE 4.2; I've had a brief play, have been shown its various merits and have had its technical superiority explained to me. Despite this, it still looks like it's going to get in my way, not like this lovely Gnome.

I'd been given my task, however: install Kubuntu 9.04 on this little French computer.

A temporary fit of madness must have come over me whilst I perused the available downloads on ubuntu.com; before I could say what was going on I'd downloaded Ubuntu Netbook Remix and had installed usb-imagewriter on my full sized laptop. A quick bug report later and I had the image on a USB stick. I could perhaps file another bug - why does the shortcut to ImageWriter appear under Applications > Accessories whereas USB Startup Disk Creator appears under System > Administration? Couldn't these either appear in the same place or, preferably, be combined into a single application?

Installation is easy; as Chris describes, you press 0 on boot, change the boot order, wait for the live image to load, click install, use the laptop until you're told that it's completed, restart and pull the USB stick out. Give that to a friend / colleague so that they can enjoy the UNR experience. Run a quick update and you're ready to go - even Firefox is installed by default!

Following Fabian Rodriguez's suggestion I installed the droid fonts (sudo apt-get install ttf-droid) and changed my settings to suit. I'm becoming a big Android fan, so having this little computer look even a little similar is a good thing. I also quickly installed openoffice.org-base and openoffice.org-sdbc-postgresql and connected to the database that my colleague is currently developing. Easy-peasy! As the French would say.
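
For anyone wanting to replay that at home, it boils down to a couple of apt-get lines (just the package names mentioned above; the connection to the PostgreSQL database is then set up from within OpenOffice.org Base):

sudo apt-get install ttf-droid
sudo apt-get install openoffice.org-base openoffice.org-sdbc-postgresql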

Ah yes, our friends the French... Coucou madame! So far I'd done all this whilst struggling with the azerty keyboard; however, all the menus, dialogues and applications remained in my native English. A quick trip to System > Administration > Language Support and I had downloaded the French language packs and chosen French as my default. I logged out and in again and was still in English, pig dogs, so I rebooted and found myself in a whole world of computing en français.

I think I'll give this computer to one of our French staff now and ask them what they think of it. It doesn't mean too much to me any more...

Canonical support

May 25, 2009 by Yann Hamon

Let's introduce a subject that I've rarely seen discussed on planets or forums: Canonical paid-for support. At Oxford Archaeology we have been paying customers for almost a year, and I think it is a good time to look back and see if it was worth it, what worked well, and what could be improved.

Why we chose to buy support

The IT infrastructure has, over the years, become a critical element in a company like ours. Systems that handle the payslips, the websites and the databases are very costly when they are not working as expected. You may hire the best sysadmin ever, but there will always be cases where a problem or bug occurs that he is plainly unable to solve - and where getting help from the community just takes an unacceptable amount of time.

We purchased support from Canonical to ensure that our road will never be blocked by an insurmountable bug - and to know that we have a company guaranteeing that the software we deploy will actually work as expected, and that commits itself to fixing it if it doesn't.

How we use Canonical support

I tend to use Canonical support as a very last resort when I really don't know how to solve my problem anymore, or when I have a critical bug that requires a quick fix. This happens fairly rarely, as the support from the community is pretty good - I opened five support cases in the last year.

The Canonical support portal is http://landscape.canonical.com ; so Landscape is not only a monitoring solution, it is also the whole support portal. It integrates salesforce.com for the management of support tickets. Here is what it looks like:

Landscape - Canonical Support

One good point: you do not have to use Landscape (the monitoring system) to be supported. We already have a monitoring solution (Munin + Nagios) - and frankly, both externalizing our infrastructure and using proprietary services go completely against the very reason we chose Ubuntu in the first place. Ubuntu gives us the ability to manage our infrastructure ourselves with free software, and we are happy to keep it that way.

Quality and responsiveness of the support

The support quality is pretty good. I usually get an answer the next day - the people there are friendly and qualified. Not-so-basic questions on how to use the systems are answered quickly and in a professional manner.

More complex problems (i.e. those that require a patch or a new version of a package to be pushed) can take more time, but this is normal. I remember encountering a bug affecting Apache; I had someone from support browsing the Apache mailing lists to find the correct patch and making a patched package available to me on a PPA. This was very much appreciated - the bug was severe, and I wouldn't have been able to fix it myself.

Certified hardware

Now what this actually means remains a mystery to me. What does it mean to run Ubuntu on a "certified server"? Is it a requirement for a server to be supported? Does it mean that Ubuntu will run on it? Does it mean that 100% of the hardware the server contains will work, with opensource drivers? Does it mean that 100% of the hardware will work, but possibly with proprietary drivers? And what about servers that are configurable, where you can choose from several RAID cards for example - does it mean all the available configurations will work?

The state of the art here is a lot uglier than it could be. Take for example the Sun X4100 (of which we bought a few at the time because they were supported):

I think we all remember too well the mess Microsoft got into with its "Vista ready" logo. We should learn from that and not make the same mistakes... If a server is certified, it means it has been tested - it would be really helpful to know what exactly has been tested, and to what extent Canonical is committed to getting Ubuntu to work on it :).

Supported software

It is also quite unclear to what extent Canonical will support supported packages. Imagine software you use has a feature that is badly broken - and that is only fixed 3 or 4 versions of the software later... Take KVM, for example (yes, I have already mentioned it in the past ;) ) - which features of KVM is Canonical committed to supporting?

  • Guest multi-processor support?
  • Live migration?
  • Paravirtualized network drivers?

All of this is in main, so officially supported. On the other hand, I knew when I deployed that some of these features would not work... it is still unclear to me what I can reasonably expect them to fix. I bet KVM isn't the only package with this issue?

It may sound painful but I think it would be a good idea to explicitly mention in all the packages in main (in the README, or any other text file in the package) which features of the package are *not* supported.

Level of commitment?

Although this has never happened to me so far - what if Canonical encountered an issue it was unable to fix? Is Canonical at the very top of the support chain, or would they hire the services of a company that could fix it, if they reckoned they were not able to fix it themselves? I doubt it - but it could be an interesting premium service (just a hint) :)

Conclusion

I am globally very satisfied with the level of support I am getting - I know I have a bunch of very capable people who will help me whenever I have a question, or who will spend time fixing a bug for me - which is something I may sometimes not be able to do myself. By paying these people to do that, I also have the feeling that we contribute (in our own microscopic way) to making Ubuntu a better product, for us and for all other Ubuntu users.

There are loads of companies out there who are using Ubuntu without paying for support - I hope they will understand that by paying for support, they will allow Canonical to hire more packagers and support staff, which will directly lead to a better product, saving them money in the end.

How much do Canonical job offers tell us about Ubuntu?

Apr 27, 2009 by Yann Hamon

I used to look a lot at the job offers on Ubuntu.com, as a company's job offers often tell a lot about the direction the company is going. It used to be very interesting: you could see how much effort was being put into Launchpad or Ubuntu Mobile, or sometimes even learn about new projects before they were announced (in fact I even found my current job there, as companies offering jobs related to Ubuntu are allowed to advertise there).

However, I was a bit saddened that a very large part of these jobs were non-technical or not really benefiting the community: business development, sales consultant, system administrators for Canonical's servers, Launchpad developer, support... It fuelled the idea, voiced by some other communities in the past, that Canonical was only packaging and selling other people's work without creating much added value. I am a pretty strong opponent of Launchpad's and Landscape's closed-sourceness myself...

So, I was really thrilled to see that Canonical is now hiring a "Desktop Architect - Network Experience" person and a "Desktop Architect - Sound Experience" person. Add this to the few offers from October which, sadly, are still there (Gnome developer, OpenGL developer), and it seems to me that Canonical has finally decided to shift into second gear.

I was a bit afraid to see Canonical going in so many directions (Ubuntu Mobile, ARM support, Ubuntu Netbook Remix, Ubuntu Server) - so I must say I am very happy to see that Canonical is still committed to providing the best desktop experience, ever ;)

I wouldn't go as far as saying that it is related to Ayatana, but who knows... Really looking forward to what will come out of all this now! A working network manager, anyone? ;)

Kubuntu Jaunty Mini 9

Apr 06, 2009 by Chris Puttick

Ok, not original, but it took me a few trawls around to make all this work, and then it worked so easily I thought I would share :) - these instructions should work for most netbooks (except the bits that say "Mini 9", of course) and other computers (except the bits that are about netbooks...).

Install process

First, back up your data - actually irrelevant to my tests as these were brand new out-of-the-box machines for corporate use, but I just like to remind people ;)

Download ISO of Kubuntu Jaunty from the Kubuntu download page. At time of writing, this was actually the beta which was downloaded from the Kubuntu beta page.

Install usb-creator onto your current Kubuntu install (available in Kubuntu 8.10 for sure, not sure about earlier). This excellent application simplifies the creation of bootable USB flash drives to the point I could do it slightly hungover...

Plug decent-ish USB flash drive with 1GB of free space or so into your computer.

Run usb-creator, entering sudo password when prompted; select downloaded ISO and target USB flash drive. Press the Make Startup Disk button.

Wait. Not very long...

Now (re)boot the target Mini 9 computer with the USB key in it and either modify the BIOS to make USB its first boot device or press "0" during the splash screen to get to the boot options menu and select USB from there.

Go through install setup process as suits. Click the button to complete.

Wait. Not very long at all...

Install completes in under 10 minutes. Reboot.

Wait. Not very long at all, at all...

Repeat as necessary (from the (re)boot target step just above).

Aftertweaks
Create your very own Kubuntu NBR by going to system settings and setting all the fonts to small. If you're likely to use Konsole, configure its fonts to small too.

Add Firefox (sorry guys, but it really should be in there by default. Trust me...).

Open mixer window and ensure speaker volume is turned up.

Done. Enjoy :)

Things that remain annoying
(i) Getting NetworkManager to use a Bluetooth-connected mobile as a modem. This, I'm told, worked easily with the rather lovely Blueman and previous versions of NetworkManager, but no longer does. As these netbooks are intended for non-expert users, I really need to get this sorted in a click-a-GUI sort of way. A fix has been committed, I understand; hopefully it will be packaged up shortly.
Update 090423: yes, it does work very easily with Blueman and the older KNetworkManager (after hours of struggling, once the rfcomm0 net interface started appearing in the network device list when you used Blueman to connect to the dial-up service of the phone...). See bug https://bugs.launchpad.net/ubuntu/+source/plasma-widget-network-manager/+bug/334122 for the latest on this, but for now remove the plasma widget and install KNetworkManager. Now I just need to figure out how to make the older application start automatically...

(ii) Be damned if I can figure out how to make the edge of the touchpad work for scrolling. Come to think of it, I'm not sure the edge of the touchpad worked for scrolling with the Dell build either...
Update 090409 - just noticed this is now working, I guess an X update or something...

(iii) The launcher didn't put the focus into the Search box on the first one I built; the second did, but then stopped doing it again...
Update 090415 - after this morning's update this is working as expected :)

(iv) It would be nice if the launcher could be scaled so that the "Leave" section displays all its content without scrolling (picky, I know), although now I've got used to just letting it suspend to RAM by closing the lid and not worrying about it.

Any of these issues may of course be resolved when using the release version rather than the beta or by some clever fellow providing a nifty solution as a comment below. I'll update the post as things progress!

Evolution is better than revolution - unless it ain't

Apr 06, 2009 by Joseph Reeves

We have precious few Linux flame wars here at Oxford Archaeology; on the whole we're just all too nice and well meaning. There's one exception, however, and it's a classic - the good old KDE versus Gnome flame war stalwart.

I'm a Gnome user; I tried KDE for a whole year, but came back to the correct path. I don't think Chris was convinced by my argument (check out the only comment I got), but he's my boss, so I won't push it too far.

Some time has passed since then, and recently Chris penned a popular entry on this blog, Evolution is better than revolution. Aha! An admission that the Gnome way was the way forward; nobody likes those ugly KDE releases that change into very different, yet still ugly, KDE releases. Everyone likes to keep it slow and considered. Mellow.

Slashdot picked up on the same theme, discussing Bruce Byfield's article on the "evolutionary advantage"; evolve rather than revolt by starting afresh and ditching support for all the old cruft in your software. It seems that Gnome are now going down the same route, as reported by El Reg.

So, evolution is better than revolution, unless it isn't? Ubuntu has a good core that can be evolved gradually whereas other projects need to be "evolved" in a slightly more revolutionary way? Frankly I'm just pleased that with Gnome 3.0 expected sometime in the future, there's really no reason to consider that awful KDE again ;-)

Flames in the comments please!

KVM 84 backported in Hardy

Mar 29, 2009 by Yann Hamon

Dustin Kirkland announced a few weeks ago that he was trying to backport KVM-84 to Ubuntu Hardy. This followed a post on the ubuntuserver blog that described the way KVM-84 would be backported: ~ubuntu-virt PPA -> hardy-backports -> hardy-{proposed|security} -> hardy-updates.
This raises some interesting questions. But let's define the context.

At OA, we've been using KVM in production since the very beginning - we started testing KVM in April 2008, deployed in late May, and started to move critical pieces of infrastructure onto it later in the year. Some bugs surfaced after a while: problems with virtio networking, problems with the default NIC as well, performance problems with SMP, networking problems with SMP, the need to use QCOW2 for Windows VMs (although virt-install doesn't create QCOW images by default)... We managed to work around most of them, mostly by using e1000 as the NIC and living with the SMP slowness. We reported these bugs in Launchpad and Landscape, and Canonical had quite a tough time helping us with these issues.

In Ubuntu Hardy, the version of KVM used is kvm-62. The current version of KVM is KVM-84 (if not newer already) - and as you can see on the mailing list, development is moving really quickly. KVM is an opensource project created by Qumranet and now belonging to Red Hat. So, when you arrive on the mailing list or the IRC channel saying "I am running a one-year-old version on a competing distro and there is a bug", you'd be lucky to get help from a KVM dev (i.e. a Red Hat engineer).

This is made worse by the development model, which is somewhat weird, as afaik there is no "stable" version (or only stable versions, pick the one you like). New versions get released all the time, fixing bugs in the previous versions. This makes the life of Ubuntu packagers hard, as they need to backport security and critical functionality fixes.

To summarize: it is extremely hard for Canonical to support packages for longer than upstream supports them. There is no chance of Canonical supporting KVM-62 in 3 years' time, as some bugs may be fixed so much later that the code will have changed significantly, making the patches not that easy to apply (I am also still wondering how Canonical plans to support PHP4 in Dapper in 2 years' time).

Backporting KVM-84 is a truly significant move that acknowledges the lack of control over the supported packages, but also confirms Canonical's commitment to supporting packages during the whole support period. As a customer we are thrilled to see that. I think it is also the first time Ubuntu will push a new version of such an important package, one that introduces so many changes, into -updates (but I may be wrong on that). It is quite an interesting precedent if you ask me :)

KVM is now at the core of our infrastructure and I just can't afford to upgrade and run into new, significant bugs. I will be helping kirkland test kvm-84, reporting as many issues as I can, but I must say I am quite eager (and scared at the same time) to see how Canonical will handle quality assurance on this. I really hope they get it right :]

Announcing our competition winner

Mar 26, 2009 by Joseph Reeves

Last weekend I asked if anyone could come up with a name for the animal bone recording system we are launching; we were "crowd sourcing", which I think is Web 2.0 speak for "getting others to do it". Still, suggestions poured in and OA Digital staff have been getting a little bit democratic, a little bit creative and a little bit etymological; results are ready...

The winning entry was suggested just two minutes after the competition opened, leaving me feeling a little like ITV. Regardless, Jeremy Ottevanger wins it with zooOS.

From our new, ink's still wet, Launchpad project:

zooOS (pronounced Zeus) is an open source system for recording and analysing animal bones found during archaeological excavations. The project builds on the work done in the York recording system and by English Heritage in extending it. Main technologies are PHP/JS and PostgreSQL.


https://launchpad.net/zooos

zooOS then, a word with mixed etymology meaning animal bone whilst highlighting the importance of Open Source. Sounds like a Greek god too. Well done Jeremy.

Schuyler Erle's Ossuary was a high scoring contender (I think we're going to use that one for something else); I personally also liked ryts' FOSS-il, Sean Gillies' SQLETONS, and Skeletor, as suggested by both Jody Garnett and Benjamin Kay. Benjamin also came up with BONEDb, which made me laugh; RafaMJ's Zoonecropenis was certainly inventive, but sadly not a winner...

If you're interested in bones (who isn't?) and like Open Source Software (again, who doesn't?), then come by our Launchpad site and see how we're doing. Feel free to get involved! Thanks to all who entered, I look forward to some exciting stuff from zooOS.

We need your help

Mar 21, 2009 by Joseph Reeves

We're starting work on an Open Source animal bones database for use by Archaeologists (and others?) - it's going up on our Open Archaeology Launchpad site, but we've hit a snag - what do we call it?

Chris has emailed about it, Jo has blogged about it, but we're still without a name.

What would you call an Open Source animal bones database that will be finding use all over the world? Immortality is the prize awarded to the winner; there'll be no losers.

Evolution is better than revolution

Mar 03, 2009 by Chris Puttick

I originally posted a version of this as a comment on this ZDNet article, but it seemed something that deserved wider exposure than a ZDNet comment... ;)

In so many things. Life. Cultures. Political systems. Motorcycles. And of course operating systems and the applications that run on them. In all these and others, evolution is almost always better than revolution.

Desktop and server alike, actually we (the users, both home and professional) don't want change that is big enough to be unsettling and difficult to deal with. We want improvements. We want the niggles ironed out.[1] We want things to just get a little bit better over time and each upgrade. And like evolutionary pressures everywhere, what is seen as better will tend to change over time.

We want upgrades to be something you just do in your own time without any major concerns, without trepidation. Particularly in an organisational context, evolutionary change between software versions means no huge planning overheads, far less rigorous testing requirements and far less stress for all concerned. Upgrades should be regular and be more like patches rolled into service packs bundled up together, but with no new features held back for nefarious marketing purposes.

In short, we want the development cycle of Ubuntu and OpenOffice, Firefox and (now it seems) KDE. All of us would prefer it, even those still stuck with legacy systems that don't follow such comforting behaviour.

New versions of software and operating systems being revolutionary, a big step-change, is a mindset from the legacy proprietary thinking. We (who have moved to the new way) don't need to be "sold" the upgrade because we don't have to pay for it. Incremental change, more often, is far more comfortable for all involved.

So in fact open source is more people-friendly and better for organisations, even in its development paradigm...

[1] Like why does the shell in Ubuntu still not default to using Page Up pattern matching from history on partial commands? It costs nothing to put in, AFAIK, as it is just a config setting. But it is incredibly useful.
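
For the curious, a minimal sketch of the setting in question, assuming the stock Debian/Ubuntu readline setup where /etc/inputrc already ships these lines commented out; uncomment them there (or add them to your own ~/.inputrc):

# /etc/inputrc or ~/.inputrc - Page Up completes a partly typed command from history,
# Page Down searches forwards again
"\e[5~": history-search-backward
"\e[6~": history-search-forward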

Edit: markup fixed

KVM - The information you may be missing

Feb 24, 2009 by Yann Hamon

Good morning Ubuntu - today we are going to try to help people who want to deploy KVM. Ubuntu Hardy being the latest LTS version of Ubuntu, it is a good choice for people like me who don't want to spend time upgrading every 6 months. On the other hand, there are some issues with KVM on it - so let's work around them!

Short howtos

You should use the e1000 network NIC (really)

Using the default network NIC, I noticed tiny but highly annoying ethernet errors. Run an ifconfig -a:

RX packets:50916071 errors:24 dropped:27 overruns:0 frame:0
TX packets:4185949 errors:0 dropped:0 overruns:0 carrier:0 

Oops! That may be the reason why the md5s of some of the ISOs I copy sometimes don't match. Sooo - I moved to virtio for my network. Without much more success - virtio seems very unhappy with large file transfers, and the driver would just crash, leaving the VM without connectivity. Not great - luckily e1000 seems to work fine so far.

So, run "virsh dumpxml virtualmachine" - copy the result and paste it into a new file, the name doesn't really matter. Open that file and look for the section with the interface, where you will add the "model" line:

    <interface type='bridge'>
      <mac address='00:16:3e:00:50:19'/>
      <source bridge='br0'/>
      <model type='e1000'/>
    </interface>

Now, run "virsh define thefile" and restart the VM. Should be much better :)

The VMs don't start automatically when the host starts. But they could!

A trick found thanks to virt-manager and the help of people on #ubuntu-virt. Go to /etc/libvirt/qemu/ and create a folder called "autostart" there. In it, make a softlink to the XML files of the VMs you want to start automatically:

cd /etc/libvirt/qemu/
sudo mkdir autostart
sudo chmod 755 autostart
cd autostart
sudo ln -s /etc/libvirt/qemu/virtualmachine.xml virtualmachine.xml

And that should do the job.
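
As an aside - and this is a suggestion I haven't verified on Hardy, so treat it as such - more recent versions of virsh have an autostart command that manages that symlink for you:

virsh autostart virtualmachine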

Get Windows XP/2000 to run

Slightly tricky. Forums are full of questions about that... Use the Qcow2 file format! Sadly virt-install doesn't have any way to specify the file format (at least in Ubuntu Hardy) - so let's start with a normal creation of a VM:

cd /var/lib/kvm/
sudo virt-install --connect qemu:///system -n vm -r 1024 -f vm/windows -s 20 -c /home/yann/win2Kserver.iso --vnc --noautoconsole --os-type windows --os-variant win2k

Then we quickly stop the VM while it boots *cough*: virsh destroy vm - and convert the image to qcow2:

cd /path/to/your/image 
sudo qemu-img convert windows -O qcow2 windows.qcow2

There are also some tweaks we need to make to the XML definition of the VM. So, as previously, run "virsh dumpxml virtualmachine" and paste the result into a file. There, update the path of the disk:

<source file="/path/to/your/vm/windows.qcow2" />

Update the network NIC to e1000 (see the previous section), and add a link to the cdrom (Windows reboots after the formatting part and can't find its cdrom anymore, as virt-install attaches the cdrom only for the first boot).

      [...]
      <target dev='hda' bus='ide'/>
    </disk>
    <disk type='file' device='cdrom'>
      <source file='//home/yann/win2Kserver.iso'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
    </disk>

"virsh define file", and "virsh start vm". You should now be able to install windows XP and 2000 (didn't test other versions) without problems!

And some news...

Spice-up your remote desktop

Sorry for the ugly title ;) So, at the moment, we use VNC to connect to our KVM virtual machines. Let's admit it: VNC's performance sucks. When Qumranet developed KVM, they also developed a proprietary alternative to VNC called SPICE. The interesting bit is that now that Red Hat has bought Qumranet, they want to make it opensource, and apparently that's planned to happen pretty soon. Here is the interesting interview. I've been looking at alternatives to VNC for months, and beyond NX and its single implementation, I've not been very successful - so huge hopes from me on that side.

Some more links for those with real interest

Shamelessly copied from an interesting article on linuxfr.org:

  1. Red Hat Sets Its Virtualization Agenda
  2. Red Hat Announces Broad ISV Ecosystem is Virtualization-Ready

And Ubuntu?

As Red Hat now owns KVM, they are the best place to look for shiny news... But regarding Ubuntu, I heard the current buzzwords were Eucalyptus and OpenNebula. I haven't really had time to try either of these yet - I would be incredibly grateful for feedback on them :)



That's all for today - as a strong KVM supporter I'd be happy to reply to questions and queries related to it in the comments. Thanks for reading ;)

Zimbra - Presentation, example of an "anonymous mailing list"

Feb 14, 2009 by Yann Hamon

Hello, today we are going to talk about Zimbra. This post is the result of my overall satisfaction with it - I hope it won't be considered an attempt to advertise for them in the hope of special favours in return. People who know me know I really like their product, and I believe the people there really deserve some praise.

Introduction

So, what is Zimbra? Zimbra is a complete, multi-platform, easy to install, scalable mail and collaboration software. It is based on opensource software, including MySQL, Jetty, Mailman, SpamAssassin, Postfix, Java, amavis, OpenLDAP... Zimbra bundles all these together and includes a nice web-2.0-ish web interface to manage everything. Among the features you will find:

  • Antivirus/Antispam
  • Shared calendars
  • Shared task lists
  • An integrated jabber server
  • A very good and fast search (everything is indexed)
  • Shared email folders
  • Tags for emails
  • Integration with LDAP/Active Directory
  • Static HTML and mobile phone interfaces
  • POP(s) and IMAP(s) access

Zimbra at OA

The good

At Oxford Archaeology we have gradually moved over the past year from Exchange to Zimbra ZCS 4, and then 5. This involved the transfer of 350 people's mailboxes and PSTs, their training, mailing lists, and so on. Thanks to the Exchange import and PST import tools, this went rather smoothly. I just had to write some homemade awk scripts to bulk-import the users and mailing lists, but I think this is now a feature of Zimbra 5.
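
To give an idea of what those scripts looked like, here is a simplified, hypothetical sketch (not the actual script we used): it assumes a CSV of "email,display name" and pipes createAccount commands into zmprov on the Zimbra server.

# users.csv contains one "email,display name" per line
awk -F, '{ printf "createAccount %s \"\" displayName \"%s\"\n", $1, $2 }' users.csv | zmprov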

Before, people in our main offices in Oxford would use Outlook, and people in our branch offices Outlook Web Access (which really sucked, to be fair, and only worked properly in Internet Explorer). Now everyone accesses their email in the same way, whether from the main office, a branch office or home; and this is a huge step forward.

On the whole, I must say I am very impressed by their web interface and the features it offers (you can give it a go here, just press skip registration). More than the product itself, I really like the way the project is managed. Several staff spend time on the forums and provide free support to non-paying customers; this is great. The bugzilla is public, and as a customer you can link a support ticket to a bug/feature request, therefore giving it more weight. The support is quick, and even with the lowest level of support you are entitled to 24/7 phone support in case of a major breakdown. The quality assurance of the product is very good; bugs do happen, but you can usually live with them, and if serious they are treated with the appropriate speed.

The bad... and the ugly

My biggest concern right now is the fact that Zimbra has been bought by Yahoo. Things have got a lot worse since that happened. Yahoo started over-branding the Zimbra interface with their crappy logos, pushed an ugly Yahoo search nobody wanted, and installed Yahoo zimlets by default. The Zimbra Desktop is now Yahoo Zimbra Desktop. The previous understanding was that Zimbra would never offer Zimbra hosting itself, and would instead let a pool of partners propose their own solutions; these have all been screwed now.

It also makes me unsure about the free will of the developers, who now use the YUI libs for ajax development; was this because it was the best option? To this, add the shadow of a purchase by Microsoft or AOL - who may not have the best interest in keeping it running as it is. Finally, Zimbra licensing sucks (even the opensource edition is ad-licensed - you have to keep the original logo). I really hope Yahoo will understand that it is a bad idea to use paying Zimbra customers to push their unsuccessful products. That was the ugly part of Zimbra.

Zimbra's power by example: an anonymous mailing list

Ok let me define what I mean by "anonymous" mailing list. I had the request recently to create a new email address. Several people should be able to read the emails sent to that email address, and to reply to these emails. Mail sent as a reply should appear to be coming from that "mailing list" and never display nor contain the name of the actual sender.

How it used to work

This used to be implemented as a full account, with a password shared among all the people who had to use it. This is bad because:

  • Well, first I need to pay for a new account (you pay for a certain number of mailboxes).
  • Someone who left the company on bad terms would still be able to log into it, unless you changed the password every time someone leaves (given the number of mailing lists, this is not really possible).
  • The user needs to log into two different mailboxes, and remember more passwords.

The Zimbra way

Let me try to explain how you could do this with Zimbra. Good luck setting this up without weeks of hassle on any other system ;). This is just ONE way to do this and may not be the best; I found it to be quite flexible.

First, let's create a new user account, which we will call "mailinglists@domain". Let's say the address of the mailing list we want to create is anon@domain. We create an alias "anon@domain" for the account mailinglists@domain.
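
If you prefer the command line to the admin interface, the same step can be done with zmprov - a hedged sketch using the placeholder addresses above, not necessarily the exact procedure we follow:

# create the holding account (with no usable password) and point the list address at it
zmprov createAccount mailinglists@domain ''
zmprov addAccountAlias mailinglists@domain anon@domain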

We do not define a password for the user account mailinglists@domain - and use the admin interface to "su -" into it. In that account, we create a new folder called "anon". We then create a new filter to file any incoming email, addressed to anon@domain, into the folder "anon".

We then right-click on that folder and select "share with". Now you add all the people who should be able to read these emails.

That was the first step; now the people who are able to read these emails need to be able to reply using anon@domain as the sender. In the admin interface, go to the profile of a user who should be able to send emails as anon@domain; go to the preferences tab, "Sending mail" section, and at "Allow sending mail only from these " add "anon@domain".

Then, use the admin interface to "su -" to the user account, and go into the preferences tab, accounts sub-tab. Select "Add new Persona", and configure it like this:

Now save. You're done! When sending an email, the user will have a new drop-down list that allows him to select which address he wants to send from. Following this method allows you to use only a single Zimbra license for all mailing lists, lets any new person have access to all the archives, and doesn't tie the mailing list to any particular account.

Zimbra and Ubuntu

Ubuntu has been a supported platform for Zimbra since Ubuntu 6.06. Ubuntu 8.04 became fully supported as well a few months after its release (right, we pushed a bit :P). We believe Ubuntu 8.04 is a platform of choice for Zimbra - and hope Zimbra will continue to support it in the future.

Edit: title typo fixed

Keep the fun in free software

Jan 27, 2009 by Yann Hamon

This article could also have been entitled "is it possible to still have fun developing free software once it is successful?". It is non-technical, very subjective, and summarizes something I've seen happening in many projects. I promise my next post will contain some command lines again ;)

Step 1 - a geek starts the project

I would define a "geek" by someone who likes to be intellectually challenged on technical matters - someone who actually enjoys learning how to do fairly complex technical stuff. There are quite a bunch out there, the ubuntu community is a good place to find some ;) So, what is the best way to learn linux? Install it, and use it. To learn how to do a website? Make one. To learn programming? Start a small project.

I believe this is the path we have all followed: when we want to learn something, we try to find a little project that is, if possible, fun to code, challenging, and ideally useful. So that is what our test-case user, John, does: he wants to learn programming, and starts his own tiny piece of software.

Step 2 - little project grows and gains some users

It appears to some people that John's project is actually quite useful. So, these people start using it. These people are very technical, know that the software is alpha quality, and make wise recommendations on how to improve it. They are grateful to John for coming up with such a neat little piece of software - John is very happy to see a little community gathering around his software. His programming skills are increasing very quickly, and the requirements of these early users keep him intellectually challenged.

John now realizes that his first idea, which he just picked late at night after a couple of drinks, is actually pretty good; he sets himself new goals to achieve - quite tough ones, but ones that would make his software even more awesome.

Step 3 - project goes mainstream

John has spent countless hours writing code for that software, learned immensely and had a lot of fun. The program made it onto a couple of news portals, and quite a lot of people are now gathering at the small website he built to present it. Some people find it great, most of them in fact; but a few also start to complain about the lack of documentation, internationalization, and code quality.

Although John's software works great for him, John understands that documentation, internationalization etc. are an important part of the project, and may get him even more users. So he spends a lot of time doing this, which is quite boring, but it results in people translating the program, and even more people using it. Nearly two years have passed; John is now quite a good programmer.

Some more programmers have joined and started working on the code. Things like backward compatibility, scalability, and the user interface start to become important. Every little change needs to be carefully thought through; things that were built in the past can't be changed as easily as John would like. But well, it's all a lot of fun, and John wants to see how far he can keep his community growing.

Step 4 - Upgrades, and angry users

The latest update of John's program didn't work as well as expected - and the documentation is outdated. The forum starts to get flooded with negative remarks and complaints, to which John starts to reply nicely, but he quickly gets overwhelmed. Some people on the forum expect help and don't even bother to try to be nice.

John has the feeling he is spending a lot of time doing support and documentation; the process for changing the code he wrote in the first place has become incredibly complex; he feels less challenged than before, and some changes in his private life don't allow him to spend as much time on the project as would be needed. It eventually grows to a point where he won't learn anything at all anymore, and will spend his time trying to reply nicely to feature requests and supporting people on the forum.

Step 5, eventually

John has supported his little piece of software for months, more so as not to leave his users alone than for his own benefit, but the user base keeps growing, and there is no sign of relief in sight. John's qualities as a programmer have been recognized and he now even works at a company, programming, which he enjoys a lot. But he just has no fun anymore supporting that piece of software he wrote so long ago. So he stops spending time developing and supporting it.

Eventually, if the software is successful enough, the community will take the work over. If not, the software will have a long and slow death - its user base slowly decreasing over time as people notice that no updates are available.

Keep it fun!

So to my question - if the project never made it to step 3 - would John still be working on it?

I believe that most people who get involved in free software are geeks. They do so because it is intellectually challenging and interesting, and because there is new stuff to learn every day. But once a project gets successful, it gets less and less fun to support - and as your knowledge grows, you also learn less and less. I mean, there are so many projects where I have the feeling the developer(s) just aren't enjoying it anymore... maybe they are working on even more challenging projects now? Anyway - is there a way to keep a project both successful and fun?

Ubuntu in the server room - enabling the root account?

Jan 19, 2009 by Yann Hamon

I generally believe that passwords are quite a poor way to authenticate users. How many systems have been compromised by automated bruteforce attacks because some users on the system used "password", "letmein" or their first name as a password? Working in a corporate environment where the average user isn't very technical, I can say a good 10% of the users have poor, if not very poor, passwords.

Even sysadmins tend to either reuse the same password for all the machines, or have a different one for every machine and not be able to remember them (or store the passwords in a very poor manner), or... Well, we decided to go the key-authentication way (like many others), and not use passwords at all. Sysadmins & devs can log into the servers with their key, and use sudo (for those with appropriate permissions) without a password; adding passwords on top of it would actually bring us back to my first point.
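
For illustration, a minimal sketch of that kind of setup - the group name "admin" and these exact lines are generic examples rather than our actual configuration:

# /etc/ssh/sshd_config - key-based logins only, no root over SSH
PasswordAuthentication no
PermitRootLogin no

# /etc/sudoers (edit with visudo) - passwordless sudo for the admin group
%admin ALL=(ALL) NOPASSWD: ALL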

This is perfectly fine for remote SSH access. But I felt pretty stupid one day when the internet connection of one of the servers went down: I plugged a screen & keyboard in, and got a login prompt - asking for a password. Or another day, when someone broke sudo on their machine - we could still log in, but we had to reboot into single mode (it was a dev server, so not *that* bad) to be able to sudo again.

Alright, to my point now: having a root account, with a very easy password to remember, is very, very convenient for servers, in my opinion. Because:

  • You do not want to have to reboot your production server in single mode if for some reason you screw something up (sudo / eth* interface / ssh).
  • The root user is usually more "robust" than your own user. Its home is /root (by default), not /home/root, which makes it easy to put on a different partition (so that it can still log in if /home is full). It can be set to use a more "robust" shell; if for some reason you screw up your bash, you can use the root account to use sh (which would admittedly be a pain to use on an everyday basis), etc.

There are really a number of ways you can screw up your own user in a way that would prevent you from logging in - and the last thing you want to do is reboot a server that is in use (and has 3 years of uptime ;) ). In many of these cases, having a properly set up root user would help. Well then, what's the point of using sudo and separate user accounts, when many would just use root for "convenience" if they had the password?

My take on it is that the root password should never be used - never, unless the system is so badly broken that you would have to get physical/serial port access to the server to get it fixed. And my belief is that if someone has physical access to the server, they can get full access to everything anyway, so why not just make it easy? This is where Linux comes in to help. Just before the solution, here were my requirements:

  • Make it easy for someone with physical access to the server to get an admin access (we have our servers on site - this is still valid if you have a serial connection access though)
  • Make it impossible for someone without physical access to use the root account "directly" - be it ssh root@server, su - root or anything similar.

I spent a long time grepping /etc; I found /etc/login.defs, /etc/securetty, /etc/consoles... Nothing would work. So thanks to Canonical support on that one: you need to enable it in PAM first, by adding this to /etc/pam.d/su:

auth requisite pam_securetty.so 

In the /etc/securetty file, you leave only the ttys that should be able to log in as root: tty1, tty2, tty3 and tty4 in my case. I would love some feedback on this, as it seems fine - but maybe it's not? :)
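
To put the pieces together, here is a minimal sketch of the whole setup as I understand it (the list of ttys is up to you, and you obviously need to give root a password in the first place):

# give root a (memorable) password so console logins are possible at all
sudo passwd root

# /etc/pam.d/su - only allow su to root from a terminal listed in /etc/securetty
auth requisite pam_securetty.so

# /etc/securetty - trimmed down to the local consoles only
tty1
tty2
tty3
tty4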

Open Archaeology

Jan 14, 2009 by Joseph Reeves

Open Archaeology

Yann's been contributing a couple of technical posts about the use of Ubuntu at Oxford Archaeology, but I thought I'd talk about something a bit different.

First of all, if you'd like to read some background on our use of Ubuntu, you can click through to the case study that's also included with every Ubuntu desktop CD image that's downloaded:

http://www.ubuntulinux.org/products/casestudies/oxford-archaeology

The main thing I'd like to write about, however, is our Open Archaeology project; our commitment to Open Source, Open Standards and Open Data.

Archaeology as preservation and communication

Commercial Archaeology, for those that don't know, is largely concerned with the preservation of archaeological remains that are at risk of being destroyed by development. In most cases we adopt a policy of "preservation by record"; by which we mean that the majority of the archaeological remains discovered on a site, such as features produced by human interaction with the landscape, are physically destroyed, but preserved through the meticulous records we keep. For the most part we're not talking about Indiana Jones-style temples that exist to be explored and looted; instead we're preserving the archaeology that you find under a well known airport expansion.

Just producing records, however, is largely a waste of time. You have to be producing records for somebody and in a way that's of the most benefit to them. Archaeology isn't just a process of preserving the remains of the past, but of communicating these remains and their significance to others. If you wanted to know about the archaeology of Terminal 5, for example, you wouldn't want me to dump a load of mud on your doorstep and tell you to work it out for yourself; it would be completely impractical and anyway, it's currently supporting some large buildings and runways. What you would want, however, is the archaeological information that was derived from this initial evidence. Of course, the past doesn't belong to anyone either, so you'd be interested in getting as much from these results as anyone else.

Archaeological Freedom

Our Open Archaeology policy, our combination of technological and social commitments, allows us to preserve and communicate the past most effectively. We also have the freedom to engage in a level of creativity not afforded by proprietary tools; archaeology is safe, free and not constrained by the (lack of) imagination displayed by an application developer catering for another industry entirely. I think that's a pretty cool achievement of the Free Software universe. This brings me to our Launchpad site:

https://launchpad.net/openarchaeology

We don't just talk the open talk, we walk the walk as well; we're active members of the Open Source community, both as users and contributors. That's the reason why all interested people are invited to check out what we do, talk to us about it and help not only improve Free Software, but also our archaeological record and our ability to communicate and think about the past.

That's the cool thing that being free allows you to do.

Joseph

Apache + mod_proxy + mod_ssl - A good, secure reverse proxy

Jan 11, 2009 by Yann Hamon

With several offices in the company and a policy of allowing people to work from home, a lot of our services are available as web services. As we make heavy use of virtualization, these websites are spread across many different virtual machines, depending on their requirements (PHP, Java, Python, MySQL, PostgreSQL...). With only a limited number of public IP addresses, we hence needed an HTTP reverse proxy.

Then came the question: which reverse proxy would best fit our usage? We had a look at several alternatives: Varnish, Squid, Apache with mod_proxy, nginx, haproxy... The reasons that made us choose mod_proxy were the following:

  • It is in main - so long-term supported, which is critical for a front-facing webserver. As we have Canonical support, we can rely on them to get business-critical bugs sorted.
  • Very good documentation - most of the other solutions are badly documented (the nginx documentation is in Russian and part of it is untranslated, and the Squid documentation isn't great either...)
  • Easy configuration - a lot easier than varnish or squid...
  • Low traffic: we don't have millions of hits a day; so even if other reverse proxies may have better raw performance, that is not our main point of interest.
  • You get all the other apache modules with it: mod_rewrite, mod_cache, etc...

After several months of use, I am very happy with that choice. It runs on a quite low-end VM and the load rarely goes over 0.1.

After some weeks, we decided we wanted to improve security and purchased a wildcard SSL certificate. There is also a big advantage in using a reverse proxy to do encryption: the backend application doesn't have to support HTTPS - and you have a single way to configure it. In other words, it is easy (you just do the configuration once for the Apache proxy, and don't have to configure it for every single HTTP server you may be running), and completely transparent for the backend.

Several months ago, many people pointed me at Pound - but it just wasn't needed, thanks to mod_ssl. In the end, what I am doing is mass name-based virtual hosting with SSL, which is apparently not recommended (it throws warnings in the logs - any idea why?) but works like a charm. This is how one of my vhost declarations looks:


<VirtualHost *:80>
    ServerName website.thehumanjourney.net

    # Rewrite all incoming http request from external IP to https
    RewriteEngine On
    RewriteCond %{REMOTE_ADDR} !^10.0.*$
    RewriteRule ^/(.*) https://%{SERVER_NAME}/$1 [R,L]

    ProxyPass / http://INTERNALIP/
    ProxyPassReverse / http://website.thehumanjourney.net/
    ProxyPassReverse / http://INTERNALIP/

    CustomLog /var/log/apache2/website.thehumanjourney.net.access.log combined
    ErrorLog /var/log/apache2/website.thehumanjourney.net.error.log
</VirtualHost>
<VirtualHost *:443>
    ServerName website.thehumanjourney.net

    SSLEngine on
    SSLCertificateFile /etc/apache2/ssl/thehumanjourney.crt
    SSLCertificateKeyFile /etc/apache2/ssl/thehumanjourney.key

    ProxyPass / http://INTERNALIP/
    ProxyPassReverse / http://website.thehumanjourney.net/
    ProxyPassReverse / http://INTERNALIP/

    CustomLog /var/log/apache2/website.thehumanjourney.net.access.log combined
    ErrorLog /var/log/apache2/website.thehumanjourney.net.error.log
</VirtualHost>

ProxyPassReverse is the bit of black magic that took me a while to figure out; but in the end it is simple: if the website creates a redirect using its internal address, the proxy converts it to the external address before passing it on to the user.
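
A quick way to see that in action - purely an illustrative check with a made-up path, not part of the configuration itself:

# a redirect issued by the backend using its internal address should come back
# with the public hostname in the Location header
curl -sI https://website.thehumanjourney.net/some/old/page | grep -i ^Location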

Even with SSL, the CPU doesn't blink at all. Many people consider this a bad choice as they associate Apache with bad performance; but before you decide not to use Apache as a reverse proxy because it is "slow", ask yourself the question: do you really need the X GB/s throughput that nginx or Varnish may provide?

About KVM

Jan 04, 2009 by Yann Hamon

At Oxford Archaeology, we are running many, many different systems. Websites, database applications, license servers, development servers, mail, monitoring, backups, remote access servers, proxies... Most of these do not have huge performance requirements, but they are quite complex to set up.

We decided, a couple of years ago now, that virtualization would allow us to separate the systems virtually while limiting the number of physical servers needed. At the time, the quick and easy choice was VMware Server - while we studied a long-term, stable solution. As always with temporary solutions, the VMware servers have been running far longer than initially planned; in fact we still have two legacy VMware servers. Among the many problems we have had with VMware Server:

  • Time shifting. We tried many, many different approaches - pretty much everything we could find on the net - and none really worked in the end.
  • SMP for VMs not working properly: first, there is a limit of 2 CPUs you can assign to a VM - and as soon as you assign more than one CPU to a VM, it starts to use an awful lot of CPU doing nothing.
  • Impossible to use a physical disk bigger than 2TB (which ultimately was a problem for us)
  • Complex install.. see http://moxiefoxtrot.com/2009/01/02/installing-ubuntu-810-in-fusion/
  • Very poor performance, even with vmtools...

At the time, we planned to move to VMware ESX. It is a tested and solid solution that has been around for a while. We finally decided not to use it though, because of its requirement for a Windows license server and its expensive licensing and support scheme. We then started to consider Sun's xVM, Xen and KVM. Our choice ultimately fell on KVM for the following reasons:

  • The host running KVM is an unmodified Linux OS - for both Xen and ESX you need to run a specially patched operating system (which makes it quite hard to stay up to date with security updates).
  • As KVM is included in Linux, you get KVM support when you buy normal Ubuntu server support. As we planned to get Ubuntu support anyway, we got support for our virtualization platform at no additional cost.
  • Performance: KVM's performance is very good, as it is assisted by hardware extensions.
  • VMbuilder: Ubuntu has developed a tool (ubuntu-vm-builder, now called vmbuilder) that can build an appliance in approximately a minute - see the sketch just after this list. This is a huge improvement compared to the way we used to create VMs. (VMbuilder can now also build images for Xen and VMware.)
  • Keep It Stupid Simple: KVM uses everything that Linux already provides. Every VM is a simple process; it uses Linux's process scheduler, you can renice the processes, assign them to a specific CPU... This results in a much smaller codebase than Xen, for example, which I assume will be easier and more cost-effective to maintain in the long term, making the project more trustworthy.
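
To give an idea, here is a hypothetical ubuntu-vm-builder invocation - the hostname, memory size and destination are made-up examples, and the available options vary between releases, so treat it as a sketch rather than a recipe:

sudo ubuntu-vm-builder kvm hardy --arch i386 --mem 256 \
    --hostname testvm --dest /var/lib/kvm/testvm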

And most importantly: the companies involved. Even if KVM is lacking several features at the moment, posts on the development mailing list show patches coming from companies such as Red Hat (which bought Qumranet, KVM's creator, last year), AMD, Intel, IBM, HP, Novell, Bull... Who are you going to trust, a project managed by a single company (Citrix, VMware), or a project supported by so many major players?

We started to test KVM in April 2008 and started deploying it in June. We purchased supported servers for this task: Sun X4150s (nice little pieces of hardware). So far we are running up to 20 servers per host. We are also very happy that KVM allows us to run unmodified Windows guests with good performance. There are even paravirtualised network drivers for Windows if you need hardcore network performance.

We haven't had major issues with KVM since its deployment; most of our issues have been solved in a quick and friendly way on the #ubuntu-virt IRC channel. We ran into some networking issues, which were solved by using the e1000 NIC instead of virtio or the default one on Ubuntu VMs.

So, a small trick now for those who have read this far. Most of you who have already deployed KVM know that the display is exported via VNC, tunnelled over SSH. Imagine that you have your KVM host at work, but it is not accessible from the outside, and you want to access the display of one of your VMs directly. All you need is one machine with SSH that is accessible from the outside, and from which you can connect to your KVM host.

Edit your local ~/.ssh/config file, add this:

Host kvmhost
ProxyCommand ssh -l login SSHHost nc InternalIPOfKVMServer %p
User login
Compression yes

Save, exit; now you can run:

virt-viewer -c qemu+ssh://kvmhost/system NameOfTheVM

I am not sure if I was clear... Feel free to ask questions about this, or KVM generally - I would be happy to answer them either in the comments or in further posts.

Yann

Hello Planet Ubuntu!

Jan 02, 2009 by Yann Hamon

Hello Planet Ubuntu!

I decided the new year was a good time to finally get started.

So let's start with a quick disclaimer: yes, we are a company (more precisely an educational charity); we provide archaeology services and heritage management mainly in the United Kingdom and France. We've been involved in Ubuntu and free software for a while, and I thought it would be interesting to provide feedback about our experience using Ubuntu in an enterprise environment. So, we came up with a proposal to the Community Council a couple of weeks ago which ultimately turned into a corporate blogs policy for the planet. As you can see we are on "trial", so I hope we will be good citizens ;)

On the technical side, what will be this blog about?

  • Virtualisation: We have been among the first to deploy KVM in a production environment. Expect tips and tricks about libvirt, KVM, jeos, ubuntu-vm-builder...
  • Mobile GIS: as archaeologists we love maps! We believe maps and GIS tools on openmokos are quite nice...
  • Ubuntu Server: we have tens of Ubuntu servers and virtual machines around; Apache as a reverse proxy with HTTPS, custom Munin plugins, Puppet... I've got some ideas of stuff to write about.
  • Ubuntu Desktop: some of our archaeologists are getting interested in swapping their XP desktop for an Ubuntu one. I hope I will get them to post some feedback over here!
  • ... and last but not least, Ubuntu, Canonical, Canonical support, Landscape. We've chosen Ubuntu as a platform because we believe it is one of the best around - some stuff is really great about it, but it's also not only pink, happiness and perfection everywhere - we hope we can help to push toward what we believe is the "right path" ;)

So, hello, and happy new year 2009 to the Ubuntu users!

Yann