OA - Ubuntu
A blog about Ubuntu, mobile GIS and archaeology

KnowledgeTree & SaaS - Separating myth from reality

Apr 13, 2011 by Yann Hamon

I received the latest newsletter from KnowledgeTree a couple of days ago. It goes along these lines:

How do I keep my business documents protected?

Your business documents must always be secure.  You need a vendor that takes security seriously.

Well, I won't argue with that - but I just want to point out that no, SaaS is not the solution to everything. When you use a provider with closed source software, it is only as secure as the provider tells you - which often means not very, despite the price they'll charge you.

To make my point, here is a list of security issues, ranging from trivial to critical, that are still present in the latest version of KnowledgeTree. These were reported in January 2010 through the appropriate channels, with regular follow-up mails asking about progress. I have good reason to believe many of these issues also affect the hosted version.

  • You can DoS any on-premise version of KnowledgeTree using the files in /usr/share/knowledgetree-ce/bin - they are not protected by htaccess or any other form of access control, and they run very heavy maintenance tasks. Reload one or more of the following files a few times:
    • md5_validation.php
    • recreateIndexes.php
    • storageverification.php
    And your website will be down. For a while.
  • The call_home.php file in the root folder writes any value POSTed to it to var/system_info.txt, and is also available to unauthenticated users. The maximum file size on ext4 is 16TB, and KT sets the maximum POST size to 2GB - if var/ is not on a really big partition, you can simply keep sending large POST requests to this file until the partition fills up.
  • KnowledgeTree leaves the installation files in place, even once it is installed. If you try to access, for example, /setup/wizard/installWizard.php, you will be told, rightfully, that KT is already installed - unless you add ?bypass=1 to the query (really). Fiddle around a bit there and you can easily trash the database and more. Not good.
  • There is a bug in the ZIP library KT uses. After I reported it, a statement was posted on the wiki. Of course, the latest community AND enterprise versions distributed still ship with the bug included, and good luck finding the patch. The library accepts archive entries with relative paths in them. Try crafting an archive containing a file with a path like "../../../../usr/share/knowledgetree-ce/script/malicious.php" and use the bulk upload tool to upload it - that's right, any authenticated user can upload PHP files and execute them, potentially accessing all files, or worse.
  • You get the regular XSS too: in Preferences -> Display name, try entering a </body> or some other HTML - it will break the display. This was also present in the hosted version last time I tried. Some JavaScript is escaped, but it would still allow, say, a hidden iframe pointing at a booby-trapped PDF, to stick to the most common exploits in the wild. It could be even more damaging if the attacker points the iframe at an address like:
    http://KT/knowledgetree/login.php?redirect=admin.php%3Fkt_path_info=principals\
    %26action=createUser%26new_password=test%26confirm_password=test
    This would create a new user account "pirate" with password "test", should an admin display the iframe. Not too secure, I'd say.
  • The default install of KnowledgeTree (enterprise edition) also leaves a Zend server configured with the default password. Don't forget to change it - most people don't, as a few Google searches will confirm.
  • You also get the boring ones: http://KT/knowledgetree/ktwebservice/download.php?code=test&d=test&u=%3Cscript%3Ealert%28%22%20helloworld%22%20%29%3C/script%3C
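The path-traversal issue in particular is cheap to screen for server-side. Here is a minimal sketch of my own (not an official KT fix - the archive name and check are made up for illustration): list an archive's entries and refuse any containing a ".." path component before anything is extracted.

```shell
# Build a harmless sample archive, then apply the same check a
# bulk-upload handler could run: list entries and reject any with
# a ".." path component before extracting anything.
set -e
tmp=$(mktemp -d)
echo "report" > "$tmp/report.txt"
tar -czf "$tmp/upload.tgz" -C "$tmp" report.txt    # sample upload

if tar -tzf "$tmp/upload.tgz" | grep -Eq '(^|/)\.\.(/|$)'; then
    echo "rejected: archive tries to escape the extraction directory"
else
    echo "accepted: no relative path escapes"
fi
```

The same one-line grep works regardless of archive format, as long as the tool can list entry names without extracting.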

I'll stop there - there are quite a few more. The list of grievances doesn't stop there either: KnowledgeTree will not scale past a few tens of thousands of files (see my article on their wiki about that).

I understand it is not very nice to post about security issues in the wild before a fix has been released, but at some point it may be the only way to get them fixed - and if it doesn't help there, at least people will be aware of what they are getting into when installing or purchasing KnowledgeTree, and existing customers will know that they have been misled.

What conclusions should we draw from that?

  • Telling people your SaaS solution is secure is not enough.
  • With closed source software, as with SaaS solutions, you can't tell how secure the solution *really* is. So when you receive a newsletter or mail from a SaaS company telling you how much more secure their solution is, do yourself a favour: junk it.

PS: KnowledgeTree is also a Canonical case study. We learn that, thanks to Ubuntu, they now have "200 times the clients for a quarter of the cost". Interesting, eh? It's plain sad to see such a fiasco, remembering the time I invested in working on KT extensions and analysing the code.

So long, KnowledgeTree.

KVM, VMBuilder & Puppet - (Really) Automated deployment

Aug 10, 2010 by Yann Hamon

I've spent some time automating the deployment of virtual machines using vmbuilder and puppet. There are two reasons behind this:

  • I give every IT person in the company (about 8-10 of us) a VM for development, to play with new software and try fancy configurations. As a result, those machines often break and need reinstalling.  
  • We do not have enough space on our SAN to put all VMs on it. As a result, there are still VMs on the disks of the KVM servers. If one of those breaks, I need to be able to recreate the machines from backups quickly.

The idea is to use puppet to "describe" the virtual machine, create the machine using vmbuilder, and have puppet set itself up on it. This has a lot of advantages compared to just backing up the whole image: it doesn't take any additional disk space, it is configurable (you can change the IP or disk size when deploying), and it gives you full knowledge of how the VM is set up, instead of a "black box" that would be very difficult for anyone but you to recreate.

Warning: this is a work in progress. One of the reasons I post this here is to get some feedback :)

Prerequisites: the version of vmbuilder in Lucid is badly broken - --tmpfs, --firstboot and --templates do not work properly. I grabbed python-vm-builder_0.12.4-0ubuntu0.1_all.deb from -proposed.

So, on to the puppet configuration. Let's define a VM called yhamon-dev on the node kvmhost1.goo.thehumanjourney.net:

node "kvmhost1.goo.thehumanjourney.net" {
    virtual_machine { "yhamon-dev":
        fqdn     => "yhamon-dev.goo.thehumanjourney.net",
        ip       => "",
        netmask  => "",
        dns      => "",
        gateway  => "",
        memory   => "512",
        rootsize => "5120",
    }
}


This will call the defined type virtual_machine, which I have defined as follows (some lines had to be split for display - lines ending in \ are one line):

define virtual_machine ($fqdn, $ip, $netmask="", $dns="", \
    $gateway="", $memory="1024", $rootsize="6144") {
    exec { "create_vm_${name}":
        path    => "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        timeout => 3600,
        command => "virsh destroy $name ; virsh undefine $name ; /usr/bin/vmbuilder \
            kvm ubuntu -d /var/lib/kvm/$name -v -m $memory --cpus=1 --rootsize=$rootsize \
            --swapsize=512 --domain=$name --ip=$ip --mask=$netmask --gw=$gateway --dns=$dns \
            --hostname=$name --suite=lucid --user=yhamon --name='Yann' \
            --rootpass=password --libvirt=qemu:///system \
            --components=main,restricted --execscript=/var/lib/kvm/postinstall.sh --debug \
            --verbose --firstboot=/var/lib/kvm/kickstartpuppet.sh \
            --copy=/var/lib/kvm/puppetkeys/$fqdn/files --tmpfs=- --addpkg=puppet \
            && virsh start $name",
        unless  => "/usr/bin/test -d /var/lib/kvm/$name",
    }
}

A few words about this defined type. First, it checks whether the folder /var/lib/kvm/$name exists ($name being the name of the VM, in our case yhamon-dev). If that folder doesn't exist, it will stop and undefine any VM registered under that name in libvirt, then rebuild it. You might not want this - I use it so I can simply delete the folder and not bother with removing the VM from libvirt as well.
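The effect of that unless guard can be sketched in plain shell (a simulation only - a temporary directory stands in for /var/lib/kvm, and no VM is actually built):

```shell
# The VM is (re)built only when its image folder is missing -
# deleting the folder is the whole "please rebuild this VM" interface.
name=yhamon-dev
base=$(mktemp -d)               # stand-in for /var/lib/kvm

if [ ! -d "$base/$name" ]; then
    echo "rebuild $name"        # vmbuilder would run here
fi

mkdir -p "$base/$name"          # simulate a completed build

if [ ! -d "$base/$name" ]; then
    echo "rebuild $name"
else
    echo "skip $name"           # unless => test succeeds, exec is skipped
fi
```

This is exactly what puppet's unless parameter does: the exec only fires when the given test command fails.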

After this, it calls vmbuilder with the arguments given above. We need to raise the timeout and specify the $PATH. We use the --tmpfs argument and a local APT repository to speed up image creation. The tricky part is getting puppet installed and configured correctly, so that it starts at first boot and configures the server without intervention. Four steps are required:

  1. Installing puppet, done with --addpkg=puppet
  2. Preventing puppet from starting on its own (and therefore potentially doing things we don't want it to) - done by the postinstall.sh script given to --execscript
  3. Copying puppet's SSL keys and configuration files - done by the --copy argument
  4. Deploying those configuration files and starting puppet - done by the kickstartpuppet.sh script given to --firstboot.

The first step is pretty much self-explanatory, so let's start with the second. We override the /etc/default/puppet file to prevent puppet from starting after the first boot, and remove any new SSL keys the install script might have generated:

$ cat postinstall.sh
chroot $1 rm -rf /etc/puppet/ssl/
chroot $1 /bin/bash -c "echo -e 'START=no\nDAEMON_OPTS=\"\"\n' > /etc/default/puppet"


We deploy the keys and puppet configuration that we had saved from the previous, existing VM to the KVM host, in /var/lib/kvm/puppetkeys/yhamon-dev.goo.thehumanjourney.net - and copy everything to /root/puppet/ on the guest (the reason is given below). I believe this is more secure than autosigning puppet certificates.

/var/lib/kvm/puppetkeys/yhamon-dev.goo.thehumanjourney.net$ cat files 
/var/lib/kvm/puppetkeys/yhamon-dev.goo.thehumanjourney.net/ssl/certs/yhamon-dev.goo.thehumanjourney.net.pem /root/puppet/ssl/certs/yhamon-dev.goo.thehumanjourney.net.pem
/var/lib/kvm/puppetkeys/yhamon-dev.goo.thehumanjourney.net/ssl/certs/ca.pem /root/puppet/ssl/certs/ca.pem
/var/lib/kvm/puppetkeys/yhamon-dev.goo.thehumanjourney.net/ssl/certificate_requests/yhamon-dev.goo.thehumanjourney.net.pem /root/puppet/ssl/certificate_requests/yhamon-dev.goo.thehumanjourney.net.pem
/var/lib/kvm/puppetkeys/yhamon-dev.goo.thehumanjourney.net/ssl/public_keys/yhamon-dev.goo.thehumanjourney.net.pem /root/puppet/ssl/public_keys/yhamon-dev.goo.thehumanjourney.net.pem
/var/lib/kvm/puppetkeys/yhamon-dev.goo.thehumanjourney.net/ssl/crl.pem /root/puppet/ssl/crl.pem
/var/lib/kvm/puppetkeys/yhamon-dev.goo.thehumanjourney.net/ssl/private_keys/yhamon-dev.goo.thehumanjourney.net.pem /root/puppet/ssl/private_keys/yhamon-dev.goo.thehumanjourney.net.pem
/var/lib/kvm/puppetkeys/yhamon-dev.goo.thehumanjourney.net/puppet.conf /root/puppet/puppet.conf
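As I understand it, each line of that files manifest is simply a "source destination" pair: vmbuilder copies each source file from the host into the guest image at the destination path. A quick sketch of the format (with shortened, made-up paths - only the parsing is shown, nothing is copied):

```shell
# Parse a --copy style manifest: one "source destination" pair per
# line. Here we only print what would be copied.
while read -r src dst; do
    [ -n "$src" ] || continue        # skip blank lines
    echo "copy $src -> $dst"
done <<'EOF'
/var/lib/kvm/puppetkeys/HOST/ssl/certs/HOST.pem /root/puppet/ssl/certs/HOST.pem
/var/lib/kvm/puppetkeys/HOST/puppet.conf /root/puppet/puppet.conf
EOF
```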


After the first boot, we copy the files to where they belong:

/var/lib/kvm$ cat kickstartpuppet.sh

cp /root/puppet/puppet.conf /etc/puppet/puppet.conf
puppetd --test --debug                        # first run: puppet sets up /var/lib/puppet/ssl
cp -r /root/puppet/ssl/* /var/lib/puppet/ssl/
puppetd --test --debug                        # second run, now with our signed keys


There is an ugly hack here that can probably be improved: the first time it runs, puppet does all sorts of magic, which involves moving the /etc/puppet/ssl/ folder to /var/lib/puppet/ssl/ and potentially setting up more configuration. This is why we copy our SSL configuration to /root/puppet/ first - if we copied it straight into /var/lib/puppet/ssl, puppet would override our files the first time it ran. So we run puppet once, let it do its work, copy our files from /root/puppet/ssl to /var/lib/puppet/ssl, and run it again. Improvements to this part welcome...
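The ordering argument can be demonstrated without puppet at all. This little simulation (temporary paths stand in for /root/puppet and /var/lib/puppet/ssl; the "first puppet run" is faked by resetting the directory) shows why copying the keys in before the first run would lose them:

```shell
# Stage our pre-signed key, let the simulated "first puppet run"
# reset the live ssl directory, then copy the staged key afterwards.
root=$(mktemp -d)
stage="$root/root/puppet/ssl"        # survives puppet's first run
live="$root/var/lib/puppet/ssl"      # clobbered by puppet's first run
mkdir -p "$stage" "$live"

echo "our-signed-cert" > "$stage/host.pem"
echo "our-signed-cert" > "$live/host.pem"   # what copying in early would do

# Simulated first puppet run: rebuilds its ssl dir from scratch
rm -rf "$live"; mkdir -p "$live"
echo "autogenerated-cert" > "$live/host.pem"

# The early copy is gone, but the staged copy can restore it
cp "$stage/host.pem" "$live/host.pem"
cat "$live/host.pem"
```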

Finally, in puppet, we need to define the configuration for yhamon-dev.goo.thehumanjourney.net:

node "yhamon-dev.goo.thehumanjourney.net" {
    include baseclass_dev_vm
    # Add anything here
}

There you go. Now if I want to recreate the virtual machine yhamon-dev, I just remove it (sudo rm -rf /var/lib/kvm/yhamon-dev/) and wait a few minutes - no intervention required. I hope this is helpful.

Am of course happy to answer any questions you might have regarding this setup... I'm not sure I've been that clear!

Even Accenture say so

Aug 06, 2010 by Chris Puttick

In a press release that surprised me (I'm more used to thinking of them as a .NET-developing, MS Shiny Partner, if-it-ain't-got-a-6-figure-pricetag-it-ain't-worth-nothing sort of consultancy), Accenture have announced that nearly everything open source is just great. I particularly liked the bit that says increased demand for open source is based on its "quality, reliability and speed".

Specifically, in their survey of 300 executives with IT budgets of the very large kind, three-quarters of respondents in the UK and US cited quality as a key benefit of open source, and over two-thirds cited improved reliability and better security/bug fixing.

Good news indeed for those struggling to gain acceptance of open source as an enterprise solution in their organisations. Woot, I say unto you. Woot.

Full article here.

Archaeological Archives, Online

Jul 15, 2010 by Joseph Reeves

Open data in archaeology is growing in popularity, so I thought I'd write a quick post about it here. We've recently released our latest open source GIS application, we're continuing to use as much Free software as possible, and we're also part of a movement to make archaeological data free to all.

The Open Knowledge Foundation has a Working Group on Open Data in Archaeology, and their blogs often contain articles on the subject - this one is a good introduction to the topic. Our contribution to the cause is slowly growing: we release as many reports through our Library as we can, and now we've started making tentative steps towards putting the whole archive online. See, for example:


Sims, Mike (2008) Buscot Wick Farm, near Lechlade, Oxfordshire. Project Report. Oxford Archaeology. (Unpublished)

McAlley, Rowan (2009) Rushey Weir, Oxfordshire. Project Report. OAU. (Unpublished)

These contain not only the PDF reports, but also copies of everything that was recorded during the excavation and everything that was submitted to the receiving body (usually a local museum). If you want to know what archaeologists do, this is the best way to find out.

As far as I am aware, no other institution is putting this much primary data online. I acknowledge that we only have a tiny amount available at the moment, but thanks to the hard work of our archives department this will increase in the future. We're open with our software and we're open about what we do with it; I think that's quite nice :)

OA Digital releases gvSIG OADE

Jul 14, 2010 by Yann Hamon

Hello everyone, posting this on behalf of my colleague Ben as he was being a bit shy :) Please forgive him the "press release" style - this is the result of months (if not years) of work, so we're all pretty enthusiastic!
So here we go...

OA Digital are proud to announce the immediate availability of gvSIG OADE 2010, the user-friendly, open source GIS that gives you freedom, functionality and flexibility.

Following two beta versions and extensive testing at sites around the world, this release represents a stable, feature-rich GIS desktop application, second to none in its data management and geoprocessing capabilities.

This version is based on the source code of gvSIG 1.10, as developed by CIT Valencia and others. We would like to thank them all for their great work making user-friendly, cross-platform and open source GIS a reality.

For installation instructions, support and downloads:


This software runs on Windows, Linux and Mac OS X.

New features

Perhaps the most exciting new feature in this release is the integration of nearly 180 GRASS GIS raster and vector processing modules.

See here for an overview of other features:


About this edition of gvSIG

GvSIG OADE 2010 differs from the official CIT gvSIG 1.10 release in the following respects:

  • completely new installer frontend
  • includes latest Java Runtime Environment
  • reworked menu structure, keyboard shortcuts and layer context menus
  • better integration into all supported operating systems
  • comes with SEXTANTE 0.6 and GRASS GIS 6.4 integrated
  • additional documentation and sample data
  • fully self-contained; can e.g. run from a removable USB device

We would like to hear from you

If you are using gvSIG OADE as part of your research, teaching or business, we would like to hear from you! Let us know what you, your department or company use gvSIG for and your motivation for using the software.

Interacting with Ubuntu One - what it could be (There should be an app for that!)

Jul 13, 2010 by Yann Hamon

Nowadays we tend to have more and more devices that connect to the internet. Forget the family desktop: people now have netbooks, smartphones, laptops - very often more than one device. Ideally, we would like to access all our data - pictures, music - from any device we have, from any location.

Enter Ubuntu One and the world of file synchronisation. Push your files to Ubuntu One, and you can access them from any Ubuntu machine in the world. Nice idea, but here comes the fail:

  • Forget about accessing Ubuntu One from your iPhone or Android phone
  • Forget about accessing Ubuntu One from a Windows machine (this might come later, but will still require installing software - so no easy access from a friend's computer or an internet café)

Now here is my plea: how nice would it be to have an app store for web apps, hosted online, that interact with cloud storage? Think http://bitspace.at/ , but using Ubuntu One/Dropbox storage.

I would love to have:

  • An HTML5 photo browsing app, hosted on Canonical's servers, that interacts with Ubuntu One storage.
  • An HTML5 music player, from which I could stream my music stored in the cloud.
  • An HTML5 ODF editor, where I could quickly edit my files.

Advantages over existing solutions:

  • Access and use all your files from any internet-connected device, provided it has a good browser
  • No need to install software on target computer - same look and feel whatever the platform.

Issues that need to be solved:

  • The applications purchased should be downloadable as ZIP files and uploadable to a new host (potentially your server), should you wish to change the application provider.
  • The apps should be able to interact with many file storage providers - a standard would be needed for that (CMIS, anyone?), so you could change the storage provider too.

I believe this is all technically doable already; I'm just missing the time to write my own apps... Anyone up for the challenge? What do you think of the concept?

OpenStreetMap + Launchpad? Not yet

Jul 01, 2010 by Joseph Reeves

At the end of last year I tried to encourage the use of OpenStreetMap within Launchpad in place of the existing Google Maps; there was a bug and a blueprint for people to comment on, and I hoped we might see Free Software embrace Free Data.

Since those bugs were first filed, OpenStreetMap has come on in leaps and bounds; it has saved lives across the world and has been at the forefront of a data revolution that everyone is getting into. The quality of the map data is improving rapidly - compare the coverage in your local area to Google's and see for yourself. Importantly, the barrier to entry for OpenStreetMap is incredibly low: if your street isn't on the map, it's easily added.

Sadly, it seems the logical marriage of Free Software and Free Data might not happen; the blueprint has been marked obsolete and the bug closed with the note "We might reconsider this in the future when OSM provides all features that Google Maps does". I'm not sure what these features are - maps over HTTPS, perhaps - but it is wrong to think we need OSM to provide them. OpenStreetMap is a database of geographic information that anyone can download and use as they like, even with the Google Maps API; we can provide the required features ourselves.

Launchpad is Free Software, but Google does not provide Free map data; Google provides map data at no cost. Would we host the Launchpad servers on Windows Server if it were provided without cost?

A quality of open software that makes it better than closed equivalents is that it's open. We all know this, and we need to apply the same thinking to data.


Money for nothing

Apr 16, 2010 by Chris Puttick

So, there I am, helping my much loved and elderly aunt with her computer. The computer's a tad old, running - more accurately, crawling - mostly unpatched Windows XP Home and, like many of its age, the Windows installation has become somewhat overburdened by cruft; although given the 9-year-old Celeron and 256MB of RAM, I have difficulty believing it was ever sprightly. So I offer to help her get a new one and come and make it all work. Having clarified that all she is interested in is getting her email and writing the odd letter or two, I determine that a Kubuntu setup will be plenty for her purposes. I figure this will save her a few pounds as well as make future maintenance very easy.

Check: Internet connection is not some weird Windows specific "lite" solution - nope, is TalkTalk ADSL with ethernet adapter.

Check: email account works fine with Firefox - no problem, some other sensible member of the family has already installed Firefox as the default browser on the old computer.

Excellent. Just before I leave she mentions that her current computer has been very good and she would quite like one of the same brand. Dell. No problem, says me, Dell I use at work, I'm sure we can get one plenty good enough and within your budget. As we have several recent Dell PCs running Kubuntu I am also confident that will be a good choice for the project.

So, knowing that Dell UK list no desktop PCs without Windows, next chance I get I order through the company account at Dell, paying by credit card. No problems.

Except for that one, tiny, crazy, recurrent problem...

My account team have absolutely no problem with my ordering a single PC without an operating system; I go on the website, pick out the lowest-priced one, up the mouse option to the laser one and call it done, then email the account team the details, deleting the Windows (v7, rather limited edition) and Works lines from the spec. The formal quote promptly comes back, but while Works is gone, Windows is still on the spec listing. So I email back and point out that Windows is still included.

There follows a delay.

Then a telephone call. Sure, absolutely no problem, we can sell you that PC without Windows, but because it is a bundle deal on offer, if we build it from components it will cost you £30 more. Sigh...

So once again, for the record.

To all PC manufacturers: I don't want Windows on my PC. It costs you money, and I don't want it to cost me or my aunt money; it should be cheaper to sell me a PC without it - unless in all these special-offer cases *someone* is funding the sale of the PC with Windows on it, such that not only is the OEM cost of Windows absorbed, but £30 more? Hmmm.

To the competition authorities: surely something must be going on? Maybe you should actually investigate?

To the statistics guys: yes, this PC will register as having been sold with Windows on. Like several percent of others sold that way, shortly after delivery it will not have Windows on anymore; to stop my aunt from having problems viewing websites maintained by people with little understanding of the web, standards and trends, I may well set the browser agent so it looks like it is IE8 on Windows Vista. Just so you know, ok?

OA and the GSoC

Mar 31, 2010 by Chris Puttick

An OA member of staff is offering to mentor a GSoC student to add support for SQLite3/SpatiaLite to the gvSIG desktop GIS software as part of our and gvSIG's involvement with the Open Source Geospatial Foundation (OSGeo). More information about this and other GSoC ideas can be found on the GSoC page on the OSGeo wiki. If you qualify for GSoC, go for it...

Divided we stand, united we fall

Mar 25, 2010 by Chris Puttick

 This is a thought that has been troubling me for some time, but the upcoming release of 10.04, the next long term support (LTS) version of Kubuntu (and the Kubuntu Gnome Edition ;) ) makes this post pretty timely. The LTS release is in effect the enterprise-class version, with a support window long enough to make its testing, installation and management feasible in a large scale desktop setup. An aside, but I differ as to where the need for long term support is strongest; while servers are individually more important, you can far more easily, cheaply, and effectively test and upgrade your server fleet than you can your desktops and laptops. Your desktop and laptop fleet (unless your organisation is both well-managed and very, very rich) will be far more complex to deal with, with many bits of hardware, and with individual needs dictating many combinations of software to test against.

But I'm not sure that 10.04 will be enterprise ready, at least not for most organisations of any size and not for any intending to purchase support from Canonical. Don't get me wrong, I'm sure that Kubuntu 10.04 will be a great piece of desktop software, just as 9.10 is. I'm using the latter to type this on my work laptop, run it on my work desktop and netbook and we have a number of non-technical users migrated or migrating to it. Kubuntu 10.04 will come with some great core software (although we could still do with Kivio being sorted out - anyone?), with a brilliant underlying architecture, a beautiful and highly (end-user) customisable interface and the capability of doing most any job an organisation or individual might need.

The issue is this: an organisation with a big end user fleet needs some stability for planning with; hence the LTS requirement, with five years in which to test, deploy and maintain, before entering the testing phase of the replacement desktop OS. But what they don't need is for the complete desktop eco-system to be locked-in to a point in time; uniform across the OS and apps. In fact that is the last thing they want - the testing cycle becomes long and intense with many factors to consider or potentially overlook and with many changes for people to need support with; where the desktop OS itself might well be good for 5 years or more, some or all of the applications and associated plugins are likely to benefit from updating both more frequently and in unrelated cycles.

Moreover, one of the (many :) ) advantages of adopting open source solutions should be that open source software updates are more often evolutionary than revolutionary; short testing cycles, quick and easy deployment, minimal training requirements. But the jump from one deployed LTS to the next will be 4 or so years worth of application releases; and some of the applications you would still have in use would be out of support from the original creator's perspective. Sure another advantage of open source is that the lack of support from the originators of the application is not a killer blow, but it does mean Kubuntu developers having to maintain software unnecessarily; I know I'd rather they worked on core improvements and new applications than spent time bug fixing end of life apps.

A further issue is this: right now, if I chose to deploy 10.04 and another organisation chose Windows 7, in 4 years' time when we are all looking at our next desktop OS, they will only be looking at their OS, not their apps. Worse, if our only difference was the choice of desktop, in 4 years' time they would be using the latest and greatest versions of applications such as OpenOffice, Firefox, Inkscape and Krita, while we would be stuck on 4+ year old versions. Worse still from our perspective, this would mean lagging in our support of open standards such as ODF, HTML, SVG, and so on.

Before you start writing comments to the effect that it is possible to install the latest versions of applications mentioned (and others) by adding third party repositories, download third party debs and rpms, or compiling from source, I know, and occasionally I do just that. But then I am self-supporting. In the enterprise support is important; going out of the distribution means withdrawal of support, potentially not just for the updated application but for your entire desktop install. Just not acceptable in a large deployment.

So I have a solution to propose: a separation of the core OS and the mainstream userland applications. In the latter grouping I would put a relatively small number of apps like OpenOffice, the KDE desktop suite, GIMP, Inkscape, Scribus, Firefox, etc., and annually make available the latest version of those packages for all currently supported core OS releases. Anyone using a non-LTS would probably get one update of applications, LTS users would benefit from the choice of 3 or 4.

Maybe this service should only be made available to subscribers on the basis it mostly advantages enterprises, as individual users currently get the new applications by upgrading their entire desktop. Maybe it should be a service FoC; I guess the cost of achieving and maintaining this split should determine whether it is charged for. The related updater should have a simple per application tickbox (and conf file) acceptance of the use of new versions, to allow the enterprise IS management to easily control how and when new versions of applications are deployed.

This is just my solution to the problem identified. Others may have alternative solutions. Some may believe that this complete freeze is a good thing, or application updates should be in the LTS point releases. I'm pretty sure the complete freeze is a problem, and that it needs a supported solution that gives easy control over application update timings. The latter of course makes the former a choice while I feel the current approach restricts choice. Choice is the Linux advantage, isn't it?

KnowledgeTree calls out to its community

Feb 17, 2010 by Yann Hamon

Short introduction: KnowledgeTree (KT) is a document management system written in PHP, targeting small to medium businesses. I would name Alfresco as its closest competitor. At Oxford Archaeology we already use KT, and plan to extend its use to all our offices and documents (that's quite a few terabytes).

A few days ago, Damien Williams from the KnowledgeTree team issued a call to the KT community: "Please help us serve you, the community, better". Getting things right between a company developing open source software and that software's community is very difficult, but it is also a very interesting topic, which I thought was worth responding to.

Let's start with the following: it is incredibly easy to get it wrong.
I love that article and believe it is a great first read for any company that wants to get things right.

Now to my view. "Community" is a broad term. In the KnowledgeTree world, I believe I wear 5 different community hats, as I am:

  • A customer
  • A system administrator
  • A plugin author
  • A soon-to-be-well-maybe-I-hope core developer
  • A free software activist

For each of these hats, I have different needs, opinions and suggestions, which I will detail now. They are focused on KnowledgeTree, but I think they apply to pretty much any other software we deploy.

As a customer

As a customer, I want the software to be cost-efficient, and to answer the needs of my users in terms of speed, interface and features.

I am interested in case studies - is any other large deployment known? If so, I am interested in a PDF where the customer details their deployment, the problems encountered, and how well it scaled. This will reassure me that I won't run into problems later that I couldn't have anticipated when deploying.
Screenshots & videos: what does the software do? What does it look like? Even better would be the ability to test-drive the software without a complex registration system - or with no registration at all.

A regular newsletter where I can read about upcoming features, technological innovations and interaction with other software can also be very interesting.

I also absolutely need to be able to give input - ask for features, get my voice heard on important decisions that will affect the direction the project will take - even if, in the end, I am in a minority and another direction is taken. Even better, I would like to be asked to help, to test-drive new beta features, to give feedback about our deployment...

As a system administrator

As a system administrator, I need to get the best uptime and performance out of the software, while keeping it as secure as it can be. These are difficult tasks that require a lot of information.

Where are the bottlenecks going to be? Is all the load going to be on the Apache servers? Is the database going to be the issue? Will I be able to scale the software horizontally - i.e., have a cluster of Apache servers and a separate SQL install - or will my only option be to throw more RAM at one big box?

Therefore, as a system administrator, I need:

  • A short, 5-10 page quick-start guide. Sysadmins are always happy when they can get something running in a snap.
  • An in-depth, exhaustive manual detailing every single configuration item: from the configuration files to how I can fine-tune the SQL database and optimise the Apache servers. Are there features I should consider deactivating if the load gets too high?
  • It would also be good for this manual to describe several different setups. We had the simple one-server setup in the quick-start guide; now I want to know how I can set up KnowledgeTree with five Apache servers, replicated SQL, NFS and memcached. Can all the elements be split?

Eventually I will run into issues. If I don't want to contact support for every tiny problem, a forum for system administrators is usually a great place to start asking questions. Having people with a deep understanding of KT provide very basic support there is a great way to get more people to use KnowledgeTree, and eventually to participate in the forums themselves. Getting completely ignored is probably the worst thing that can happen there, though.

If I run into more serious issues (potentially preventing people from working), I will report them to support, which I expect to be quick, friendly and knowledgeable. With most free software companies you can get the software for free - so the support really needs to be worth it :)


On the security side, I need to be made aware of critical security issues as quickly as possible - either via a mailing list or via an RSS feed that I can follow.


As a plugin author

As a happy user of the software, I absolutely need this one feature that, heck, I seem to be the only one needing. So I am left with the task of either writing it myself or getting someone to do it for me. When time and skills allow, I tend to save the money and do it myself.

When you start writing code, what really helps are hello worlds: simple pieces of code, demonstrating basic features, that you can just use and extend. For KnowledgeTree, some "demo" code for every plugin type (dashlet, processor, admin page...) would be an amazing resource. Basic pieces of code, "bricks", also help a lot - how do I connect to the database? Surely I don't need to rewrite a logging system from scratch when I can just reuse the existing one? Taking a "Hello world" plugin and adding some bricks to it could get a good half of the work done in many cases. Maybe that's an area where I could contribute.

My other concern when writing a plugin is: what am I allowed to do? Sure, I can just include any class or file from the project and start using it - but which ones have an interface that is frozen? It would be rather disappointing if my whole plugin broke at the next KnowledgeTree upgrade and needed a massive rewrite to work again. Failing that, an automated test that would tell me whether or not my plugin is compatible with a given version of KT would help. Some indications in this area would be highly valued.

A manual intended for plugin developers would probably help kickstart plugin development - and improve consistency and quality across plugins. Global programming guidelines, indenting rules, GUI recommendations for consistency with the rest of the software? Objects/classes/interfaces already implemented that I should be aware of before starting to reinvent the wheel?

A forum for plugin developers would be a great place to exchange tricks, advertise a plugin, ask for reviews... It would probably be rather empty at the beginning, but would hopefully grow over time.

As a core developer

Well, assuming I'm good enough, assuming I've been around for a while, and assuming I've got enough time for it - here are the things I would be interested in should I ever want to contribute more than a few lines of code.

One of the first things is that, as Tim Towtdi would say, some things require discussion. If I am going to completely revamp some pretty central and critical code and spend three weeks on it, one of the things I wouldn't be too happy to hear afterwards would be "right, that's a solution, but actually we had thought of this other way, which we think is better".

Ideally, a public mailing list would exist where the core developers discuss all the important technical choices for the software. Reading it, I could know what is going on and where the project is heading, give input on implementation choices, and ask for input when I want to start coding something non-trivial.

Let me stress this again: the discussions that matter should be public, the decision-making process should be fair and published, and some sort of meritocracy should be put in place. If someone has been contributing code to the core for the last two years, their voice should count as much as that of a developer from the company (at least on issues where the outcome would not directly hurt the company...).

Documented code is also quite important when you want to contribute. Providing PHPDoc output is a nice start; the second step is to actually use PHPDoc-style comments so that the documentation is not just a list of functions :) Having to read through 20,000 lines of code before being able to contribute is what you want to avoid.

As a free software activist

I'm afraid that even with all the controversy about this, I have at least one foot in the FSF camp... I hate running BLOBs - y'know, those programs where the salesperson tells you "it is great, it is fast, it will scale" - and unless you try it at scale, you just have to believe them. If there is a bug, you just have to wait for them to fix it, even if you might have the skills to fix it yourself. Security is another concern.

It is a bit like a candy where the list of ingredients is a secret: it might taste good, but you have to trust the big corporation when it tells you "no worries, it's good for you". And they forgot to write "Warning: may contain nuts".

So, I don't like black boxes, and I am not a big fan of KnowledgeTree's decision to use Zend. I also *very* strongly object to the direction taken by the offline client, which uses Adobe AIR - it could perhaps have been written in HTML5, with its offline features, drag & drop, the File API, and so on. Standards and cross-platform programming should not be optional...

As any of those

A community manager is also often a good idea. The community manager can act as a single point of contact for the community. It should be someone who is communicative, who has some time to spend on IRC and the forums, whom the community knows and appreciates, and who would sometimes have to stand up and defend the ideas of the community - which might not be those of the company - so someone appreciated inside the company as well :). It certainly doesn't have to be a full-time position - any developer with strong free software values would probably do.


I am very happy that KnowledgeTree is asking for input from the community. Caring about the community and valuing it is probably the most important thing in an open source project anyway.

I hope this feedback will give KnowledgeTree, as well as other open source companies, some indication of what makes us - the customers, the sysadmins, the programmers, the FLOSS advocates - happy. I would also be happy to hear what is important to you when dealing with a company that writes open source software...

Troubles with KVM... Anyone got a version working?

Jan 20, 2010 by Yann Hamon

Dear Planet Ubuntu, I've been having issues with KVM for several months now (running Ubuntu Hardy LTS). I was using kvm62 and had many issues with it (non-functional SMP, issues with network drivers, ...), then I moved to the kvm84 and libvirt 0.6.1 backports, and I am still experiencing many issues - the worst being regular crashes of virtual machines, which isn't exactly fun. I thought I had fixed it (my 32-bit VMs were running with a 64-bit KVM CPU, which should work, though... but they are now crashing with a 32-bit CPU as well). This is what munin produces when the VM crashes.

Dear planet readers and KVM users, what version of Ubuntu are you running with KVM, and with what version of KVM/libvirt? Are you happy with it? I am getting somewhat desperate :(

Blueprint plug: Within Launchpad use OpenStreetMap instead of Google

Dec 27, 2009 by Joseph Reeves

The new year is almost upon us, so I thought I'd plug one of my wishes for 2010: the use of OpenStreetMap within Launchpad rather than Google Maps. There's a blueprint here.

This move would, I believe, be beneficial to both Launchpad and OpenStreetMap; the former gets the use of free mapping whilst the latter gets increased exposure. As far as I can tell, the aim of Launchpad is to provide the tools necessary for collaborating on software projects; OpenStreetMap provides the tools required to collaborate on a free map of the world. I'm a firm believer that linking the two projects slightly closer together can only be a good thing.

Returning your Palm Pre to health with a Koala

Nov 25, 2009 by Chris Puttick

So much has changed; Palm devices now use and support Linux, and are cute and very, very neat (oh, and for iPhone users: among other things, the Pre can run more than one application at a time ;) ).

But K/Ubuntu 9.10 did away with /etc/event.d, which means that the debs thoughtfully provided by Palm to facilitate access to your Pre from your Linux box don't work as advertised. So for any Pre owners out there using K/Ubuntu 9.10 or variants who have just updated their Palm Pre to the shiny new WebOS 1.3 and had it go wrong (like a scary message involving www.palm.com/ROM), here's how to sort it all out the easy way[1].

First, don't go straight for the excellently cross-platform solution offered by Palm, the WebOS Doctor; you will need it though, so if you have already got it up and running, no matter.

 You need to install the Palm SDK and Novacom tools, which you can get from here:

http://developer.palm.com/index.php?option=com_content&view=article&id=1585 (bottom of page)

Run the installer as directed, but ignore the errors from the Novacom install; they are caused by the removal of event.d referenced above. To get this vital element working you can just run sudo /opt/Palm/novacom/novacomd (thanks to http://zootlinux.blogspot.com/2009/11/installing-webos-sdk-in-ubuntu-910.html).
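If you would rather not run that command by hand after every reboot, one possible workaround (a sketch only, assuming the daemon path the Palm packages install to) is to start novacomd from /etc/rc.local, which Karmic still runs at boot:

```shell
#!/bin/sh -e
# /etc/rc.local - executed at the end of each multiuser runlevel.
# Start novacomd in the background, since its event.d job no longer
# runs on Karmic; adjust the path if your install differs.
/opt/Palm/novacom/novacomd &

exit 0
```

(Remember that /etc/rc.local must be executable for this to run.)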

Now go for it with the WebOS Doctor :) - if you haven't already got it, get it here: http://www.palm.com/us/support/downloads/pre/recoverytool/

See, now that was not so hard. But it would have been so much easier (particularly for the non-techies) if these things were in the Partner repository. How on earth do things get in there?

[1] Where easy means "not trawling around a whole bunch of websites first"

Koala's LTS

Nov 10, 2009 by Joseph Reeves

I know that Karmic isn't an LTS release, and that this has very little, if not nothing at all, to do with Ubuntu usage within a modern corporation, but I couldn't help thinking of the OS I'm using when I read this news.

 Australia's koalas could be wiped out within 30 years unless urgent action is taken to halt a decline in population, according to researchers.

"They say development, climate change and bushfires have all combined to send the numbers of wild koalas plummeting". Chlamydia too is cited as a reason for the decline. After the linux.conf.au attendees supported the Tasmanian Devil earlier in the year, is it time for the Ubuntu community to get behind Koalas?

I'm not too sure what we could do to protect our fussy eater friends from development, climate change, bushfires and chlamydia, however, so if anyone has any good ideas, please post them in the comments.

Making progress part 2: small successes all add up

Aug 01, 2009 by Chris Puttick

So, a quick-ish follow-up so my previous post doesn't come across as too negative, and to show that progress in this area can be made with just a little effort.

We are having significant success as a relatively small company using awareness-raising tactics with potential suppliers; conversations along the lines of "hey, we liked your product - but we couldn't make it work with Linux and we have a strategy that includes ensuring desktop Linux is an option in our future" are having an effect.

But it is arguably easier in the corporate IT world. Even a small company spends more money than an individual consumer; so at the very least the salesman who loses the sale gets the message. We have had very positive reactions from large companies, such as Dell, and from the most senior level, when raising concerns regarding cross-platform support.

Some areas are basically done with, such as printers. We've recently carried out a refresh of large multi-function devices (black and white and colour copier/printers) in a couple of our offices; it was part of the base requirements that the devices have cross-platform support, and we were surprised to find that the selected supplier, Canon, not only had Linux support (despite some Internet postings to the contrary) but had it in the form of a full-featured, GPL-licensed open source driver. There are few if any serious printer manufacturers now who do not provide and, as importantly, state Linux support.

Other areas are similarly done with in the sense that they work just fine, e.g. digital cameras (we buy quite a few...), but the manufacturers are not admitting it in the documentation or on their websites. These are the fun ones from a purchasing perspective: "does this camera support Linux?" you ask innocently, knowing it is USB mass storage or PTP; "I'm not sure, does it matter?" responds the salesperson. "Yes, it is a requirement for the purchase, could you find out?" you reply, struggling to keep a straight face as they start to look nervous...

It seems cruel, but there is a purpose. My knowing that the digital cameras, USB hard drive or ISP services, etc. will work just fine with Linux is not the same as it being there in black and white, with instructions (which are mostly very similar to those for the Mac, so providing them is not a huge cost), so that the potential and not particularly technical Linux user can see that Linux is a real option. This is as true in enterprise decision making as it is for home users; you or I (I'm guessing, as you're reading an Ubuntu blog...) might know what will work for sure, what should work and what needs to be carefully selected (portable music players spring to mind - did I already mention the slimy demon that is Apple? :D ); you or I will know that just about any enterprise-level hardware will be just fine, that any corporate function can be reproduced on a Linux platform, that Linux is a real option on the desktop and the server - but many of my peers at an IT decision-making level don't, as they lost contact with tech stuff a while back (and then some!).

Having sales and marketing functions list Linux support for their hardware, no matter how distro-focused, no matter how qualified, puts it there in black and white: "you can use Linux, you can use Linux". Drills the message in. Makes people comfortable with Linux. Makes the application software suppliers uncomfortable, so they start considering porting to Java or platform-neutral C/Qt mixes, etc. Makes the web-app providers list Linux as a client platform as well as a server platform. Then the FUD will start to really dissipate and properly informed choices can be made.

So if you get involved with purchasing, or when buying something - hardware, software or services - for personal use, ask the question, no matter how obvious the answer; and when the answer is no but you know it to be yes, make sure to contact the manufacturer and ask why the information regarding Linux support is not made public, communicated to the sales channel, printed on the box. And if they say no, when it really is no, don't buy it. And make sure to tell them why you didn't buy it...

Making progress part 1: A little advice or a question?

Jul 26, 2009 by Chris Puttick

I guess the question needs to be asked first: do you (the community and individual users) want Ubuntu to be a real alternative for typical home users? You know, not the equivalent of a kit car relative to the everyday car of Windows or a Mac, but something the average user can acquire and use without needing technical support to get along on a month-by-month basis.

If your answer is no, I'm intrigued. Why not? Do you like it to be hard? Answers on a postcard to "Keep Linux off the desktop", the usual address...

If yes, then I have a little advice. Pages like this need to start differently. I'll pick on Apple because they have just driven two contented Ubuntu users back to Windows through the purchase of an iPhone, much to my chagrin, but I guess there are other hardware suppliers out there whose ignorance requires similar pages to be up on the help.ubuntu site.

Ok. This is the issue: any instructions of this "how do I use this normal thing with Linux" type have to start with a statement along these lines:

 The following instructions are a method for getting your newly purchased/acquired <insert name of gadget> to work with Linux. These instructions may look daunting, and to anybody non-technical they probably are; that these instructions have to exist at all is entirely the fault of <insert manufacturer's name>. It would be great if you could contact <insert manufacturer's name> and make it clear that you believe they should provide support for Ubuntu and Linux in general, as well as for Apple OS and Microsoft Windows.

Maybe you prefer an open solution for every piece of hardware. Me too - I prefer choice. But where an open solution is lacking, any (Linux) solution is better than no solution at all. The average consumer buys on impulse. They see a thing, they like a thing, they buy a thing. Said thing should then, like most any printer nowadays, just work when they plug it into their home computer, regardless of OS. Sure, they might need to read a manual and install some software, but they should not have to hack at stuff.

And it is the manufacturers who are at fault, not those who choose or have acquired Linux, and both they and the Linux users should understand that. Manufacturers like Samsung have taken to including Linux software and instructions with their printers; surely everyone else should be doing the same?

I'd go further than just that change to the "how to hack my gadget so it works with Ubuntu" pages, and suggest a campaign targeted at Apple et al. where we make our feelings clear. Maybe they don't care. Maybe they shouldn't care. But why not see if we can't make them care? If one Canadian can make United Airlines care, surely millions of Ubuntu users can make Apple cry...

KVM84 in the starting blocks

Jul 02, 2009 by Yann Hamon

Remember my post from March? Well, we're nearly there. Dustin announced a few days ago that he was expecting to push kvm84 into -updates next week. I've been beta-testing and chasing bugs on this for some time now, and I am pretty happy with this backport and all the goodness it brings. So, here is a short list of features/bugfixes that I've noticed so far:

  • Disk speed seems a lot better:

    yhamon@yhamon-dev:~$ time cp SunStudio12ml-solaris-sparc-200709-pkg.tar.bz2  tmp
    real 0m12.528s


    yhamon@mirror:~$ time cp SunStudio12ml-solaris-sparc-200709-pkg.tar.bz2 tmp
    real 0m20.159s

    yhamon-dev being a VM running on a kvm84 host, and the other one on a standard kvm-62 host. Both hosts are similar in specs and similarly loaded. I wonder what could trigger such a significant change (cache?), but I ran this cp several times, and every time kvm84 was significantly faster...

  • ACPI2 for Windows guests: among other features, it means that Windows guests will now be able to reboot "themselves". Until now, triggering a reboot from a Windows guest would just shut it down. Now that works fine, too.
  • Proper SMP support. In KVM-62, SMP support was quite broken: it would use a lot of CPU on the host, and the network would also regularly crash with SMP guests, leaving them without connectivity. Now this seems to work correctly; I've been running SMP Windows and Linux guests for a while, and it seems quite stable.
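Regarding the disk-speed comparison above: one hedged way to rule out the page cache as the explanation is to drop it before each run, so the copy measures cold-cache disk speed rather than cache hits (file names below are throwaway placeholders, not the ones from my test):

```shell
# Create a throwaway test file, flush dirty pages, drop the page cache
# (needs root; silently skipped otherwise), then time the copy.
dd if=/dev/zero of=testfile bs=1M count=16 2>/dev/null
sync
echo 3 > /proc/sys/vm/drop_caches 2>/dev/null || true
time cp testfile testfile.copy
```

Run on both hosts, the numbers should then reflect the virtual disk backend rather than whatever happens to be cached.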
Speaking of Windows guests (believe me, I'd be happier without): I think I found a better way to install them. Instead of using virt-install as I documented a while ago, it is much easier to create the QCOW2 file manually:
qemu-img create -f qcow2 windows.qcow2 12G

Then create the XML definition file manually (copy it from a template - don't forget to change the UUID and MAC address) and boot the VM directly with virsh. Here is an example of a libvirt XML file that I use.

This works fine; I had many issues before with virt-install, where the VM just wouldn't restart after the disk had been formatted.
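In case the example file I linked is not to hand, here is a rough sketch of what such a libvirt domain definition can look like - the name, UUID, MAC address, memory size and file paths are all placeholders to adjust for your setup:

```xml
<domain type='kvm'>
  <name>windows</name>
  <uuid>REPLACE-WITH-A-FRESH-UUID</uuid>
  <memory>1048576</memory>  <!-- in KiB, i.e. 1 GB -->
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features><acpi/></features>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/windows.qcow2'/>
      <target dev='hda' bus='ide'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br0'/>
      <mac address='52:54:00:aa:bb:cc'/>
    </interface>
    <graphics type='vnc' port='-1'/>
  </devices>
</domain>
```

Save it as windows.xml, then virsh define windows.xml followed by virsh start windows should boot the guest.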

World's most detailed fail

Jun 30, 2009 by Joseph Reeves

Picked up by the BBC here:

Global Digital Elevation Map covers 99% of the Earth's surface, and will be free to download and use...

"This is the most complete, consistent global digital elevation data yet made available to the world," said Woody Turner, Nasa programme scientist on the Aster mission.

What does it look like? This is pretty exciting stuff! I quickly click through to the website, eagerly anticipating the gigabytes of free data that I'll be able to enjoy!

Microsoft OLE DB Provider for ODBC Drivers error '80040e4d'

[Microsoft][ODBC Microsoft Access Driver] Too many client tasks.

/index.asp, line 3

The most complete, consistent global digital elevation data yet made available to the world, failing to be provided by a Microsoft Access ODBC driver. NASA put folks on the moon, then chose Access to deliver an enormous dataset via the Internet. Fail.

Screenshot proof here.

The 100 papercuts and the fleshwound

Jun 29, 2009 by Yann Hamon


I actually bought support because of this bug, and they fixed it for me by telling me to disable the wacom tablets in the xorg.conf file; I also share responsibility in this not being fixed yet as I didn't spend enough time to do bug tracking once I had a workaround. But someone recently blogged that important bugs that were not being given enough attention could be worth blogging... so... if any xorg developer is reading me... ;)