Posts Tagged With 'ubuntu'
On Monday, I lost my home directory on my primary development machine. I'd
had this machine for a couple of years but it was still beefy enough to be an
excellent development box. I've upgraded it several times with each new
Ubuntu release, and it was running Natty. I had decent sbuild and pbuilder
environments, and a bunch of virtual machines for many different flavors of
Ubuntu and Debian.
I'd also encrypted my home directory when I did the initial install. Under
Ubuntu, this creates an ecryptfs and does some mount magic after you
successfully log in. It's as close to FileVault as you can get on Ubuntu,
and I think it does a pretty good job without incurring much noticeable
overhead. Plus, with today's Ubuntu desktop installers, enabling an encrypted
home directory is just a trivial checkbox away.
To protect your home directory, ecryptfs creates a random hex passphrase that
is used to decrypt the contents of your home directory. To protect this
passphrase, it encrypts it with your login password. ecryptfs stores this
"wrapped" passphrase on disk in the ~/.ecryptfs/wrapped-passphrase file.
When you log in, ecryptfs uses your login password to decrypt
wrapped-passphrase, and then uses the crazy long hex number inside it to
decrypt your real home directory. Usually, this works seamlessly and you
never really see the guts of what's going on. The problem of course is that
if you ever lose your wrapped-passphrase file, you're screwed because
without that long hex number, your home directory cannot be decrypted. Yay
for security, boo for robustness!
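The wrapping scheme is easy to illustrate with a toy Python sketch. To be clear, this is not ecryptfs's actual on-disk format or key derivation; it just shows the idea of protecting a random mount passphrase with a key derived from your login password:

```python
import os
import hashlib

def wrap(mount_passphrase, login_password, salt):
    # Derive a keystream from the login password (a toy stand-in for the
    # real key derivation) and XOR it against the mount passphrase.
    keystream = hashlib.pbkdf2_hmac(
        'sha256', login_password, salt, 100000,
        dklen=len(mount_passphrase))
    return bytes(a ^ b for a, b in zip(mount_passphrase, keystream))

def unwrap(wrapped, login_password, salt):
    # XOR is its own inverse, so unwrapping is the same operation.
    return wrap(wrapped, login_password, salt)

# A random 64-hex-digit mount passphrase, like the one ecryptfs generates.
mount_pp = os.urandom(32).hex().encode()
salt = os.urandom(16)
wrapped = wrap(mount_pp, b'my login password', salt)
assert unwrap(wrapped, b'my login password', salt) == mount_pp
assert unwrap(wrapped, b'wrong password', salt) != mount_pp
```

On a real system the long hex number can be displayed with the ecryptfs-unwrap-passphrase utility, and that's exactly the number you're urged to write down.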
When you do your initial installation and choose to encrypt your home
directory, you will be prompted to write down the long hex number, i.e. your
unwrapped passphrase. Here's the moral of the story. 1) You should do
this; 2) You …
Continue reading »
So, yesterday (June 21, 2011), six talented and motivated Python hackers from
the Washington DC area met at Panera Bread in downtown Silver Spring,
Maryland to sprint on PEP 382. This is a Python Enhancement Proposal to
introduce a better way for handling namespace packages, and our intent is to
get this feature landed in Python 3.3. Here then is a summary, from my own
spotty notes and memory, of how the sprint went.
First, just a brief outline of what the PEP does. For more details please
read the PEP itself, or join the newly resurrected import-sig for more
discussions. The PEP has two main purposes. First, it fixes the problem of
which package owns a namespace's __init__.py file,
e.g. zope/__init__.py for all the Zope packages. In essence, it eliminates
the need for these by introducing a new variant of .pth files to define a
namespace package. Thus, the zope.interfaces package would own
zope/zope-interfaces.pth and the zope.components package would own
zope/zope-components.pth. The presence of either .pth file is enough
to define the namespace package. There's no ambiguity or collision with these
files the way there is for zope/__init__.py. This aspect will be very
beneficial for Debian and Ubuntu.
Second, the PEP defines the one official way of defining namespace packages,
rather than the multitude of ad-hoc ways currently in use. With the pre-PEP
382 way, it was easy to get the details subtly wrong, and unless all
subpackages cooperated correctly, the packages would be broken. Now, all you
do is put a * in the .pth file and you're done.
Sounds easy, right? Well, Python's import machinery is pretty complex, and
there are actually two parallel implementations of it in Python 3.3, so
gaining traction on …
Continue reading »
TL;DR: Ubuntu 12.04 LTS will contain only Python 2.7 and 3.2, while Ubuntu
11.10 will contain Python 3.2 and 2.7, and possibly 2.6, though that is not
yet decided.
Last week, I attended the Ubuntu Developer Summit in Budapest, Hungary.
These semi-annual events are open to everyone, and hundreds of people
participate both in person and remotely. Budapest's was called UDS-O, where
the 'O' stands for Oneiric Ocelot, the code name for Ubuntu 11.10, which
will be released in October 2011. This is where we did the majority of
planning for what changes, new features, and other developments you'll find in
the next version of Ubuntu. UDS-P will be held at the end of the year in
Orlando, Florida and will cover the as yet unnamed 12.04 release, which will
be a Long Term Support release.
LTS releases are special, because we make longer guarantees for official
support: 3 years on the desktop and 5 years on the server. Because of this,
we're making decisions now to ensure that 12.04 LTS is a stable, dependable
platform for years to come.
I attended many sessions, and there is a lot of exciting stuff coming, but I
want to talk in some detail about one area that I'm deeply involved in.
What's going to happen with Python for Oneiric and 12.04 LTS?
First, a brief summary of where we are today. Natty Narwhal is the code
name for Ubuntu 11.04, which was released back in April and is the most recent
stable release. It is not an LTS though; the last LTS was Ubuntu 10.04 Lucid
Lynx, released back in April 2010. In Lucid, the default Python
(i.e. /usr/bin/python) is 2.6 and Python 2.7 is not officially …
Continue reading »
Ubuntu 11.04 (code name: Natty Narwhal) beta 2 was just released and the final
release is right around the corner. Canonical internal policy is that we
upgrade to the latest in-development release as soon as it goes beta, to help
with bug fixing, testing, and quality assurance.
Now, I've been running Natty on my primary desktops (my two laptops) since
before alpha 1, and I've been very impressed with the stability of the core
OS. One of my laptops cannot run Unity though, so I've mostly been a classic
desktop user until recently. My other laptop can run Unity, but compiz and
the wireless driver were too unstable to be usable, that is until just before
beta 1. Still, I diligently updated both machines daily and at least on the
classic desktop, Natty was working great. (Now that beta 1 is out, the
wireless and compiz issues have been cleared up and it's working great too.)
The real test is my beefy workstation. This is a Dell Studio XPS 435MT 12GB,
quad-core i7-920, with an ATI Radeon HD 4670 graphics card, running
dual-headed into two Dell 20" 1600x1200 flat panel displays. During the
Maverick cycle I was a little too aggressive in upgrading it, because neither
the free nor the proprietary drivers were ready to handle this configuration
yet. I ended up with a system that either couldn't display any graphics, or
didn't support the dual heads. This did eventually all get resolved before
the final release, but it was kind of painful.
So this time, I was a little gun shy and wanted to do more testing before I
committed to upgrading this machine. Just before Natty beta 1, I dutifully
downloaded the daily liveCD ISO, and booted the machine from CD. On the
surface, things seemed promising …
Continue reading »
For the last couple of days I've been debugging a fun problem in the Ubuntu
tool called Jockey. Jockey is a tool for managing device drivers on Ubuntu.
It actually contains both a command-line and a graphical front-end, and a dbus
backend service that does all the work (with proper authentication, since it
modifies your system). None of that is terribly relevant to the problem,
although the dbus bit will come back to haunt us later.
What is important is that Jockey is a Python application, written using many
Python modules interfacing to low-level tools such as apt and dbus. The
original bug report was mighty confusing. Aside from not being reproducible
by myself and others, the actual exception made no fricken sense! Basically,
it was code like this that was throwing a TypeError:
_actions = []
# _actions gets appended to at various times and later...
for item in _actions[:]:
    # do something
Everyone who reported the problem said the TypeError was getting thrown on
the for-statement line. The exception message indicated that Python was
getting some object that it was trying to convert to an integer, but was
failing. How could you possibly get that exception when either making a copy
of a list or iterating over that copy? Was the list corrupted? Was it not
actually a list but some list-like object that was somehow returning
non-integers for its min and max indexes?
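That last theory is at least plausible on its face. A hypothetical list-like object (nothing from Jockey's actual code) can raise exactly this kind of TypeError when sliced:

```python
class FakeList:
    """A list-like object that only accepts integer indexes."""

    def __init__(self, items):
        self._items = list(items)

    def __getitem__(self, index):
        if not isinstance(index, int):
            # Slicing hands __getitem__ a slice object, not an integer.
            raise TypeError('an integer is required')
        return self._items[index]

fake = FakeList(['a', 'b', 'c'])
try:
    fake[:]  # the copy in "for item in _actions[:]" does this
except TypeError as e:
    print(e)  # prints: an integer is required
```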
To make matters worse, this little code snippet was in Python's standard
library, in the subprocess module. A quick search of Python's bug
database did reveal some recent threads about changes here, made to ensure
that popen objects got properly cleaned up by the garbage collector if they
weren't cleaned up explicitly by the program. Note that we're using Python
2.7 here, and after some reading …
Continue reading »
My friends and family often ask me what I do at my job. It's easy to
understand when my one brother says he's a tax accountant, but not so easy
to explain the complex world of open source software development I live in.
Sometimes I say something to the effect: well, you know what Windows is, and
you know what the Mac is right? We're building a third alternative called
Ubuntu that is free, Linux-based and in most cases, much better. Mention
that you won't get viruses and it can easily breathe new life into that old
slow PC you shudder to turn on, and people at least nod their heads
enthusiastically, even if they don't fully get it.
I've been incredibly fortunate in my professional career, to have been able to
share the software I write with the world for almost 30 years. I started
working for a very cool research lab with the US Federal government while
still in high school. We had a UUCP connection and were on the early
Arpanet, and because we were funded by the US taxpayer, our software was not
subject to copyright. This meant that we could share our code with other
people on Usenet and elsewhere, collaborate with them, accept their
suggestions and improvements, and hopefully make their lives a little better,
just as others around the world did for us. It was free and open source
software before such terms were coined.
I've never had a "real job" in the sense of slaving away in a windowless cube
writing solely proprietary software that would never see the light of day.
Even the closed source shops I've worked at have been invested somehow in
free software, and with varying degrees of persuasion, have both benefited
from and contributed to the …
Continue reading »
I'm doing some work these days on trying to get Python 2.7 as the default
Python in the next version of Ubuntu, Maverick Meerkat (10.10). This work
will occasionally require me to break my machine by installing experimental
packages. That's a good and useful thing because I want to test various
potentially disruptive changes before I think about unleashing them on the
world. This is where virtual machines really shine!
To be efficient, I need a really fast turnaround from known good state, to
broken state, back to known good state. In the past, I've used VMware Fusion
on my Mac to create a VM, then take a live snapshot of the disk before making
my changes. It was really easy then to revert to the last known good
snapshot, try something else and iterate.
But lately Fusion has sprouted a nasty habit of freezing the host OS, such
that a hard reboot is necessary. This will inevitably cause havoc on the
host, by losing settings, trashing mail, corrupting VMs, etc. VMware can't
reproduce the problem but it happens every time to me, and it hurts, so I'm
not doing that any more :).
Back to my Lucid host and libvirt/kvm and the sanctuary of FLOSS. It's
really easy to create new VMs, and there are several ways of doing it, from
virt-manager to vmbuilder to straight up kvm (thanks Colin for some
recipes). The problem is that none of these are exactly fast to go from
bare metal to working Maverick VM with all the known good extras I need (like
openssh-server and bzr, plus my comfortable development environment).
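The revert loop I'm after does map onto libvirt's snapshot commands, though. Here's a minimal Python sketch that just builds the virsh command lines (the domain name is a made-up example); pass them to subprocess.run to actually execute them:

```python
VM = 'maverick-sandbox'  # hypothetical libvirt domain name

def snapshot_cmd(name):
    # 'virsh snapshot-create-as' records the current VM state under a name.
    return ['virsh', 'snapshot-create-as', VM, name]

def revert_cmd(name):
    # 'virsh snapshot-revert' rolls the domain back to a named snapshot.
    return ['virsh', 'snapshot-revert', VM, name]

# Take a known-good snapshot, break the machine, roll back, repeat.
print(' '.join(snapshot_cmd('known-good')))
print(' '.join(revert_cmd('known-good')))
```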
I didn't find a really good fit for vmbuilder or the kvm commands, and I'm not
smart enough to use the libvirt command line tools, but I think …
Continue reading »
My friend Tim is working on a very cool Bazaar-backed wiki project and he
asked me to package it up for Ubuntu. I'm getting pretty good at packaging
Python projects, but I always like the practice because each time it gets a
little smoother. This one I managed to package in about 10 minutes so I
thought I'd outline the very easy process.
First of all, you want to have a good setup.py, and if you like to cargo
cult, you can start with this one. I highly recommend using
Distribute instead of setuptools, and in fact the former is what Ubuntu gives
you by default. I really like adding the distribute_setup.py which gives
you nice features like being able to do python setup.py test and many other
things. See lines 18 and 19 in the above referenced setup.py file.
The next thing you'll want is Andrew Straw's fine stdeb package, which you
can get on Ubuntu with sudo apt-get install python-stdeb. This package is
going to bootstrap your debian/ directory from your setup.py file.
It's not perfectly suited to the task (yet, Andrew assures me :), but we can
make it work!
These days, I host all of my packages in Bazaar on Launchpad, which is going
to make some of the following steps really easy. If you use a different
hosting site or a different version control system, you will have to build
your Ubuntu package using more traditional means. That's okay, once you have
your debian/ directory, it'll be fairly easy (but not as easy as described
here). If you do use Bazaar, you'll just want to make sure you have the
bzr-builddeb plugin. Just do sudo apt-get install bzr-builddeb on
Ubuntu and you should get everything you need.
Okay, so now you …
Continue reading »
Today I finally swapped my last Gentoo server for an Ubuntu 10.04 LTS
server. Gentoo has served me well over these many years, but with my emerge
updates growing to several pages (meaning, I was waaaay behind on updates with
almost no hope of catching up) it was long past time to switch. I'd moved my
internal server over to Ubuntu during the Karmic cycle, but that was a much
easier switch. This one was tougher because I had several interdependent
externally facing services: web, mail, sftp, and Mailman.
The real trick to making this go smoothly was to set up a virtual machine
in which to install, configure and progressively deploy the new services. My
primary desktop machine is a honkin' big i7-920 quad-core Dell with 12GB of
RAM, so it's perfectly suited for running lots of VMs. In fact, I have
several Ubuntu, Debian and even Windows VMs that I use during my normal
development of Ubuntu and Python. However, once I had the new server ready to
go, I wanted to be able to quickly swap it into the real hardware. So I
purchased a 160GB IDE drive (since the h/w it was going into was too old to
support SATA, but still perfectly good for a simple Linux server!) and a USB
drive enclosure. I dropped the new disk into the enclosure, mounted it on the
Ubuntu desktop and created a virtual machine using the USB drive as its virtio
disk.
It was then a pretty simple matter of installing Ubuntu 10.04 on this USB
drive-backed VM, giving the VM an IP address on my local network, and
installing all the services I wanted. I could even register the VM with
Landscape to easily keep it up-to-date as I took my sweet time …
Continue reading »