Update 2016-11-28: I've updated this article with new instructions!
sbuild is an excellent tool for locally building Ubuntu and Debian packages.
It fits into roughly the same problem space as the more popular pbuilder, but
for many reasons, I prefer sbuild. It uses schroot to create chroot
environments for any distribution and version you might want. For example, I
have chroots for Ubuntu Oneiric, Natty, Maverick, and Lucid, and for Debian
Sid, Wheezy, and Squeeze, each in both i386 and amd64. It uses an overlay filesystem
so you can easily set up the primary snapshot with whatever packages or
prerequisites you want, and the individual builds will create a new session
with an overlaid temporary filesystem on top of that, so the build results
will not affect your primary snapshot. sbuild can also be configured to save
the session depending on the success or failure of your build, which is
fantastic for debugging build failures. I've been told that Launchpad's build
farm uses a customized version of sbuild, and in my experience, if you can get
a package to build locally with sbuild, it will build fine in the main archive
or a PPA.
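The overlay arrangement described above is configured in schroot's chroot definition. As a rough sketch (the chroot name, paths, and group names here are illustrative, and `union-type` requires a schroot built with union filesystem support):

```ini
# /etc/schroot/chroot.d/oneiric-amd64-sbuild  (illustrative)
[oneiric-amd64-sbuild]
type=directory
description=Ubuntu Oneiric (amd64) for sbuild
directory=/srv/chroots/oneiric-amd64
union-type=overlayfs
profile=sbuild
groups=sbuild
root-groups=sbuild
```

With `union-type` set, each build session gets a throwaway writable layer on top of the pristine snapshot, which is why builds can't pollute the primary chroot.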
Right out of the box, sbuild will work great for individual package builds,
with very little configuration or setup. The Ubuntu Security Team's wiki
page has some excellent instructions for getting started (you can stop
reading when you get to UMT :).
One thing sbuild doesn't do very well, though, is help you build a stack
of packages. By that I mean: when you have a new package that itself has new
dependencies, you need to build those dependencies first, and then build your
new package against those dependencies. Here's an example.
I'm working on bug 832864 and I wanted to see if I could build the …
Continue reading »
I'm starting a new musical project, which I'm calling OTONE and/or
ONOTE. Actually, I've been working on this project for several years
without realizing what I wanted to do with it. It coalesced in my mind when I
thought of the acronyms above. Here's what they stand for:
- The One Tune One Night Experiment (OTONE)
- The One Night One Tune Experiment (ONOTE)
I'm not yet sure what the difference between the two is (though see
below), but here's the idea behind the project.
If you're like me, you can easily sweat over a song and its recording forever,
tweaking the mix, or hearing another melody, or (worst of all) agonizing over
every word of a lyric that was like pulling teeth in the first place.
Sometimes you think if you just do one more take of the guitar, you can get it
perfect, or oh! it just needs a little bit of tambourine right there.
Sometimes the arrangement just doesn't sit quite right, or you know in your
gut that lurking out there somewhere there's a better way to get from the
bridge to the last chorus.
Well, I'm kind of frustrated with that because it can lead to never actually
finishing a song and getting it out there for folks to hear. At some point
you reach diminishing returns, where the little tweaks don't really improve
the song enough. Probably most importantly, the whole thing can put the
brakes on the creative process. I liken it to software maintenance
vs. creating a new project from scratch.
Software maintenance is important, useful, and can be fun, but the juices
really get flowing when you're starting a new project. You get this rush of
an idea and your fingers can't type fast enough to translate it into code.
It's …
Continue reading »
On Monday, I lost my home directory on my primary development machine. I'd
had this machine for a couple of years but it was still beefy enough to be an
excellent development box. I've upgraded it several times with each new
Ubuntu release, and it was running Natty. I had decent sbuild and pbuilder
environments, and a bunch of virtual machines for many different flavors of
OS.
I'd also encrypted my home directory when I did the initial install. Under
Ubuntu, this creates an ecryptfs filesystem and does some mount magic after you
successfully log in. It's as close to FileVault as you can get on Ubuntu,
and I think it does a pretty good job without incurring much noticeable
overhead. Plus, with today's Ubuntu desktop installers, enabling an encrypted
home directory is just a trivial checkbox away.
To protect your home directory, ecryptfs creates a random hex passphrase that
is used to decrypt the contents of your home directory. To protect this
passphrase, it encrypts it with your login password. ecryptfs stores this
"wrapped" passphrase on disk in the ~/.ecryptfs/wrapped-passphrase file.
When you log in, ecryptfs uses your login password to decrypt
wrapped-passphrase, and then uses the crazy long hex number inside it to
decrypt your real home directory. Usually, this works seamlessly and you
never really see the guts of what's going on. The problem of course is that
if you ever lose your wrapped-passphrase file, you're screwed because
without that long hex number, your home directory cannot be decrypted. Yay
for security, boo for robustness!
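The wrap/unwrap scheme can be sketched in a few lines of Python. This is only a toy illustration of the idea, not ecryptfs's actual on-disk format or cipher: it derives a key from the login password and uses a throwaway XOR keystream where ecryptfs uses a real cipher.

```python
import hashlib
import os

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR against a SHA-256-derived keystream.
    # Real ecryptfs uses a proper block cipher; this is only illustrative.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(x ^ y for x, y in zip(data, stream))

def wrap(mount_passphrase: bytes, login_password: bytes, salt: bytes) -> bytes:
    # The login password never decrypts files directly; it only
    # protects ("wraps") the random mount passphrase.
    key = hashlib.pbkdf2_hmac("sha512", login_password, salt, 65536)
    return _keystream_xor(key, mount_passphrase)

def unwrap(wrapped: bytes, login_password: bytes, salt: bytes) -> bytes:
    key = hashlib.pbkdf2_hmac("sha512", login_password, salt, 65536)
    return _keystream_xor(key, wrapped)

# The long random hex passphrase is what actually decrypts your files.
mount_passphrase = os.urandom(16).hex().encode()
salt = os.urandom(8)
wrapped = wrap(mount_passphrase, b"my login password", salt)
assert unwrap(wrapped, b"my login password", salt) == mount_passphrase
```

The asymmetry in the failure modes falls out directly: lose the login password and the unwrapped passphrase (if you wrote it down) still works; lose the wrapped-passphrase file and its backup, and no password on earth gets your files back.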
When you do your initial installation and choose to encrypt your home
directory, you will be prompted to write down the long hex number, i.e. your
unwrapped passphrase. Here's the moral of the story. 1) You should do
this; 2) You …
Continue reading »
So, yesterday (June 21, 2011), six talented and motivated Python hackers from
the Washington DC area met at Panera Bread in downtown Silver Spring,
Maryland to sprint on PEP 382. This is a Python Enhancement Proposal to
introduce a better way for handling namespace packages, and our intent is to
get this feature landed in Python 3.3. Here then is a summary, from my own
spotty notes and memory, of how the sprint went.
First, just a brief outline of what the PEP does. For more details please
read the PEP itself, or join the newly resurrected import-sig for more
discussions. The PEP has two main purposes. First, it fixes the problem of
which package owns a namespace's __init__.py file,
e.g. zope/__init__.py for all the Zope packages. In essence, it eliminates
the need for these by introducing a new variant of .pth files to define a
namespace package. Thus, the zope.interfaces package would own
zope/zope-interfaces.pth and the zope.components package would own
zope/zope-components.pth. The presence of either .pth file is enough
to define the namespace package. There's no ambiguity or collision with these
files the way there is for zope/__init__.py. This aspect will be very
beneficial for Debian and Ubuntu.
Second, the PEP defines the one official way of defining namespace packages,
rather than the multitude of ad-hoc ways currently in use. With the pre-PEP
382 way, it was easy to get the details subtly wrong, and unless all
subpackages cooperated correctly, the packages would be broken. Now, all you
do is put a * in the .pth file and you're done.
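Using the article's zope.interfaces example, a namespace portion under the PEP's scheme might look roughly like this on disk (the layout follows the PEP draft as described above, which was still evolving at the time):

```
zope/                        # namespace package: no __init__.py at all
    zope-interfaces.pth      # owned by zope.interfaces; its one line is: *
    interfaces/
        __init__.py          # the actual zope.interfaces code
```

Because each portion ships its own uniquely named .pth marker, two packages installing into zope/ never fight over the same file the way they did over zope/__init__.py.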
Sounds easy, right? Well, Python's import machinery is pretty complex, and
there are actually two parallel implementations of it in Python 3.3, so
gaining traction on …
Continue reading »
TL;DR: Ubuntu 12.04 LTS will contain only Python 2.7 and 3.2, while Ubuntu
11.10 will contain Python 3.2, 2.7 and possibly 2.6, but possibly not.
Last week, I attended the Ubuntu Developer Summit in Budapest, Hungary.
These semi-annual events are open to everyone, and hundreds of people
participate both in person and remotely. Budapest's was called UDS-O, where
the 'O' stands for Oneiric Ocelot, the code name for Ubuntu 11.10, which
will be released in October 2011. This is where we did the majority of
planning for what changes, new features, and other developments you'll find in
the next version of Ubuntu. UDS-P will be held at the end of the year in
Orlando, Florida and will cover the as yet unnamed 12.04 release, which will
be a Long Term Support release.
LTS releases are special, because we make longer guarantees for official
support: 3 years on the desktop and 5 years on the server. Because of this,
we're making decisions now to ensure that 12.04 LTS is a stable, confident
platform for years to come.
I attended many sessions, and there is a lot of exciting stuff coming, but I
want to talk in some detail about one area that I'm deeply involved in.
What's going to happen with Python for Oneiric and 12.04 LTS?
First, a brief summary of where we are today. Natty Narwhal is the code
name for Ubuntu 11.04, which was released back in April and is the most recent
stable release. It is not an LTS though; the last LTS was Ubuntu 10.04 Lucid
Lynx, released back in April 2010. In Lucid, the default Python
(i.e. /usr/bin/python) is 2.6 and Python 2.7 is not officially …
Continue reading »
Ubuntu 11.04 (code name: Natty Narwhal) beta 2 was just released and the final
release is right around the corner. Canonical internal policy is that we
upgrade to the latest in-development release as soon as it goes beta, to help
with bug fixing, testing, and quality assurance.
Now, I've been running Natty on my primary desktops (my two laptops) since
before alpha 1, and I've been very impressed with the stability of the core
OS. One of my laptops cannot run Unity though, so I've mostly been a classic
desktop user until recently. My other laptop can run Unity, but compiz and
the wireless driver were too unstable to be usable, that is until just before
beta 1. Still, I diligently updated both machines daily and at least on the
classic desktop, Natty was working great. (Now that beta 1 is out, the
wireless and compiz issues have been cleared up and it's working great too.)
The real test is my beefy workstation. This is a Dell Studio XPS 435MT 12GB,
quad-core i7-920, with an ATI Radeon HD 4670 graphics card, running
dual-headed into two Dell 20" 1600x1200 flat panel displays. During the
Maverick cycle I was a little too aggressive in upgrading it, because neither
the free nor the proprietary drivers were ready to handle this configuration
yet. I ended up with a system that either couldn't display any graphics, or
didn't support the dual heads. This did eventually all get resolved before
the final release, but it was kind of painful.
So this time, I was a little gun shy and wanted to do more testing before I
committed to upgrading this machine. Just before Natty beta 1, I dutifully
downloaded the daily liveCD ISO, and booted the machine from CD. On the
surface, things seemed promising …
Continue reading »
I know that the Mailman 3 project is not alone in procrastinating on getting
out a release of its major rewrite. It's hard work to finish a rewrite in
your own copious spare time. I was just chatting with Thomas Waldmann of the
Moin project on IRC, and he lamented a similar story about the Moin 2
release. Then he said something that really made me sit up straight:
<ThomasWaldmann> 11.11.11 would be a great date for something :)
Yes, it would! We have the 2011 Google Summer of Code happening soon
(students, you have until April 8th to submit your applications) so many
free and open source software projects will get some great code coming soon.
And November is far enough out that we can plan exactly what a "release"
means. Here's what I propose:
Let's make November 11, 2011 the "Great FLOSS Release Day". If you're working
on an open source project undergoing a major new version rewrite, plan on
doing your release on 11.11.11. It can be a beta or final release, but get
off your butts and make it happen! There's nothing like a good deadline to
motivate me, so Mailman 3 will be there. Add a comment here if you want your
project to be part of the event!
For the last couple of days I've been debugging a fun problem in the Ubuntu
tool called Jockey. Jockey is a tool for managing device drivers on Ubuntu.
It actually contains both a command-line and a graphical front-end, and a dbus
backend service that does all the work (with proper authentication, since it
modifies your system). None of that is terribly relevant to the problem,
although the dbus bit will come back to haunt us later.
What is important is that Jockey is a Python application, written using many
Python modules interfacing to low-level tools such as apt and dbus. The
original bug report was mighty confusing. Aside from not being reproducible
by myself and others, the actual exception made no fricken sense! Basically,
it was code like this that was throwing a TypeError:
_actions = []
# _actions gets appended to at various times and later...
for item in _actions[:]:
    # do something
Everyone who reported the problem said the TypeError was getting thrown on
the for-statement line. The exception message indicated that Python was
getting some object that it was trying to convert to an integer, but was
failing. How could you possibly get that exception when either making a copy
of a list or iterating over that copy? Was the list corrupted? Was it not
actually a list but some list-like object that was somehow returning
non-integers for its min and max indexes?
To make matters worse, this little code snippet was in Python's standard
library, in the subprocess module. A quick search of Python's bug
database did reveal some recent threads about changes here, made to ensure
that Popen objects got properly cleaned up by the garbage collector if they
weren't cleaned up explicitly by the program. Note that we're using Python
2.7 here, and after some reading …
Continue reading »
My friends and family often ask me what I do at my job. It's easy to
understand when my one brother says he's a tax accountant, but not so easy
to explain the complex world of open source software development I live in.
Sometimes I say something to the effect: well, you know what Windows is, and
you know what the Mac is right? We're building a third alternative called
Ubuntu that is free, Linux-based and in most cases, much better. Mention
that you won't get viruses and it can easily breathe new life into that old
slow PC you shudder to turn on, and people at least nod their heads
enthusiastically, even if they don't fully get it.
I've been incredibly fortunate in my professional career, to have been able to
share the software I write with the world for almost 30 years. I started
working for a very cool research lab with the US Federal government while
still in high school. We had a UUCP connection and were on the early
Arpanet, and because we were funded by the US taxpayer, our software was not
subject to copyright. This meant that we could share our code with other
people on Usenet and elsewhere, collaborate with them, accept their
suggestions and improvements, and hopefully make their lives a little better,
just as others around the world did for us. It was free and open source
software before such terms were coined.
I've never had a "real job" in the sense of slaving away in a windowless cube
writing solely proprietary software that would never see the light of day.
Even the closed source shops I've worked at have been invested somehow in
free software, and with varying degrees of persuasion, have both benefited
from and contributed to the …
Continue reading »
Richard Jones is working on a talk for PyCon Australia and asked me
about the history of the Zen of Python, Tim Peters' eternal words of
wisdom, often quoted, on the essential truths of Python. At first, I couldn't
find a reference to the first publication of this list, but then I did a
better search of my archives and found that it was originally sent to the
python-list mailing list on June 4, 1999, under the subject "The Python
Way".
Interestingly enough, because I couldn't find that first reference
immediately, I went back into my archives and researched the "this" module.
Did you know that if you type the following at a modern Python interpreter,
you get the Zen of Python?
% python3 -c "import this"
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
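As a fun aside, CPython's this module doesn't even contain the text in the clear: it stores it rot13-encoded in an attribute named s and decodes it on import. You can see that for yourself:

```python
import codecs
import contextlib
import io

# Importing `this` prints the Zen as a side effect; swallow that output.
with contextlib.redirect_stdout(io.StringIO()):
    import this

# The module keeps the text rot13-encoded in the attribute `s`.
decoded = codecs.decode(this.s, "rot13")
print(decoded.splitlines()[0])  # -> The Zen of Python, by Tim Peters
```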
The story behind "import this" is kind of funny, and occurred totally behind
the scenes, so I thought it might be interesting to relate how it happened …
Continue reading »