Planet Debian


Russ Allbery: Review: Revenant Gun

6 hours 35 min ago

Review: Revenant Gun, by Yoon Ha Lee

Series: Machineries of Empire #3
Publisher: Solaris
Copyright: 2018
ISBN: 1-78618-110-X
Format: Kindle
Pages: 400

This is the third book of the series that started with Ninefox Gambit, and it really can't be read out of order. Each book builds on the previous one and picks up where it left off in the story.

Jedao wakes up with the memory of being a first-year Shuos university student, in a strange room, with nothing to wear except a Kel uniform (wearing a Kel uniform when you aren't Kel is a very bad idea), and with considerably more muscle than he remembered. The first person he sees identifies himself as the Nirai hexarch, which is definitely not the sort of thing that happens to a first-year Shuos university student (even putting aside that the heptarchate is apparently now the hexarchate). The odd bits of knowledge he shouldn't have only make things stranger, as does the hexarch's assertion that his opponent has made off with most of his memories. And now, apparently he's expected to command a battle fleet in support of the last remaining legitimate hexarch.

The main story arc of Revenant Gun picks up nine years after the shattering events of Raven Stratagem, although the chapters focusing on Kel Brezan are intermixed flashbacks sketching out the subsequent history. Given how much the political universe shifts in the first two books, I have to avoid most of the plot summary. But the focus shifts here from the primarily combat-oriented Ninefox Gambit and the Kel politics of Raven Stratagem to broader-focus political maneuvering. Lee left a lot of broken pieces on the floor at the end of the last book; Revenant Gun is largely about putting them back together while taking care of a few critical remaining threats.

But, despite the political rumblings and continued war, I thought the best part of this book was a quiet thread about servitors (essentially Star Wars droids with roughly the same expected place in hexarchate society, except in smaller and less human forms). This follows Hemiola, one of three servitors stationed on an isolated moon and tasked with watching over the secret archives of one of the main players in this story. Over the course of the story, Hemiola's horizons expand drastically and unsettlingly, something it tries to make sense of by analogy with the episodic dramas it watches (and creates fan videos for, not that it shares those with anyone). The quietly subversive way that Cheris treats servitors from the start of this series is one of the best themes in it, and I was delighted to finally see the world from a servitor's point of view. I'd happily read a whole other book about Hemiola and the servitors (and there is a possible hook left for that at the end of this one).

The rest of the story involves more of the high-stakes strategic maneuvering that's characterized the series so far, this time (mostly) three-sided. We find out quite a bit more about what was behind the shape and political structure of the universe, a lot more about Kujen, and at least a bit more about the plan behind the upheaval in Raven Stratagem, but most of the book is Jedao and Brezan trying to keep up with events, struggle through ethical challenges, and find the best of a very limited and unpleasant set of options. One thing I like about Lee's writing is that all of the characters are smart and observant, and very little of the plot is driven by stupid mistakes or poor communication.

I liked Revenant Gun better than any of the other books in this series. I think it almost came together as a great book, but didn't quite manage it, although I'm not sure what didn't work. One thing I can put a finger on is that Jedao's situation has a sexual dimension that's in-character and that fits the world but felt weirdly sudden and intrusive in the story. This may have been intentional (there's some reason to believe that it felt weirdly sudden and intrusive to Jedao), but it was disconcerting in a way that knocked the plot off-kilter, at least for me. The ending also felt oddly incomplete in some way, despite dealing with the major villain of the series. There are so many loose ends left: how to stabilize the government, how the new calendar would be managed, and what is going on with the servitors (as well as a new and nicely-handled addition to the political scene). The ending leaves them mostly unresolved, instead focusing on Jedao and his psychological state. I was somewhat interested in that, but more interested in other characters and in the politics, and wanted more of a higher-level conclusion.

Despite those flaws, though, this is still good magitech, with some interesting characters, some great bits with the servitors, good use of multiple perspectives, and a story that I found easier to follow and more straightforward than earlier books in the series. If you liked the previous books, you'll want to read this too. I'm hoping for a sequel someday that focuses entirely on the servitors, though.

Rating: 7 out of 10

Russ Allbery: Review: Grand Central Arena

18 December, 2018 - 09:58

Review: Grand Central Arena, by Ryk E. Spoor

Series: Arenaverse #1
Publisher: Baen
Copyright: May 2010
ISBN: 1-4391-3355-7
Format: Mass market
Pages: 671

Ariane Austin is an unlimited space obstacle racing pilot, recruited at the start of the book for the first human test of a faster-than-light drive. She was approached because previous automated tests experienced very strange effects, if they returned at all. The drive seems to work as expected, but all AIs, even less-intelligent ones, stopped working while the drive was engaged and the probe was outside normal space. The rules of space obstacle racing require manual control of the ship, making Ariane one of the few people qualified to be a pilot.

Ariane plus a crew of another seven are assembled. They include the scientist who invented the drive in the first place, and a somewhat suspicious person named Marc DuQuesne. (Pulp SF fans will immediately recognize the reference, which makes the combination of his past secrets and the in-universe explanation for his name rather unbelievable.) But when the drive is activated, rather than finding themselves in the open alternate dimension they expected, they find themselves inside a huge structure, near a model of their own solar system, with all of their companion AIs silenced.

Ryk E. Spoor is better known to old Usenet people as Sea Wasp. After all these years of seeing him in on-line SF communities, I wanted to read one of his books. I'm glad I finally did, since this was a lot of fun. Grand Central Arena starts as a Big Dumb Object story, as the characters try to figure out why their shortcut dimension is filled with a giant structure, but then turns into a fun first contact story and political caper when they meet the rest of the inhabitants. The characterization is a bit slapdash and the quality of the writing isn't anything special, but the plot moves right along. I stayed happily lost in the book for hours.

As you might guess from the title, the environment in which the intrepid human explorers find themselves is set up to provoke conflict. That conflict has a complex set of rules and a complex system of rule enforcement. Figuring out both is much of the plot of this book. The aliens come in satisfyingly pulpish variety, and this story has a better excuse than most for the necessary universal translator (although I do have to note that none of the aliens act particularly alien). There are a lot of twisty politics and factions to navigate, a satisfying and intriguing alien guide and possible ally, political and religious factions with believable world views, a surprisingly interesting justification for humans being able to punch above their weight, a ton of juicy questions (only some of which get answered), and some impressively grand architecture. Spoor's set pieces don't do that architecture as much justice as, say, Iain M. Banks would, but he still succeeds in provoking an occasional feeling of awe.

One particular highlight is that the various alien factions have different explanations for why the Arena exists, encourage the humans to take sides, and are not easily proven right or wrong. Spoor does a great job maintaining a core ambiguity in the fight between the alien factions. The humans are drawn to certain allies, by preference or accident or early assistance, but those allies may well be critically and dangerously wrong about the nature of the world in which they find themselves. As befits a political and religious argument that has gone on for centuries, all sides have strong arguments and well-worn rebuttals, and humans have no special advantage in sorting this out. This is not how this plot element is normally handled in SF. I found it a refreshing bit of additional complexity.

The biggest grumble I had about this book is that Spoor keeps resorting to physical combat to resolve climaxes. I know the word "arena" is right there in the title of the book, so maybe I shouldn't have expected anything else. But the twisty politics were so much more fun than the detailed descriptions of weapons or RPG-style combat scenes in which I could almost hear dice rolling in the background. Spoor even sets up the rules of challenges so that they don't need to involve physical violence, and uses that fact a few times, but keeps coming back to technology-aided slug-fests for most of the tense moments. I think this would have been a more interesting book if some of those scenes were replaced with more political trickiness.

That said, the physical confrontations are in genre for old-school space opera, which is definitely what Grand Central Arena is. It has that feel of an E.E. "Doc" Smith book (which is exactly what Spoor was going for from the dedication), thankfully without the creepy gender politics. The primary protagonist is a woman (without any skeevy comments), Spoor is aware of and comfortable with the range of options in human sexuality other than a man and a woman, and at no point does anyone get awarded a woman for their efforts. He didn't completely avoid all gender stereotypes (all the engineers are men; the other women are the medic and the biologist), but it was good enough to me to not feel irritated reading it. For throwback space opera, that's sadly unusual.

If you're in the mood for something Lensman-like but with modern sensibilities, and you aren't expecting too much of the writing or the characterization, give Grand Central Arena a try. It's not great literature, but it's a solid bit of entertainment. (Just be warned that most of the secrets of the Arena are not revealed by the end of the book, and will have to wait for sequels.) I'll probably keep reading the rest of the series.

Followed by Spheres of Influence.

Rating: 7 out of 10

Daniel Lange: Openssh taking minutes to become available, booting takes half an hour ... because your server waits for a few bytes of randomness

18 December, 2018 - 00:39

So, your machine now needs minutes to boot before you can ssh in where it used to be seconds before the Debian Buster update?


Linux 3.17 (2014-10-05) learnt a new syscall getrandom() that, well, gets bytes from the entropy pool. Glibc learnt about this with 2.25 (2017-02-05) and two tries and four years after the kernel, OpenSSL used that functionality from release 1.1.1 (2018-09-11). OpenSSH implemented this natively for the 7.8 release (2018-08-24) as well.

Now the getrandom() syscall will block1 if the kernel can't provide enough entropy. And that's frequently the case during boot. Especially with VMs that have no input devices or I/O jitter to source the pseudo random number generator from.

First seen in the wild January 2017

I vividly remember not seeing my Alpine Linux VMs back on the net after the Alpine 3.5 upgrade. That was basically the same issue.

Systemd. Yeah.

Systemd makes this behaviour worse, see issue #4271, #4513 and #10621.
Basically as of now the entropy file saved as /var/lib/systemd/random-seed will not - drumroll - add entropy to the random pool when played back during boot. Actually it will. It will just not be accounted for. So Linux doesn't know. And continues blocking getrandom(). This is obviously different from SysVinit times when /var/lib/urandom/random-seed (that you still have lying around on updated systems) made sure the system carried enough entropy over reboot to continue working right after enough of the system was booted.

#4167 is a re-opened discussion about systemd eating randomness early at boot (hashmaps in PID 1...). Some Debian folks participate in the recent discussion and it is worth reading if you want to learn about the mess that booting a Linux system has become.

While we're talking systemd ... #10676 also means systems will use RDRAND in the future despite Ted Ts'o's warning on RDRAND [ mirror and mirrored locally as 130905_Ted_Tso_on_RDRAND.pdf, 205kB as Google+ will be discontinued in April 2019].


Debian is seeing the same issue working up towards the Buster release, e.g. Bug #912087.

The typical issue is:

[    4.428797] EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: data=ordered
[ 130.970863] random: crng init done

with delays up to tens of minutes on systems with very little external random sources.

This is what it should look like:

[    1.616819] random: fast init done
[    2.299314] random: crng init done

Check dmesg | grep -E "(rng|random)" to see how your systems are doing.

If this is not fully solved before the Buster release, I hope some of the below can end up in the release notes2.


You need to get entropy into the random pool earlier at boot. There are many ways to achieve this and - currently - all require action by the system administrator.
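A quick way to watch the pool while trying any of the options below is the kernel's own entropy estimate (in bits); on a freshly booted, starved VM it sits near zero:

```shell
# Kernel's current entropy estimate in bits; values near 0 early at
# boot are exactly the condition that makes getrandom() block.
cat /proc/sys/kernel/random/entropy_avail
```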

Kernel boot parameter

From kernel 4.19 (Debian Buster currently runs 4.18) you can set RANDOM_TRUST_CPU at compile time or random.trust_cpu=on on the kernel command line. This will make Intel / AMD systems trust RDRAND and fill the entropy pool with it. See the warning from Ted Ts'o linked above.
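As a sketch, this is what the change looks like for the GRUB default command line; on a real Debian system you would make the same edit in /etc/default/grub and run update-grub afterwards:

```shell
# Sketch: append random.trust_cpu=on to GRUB's default kernel command
# line. On a real system, edit /etc/default/grub and run update-grub.
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"' \
  | sed 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 random.trust_cpu=on"/'
# → GRUB_CMDLINE_LINUX_DEFAULT="quiet random.trust_cpu=on"
```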

Using a TPM

The Trusted Platform Module has an embedded random number generator that can be used. Of course you need to have one on your board for this to be useful. It's a hardware device.

Load the tpm-rng module (ideally from initrd) or compile it into the kernel (config HW_RANDOM_TPM). Now, the kernel does not "trust" the TPM RNG by default, so you need to add

rng_core.default_quality=1000

to the kernel command line. 1000 means "trust", 0 means "don't use". So you can choose any value in between that works for you depending on how much you consider your TPM to be unbugged.

VirtIO RNG

For Virtual Machines (VMs) you can forward entropy from the host (that should be running longer than the VMs and have enough entropy) via virtio_rng.

So on the host, you do:

kvm ... -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,bus=pci.0,addr=0x7

and within the VM newer kernels should automatically load virtio_rng and use that.

You can confirm with dmesg as per above.

Or check:

# cat /sys/devices/virtual/misc/hw_random/rng_available
# cat /sys/devices/virtual/misc/hw_random/rng_current
virtio_rng.0

Patching systemd

The Fedora bugtracker has a bash / python script that replaces the systemd rnd seeding with a (better) working one. The script can also serve as a good starting point if you need to script your own solution, e.g. for reading from an entropy provider available within your (secure) network.

ChaosKey

The wonderful Keith Packard and Bdale Garbee have developed a USB dongle, ChaosKey, that supplies entropy to the kernel. Hard- and software are open source.

jitterentropy_rng

Kernel 4.2 introduced jitterentropy_rng which will use the jitter in CPU timings to generate randomness.

modprobe jitterentropy_rng

This apparently needs a userspace daemon though (read: design mistake) so

apt install jitterentropy-rngd (available from Buster/testing).

The current version 1.0.8-3 installs nicely on Stretch. dpkg -i is your friend.

But - drumroll - that daemon doesn't seem to use the kernel module at all.

That's where I stopped looking at that solution. At least for now. There are extensive docs if you want to dig into this yourself.

haveged

apt install haveged

Haveged is a user-space daemon that gathers entropy through the timing jitter any CPU has. It will only run "late" in boot but may still get your openssh back online within seconds and not minutes.

It is also - to the best of my knowledge - not verified at all regarding the quality of randomness it generates. The haveged design and history page provides an interesting read and I wouldn't recommend haveged if you have alternatives. If you have none, haveged is a wonderful solution though as it works reliably. And unverified entropy is better than no entropy. Just forget this is 2018.

  1. it will return with EAGAIN in the GRND_NONBLOCK use case 

  2. there is no Buster branch in the release notes repository yet (2018-12-17) 

Norbert Preining: Git and Autotools – a hate relation?

17 December, 2018 - 07:07

For a project I contribute to, I started to rewrite the build system using autotools (autoconf/automake), since this is one of the most popular build systems as far as I know. The actual conversion turned out to be not too complicated. But it turned out that in the modern world of git(hub), it seems that the autotools have lost their place.

Source of all the problems is the dreaded

WARNING: 'aclocal-1.16' is missing on your system.
         You should only need it if you modified 'acinclude.m4' or
         'configure.ac' or m4 files included by 'configure.ac'.

which, when you search for it, appears all over projects hosted on some instance of git. The reason is rather simple: git does not preserve timestamps, and autotools try to rebuild each and everything from the most basic files upward if there is a timestamp squeeze.

Consequences of this PITA is that:

  • Every developer needs to install the whole autotools stack, and before running anything always has to do an autoreconf -i
  • github-generated tarballs might have arbitrary time stamps, so release management via git tags is broken, and users who want to install the software package might see the same errors
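In the meantime, one common band-aid after cloning (or unpacking a tarball with scrambled timestamps) is to bump the timestamps in dependency order, inputs first and outputs last, so that make considers the shipped generated files up to date. This is only safe when those files really do match the sources, and the exact file list depends on the project:

```shell
# Band-aid: touch inputs first, generated outputs last, so no
# autotools rebuild rule looks stale. File list is project-dependent;
# only safe when the committed generated files are actually current.
touch configure.ac aclocal.m4
touch configure config.h.in
touch Makefile.in
```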

If anyone has a working solution that fixes the above two problems I would be grateful to hear about it.

So where to go from here in the modern world? The answer is (for me) meson and ninja: no intermediate generation of files, no generated files that are necessary for distribution (configure, Makefile.in), no bunch of maintainer mode, distribution mode, extra complicated targets, readable source files, … For the named project I also created a meson/ninja build system and it turned out to be much easier to handle.
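For a small hypothetical C project (names are placeholders), the entire meson build description is a couple of lines, and nothing it generates ever needs to be committed or shipped:

```meson
# meson.build — hypothetical minimal project; 'demo.c' is a placeholder
project('demo', 'c')
executable('demo', 'demo.c')
```

Building is then just meson build && ninja -C build, with all generated files confined to the build directory.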

Bottomline for me: give up on autotools if you want to use git – meson/ninja is preferred.

Matthew Palmer: pwnedkeys: who has the keys to *your* kingdom?

17 December, 2018 - 07:00

I am extremely pleased to announce the public release of – a database of compromised asymmetric encryption keys. I hope this will become the go-to resource for anyone interested in avoiding the re-use of known-insecure keys. If you have a need, or a desire, to check whether a key you’re using, or being asked to accept, is potentially in the hands of an adversary, I would encourage you to take a look.


By now, most people in the IT industry are aware of the potential weaknesses of passwords, especially short or re-used passwords. Using a password which is too short (or, more technically, with “insufficient entropy”) leaves us open to brute force attacks, while re-using the same password on multiple sites invites a credential stuffing attack.

It is rare, however, that anyone thinks about the “quality” of RSA or ECC keys that we use with the same degree of caution. There are so many possible keys, all of which are “high quality” (and thus not subject to “brute force”), that we don’t imagine that anyone could ever compromise a private key except by actually taking a copy of it off our hard drives.

There is a unique risk with the use of asymmetric cryptography, though. Every time you want someone to encrypt something to you, or verify a signature you’ve created, you need to tell them your public key. While someone can’t calculate your private key from your public key, the public key does have enough information in it to be able to identify your private key, if someone ever comes across it.

So what?

The risk here is that, in many cases, a public key truly is public. Every time your browser connects to a HTTPS-protected website, the web server sends a copy of the site’s public key (embedded in the SSL certificate). Similarly, when you connect to an SSH server, you get the server’s public key as part of the connection process. Some services provide a way for anyone to query a user’s public keys.

Once someone has your public key, it can act like an “index” into a database of private keys that they might already have. This is only a problem, of course, if someone happens to have your private key in their stash. The bad news is that there are a lot of private keys already out there, that have either been compromised by various means (accident or malice), or perhaps generated by a weak RNG.

When you’re generating keys, you usually don’t have to worry. The chances of accidentally generating a key that someone else already has is as close to zero as makes no difference. Where you need to be worried is when you’re accepting public keys from other people. Unlike a “weak” password, you can’t tell a known-compromised key just by looking at it. Even if you saw the private key, it would look just as secure as any other key. You cannot know whether a public key you’re being asked to accept is associated with a known-compromised private key. Or you couldn’t, until came along.

The solution!

The purpose of is to try and collect every private key that’s ever gotten “out there” into the public, and warn people off using them ever again. Don’t think that people don’t re-use these compromised keys, either. One of the “Debian weak keys” was used in an SSL certificate that was issued in 2016, some eight years after the vulnerability was made public!

My hope is that will come to be seen as a worthwhile resource for anyone who accepts public keys, and wants to know that they’re not signing themselves up for a security breach in the future.

Iustin Pop: Why on earth are 512e HDDs used?

17 December, 2018 - 06:45

I guess backwards compatibility is just another form of entropy. So this is all expected, but still…

Case in point, 512e hard drives. Wikipedia has a nice article on this, so I’ll skip the intro, and just go directly to my main point.

It’s 2018. End of 2018, more precisely. Linux has long supported 4K sectors, Windows since Win10 (OK, so recent), but my sampling on existing HDDs:

  • All of Western Digital’s hard drives under its own brand (WD rainbow series) is using 512e (or 512n, for ≤4TB or so); not even the (recently retired) “Gold” series, top of the top, provides 4Kn.
  • HGST has a large variety for the same size/basic model, but in Switzerland at least, all the 4Kn variants seem to be rarer than unicorns; whereas the 512e ones are available in stock all over the place.

I tried to find HDDs ≥8TB with 4Kn (and ideally ISE), but no luck; mostly “zero available in stock, no availability at supplier, no orders”. Sure, Switzerland is a small market, but at the same time, exact same size/model but with 512e is either available, or already on order. I’ve seen at one shop a 512e model where the order backlog was around 200 HDDs.

I guess customer demand is what drives the actual stocking of models. So why on earth are people still buying hard drives using the 512e format? Is it because whatever virtualisation backend they use doesn’t yet support 4Kn (at all, or well)? I’ve seen some mentions of this, but they seemed to be from about 2 years ago.

Or is it maybe because most of HDDs are still used as boot drives? Very unlikely, I think, especially at high-end, where the main/only advantage is humongous size (for the price).

I also haven’t seen any valid performance comparisons (if any). I suppose since Linux knows the physical sector size, it can always do I/O in proper sizes, but still, a crutch is a crutch. And 4Kn allows going above 2TB limit for the old-style MBR partition tables, if that matters.
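Checking what a drive actually reports is at least easy from Linux, since the kernel exposes both sector sizes in sysfs; a 512e drive shows logical 512 bytes with physical 4096:

```shell
# Print logical vs physical sector size for each block device.
# 512e reports logical=512 physical=4096; 4Kn reports 4096/4096.
for q in /sys/block/*/queue; do
  [ -r "$q/logical_block_size" ] || continue
  printf '%s: logical=%s physical=%s\n' "$(basename "${q%/queue}")" \
    "$(cat "$q/logical_block_size")" "$(cat "$q/physical_block_size")"
done
```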

Side-note: to my surprise, the (somewhat old now) SSDs I have in my machine are supposedly “512n”, although everybody knows their minimal block size is actually much, much larger - on the order of 128KB-512KB. I/O size is not same as erase block size, but still, one could think they are more truthful and will report their proper physical block size.

Anyway, rant over. Let’s not increase the entropy even more…

Gregor Herrmann: RC bugs 2018/49-50

17 December, 2018 - 01:32

as mentioned in my last blog post, I attended the Bug Squashing Party in bern two weeks ago. I think altogether we were quite productive, as seen on the list of usertagged bugs. – here's my personal list of release-critical bugs that I touched at the BSP or afterwards:

  • #860523 – python-dogtail: "python-dogtail: /bin/sniff fails to start on a merged-/usr system"
    apply patch found in the BTS, QA upload
  • #873997 – src:openjpeg2: "FTBFS with Java 9 due to -source/-target only"
    apply patch from the BTS, upload to DELAYED/5
  • #879624 – xorg: "xorg: After upgrade to buster: system doesnt start x-server anymore but stop reacting"
    lower severity
  • #884658 – dkms: "dkms: Should really depends on dpkg-dev for dpkg-architecture"
    add a comment to the bug log
  • #886120 – ctpp2: "makes ctpp2 randomly FTBFS, syntax errors, hides problems"
    apply patch from the BTS, upload to DELAYED/5
  • #886836 – libgtkmm-2.4-dev,libgtkmm-2.4-doc: "libgtkmm-2.4-dev,libgtkmm-2.4-doc: both ship usr/share/doc/libgtkmm-2.4-dev/examples/*"
    propose a patch
  • #887602 – dia: "dia: Detect freetype via pkg-config"
    add patch from upstream merge request, upload to DELAYED/5
  • #890595 – phpmyadmin: "phpmyadmin: warnings when running under php 7.2, apparently fixed by new upstream series 4.7.x"
    prepare a debdiff
  • #892121 – src:libxml-saxon-xslt2-perl: "libxml-saxon-xslt2-perl FTBFS with libsaxonhe-java"
    upload with patch from racke (pkg-perl)
  • #892444 – src:ttfautohint: "ttfautohint: Please use 'pkg-config' to find FreeType 2"
    add patch from Hilko Bengen in BTS, upload to DELAYED/5, then fixed in a maintainer upload
  • #898752 – src:node-webpack: "node-webpack: FTBFS and autopkgtest failure on 32-bit"
    apply proposal from BTS, upload to DELAYED/5
  • #900395 – xserver-xorg-input-all: "xserver-xorg-input-all: keyboard no longer working after dist-upgrade"
    downgrade and tag moreinfo
  • #906187 – facter-dev: "facter-dev: missing Depends: libfacter3.11.0 (= ${binary:Version})"
    add missing Depends, upload to DELAYED/5
  • #906977 – src:parley: "parley: FTBFS in buster/sid ('QItemSelection' does not name a type)"
    add patch from upstream git, upload to DELAYED/5
  • #907975 – libf2fs-format-dev: "libf2fs-format-dev: missing Breaks+Replaces: libf2fs-dev (<< 1.11)"
    add missing Breaks+Replaces, upload to DELAYED/5
  • #908147 – cups-browsed: "restarting cups-browsed deleted print jobs"
    try to reproduce
  • #911324 – carton: "carton: Carton fails due to missing Menlo::CLI::Compat dependency"
    package missing new dependencies and depend on them (pkg-perl)
  • #912046 – ebtables: "ebtables provides the same executables as iptables packages without using alternatives"
    add a comment to the bug log
  • #912380 – man-db: "endless looping, breaks dpkg"
    add a comment to the bug log
  • #912685 – src:net-snmp: "debian/rules is not binNMU safe"
    add a comment to the bug log
  • #913179 – libprelude: "libprelude: FTBFS with glibc 2.28; cherrypicked patches attached"
    take patch from BTS, upload to DELAYED/10
  • #915297 – src:libtest-bdd-cucumber-perl: "libtest-bdd-cucumber-perl: FTBFS: README.pod: No such file or directory"
    remove override from debian/rules (pkg-perl)
  • #915806 – src:jabref: "jabref FTBFS"
    add build-dep and jar to classpath (pkg-java)

Craig Small: WordPress 5.0.1

16 December, 2018 - 06:43

While I missed the WordPress 5.0 release, it was only a few more days before there was a security release out.

So WordPress 5.0.1 will be available in Debian soon. This is both a security update and a huge feature update from the 4.9.x versions to the 5.0 series.

The WordPress website, in their 5.0 announcement describe all the changes better, but one of the main things is the new editor (which I’m using as I write this).  It’s certainly cleaner, or perhaps more sparse. I’m not sure if I like it yet.

The security fixes (there are 7) are the usual things you expect from a WordPress security update. The usual XSS and permission problems type stuff.

I have also in the 5.0.1 Debian package removed the build dependency to libphp-phpmailer. The issue with that package is there won’t be any more security updates for the version in Debian. WordPress has an embedded version of it which *I hope* they maintain. There is an issue about the phpmailer in WordPress, so hopefully it gets fixed soon.

Dirk Eddelbuettel: linl 0.0.3: Micro release

16 December, 2018 - 06:12

Our linl package for writing LaTeX letters with (R)markdown had a fairly minor release today, following up on the previous release well over a year ago. This version contains just one change, which Mark van der Loo provided a few months ago with a clean PR. As another user was just bitten by the same issue when using an included letterhead – which was fixed but unreleased – we decided it was time for a release. So there it is.

linl makes it easy to write letters in markdown, with some extra bells and whistles thanks to some cleverness chiefly by Aaron.
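A letter source is then just a markdown file with a YAML header on top; a minimal sketch (the field names follow the pandoc-letter template linl builds on — treat them as assumptions and check the package skeleton; the address and text are placeholders):

```markdown
---
output: linl::linl
address:
  - Some Recipient
  - 123 Example Street
opening: Dear Madam or Sir,
closing: Sincerely,
---

Body of the letter, written in plain markdown.
```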

Here is a screenshot of the vignette showing the simple input for some moderately fancy output:

The NEWS entry follows:

Changes in linl version 0.0.3 (2018-12-15)
  • Correct LaTeX double loading of package color with different options (Mark van der Loo in #18 fixing #17).

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the linl page. For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Gunnar Wolf: Tecnicatura Universitaria en Software Libre: First bunch of graduates!

16 December, 2018 - 04:32

December starts for our family in Argentina, and in our second day here, I was invited to Facultad de Ingeniería y Ciencias Hídricas (FICH) of Universidad Nacional del Litoral (UNL). FICH-UNL has a short (3 year), distance-learning career called Tecnicatura Universitaria en Software Libre (TUSL).

This career opened in 2015, and today we had the graduation exams for three of its students — It's no small feat for a recently created career to start graduating their first bunch! And we had one for each of TUSL's "exit tracks" (Administration, development and education).

The topics presented by the students were:

  1. An introductory manual for performing migrations and installations of free software-based systems
  2. Design and implementation of a steganography tool for end-users
  3. A Lego system implementation for AppBuilder

(I will try to link soon; the TUSL staff is quite well aligned to freedom, transparency and responsibility.)

Ingo Juergensmann: Adding NVMe to Server

16 December, 2018 - 00:45

My server runs on a RAID10 of 4x WD Red 2 TB disks. Basically those disks are fast enough to cope with the disk load of the virtual machines (VMs). But since many users moved away from Facebook and Google, my Friendica installation on Nerdica.Net has a growing user count, putting a large disk I/O load with many small reads & writes on the disks and slowing down the general disk I/O for all the VMs and the server itself. On mdraid-sync-Sunday this month the server needed two full days to sync its RAID10.

So the idea was to move the high disk I/O load from the rotational disks to something different. For that reason I bought a Samsung 970 Pro 512 GB NVMe disk and a matching PCIe 3.0 adapter card to be put into my server in the colocation. On Thursday the Samsung was installed by the rrbone staff in the colocation. I moved the PostgreSQL and MySQL databases from the RAID10 to the NVMe disk and restarted the services.

Here are some results from Munin monitoring: 

Disk Utilization

Here you can see how the disk utilization dropped after the NVMe installation. The red bar shows the average utilization of the RAID10 disks before, and the green bar shows the same RAID10 after the databases were moved to the NVMe disk. There's roughly 20% less utilization now, which is good.

Disk Latency

Here you can see the same coloured bars for the disk latency. As you can see, the latency dropped by about one third.

CPU I/O wait

The most significant graph is maybe the CPU graph, where you used to see a large portion of iowait on the CPUs. This is no longer the case, as there is apparently no significant iowait anymore, thanks to the low latency and high IOPS of SSD/NVMe disks.

Overall I cannot confirm that adding the NVMe disk results in significantly faster page loads for Friendica or Mastodon, maybe because other measures like Redis/Memcached or pgbouncer already helped a lot before the NVMe disk. But it helps a lot with general disk I/O load and improves disk speeds inside the VMs, for example for my regular backups and such.

Ah, one thing to report: in a quick test, pgbench now reports >2200 tps on the NVMe disk. That at least is a real speed improvement, maybe by an order of magnitude or so.

Kategorie: Debian | Tags: Debian, Server, Hardware

Petter Reinholdtsen: Learn to program with Minetest on Debian

15 December, 2018 - 21:30

A fun way to learn how to program Python is to follow the instructions in the book "Learn to Program with Minecraft", which introduces programming in Python to people who like to play with Minecraft. The book uses a Python library that talks to a TCP/IP socket with an API accepting build instructions and providing information about the current players in a Minecraft world. The TCP/IP API was first created for the Minecraft implementation for Raspberry Pi, and has since been ported to some server versions of Minecraft. The book contains recipes for those using Windows, Mac OS X and Raspbian. But a little known fact is that you can follow the same recipes using the free software construction game Minetest.

There is a Minetest module implementing the same API, making it possible to use the Python programs coded to talk to Minecraft with Minetest too. I uploaded this module to Debian two weeks ago, and as soon as it clears the FTP masters NEW queue, learning to program Python with Minetest on Debian will be a simple 'apt install' away. The Debian package is maintained as part of the Debian Games team, and the packaging rules are currently located under 'unfinished' on Salsa.
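
To give an idea of what those book recipes look like, here is a hedged sketch. It assumes the mcpi Python library used by the book and an API server (Minetest with the compatible module, or a Minecraft Pi style server) listening on the default port 4711; the helper function and coordinates are invented for illustration.

```python
def platform_coords(x, y, z, radius=1):
    """Block positions for a square platform one layer below (x, y, z)."""
    return [(x + dx, y - 1, z + dz)
            for dx in range(-radius, radius + 1)
            for dz in range(-radius, radius + 1)]

if __name__ == "__main__":
    try:
        from mcpi.minecraft import Minecraft
        from mcpi import block
        mc = Minecraft.create("localhost", 4711)  # talk to the TCP/IP API
        mc.postToChat("Hello from Python!")       # message shown in-game
        pos = mc.player.getTilePos()              # player's block position
        for bx, by, bz in platform_coords(pos.x, pos.y, pos.z):
            mc.setBlock(bx, by, bz, block.STONE.id)
    except Exception:
        pass  # no server or mcpi module available; the helper still works
```

The same script runs unchanged against Minecraft Pi and against Minetest with the API module, which is the whole point of the port.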

You will most likely need to install several of the Minetest modules in Debian for the examples included with the library to work well, as several blocks used by the example scripts are provided via modules in Minetest. Without the required blocks, a simple stone block is used instead. My initial testing with an analog clock did not get gold arms as instructed by the Python library, but stone arms instead.

I tried to find a way to add the API to the desktop version of Minecraft, but was unable to find any working recipes. The recipes I found only work with a standalone Minecraft server setup. Are there any options to use with the normal desktop version?

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Norbert Preining: TeX Live/Debian updates 20181214

15 December, 2018 - 12:58

Another month passed, and the (hopefully) last upload for this year brings updates to the binaries, to the usual big set of macro and font packages, and some interesting and hopefully useful changes.

The version for the binary packages is 2018.20181214.49410-1 and is based on svn revision 49410. That means we get:

  • support for some pdftex primitives in xetex
  • dviout updates
  • kpathsea changes for brace expansion
  • various memory fixes, return-value fixes, etc.

The version of the macro/font packages is 2018.20181214-1 and contains the usual set of updated and new packages, see below for the complete list. There is one interesting functional change: we have replaced the '.' in the various TEXMF* variables with TEXMFDOTDIR, which in turn is defined to '.'. By itself this changes nothing, but it allows users to redefine TEXMFDOTDIR to, say, './/' for certain projects, which makes kpathsea automatically find all files in the current directory and below.

Now for the full list of updates and new packages. Enjoy!

New packages

cweb-old, econ-bst, eqexpl, icon-appr, makecookbook, modeles-factures-belges-assocs, pdftex-quiet, pst-venn, tablvar, tikzlings, xindex.

Updated packages

a2ping, abnt, abntex2, acmart, acrotex, addlines, adigraph, amsmath, animate, auto-pst-pdf-lua, awesomebox, babel, babel-german, beamer, beamertheme-focus, beebe, bib2gls, biblatex-abnt, biblatex-archaeology, biblatex-ext, biblatex-gb7714-2015, biblatex-publist, bibleref, bidi, censor, changelog, colortbl, convbkmk, covington, cryptocode, cweb, dashundergaps, datatool, datetime2-russian, datetime2-samin, diffcoeff, dynkin-diagrams, ebgaramond, enumitem, fancyvrb, glossaries-extra, grabbox, graphicxsp, hagenberg-thesis, hyperref, hyperxmp, ifluatex, japanese-otf-uptex, japanese-otf-uptex-nonfree, jigsaw, jlreq, jnuexam, jsclasses, ketcindy, knowledge, kpathsea, l3build, l3experimental, l3kernel, latex, latex2man, libertinus-otf, lipsum, luatexko, lwarp, mcf2graph, memoir, modeles-factures-belges-assocs, oberdiek, pdfx, platex, platex-tools, plautopatch, polexpr, pst-eucl, pst-fractal, pst-func, pst-math, pst-moire, pstricks, pstricks-add, ptex2pdf, quran, rec-thy, register, reledmac, rutitlepage, sourceserifpro, srdp-mathematik, suftesi, svg, tcolorbox, tex4ht, texdate, thesis-ekf, thesis-qom, tikz-cd, tikzducks, tikzmarmots, tlshell, todonotes, tools, toptesi, ucsmonograph, univie-ling, uplatex, widows-and-orphans, witharrows, xepersian, xetexref, xindex, xstring, xurl, zhlineskip.

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, November 2018

15 December, 2018 - 05:50

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In November, about 209 work hours were dispatched among 14 paid contributors. Their reports are available:

Evolution of the situation

In November we had a few more hours available to dispatch than we had contributors willing and able to do the work, and thus we are actively looking for new contributors. Please contact Holger if you are interested in becoming a paid LTS contributor.

The number of sponsored hours stayed the same at 212 hours per month but we actually lost two sponsors and gained a new one (silver level).

The security tracker currently lists 30 packages with a known CVE and the dla-needed.txt file has 32 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


Eddy Petrișor: rust for cortex-m7 baremetal

15 December, 2018 - 04:20
Update 14 December 2018: After the release of stable 1.31.0 (aka 2018 Edition), it is no longer necessary to switch to the nightly channel to get access to thumb7em-none-eabi / Cortex-M4 and Cortex-M7 components. Updated examples and commands accordingly.
For more details on embedded development using Rust, the official Rust embedded docs site is the place to go, in particular, you can start with The embedded Rust book.
This is a reminder for myself: if you want to install Rust for a bare-metal Cortex-M7 target, this seems to be a tier 3 platform:

Highlighting the relevant part:

Target                   std  rustc  cargo  notes
...
msp430-none-elf          *                  16-bit MSP430 microcontrollers
sparc64-unknown-netbsd   ✓    ✓             NetBSD/sparc64
thumbv6m-none-eabi       *                  Bare Cortex-M0, M0+, M1
thumbv7em-none-eabi      *                  Bare Cortex-M4, M7
thumbv7em-none-eabihf    *                  Bare Cortex-M4F, M7F, FPU, hardfloat
thumbv7m-none-eabi       *                  Bare Cortex-M3
...
x86_64-unknown-openbsd   ✓    ✓             64-bit OpenBSD
In order to enable the relevant support, use stable >= 1.31.0 and add the relevant target:
eddy@feodora:~/usr/src/rust$ rustup show
Default host: x86_64-unknown-linux-gnu

stable-x86_64-unknown-linux-gnu (default)
rustc 1.31.0 (abe02cefd 2018-12-04)
If the stable toolchain is not the default, switch to it:

eddy@feodora:~/usr/src/rust$ rustup default stable
Add the needed target:
eddy@feodora:~/usr/src/rust$ rustup target add thumbv7em-none-eabi
info: downloading component 'rust-std' for 'thumbv7em-none-eabi'
info: installing component 'rust-std' for 'thumbv7em-none-eabi'
eddy@feodora:~/usr/src/rust$ rustup show
Default host: x86_64-unknown-linux-gnu

installed targets for active toolchain

thumbv7em-none-eabi
x86_64-unknown-linux-gnu

active toolchain

stable-x86_64-unknown-linux-gnu (default)
rustc 1.31.0 (abe02cefd 2018-12-04)

Then compile with --target.

Ben Hutchings: Debian LTS work, November 2018

15 December, 2018 - 02:18

I was assigned 20 hours of work by Freexian's Debian LTS initiative and worked all those hours.

I prepared and released another stable update for Linux 3.16 (3.16.61), but have not yet included this in a Debian upload.

I updated the firmware-nonfree package to fix security issues in wifi firmware and to provide additional firmware that may be requested by drivers in Linux 4.9. I issued DLA 1573-1 to describe this update.

I worked on documenting a bug in the installer that delays installation of updates to the kernel. The installer can only be updated as part of a point release, and these are not done after the LTS team takes on responsibility for a release. Laura Arjona drafted an addition to the Errata section of the official jessie installer page and I reviewed this. I also updated the LTS/Installing wiki page.

I also participated in various discussions on the debian-lts mailing list.

Molly de Blanc: The OSD and user freedom

14 December, 2018 - 02:50
Some background reading

The relationship between open source and free software is fraught with people arguing about meanings and value. In spite of all the things we’ve built up around open source and free software, they reduce down to both being about software freedom.

Open source is about software freedom. It has been the case since “open source” was created.

In 1986 the Four Freedoms of Free Software (4Fs) were written. In 1998 Netscape set its source code free. Later that year a group of people got together and Christine Peterson suggested that, to avoid ambiguity, there was a “need for a better name” than free software. She suggested open source after open source intelligence. The name stuck and 20 years later we argue about whether software freedom matters to open source, because too many global users of the term have forgotten (or never knew) that some people just wanted another way to say software that ensures the 4Fs.

Once there was a term, the term needed a formal definition: how do we describe what open source is? That’s where the Open Source Definition (OSD) comes in.

The OSD is a set of ten points that describe what an open source license looks like. The OSD came from the Debian Free Software Guidelines. The DFSG themselves were created to “determine if a work is free” and ought to be considered a way of describing the 4Fs.

Back to the present

I believe that the OSD is about user freedom. This is an abstraction from “open source is about free software.” As I alluded to earlier, this is an intuition I have, a thing I believe, and an argument I’m having a very hard time making.

I think of free software as software that exhibits or embodies software freedom — it’s software released under licenses that ensure the 4Fs are protected. This is all a tool, a useful tool, for protecting user freedom.

The line that connects the OSD and user freedom is not a short one: the OSD defines open source -> open source is about software freedom -> software freedom is a tool to protect user freedom. I think this is, however, a very valuable reduction we can make. The OSD is another tool in our tool box when we’re trying to protect the freedom of users of computers and computing technology.

Why does this matter (now)?

I would argue that this has always mattered, and we’ve done a bad job of talking about it. I want to talk about this now because it’s become increasingly clear that people simply never understood (or even heard of) the connection between user freedom and open source.

I’ve been meaning to write about this for a while, and I think it’s important context for everything else I say and write about in relation to the philosophy behind free and open source software (FOSS).

FOSS is a tool. It’s not a tool about developmental models or corporate enablement — though some people and projects have benefited from the kinds of development made possible through sharing source code, and some companies have created very financially successful models based on it as well. In both historical and contemporary contexts, software freedom is at the heart of open source. It’s not about corporate benefit, it’s not about money, and it’s not even really about development. Methods of development are tools being used to protect software freedom, which in turn is a tool to protect user freedom. User freedom, and what we get from that, is what’s valuable.

Side note

At some future point I’ll address why user freedom matters, but in the meantime, here are some talks I gave (with Karen Sandler) on the topic.

Joachim Breitner: Thoughts on bootstrapping GHC

13 December, 2018 - 20:02

I am returning from the reproducible builds summit 2018 in Paris. The latest hottest thing within the reproducible-builds project seems to be bootstrapping: how can we build a whole operating system from just and only source code, using very little, or even no, binary seeds or auto-generated files. This is actually a concern that is somewhat orthogonal to reproducibility: bootstrappable builds help me trust programs that I built, while reproducible builds help me trust programs that others built.

And while they make good progress bootstrapping a full system from just a C compiler written in Scheme, and a Scheme interpreter written in C, that can build each other (Janneke’s mes project), and there are plans to build that on top of stage0, which starts with 280 bytes of binary, the situation looks pretty bad when it comes to Haskell.

Unreachable GHC

The problem is that contemporary Haskell has only one viable implementation, GHC. And GHC, written in contemporary Haskell, needs GHC to be built. So essentially everybody out there either just downloads a binary distribution of GHC, or they build GHC from source using a possibly older (but not much older) version of GHC that they already have. Even distributions like Debian do nothing different: when they build the GHC package, the builders use, well, the GHC package.

There are other Haskell implementations out there. But if they are mature and actively developed, then they are implemented in Haskell themselves, often even using advanced features that only GHC provides. And even those are insufficient to build GHC itself, let alone some old and abandoned Haskell implementations.

In all these cases, at some point an untrusted binary is used. This is very unsatisfying. What can we do? I don’t have the answers, but please allow me to outline some avenues of attack.

Retracing history

Obviously, even GHC has not existed since the beginning of time, and the first versions surely were built using something other than GHC. The oldest version of GHC for which we can find a release on the GHC web page is version 0.29 from July 1996. But the installation instructions say:

GHC 0.26 doesn't build with HBC. (It could, but we haven't put in the effort to maintain it.)

GHC 0.26 is best built with itself, GHC 0.26. We heartily recommend it. GHC 0.26 can certainly be built with GHC 0.23 or 0.24, and with some earlier versions, with some effort.

GHC has never been built with compilers other than GHC and HBC.

HBC is a Haskell compiler where we find the sources of one random version. It is written in C, so that should be the solution: compile HBC, use it to compile GHC 0.29, and then, step by step, build every (major) version of GHC until today.

The problem is that it is non-trivial to build software from the 90s using today's compilers. I briefly looked at the HBC code base, and had to change some files from using varargs.h to stdarg.h, and this is surely just one of many similar stumbling blocks in trying to build these tools. Oh, and even the hbc sources state:

# To get everything done: make universe
# It is impossible to make from scratch.
# You must have a running lmlc, to
# recompile it (of course).

At this point I ran out of time.

Going back, but doing it differently

Another approach is to go back in time, to some old version of GHC, but maybe not all the way to the beginning, and then try to use another, officially unsupported, Haskell compiler to build GHC. This is what rekado tried to do in 2017: he used the most contemporary implementation of Haskell in C, the Hugs interpreter. Using this, he compiled nhc98 (yet another abandoned Haskell implementation), with the hope of building GHC with nhc98. He made impressive progress back then, but ran into a problem where the runtime crashed. Maybe someone is interested in picking up from there?

Removing, simplifying, extending, in the present.

Both approaches so far focus on building an old version of GHC. This adds complexity: other tools (the shell, make, yacc etc.) may behave differently now in a way that causes hard-to-debug problems. So maybe it is more fun and more rewarding to focus on today’s GHC? (At this point I am starting to hypothesize.)

I said before that no other existing Haskell implementation can compile today’s GHC code base, because of features like mutually recursive modules, the foreign function interface etc. And also other existing Haskell implementations often come with a different, smaller set of standard libraries, but GHC assumes base, so we would have to build that as well...

But we don’t need to build it all. Surely there is much code in base that is not used by GHC, and much code in GHC that we do not need to build GHC. So by removing that, we reduce the amount of Haskell code that we need to feed to the other implementation.

The remaining code might use some features that are not supported by our bootstrapping implementation. Mutually recursive modules could be manually merged. GADTs that are only used for additional type safety could be replaced by normal ones, which might make some pattern matches incomplete. Syntactic sugar can be desugared. By simplifying the code base in that way, one might be able to get a fork of GHC within reach of the likes of Hugs or nhc98.

And if there are features that are hard to remove, maybe we can extend the bootstrapping compiler or interpreter to support them? For example, it was mostly trivial to extend Hugs with support for the # symbol in names -- and we can be pragmatic and just allow it always, since we don’t need a standards conforming implementation, but merely one that works on the GHC code base. But how much would we have to implement? Probably this will be more fun in Haskell than in C, so maybe extending nhc98 would be more viable?

Help from beyond Haskell?

Or maybe it is time to create a new Haskell compiler from scratch, written in something other than Haskell? Maybe some other language that is reasonably pleasant to write a compiler in (Ocaml? Scala?), but that has the bootstrappability story already sorted out somehow.

But in the end, all variants come down to the same problem: Writing a Haskell compiler for full, contemporary Haskell as used by GHC is hard and really a lot of work -- if it were not, there would at least be implementations in Haskell out there. And as long as nobody comes along and does that work, I fear that we will continue to be unable to build our nice Haskell ecosystem from scratch. Which I find somewhat dissatisfying.

Junichi Uekawa: Already December.

13 December, 2018 - 18:39
Already December. Nice. I tried using tramp for a while but I am back to mosh. tramp is not usable when the ssh connection is not reliable.

Keith Packard: newt

13 December, 2018 - 14:55
Newt: A Tiny Embeddable Python Subset

I've been helping teach robotics programming to students in grades 5 and 6 for a number of years. The class uses Lego models for the mechanical bits, and a variety of development environments, including Robolab and Lego Logo on both Apple ][ and older Macintosh systems. Those environments are quite good, but when the Apple ][ equipment died, I decided to try exposing the students to an Arduino environment so that they could get another view of programming languages.

The Arduino environment has produced mixed results. The general nature of a full C++ compiler and the standard Arduino libraries means that building even simple robots requires considerable typing, including a lot of punctuation and upper-case letters. Further, the edit/compile/test cycle is quite long, making fixing errors slow. On the positive side, many of the students have gone on to use Arduinos in science research projects for middle and upper school (grades 7-12).

In other environments, I've seen Python used as an effective teaching language; the direct interactive nature invites exploration and provides rapid feedback for the students. It seems like a pretty good language to consider for early education -- "real" enough to be useful in other projects, but simpler than C++/Arduino has been. However, I haven't found a version of Python that seems suitable for the smaller microcontrollers I'm comfortable building hardware with.

How Much Python Do We Need?

Python is a pretty large language in embedded terms, but there's actually very little I want to try and present to the students in our short class (about 6 hours of language introduction and another 30 hours or so of project work). In particular, all we're using on the Arduino are:

  • Numeric values
  • Loops and function calls
  • Digital and analog I/O

Remembering my childhood Z-80 machine with its BASIC interpreter, I decided to think along those lines in terms of capabilities. I think I can afford more than 8kB of memory for the implementation, and I really do want to have "real" functions, including lexical scoping and recursion.

I'd love to make this work on our existing Arduino Duemilanove compatible boards. Those have only 32kB of flash and 2kB of RAM, so that might be a stretch...

What to Include

Exploring Python, I think there's a reasonable subset that can be built here. Included in that are:

  • Lists, numbers and string types
  • Global functions
  • For/While/If control structures.
What to Exclude

It's hard to describe all that hasn't been included, but here's some major items:

  • Objects, Dictionaries, Sets
  • Comprehensions
  • Generators (with the exception of range)
  • All numeric types aside from single-precision float

Newt is implemented in C, using flex and bison. It includes the incremental mark/sweep compacting GC system I developed for my small scheme interpreter last year. That provides a relatively simple to use and efficient memory system.
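
Newt's actual collector is incremental and compacting and written in C; as a much-simplified illustration of just the mark/sweep idea (no incrementality, no compaction, and all names invented here), a sketch in Python:

```python
class Obj:
    """A heap object that may reference other heap objects."""
    def __init__(self):
        self.refs = []
        self.marked = False

def collect(heap, roots):
    """Return the live objects: mark everything reachable, sweep the rest."""
    stack = list(roots)
    while stack:                          # mark phase: trace from the roots
        obj = stack.pop()
        if not obj.marked:
            obj.marked = True
            stack.extend(obj.refs)
    live = [o for o in heap if o.marked]  # sweep phase: keep marked objects
    for o in live:
        o.marked = False                  # reset marks for the next cycle
    return live

# two reachable objects and one piece of garbage
a, b, garbage = Obj(), Obj(), Obj()
a.refs.append(b)                          # b is reachable through a
heap = collect([a, b, garbage], roots=[a])
```

A real collector works over raw memory and free lists rather than Python lists, but the reachability logic is the same.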

The Newt “Compiler”

Instead of directly executing a token stream as my old BASIC interpreter did, Newt is compiling to a byte coded virtual machine. Of course, we have no memory, so we don't generate a parse tree and perform optimizations on that. Instead, code is generated directly in the grammar productions.

The Newt “Virtual Machine”

With the source compiled to byte codes, execution is pretty simple -- read a byte code, execute some actions related to it. To keep things simple, the virtual machine has a single accumulator register and a stack of other values.
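
To make the accumulator-plus-stack shape concrete, here is a hedged toy dispatch loop in Python; the opcode names are invented and Newt's real byte codes differ:

```python
# Invented opcodes: LOAD sets the accumulator, PUSH saves it on the value
# stack, ADD/MUL pop one stack value and combine it with the accumulator.
def run(code):
    acc = 0
    stack = []
    for op, arg in code:
        if op == "LOAD":
            acc = arg
        elif op == "PUSH":
            stack.append(acc)
        elif op == "ADD":
            acc = stack.pop() + acc
        elif op == "MUL":
            acc = stack.pop() * acc
    return acc

# (2 + 3) * 4
program = [("LOAD", 2), ("PUSH", None), ("LOAD", 3), ("ADD", None),
           ("PUSH", None), ("LOAD", 4), ("MUL", None)]
print(run(program))  # → 20
```

The single accumulator means most instructions need no operand stack traffic at all, which keeps the byte code compact.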

Global and local variables are stored in 'frames', with each frame implemented as a linked list of atom/value pairs. This isn't terribly efficient in space or time, but was quick to implement the required Python semantics for things like 'global'.

Lists and tuples are simple arrays in memory, just like C Python. I use the same sizing heuristic for lists that Python does; no sense inventing something new for that. Strings are C strings.

When calling a non-builtin function, a new frame is constructed that includes all of the formal names. Those get assigned values from the provided actuals and then the instructions in the function are executed. As new locals are discovered, the frame is extended to include them.


Any new language implementation really wants to have a test suite to ensure that the desired semantics are implemented correctly. One huge advantage for Newt is that we can cross-check the test suite by running it with Python.
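
For instance, a test case written against the subset described above (numbers, lists, global functions, for/while/if, and the range generator) runs unchanged under CPython, so expected outputs can be generated with Python itself. This is a sketch in that spirit, not one of Newt's actual tests:

```python
def fact(n):
    if n < 2:
        return 1
    return n * fact(n - 1)   # real functions: recursion works

results = []
for i in range(6):           # range is the one supported generator
    results.append(fact(i))
print(results)               # → [1, 1, 2, 6, 24, 120]
```

Any divergence between what Newt prints and what Python prints for the same script points directly at a semantics bug.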

Current Status

I think Newt is largely functionally complete at this point; I just finished adding the limited for statement capabilities this evening. I'm sure there are a lot of bugs to work out, and I expect to discover additional missing functionality as we go along.

I'm doing all of my development and testing on my regular x86 laptop, so I don't know how big the system will end up on the target yet.

I've written 4836 lines of code for the implementation and another 65 lines of Python for simple test cases. When compiled -Os for x86_64, the system is about 36kB of text and another few bytes of initialized data.


The source code is available from my server, and also at github. It is licensed under the GPLv2 (or later version).


Creative Commons License. The copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.