January 26, 2022

Russell Coker

Australia/NZ Linux Meetings

I am going to start a new Linux-focused FOSS online meeting for people in Australia and nearby areas. People can join from anywhere, but the aim will be to support people in nearby areas.

Covering the time zone range of Australia requires a meeting on a weekend. I’m thinking of the first Saturday of the month at 1PM Melbourne/Sydney time, which would be 10AM in WA and 3PM in NZ. We may have corner cases of daylight saving starting and ending on different days, but that shouldn’t be a big deal as I think those times can vary by an hour either way without being too inconvenient for anyone.

Note that I describe the meeting as Linux-focused because my plans include having a meeting dedicated to different versions of BSD Unix and a meeting dedicated to the HURD. But those meetings will be mainly for Linux people to learn about other Unix-like OSs.

One focus I want to have for the meetings is hands-on work, live demonstrations, and short, highly time-relevant talks. There are more lectures on YouTube than anyone could watch in a lifetime (see the Linux.conf.au channel for some good ones [1]). So I want to run events that give benefits that people can’t gain from watching YouTube on their own.

Russell Stuart and I have been kicking around ideas for this for a while. I think that the solution is to just do it. I know that Saturday won’t work for everyone (no day will) but it will work for many people. I am happy to discuss changing the start time by an hour or two if that seems likely to get more people. But I’m not particularly interested in trying to make it convenient for people in Hawaii or India, my idea is for an Australia/NZ focused event. I would be more than happy to share lecture notes etc with people in other countries who run similar events. As an aside I’d be happy to give a talk for an online meeting at a Hawaiian LUG as the timezone is good for me.

Please pencil in 1PM Melbourne time on the 5th of Feb for the first meeting. The meeting requirements will be a PC with good Internet access running a recent web browser, plus an SSH client for the hands-on stuff. A microphone or webcam is NOT required; any questions you wish to ask can be asked via text if that’s what you prefer.

Suggestions for the name of the group are welcome.

26 January, 2022 06:44AM by etbe

January 23, 2022

Matthieu Caneill

Debsources, python3, and funky file names

Rumor has it that python2 is not a thing anymore.

Well, I'm certainly late to the party, but I'm happy to report that sources.debian.org is now running python3.

Wait, it wasn't?

Back when development started, python3 was very much a real language, but it was hard to adopt because many libraries did not support it yet. So python2 was chosen, meaning print-based debugging was used in lieu of print()-based debugging, and str were bytes, not unicode.
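
For anyone who never lived through that port, here is a tiny illustrative snippet (Python 3, with made-up values) of the str/bytes split that makes such a migration fiddly:

# Python 3: str is text, bytes is raw data, and mixing them is an error
# that Python 2 used to let through silently.
raw = b'\xedslenska.alias'       # raw bytes, e.g. a file name read from disk
text = raw.decode('latin-1')     # decoding to text must now be explicit
print(type(raw), type(text))     # <class 'bytes'> <class 'str'>
# b'dir/' + 'file'               # TypeError in Python 3; "worked" in Python 2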

And things were working just fine. One day python2 EOL was announced, with a date far in the future. Far enough to procrastinate for a long time. Combine this with a codebase that is stable enough not to see many commits, and the fact that Debsources is a volunteer-based project that happens at best on weekends, and you end up with dormant software and a missed deadline.

But, as dormant as the codebase is, the instance hosted at sources.debian.org is very popular and gets 200k to 500k hits per day. More than enough to be worth proper maintenance and a transition to python3.

Funky file names

While transitioning to python3 and juggling left and right with str, bytes and unicode for internal objects, files, database entries and HTTP content, I stumbled upon a bug that has been there since day 1.

Quick recap if you're unfamiliar with this tool: Debsources displays the content of the source packages in the Debian archive. In other words, it's a bit like GitHub, but for the Debian source code.

And some pieces of software out there, that ended up in Debian packages, happen to contain files whose names can't be decoded to UTF-8. Interestingly enough, there's no such thing as a standard for file names: with a few exceptions that vary by operating system, any sequence of bytes can be a legit file name. And some sequences of bytes are not valid UTF-8.

Of course those files are rare, and using ASCII characters to name a file is a much more common practice than using bytes in a non-UTF-8 character encoding. But when you deal with almost 100 billion files over which you have no control (those files come from free software projects, and make their way into Debian without any renaming), it happens.

Now back to the bug: when trying to display such a file through the web interface, Debsources would crash because the file name can't be converted to UTF-8, which is needed for the HTML representation of the page.

Bugfix

An often valid approach when trying to represent invalid UTF-8 content is to ignore errors and replace the offending bytes with a placeholder character such as ?. This is what Debsources actually does to display non-UTF-8 file content.
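
In Python terms that lossy approach is a one-liner; a minimal sketch (not the actual Debsources code) of the idea:

# Best-effort, lossy decoding: undecodable bytes become the U+FFFD
# replacement character, which is acceptable for *displaying* file content.
blob = b'some latin-1 content: caf\xe9'     # hypothetical non-UTF-8 bytes
print(blob.decode('utf-8', errors='replace'))
# -> some latin-1 content: caf�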

Unfortunately, this best-effort approach is not suitable for file names, as file names are also identifiers in Debsources: among other places, they are part of URLs. If a URL were to use placeholder characters to replace those bytes, there would be no deterministic way to match it with a file on disk anymore.

The representation of binary data as text is a known problem. Multiple lossless solutions exist, such as base64 and its variants, but URLs looking like https://sources.debian.org/src/Y293c2F5LzMuMDMtOS4yL2Nvd3NheS8= are not readable at all compared to https://sources.debian.org/src/cowsay/3.03-9.2/cowsay/. Plus, it would not be backwards-compatible with existing links.

The solution I chose is to use double-percent encoding: this allows the representation of any byte in a URL, while keeping allowed characters unchanged - and preventing CGI gateways from trying to decode non-UTF-8 bytes. This is the best of both worlds: regular file names appear normally and are human-readable, and funky file names only have percent signs and hex numbers where needed.

Here is an example of such a URL: https://sources.debian.org/src/aspell-is/0.51-0-4/%25EDslenska.alias/. Notice the %25ED, which represents the percent symbol itself (%25) followed by an invalid UTF-8 byte (%ED).
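
To make the scheme concrete, here is a rough Python sketch of how such an encoding could work; it illustrates the idea only and is not the actual Debsources implementation:

from urllib.parse import quote

def encode_funky_name(name: bytes) -> str:
    """Percent-encode a raw file name: valid UTF-8 stays readable, while
    literal '%' characters and undecodable bytes become %XX escapes."""
    text = name.decode('utf-8', errors='surrogateescape')
    out = []
    for ch in text:
        if ch == '%':
            out.append('%25')                        # escape literal percent signs
        elif '\udc80' <= ch <= '\udcff':             # lone surrogate = raw invalid byte
            out.append('%%%02X' % (ord(ch) - 0xDC00))
        else:
            out.append(ch)                           # readable characters pass through
    return ''.join(out)

ident = encode_funky_name(b'\xedslenska.alias')      # -> '%EDslenska.alias'
# When building the URL, '%' itself is escaped a second time, so that the
# web stack's automatic decoding cannot reintroduce the raw byte:
print(quote(ident, safe=''))                         # -> '%25EDslenska.alias'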

Transitioning to this was quite a challenge, as those file names don't only appear in URLs, but also in web pages themselves, log files, database tables, etc. And everything was done with str: that made sense in python2, when str were bytes, but not so much in python3.

What are those files? What's their network?

I was wondering too. Let's list them!

import os

with open('non-utf-8-paths.bin', 'wb') as f:
    # Walk the tree with a bytes path so os.walk yields raw bytes, not str
    for root, folders, files in os.walk(b'/srv/sources.debian.org/sources/'):
        for path in folders + files:
            try:
                path.decode('utf-8')
            except UnicodeDecodeError:
                # Record every path that fails to decode as UTF-8
                f.write(root + b'/' + path + b'\n')

Running this on the Debsources main instance, which hosts pretty much all Debian packages that were part of a Debian release, I could find 307 files (among a total of almost 100 billion files).

Without looking deep into them, they seem to fall into 2 categories:

  • File names that are not valid UTF-8, but are valid in a different charset. Not all software is developed in English or on UTF-8 systems.
  • File names that can't be decoded to UTF-8 on purpose, to be used as input to test suites that assert the software's resilience to non-UTF-8 data.

That last point hits home, as it was clearly lacking in Debsources. A funky file name is now part of its test suite. ;)

23 January, 2022 11:00PM

Antoine Beaupré

Switching from OpenNTPd to Chrony

A friend recently reminded me of the existence of chrony, a "versatile implementation of the Network Time Protocol (NTP)". The excellent introduction is worth quoting in full:

It can synchronise the system clock with NTP servers, reference clocks (e.g. GPS receiver), and manual input using wristwatch and keyboard. It can also operate as an NTPv4 (RFC 5905) server and peer to provide a time service to other computers in the network.

It is designed to perform well in a wide range of conditions, including intermittent network connections, heavily congested networks, changing temperatures (ordinary computer clocks are sensitive to temperature), and systems that do not run continuously, or run on a virtual machine.

Typical accuracy between two machines synchronised over the Internet is within a few milliseconds; on a LAN, accuracy is typically in tens of microseconds. With hardware timestamping, or a hardware reference clock, sub-microsecond accuracy may be possible.

Now that's already great documentation right there. What it is, why it's good, and what to expect from it. I want more. They have a very handy comparison table between chrony, ntp and openntpd.

My problem with OpenNTPd

Following concerns about the security (and complexity) of the venerable ntp program, I switched a long time ago to using openntpd on all my computers. I hadn't thought about it since, until I recently noticed a lot of noise on one of my servers:

jan 18 10:09:49 curie ntpd[1069]: adjusting local clock by -1.604366s
jan 18 10:08:18 curie ntpd[1069]: adjusting local clock by -1.577608s
jan 18 10:05:02 curie ntpd[1069]: adjusting local clock by -1.574683s
jan 18 10:04:00 curie ntpd[1069]: adjusting local clock by -1.573240s
jan 18 10:02:26 curie ntpd[1069]: adjusting local clock by -1.569592s

You read that right: openntpd was constantly rewinding the clock, sometimes at intervals of less than two minutes. The above log was taken while doing diagnostics, looking at the last 30 minutes of logs. So, on average, one 1.5-second rewind every 6 minutes!

That might be due to a dying real time clock (RTC) or some other hardware problem. I know for a fact that the CMOS battery on that computer (curie) died and I wasn't able to replace it (!). So that's partly garbage-in, garbage-out here. But still, I was curious to see how chrony would behave... (Spoiler: much better.)

But I also had trouble on another workstation, that one a much more recent machine (angela). First, it seems OpenNTPd would just fail at boot time:

anarcat@angela:~(main)$ sudo systemctl status openntpd
● openntpd.service - OpenNTPd Network Time Protocol
     Loaded: loaded (/lib/systemd/system/openntpd.service; enabled; vendor pres>
     Active: inactive (dead) since Sun 2022-01-23 09:54:03 EST; 6h ago
       Docs: man:openntpd(8)
    Process: 3291 ExecStartPre=/usr/sbin/ntpd -n $DAEMON_OPTS (code=exited, sta>
    Process: 3294 ExecStart=/usr/sbin/ntpd $DAEMON_OPTS (code=exited, status=0/>
   Main PID: 3298 (code=exited, status=0/SUCCESS)
        CPU: 34ms

jan 23 09:54:03 angela systemd[1]: Starting OpenNTPd Network Time Protocol...
jan 23 09:54:03 angela ntpd[3291]: configuration OK
jan 23 09:54:03 angela ntpd[3297]: ntp engine ready
jan 23 09:54:03 angela ntpd[3297]: ntp: recvfrom: Permission denied
jan 23 09:54:03 angela ntpd[3294]: Terminating
jan 23 09:54:03 angela systemd[1]: Started OpenNTPd Network Time Protocol.
jan 23 09:54:03 angela systemd[1]: openntpd.service: Succeeded.

After a restart, somehow it worked, but it took a long time to sync the clock. At first, it would just not consider any peer at all:

anarcat@angela:~(main)$ sudo ntpctl -s all
0/20 peers valid, clock unsynced

peer
   wt tl st  next  poll          offset       delay      jitter
159.203.8.72 from pool 0.debian.pool.ntp.org
    1  5  2    6s    6s             ---- peer not valid ----
138.197.135.239 from pool 0.debian.pool.ntp.org
    1  5  2    6s    7s             ---- peer not valid ----
216.197.156.83 from pool 0.debian.pool.ntp.org
    1  4  1    2s    9s             ---- peer not valid ----
142.114.187.107 from pool 0.debian.pool.ntp.org
    1  5  2    5s    6s             ---- peer not valid ----
216.6.2.70 from pool 1.debian.pool.ntp.org
    1  4  2    2s    8s             ---- peer not valid ----
207.34.49.172 from pool 1.debian.pool.ntp.org
    1  4  2    0s    5s             ---- peer not valid ----
198.27.76.102 from pool 1.debian.pool.ntp.org
    1  5  2    5s    5s             ---- peer not valid ----
158.69.254.196 from pool 1.debian.pool.ntp.org
    1  4  3    1s    6s             ---- peer not valid ----
149.56.121.16 from pool 2.debian.pool.ntp.org
    1  4  2    5s    9s             ---- peer not valid ----
162.159.200.123 from pool 2.debian.pool.ntp.org
    1  4  3    1s    6s             ---- peer not valid ----
206.108.0.131 from pool 2.debian.pool.ntp.org
    1  4  1    6s    9s             ---- peer not valid ----
205.206.70.40 from pool 2.debian.pool.ntp.org
    1  5  2    8s    9s             ---- peer not valid ----
2001:678:8::123 from pool 2.debian.pool.ntp.org
    1  4  2    5s    9s             ---- peer not valid ----
2606:4700:f1::1 from pool 2.debian.pool.ntp.org
    1  4  3    2s    6s             ---- peer not valid ----
2607:5300:205:200::1991 from pool 2.debian.pool.ntp.org
    1  4  2    5s    9s             ---- peer not valid ----
2607:5300:201:3100::345c from pool 2.debian.pool.ntp.org
    1  4  4    1s    6s             ---- peer not valid ----
209.115.181.110 from pool 3.debian.pool.ntp.org
    1  5  2    5s    6s             ---- peer not valid ----
205.206.70.42 from pool 3.debian.pool.ntp.org
    1  4  2    0s    6s             ---- peer not valid ----
68.69.221.61 from pool 3.debian.pool.ntp.org
    1  4  1    2s    9s             ---- peer not valid ----
162.159.200.1 from pool 3.debian.pool.ntp.org
    1  4  3    4s    7s             ---- peer not valid ----

Then it would accept them, but still wouldn't sync the clock:

anarcat@angela:~(main)$ sudo ntpctl -s all
20/20 peers valid, clock unsynced

peer
   wt tl st  next  poll          offset       delay      jitter
159.203.8.72 from pool 0.debian.pool.ntp.org
    1  8  2    5s    6s         0.672ms    13.507ms     0.442ms
138.197.135.239 from pool 0.debian.pool.ntp.org
    1  7  2    4s    8s         1.260ms    13.388ms     0.494ms
216.197.156.83 from pool 0.debian.pool.ntp.org
    1  7  1    3s    5s        -0.390ms    47.641ms     1.537ms
142.114.187.107 from pool 0.debian.pool.ntp.org
    1  7  2    1s    6s        -0.573ms    15.012ms     1.845ms
216.6.2.70 from pool 1.debian.pool.ntp.org
    1  7  2    3s    8s        -0.178ms    21.691ms     1.807ms
207.34.49.172 from pool 1.debian.pool.ntp.org
    1  7  2    4s    8s        -5.742ms    70.040ms     1.656ms
198.27.76.102 from pool 1.debian.pool.ntp.org
    1  7  2    0s    7s         0.170ms    21.035ms     1.914ms
158.69.254.196 from pool 1.debian.pool.ntp.org
    1  7  3    5s    8s        -2.626ms    20.862ms     2.032ms
149.56.121.16 from pool 2.debian.pool.ntp.org
    1  7  2    6s    8s         0.123ms    20.758ms     2.248ms
162.159.200.123 from pool 2.debian.pool.ntp.org
    1  8  3    4s    5s         2.043ms    14.138ms     1.675ms
206.108.0.131 from pool 2.debian.pool.ntp.org
    1  6  1    0s    7s        -0.027ms    14.189ms     2.206ms
205.206.70.40 from pool 2.debian.pool.ntp.org
    1  7  2    1s    5s        -1.777ms    53.459ms     1.865ms
2001:678:8::123 from pool 2.debian.pool.ntp.org
    1  6  2    1s    8s         0.195ms    14.572ms     2.624ms
2606:4700:f1::1 from pool 2.debian.pool.ntp.org
    1  7  3    6s    9s         2.068ms    14.102ms     1.767ms
2607:5300:205:200::1991 from pool 2.debian.pool.ntp.org
    1  6  2    4s    9s         0.254ms    21.471ms     2.120ms
2607:5300:201:3100::345c from pool 2.debian.pool.ntp.org
    1  7  4    5s    9s        -1.706ms    21.030ms     1.849ms
209.115.181.110 from pool 3.debian.pool.ntp.org
    1  7  2    0s    7s         8.907ms    75.070ms     2.095ms
205.206.70.42 from pool 3.debian.pool.ntp.org
    1  7  2    6s    9s        -1.729ms    53.823ms     2.193ms
68.69.221.61 from pool 3.debian.pool.ntp.org
    1  7  1    1s    7s        -1.265ms    46.355ms     4.171ms
162.159.200.1 from pool 3.debian.pool.ntp.org
    1  7  3    4s    8s         1.732ms    35.792ms     2.228ms

It took a solid five minutes to sync the clock, even though the peers were considered valid within a few seconds:

jan 23 15:58:41 angela systemd[1]: Started OpenNTPd Network Time Protocol.
jan 23 15:58:58 angela ntpd[84086]: peer 142.114.187.107 now valid
jan 23 15:58:58 angela ntpd[84086]: peer 198.27.76.102 now valid
jan 23 15:58:58 angela ntpd[84086]: peer 207.34.49.172 now valid
jan 23 15:58:58 angela ntpd[84086]: peer 209.115.181.110 now valid
jan 23 15:58:59 angela ntpd[84086]: peer 159.203.8.72 now valid
jan 23 15:58:59 angela ntpd[84086]: peer 138.197.135.239 now valid
jan 23 15:58:59 angela ntpd[84086]: peer 162.159.200.123 now valid
jan 23 15:58:59 angela ntpd[84086]: peer 2607:5300:201:3100::345c now valid
jan 23 15:59:00 angela ntpd[84086]: peer 2606:4700:f1::1 now valid
jan 23 15:59:00 angela ntpd[84086]: peer 158.69.254.196 now valid
jan 23 15:59:01 angela ntpd[84086]: peer 216.6.2.70 now valid
jan 23 15:59:01 angela ntpd[84086]: peer 68.69.221.61 now valid
jan 23 15:59:01 angela ntpd[84086]: peer 205.206.70.40 now valid
jan 23 15:59:01 angela ntpd[84086]: peer 205.206.70.42 now valid
jan 23 15:59:02 angela ntpd[84086]: peer 162.159.200.1 now valid
jan 23 15:59:04 angela ntpd[84086]: peer 216.197.156.83 now valid
jan 23 15:59:05 angela ntpd[84086]: peer 206.108.0.131 now valid
jan 23 15:59:05 angela ntpd[84086]: peer 2001:678:8::123 now valid
jan 23 15:59:05 angela ntpd[84086]: peer 149.56.121.16 now valid
jan 23 15:59:07 angela ntpd[84086]: peer 2607:5300:205:200::1991 now valid
jan 23 16:03:47 angela ntpd[84086]: clock is now synced

That seems kind of odd. It was also frustrating to get very little information from ntpctl about the state of the daemon. I understand it's designed to be minimal, but it could tell me its current offset, for example. It does tell me about the offset with the different peers, but not as clearly as one would expect. It's also unclear how it disciplines the RTC at all.

Compared to chrony

Now compare with chrony:

jan 23 16:07:16 angela systemd[1]: Starting chrony, an NTP client/server...
jan 23 16:07:16 angela chronyd[87765]: chronyd version 4.0 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
jan 23 16:07:16 angela chronyd[87765]: Initial frequency 3.814 ppm
jan 23 16:07:16 angela chronyd[87765]: Using right/UTC timezone to obtain leap second data
jan 23 16:07:16 angela chronyd[87765]: Loaded seccomp filter
jan 23 16:07:16 angela systemd[1]: Started chrony, an NTP client/server.
jan 23 16:07:21 angela chronyd[87765]: Selected source 206.108.0.131 (2.debian.pool.ntp.org)
jan 23 16:07:21 angela chronyd[87765]: System clock TAI offset set to 37 seconds

First, you'll notice there's none of that "clock synced" nonsense: it picks a source, and then... it's just done. That's because the clock on this computer is not drifting that much, and openntpd had (presumably) just synced it anyway. And indeed, if we look at detailed stats from the powerful chronyc client:

anarcat@angela:~(main)$ sudo chronyc tracking
Reference ID    : CE6C0083 (ntp1.torix.ca)
Stratum         : 2
Ref time (UTC)  : Sun Jan 23 21:07:21 2022
System time     : 0.000000311 seconds slow of NTP time
Last offset     : +0.000807989 seconds
RMS offset      : 0.000807989 seconds
Frequency       : 3.814 ppm fast
Residual freq   : -24.434 ppm
Skew            : 1000000.000 ppm
Root delay      : 0.013200894 seconds
Root dispersion : 65.357254028 seconds
Update interval : 1.4 seconds
Leap status     : Normal

We see that we are nanoseconds away from NTP time. That was run very quickly after starting the server (literally in the same second as chrony picked a source), so the stats are a bit weird (e.g. the Skew is huge). After a minute or two, it looks more reasonable:

Reference ID    : CE6C0083 (ntp1.torix.ca)
Stratum         : 2
Ref time (UTC)  : Sun Jan 23 21:09:32 2022
System time     : 0.000487002 seconds slow of NTP time
Last offset     : -0.000332960 seconds
RMS offset      : 0.000751204 seconds
Frequency       : 3.536 ppm fast
Residual freq   : +0.016 ppm
Skew            : 3.707 ppm
Root delay      : 0.013363549 seconds
Root dispersion : 0.000324015 seconds
Update interval : 65.0 seconds
Leap status     : Normal

Now it's learning how good or bad the RTC clock is ("Frequency"), and is smoothly adjusting the System time to follow the average offset (RMS offset, more or less). You'll also notice the Update interval has risen, and will keep expanding as chrony learns more about the internal clock, so it doesn't need to constantly poll the NTP servers to keep the clock in sync. In the above, we're 487 microseconds (less than a millisecond!) away from NTP time.

(People interested in the explanation of every single one of those fields can read the excellent chronyc manpage. That thing made me want to nerd out on NTP again!)

On the machine with the bad clock, chrony also did a 1.5 second adjustment, but just once, at startup:

jan 18 11:54:33 curie chronyd[2148399]: Selected source 206.108.0.133 (2.debian.pool.ntp.org) 
jan 18 11:54:33 curie chronyd[2148399]: System clock wrong by -1.606546 seconds 
jan 18 11:54:31 curie chronyd[2148399]: System clock was stepped by -1.606546 seconds 
jan 18 11:54:31 curie chronyd[2148399]: System clock TAI offset set to 37 seconds 

Then it would still struggle to keep the clock in sync, but not as badly as openntpd. Here's the offset a few minutes after the above startup:

System time     : 0.000375352 seconds slow of NTP time

And again a few seconds later:

System time     : 0.001793046 seconds slow of NTP time

I don't currently have access to that machine and will update this post with the latest status, but so far I've had a very good experience with chrony there, which is a testament to its resilience; it also just works on my other machines.

Extras

On top of "just working" (as demonstrated above), I feel that chrony's feature set is so much superior... Here's an excerpt of the extras in chrony, taken from the comparison table:

  • source frequency tracking
  • source state restore from file
  • temperature compensation
  • ready for next NTP era (year 2036)
  • replace unreachable / falseticker servers
  • aware of jitter
  • RTC drift tracking
  • RTC trimming
  • Restore time from file w/o RTC
  • leap seconds correction, in slew mode
  • drops root privileges

I even understand some of that stuff. I think.

So kudos to the chrony folks, I'm switching.

Caveats

One thing to keep in mind about the above, however, is that it's quite possible chrony does as bad a job as openntpd on that old machine, and just doesn't tell me about it. For example, here's another log sample from another server (marcos):

jan 23 11:13:25 marcos ntpd[1976694]: adjusting clock frequency by 0.451035 to -16.420273ppm

I get those basically every day, which seems to show that it's at least trying to keep track of the hardware clock.

In other words, it's quite possible I have no idea what I'm talking about and you definitely need to take this article with a grain of salt. I'm not an NTP expert.

Switching to chrony

Because the default configuration in chrony (at least as shipped in Debian) is sane (good default peers, no open network by default), installing it is as simple as:

apt install chrony

And because it conflicts with openntpd, installing chrony also takes care of removing that cruft.

23 January, 2022 09:55PM

January 22, 2022

Steve Kemp

Visiting the UK was difficult, but worth it

So in my previous post I mentioned that we were going to spend the Christmas period in the UK, which we did.

We spent a couple of days there, meeting my parents, and family. We also persuaded my sister to drive us to Scarborough so that we could hang out on the beach for an afternoon.

Finland has lots of lakes, but it doesn't have proper waves. So it was surprisingly good just to wade in the sea and see waves! Unfortunately our child was a wee bit too scared to ride on a donkey!

Unfortunately upon our return to Finland we all tested positive for COVID-19, me first, then the child, and about three days later my wife. We had negative tests in advance of our flights home, so we figure that either the tests were broken, or we were infected in the airplane/airport.

Thankfully things weren't too bad, we stayed indoors for the appropriate length of time, and a combination of a couple of neighbours and online shopping meant we didn't run out of food.

Since I've been back home I've been automating AWS activities with aws-utils, and updating my simple host-automation system, marionette.

Marionette is something that was inspired by puppet, the configuration management utility, but it runs on localhost only. Despite the small number of integrated primitives it actually works surprisingly well, and although I don't expect it will ever become popular it was an interesting research project.

The aws-utilities? They were specifically put together because I've worked in a few places where infrastructure is set up with terraform or cloudformation, but there is always the odd thing that is configured manually. Typically we'll have an openvpn gateway which uses a manually maintained IP allow-list, or some admin-server which has a security-group maintained somewhat manually.

Having the ability to update a bunch of rules with your external IP, as a single command, across a number of AWS accounts/roles, and a number of security-groups is an enormous time-saver when your home IP changes.
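
As a rough illustration of the idea (this is not Steve's aws-utils, just a sketch using Python and boto3; the profile name, group ID and port are made up):

import boto3
import urllib.request

def allow_my_ip(profile: str, group_id: str, port: int = 22) -> None:
    """Add the current external IP to one security group's allow-list."""
    my_ip = urllib.request.urlopen("https://checkip.amazonaws.com").read().decode().strip()
    ec2 = boto3.Session(profile_name=profile).client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId=group_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": f"{my_ip}/32", "Description": "home IP"}],
        }],
    )

# Loop over (profile, group_id) pairs to cover several accounts/roles
# and security groups with a single command.
allow_my_ip("home-profile", "sg-0123456789abcdef0")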

I'd quite like to add more things to that collection, but there's no particular rush.

22 January, 2022 10:00AM

Louis-Philippe Véronneau

Goodbye Nexus 5

I've blogged a few times already about my Nexus 5, the Android device I have/had been using for 8 years. Sadly, it died a few weeks ago, when the WiFi chip stopped working. I could probably have attempted a mainboard swap, but at this point, getting a new device seemed like the best choice.

In a world where most Android devices are EOL after less than 3 years, it is amazing I was able to keep this device for so long, always running the latest Android version with the latest security patch. The Nexus 5 originally shipped with Android 4.4 and when it broke, I was running Android 11, with the November security patch! I'm very grateful to the FOSS Android community that made this possible, especially the LineageOS community.

I've replaced my Nexus 5 by a used Pixel 3a, mostly because of the similar form factor, relatively affordable price and the presence of a headphone jack. Google also makes flashing a custom ROM easy, although I had more trouble with this than I first expected.

The first Pixel 3a I bought on eBay was a scam: I ordered an "Open Box" phone and it arrived all scratched [1] and with a broken rear camera. The second one I got (from the Amazon Renewed program) arrived in perfect condition, but happened to be a Verizon model. As I found out, Verizon locks the bootloader on their phones, making it impossible to install LineageOS [2]. The vendor was kind enough to let me return it.

As they say, third time's the charm. This time around, I explicitly bought a phone on eBay listed with an unlocked bootloader. I'm very satisfied with my purchase, but all in all, dealing with all the returns and the shipping was exhausting.

Hopefully this phone will last as long as my Nexus 5!


  1. There was literally a whole layer missing at the back, as if someone had sanded the phone... 

  2. Apparently, an "Unlocked phone" means it is "SIM unlocked", i.e. you can use it with any carrier. What I should have been looking for is a "Factory Unlocked phone", one where the bootloader isn't locked :L

22 January, 2022 04:30AM by Louis-Philippe Véronneau

Dirk Eddelbuettel

qlcal 0.0.2 on CRAN: Updates

The second release of the still fairly new qlcal package arrived at CRAN today.

qlcal is based on the calendaring subset of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, their complements (i.e. business day lists), and much more.

This release brings a further package simplification by removing a few more files not needed just for calendaring, as well as an updated 2022 calendar for China from the just-released 1.25 version of QuantLib.

Changes in version 0.0.2 (2022-01-21)

  • Further minimize set of files needed for calendaring

  • Update China calendar from QuantLib 1.25 release

See the project page and package documentation for more details, and more examples.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 January, 2022 03:03AM

January 21, 2022

Neil McGovern

Further investments in desktop Linux

This was originally posted on the GNOME Foundation news feed

The GNOME Foundation was supported during 2020-2021 by a grant from Endless Network which funded the Community Engagement Challenge, strategy consultancy with the board, and a contribution towards our general running costs. At the end of last year we had a portion of this grant remaining, and after the success of our work in previous years directly funding developer and infrastructure work on GTK and Flathub, we wanted to see whether we could use these funds to invest in GNOME and the wider Linux desktop platform.

We’re very pleased to announce that we got approval to launch three parallel contractor engagements, which started over the past few weeks. These projects aim to improve our developer experience, make more applications available on the GNOME platform, and move towards equitable and sustainable revenue models for developers within our ecosystem. Thanks again to Endless Network for their support on these initiatives.

Flathub – Verified apps, donations and subscriptions (Codethink and James Westman)

This project is described in detail on the Flathub Discourse, but the goal is to add a process to verify first-party apps on Flathub (i.e. uploaded by a developer or an authorised representative) and then make it possible for those developers to collect donations or subscriptions from users of their applications. We also plan to publish a separate repository that contains only these verified first-party uploads (without any of the community contributed applications), as well as providing a repository with only free and open source applications, allowing users to choose what they are comfortable installing and running on their system.

Creating the user and developer login system to manage your apps will also set us up well for future enhancements, such as managing tokens for direct binary uploads (e.g. from a CI/CD system hosted elsewhere, as is already done with Mozilla Firefox and OBS) and making it easier to publish apps from systems such as Electron which can be hard to use within a flatpak-builder sandbox. For updates on this project you can follow the Discourse thread, check out the work board on GitHub or join us on Matrix.

PWAs – Integrating Progressive Web Apps in GNOME (Phaedrus Leeds)

While everyone agrees that native applications can provide the best experience on the GNOME desktop, the web platform, and particularly PWAs (Progressive Web Apps) which are designed to be downloadable as apps and offer offline functionality, makes it possible for us to offer equivalent experiences to other platforms for app publishers who have not specifically targeted GNOME. This allows us to attract and retain users by giving them the choice of using applications from a wider range of publishers than are currently directly targeting the Linux desktop.

The first phase of the GNOME PWA project involves adding back support to Software for web apps backed by GNOME Web, and making this possible when Web is packaged as a Flatpak.  So far some preparatory pull requests have been merged in Web and libportal to enable this work, and development is ongoing to get the feature branches ready for review.

Discussions are also in progress with the Design team on how best to display the web apps in Software and on the user interface for web apps installed from a browser. There has also been discussion among various stakeholders about what web apps should be included as available with Software, and how they can provide supplemental value to users without taking priority over apps native to GNOME.

Finally, technical discussion is ongoing in the portal issue tracker to ensure that the implementation of a new dynamic launcher portal meets all security and robustness requirements, and is potentially useful not just to GNOME Web but Chromium and any other app that may want to install desktop launchers. Adding support for the launcher portal in upstream Chromium, to facilitate Chromium-based browsers packaged as a Flatpak, and adding support for Chromium-based web apps in Software are stretch goals for the project should time permit.

GTK4 / Adwaita – To support the adoption of Gtk4 by the community (Emmanuele Bassi)

With the release of GTK4 and renewed interest in GTK as a toolkit, we want to continue improving the developer experience and ease of use of GTK and ensure we have a complete and competitive offering for developers considering using our platform. This involves identifying missing functionality or UI elements that applications need to move to GTK4, as well as informing the community about the new widgets and functionality available.

We have been working on documentation and bug fixes for GTK in preparation for the GNOME 42 release and have also started looking at the missing widgets and API in Libadwaita, in preparation for the next release. The next steps are to work with the Design team and the Libadwaita maintainers and identify and implement missing widgets that did not make the cut for the 1.0 release.

In the meantime, we have also worked on writing a beginners tutorial for the GNOME developers documentation, including GTK and Libadwaita widgets so that newcomers to the platform can easily move between the Interface Guidelines and the API references of various libraries. To increase the outreach of the effort, Emmanuele has been streaming it on Twitch, and published the VOD on YouTube as well. 

21 January, 2022 03:31PM by Neil McGovern

Louis-Philippe Véronneau

Homebrewing recipes

Looking at my blog, it seems I haven't written anything about homebrewing in a while. In fact, the last time I did was when I had a carboy blow out on me in the middle of the night...

Fear not, I haven't stopped brewing since then. I have in fact decided to publish my homebrew recipes. Not on this blog though, as it would get pretty repetitive.

So here are my recipes. So far, I've brewed around 30 different beers!

The format is pretty simple (no fancy HTML, just plain markdown) and although I'm not the most scientific brewer, you should be able to replicate some of those if that's what you want to try.

Cheers!

21 January, 2022 05:15AM by Louis-Philippe Véronneau

Montreal Subway Foot Traffic Data, 2021 edition

For the third time now, I've asked Société de Transport de Montréal, Montreal's transit agency, for the foot traffic data of Montreal's subway. I think this has become an annual thing now :)

The original blog post and the 2019-2020 edition can be read here:

By clicking on a subway station, you'll be redirected to a graph of the station's foot traffic.

Licences

  • The subway map displayed on this page, the original dataset and my modified dataset are licensed under CC0 1.0: they are in the public domain.

  • The R code I wrote is licensed under the GPLv3+. It's pretty much the same code as last year. I've also added a converter script this time around. It takes the manually cleaned 2021 source data and turns it into something that can be merged with the global dataset. I had one last year and deleted it, for some reason...

21 January, 2022 05:00AM by Louis-Philippe Véronneau

Reproducible Builds (diffoscope)

diffoscope 201 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 201. This version includes the following changes:

[ Chris Lamb ]
* If the debian.deb822 module raises any exception on import, re-raise it as
  an ImportError instead. This should fix diffoscope on some Fedora systems.
  Thanks to Mattia Rizzolo for suggesting this particular solution.
  (Closes: reproducible-builds/diffoscope#300)

[ Zbigniew Jędrzejewski-Szmek ]
* Fix json detection with file-5.41-3.fc36.x86_64.

You can find out more by visiting the project homepage.

21 January, 2022 12:00AM

January 20, 2022

Sven Hoexter

Running OpenWRT x86 in qemu

Sometimes it's nice for testing purposes to have the OpenWRT userland available locally. Since there is an x86 build available, one can just run it within qemu.

wget https://downloads.openwrt.org/releases/21.02.1/targets/x86/64/openwrt-21.02.1-x86-64-generic-squashfs-combined.img.gz
gunzip openwrt-21.02.1-x86-64-generic-squashfs-combined.img.gz
qemu-img convert -f raw -O qcow2 openwrt-21.02.1-x86-64-generic-squashfs-combined.img openwrt-21.02.1.qcow2
qemu-img resize openwrt-21.02.1.qcow2 200M
qemu-system-x86_64 -M q35 \
  -drive file=openwrt-21.02.1.qcow2,id=d0,if=none,bus=0,unit=0 \
  -device ide-hd,drive=d0,bus=ide.0 -nic user,hostfwd=tcp::5556-:22
# you have to change the network configuration to retrieve an IP via
# dhcp for the lan bridge br-lan
vi /etc/config/network
  - change option proto 'static' to 'dhcp'
  - remove IP address and netmask setting
/etc/init.d/network restart
# now you should have an IP out of 10.0.2.0/24
ssh root@localhost -p 5556
# remember ICMP does not work but otherwise you should have
# IP networking available
opkg update
opkg install curl

20 January, 2022 08:20PM

Dirk Eddelbuettel

RQuantLib 0.4.15: Regular Update

A new release 0.4.15 of RQuantLib arrived at CRAN earlier today, and has been uploaded to Debian as well.

QuantLib is a very comprehensive free/open-source library for quantitative finance; RQuantLib connects it to the R environment and language.

The release of RQuantLib comes four months after the previous release, and brings an update for the just-released QuantLib 1.25 version along with a few small cleanups to calendars and daycounters.

Changes in RQuantLib version 0.4.15 (2022-01-19)

  • Changes in RQuantLib code:

    • Calendar support has been updated and completed to current QuantLib standards (Dirk in #161)

    • More daycounters have been added (Kai Lin in #163 fixing #162, #164)

    • The bonds pricers were update to changes in QuantLib 1.25 (Dirk)

  • Changes in RQuantLib package and setup:

    • Some package metadata was removed from the README.md (Dirk)

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the new rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

20 January, 2022 12:44PM

Caleb Adepitan

I'm Thinking About You Right Now!

Just in case you stumbled on this incidentally and you wonder "Who in the seven fat worlds is this mysterious...?" Ha! That was what I was thinking about you while you were thinking about me. You gerrit!?

I heard you listening to my thoughts; I listened to yours too. I wonder if you heard me too.

I would like to talk today about what it is I do at Debian as an Outreachy intern on the JavaScript team. I woke up this morning and decided to bore you with so many details. I must have woken up glorified!

A Broader View

My sole role at Debian, alongside my teammate and aided by our mentors, is to facilitate the Node.js 16 and Webpack 5 transition. What exactly does that mean?

Node.js 16, as of the time of this writing, is the active LTS release from the Node.js developers, while Webpack 5 is the current release from the Webpack developers. At Debian we have to work towards supporting these packages.

Debian as an OS comes with a package manager called the Advanced Package Tool, or simply APT, on which command-line programs specific to Debian and its many-flavored distributions (apt, apt-get, apt-cache) are based. This means apt was around long before the conception of yarn and npm, the typical JavaScript developer's package managers. Debian, unlike yarn and npm, ideally supports only one version of a piece of software at any point in time, and in edge cases may have to support an extra one, as noted in this chat between my mentor and a member.

To provide support for Webpack 5 and Node.js 16, which in Debian are currently in experimental and can only be migrated to unstable after our transition, we have to test, rebuild reverse dependencies, and report and fix bugs until a certain level of compatibility has been attained with dependent packages currently in unstable.

Webpack and Node.js have their own respective dependencies, but there are also certain software packages that depend on Webpack and/or Node.js; these are termed reverse dependencies. We have to test and build these reverse dependencies, and report and fix bugs and incompatibilities with the new versions of Webpack and Node.js. For reverse-dependent packages not yet supporting Webpack 5 and/or Node.js 16, we'll open an issue in the form of a feature request in the upstream repository asking for Webpack 5 and/or Node.js 16 support.

Debian manages the repositories of all supported packages on a GitLab-managed, Git-based VCS. For JavaScript packages maintained by the JS Team, the home of those packages sits at https://salsa.debian.org/js-team/.

Supported packages are pulled from the upstream repository, mostly GitHub, using certain packaging tools provided by Debian. The pulled source cannot be modified directly or it will break the build. So there exists a dedicated folder named “debian” where certain configuration files, scripts and rules that instruct the Debian package builder live. In some cases, the source code needs to be modified; this is done via patching, which means the modifications won't live in the source but in a dedicated patch file inside the debian/patches/ folder. The modifications are diffed line by line against the original source (just as with git) and the result is output in a file managed by the Debian utility tool Quilt. The contents of the debian folder are instructions on how to build the source into binaries or an installable archive, a .deb (like Java's .jar or Android's .apk).

Understanding Debian Software Release Cycle

There are quite a few interesting things about the software release cycle at Debian to get familiar with. Listed here are some release repositories alongside their codenames as of Debian 11:

  1. Unstable (Sid)
  2. Testing (Bookworm)
  3. Stable (Bullseye)
  4. Old stable (Buster)
  5. Old old stable (Stretch)

Ha! Isn't it ironic that unstable is the only one with a stable codename?

Some of these, if not all, have codenames subject to change after every new release and/or migration. Only unstable, which is referred to as Sid, never changes. The current stable release, which is Debian 11, is codenamed Bullseye. The next stable release, which will be Debian 12, will be codenamed Bookworm, because the current testing repository will be migrated to stable and released as Debian 12. The previous stable release, which was Debian 10 and is now old stable, was codenamed Buster. To better understand Debian releases you may take a look at this wiki that completely defines them.

Basically, as explained by one of my mentors and remixed in my own words, experimental software is migrated to unstable after (as I said earlier) it has attained a certain level of compatibility with dependent software. It remains in unstable for a long period of time undergoing testing, autopkgtest tests, regression tests, etc. At this point bugs are reported and fixed to a satisfactory level. The unstable repository is then migrated to testing, where release-critical bugs are reported and fixed to a satisfactory level, one where you can comfortably say "testing is almost stable", and voila (!), testing is released as a Debian stable version. This happens roughly every two years.

Some months before a new stable release, a soft freeze is turned on, such that no new versions or transitions should be uploaded to unstable. Only fixes will be uploaded at this point. Around 4-6 weeks before the release, a hard freeze is turned on that completely disallows uploading to unstable; not even fixes are allowed at that point. In due time, testing becomes the new stable release and the freeze is lifted.

References

  1. Packaging pre-requisites
  2. Working with chroots
  3. Sbuild (clean builds)
  4. Updating a Debian Package by Abraham Raji

20 January, 2022 10:17AM by Caleb Adepitan

Russ Allbery

DocKnot 7.01

Continuing to flush out bugs in the recent changes to my static web site generator.

I had missed some Unicode implications for how output from external programs was handled, and also missed Unicode decoding of the output from Pod::Thread, since Pod::Simple always encodes its output even if that output is to a scalar. I also missed an implication for how symlinks were handled in Path::Iterator::Rule, causing docknot spin to fail to copy files into the output tree that were symlinks in the input tree. Both of those bugs are fixed in this release.

I also fixed a minor output issue from the \size command, which was using SI units when it meant IEC units.

You can get the latest release from CPAN or from the DocKnot distribution page.

20 January, 2022 05:17AM

January 19, 2022

Joerg Jaspert

Funny CPU usage

Munin plugin and its CPU usage (shell fixup)

So at work we do have a munin server running, and one of the graphs we do for every system is a network statistics one with a resolution of 1 second. That’s a simple enough script to have, and it is working nicely - on 98% of our machines. You just don’t notice the data gatherer at all, so we also have some other graphs done with a 1 second resolution. For some, this really helps.

Basics

The basic code for this is simple. There is a bunch of stuff to start the background gathering, some to print out the config, and some to hand out the data when munin wants it. Plenty standard.

The interesting bit that goes wrong and uses too much CPU on one Linux Distribution is this:

run_acquire() {
   echo "$$" > ${pidfile}

   while :; do
     TSTAMP=$(date +%s)
     echo ${IFACE}_tx.value ${TSTAMP}:$(cat /sys/class/net/${IFACE}/statistics/tx_bytes ) >> ${cache}
     echo ${IFACE}_rx.value ${TSTAMP}:$(cat /sys/class/net/${IFACE}/statistics/rx_bytes ) >> ${cache}
     # Sleep for the rest of the second
     sleep 0.$(printf '%04d' $((10000 - 10#$(date +%4N))))
   done
}

That code works, and on none of Debian wheezy, stretch or buster, nor RedHat 6 or 7, does it show up: it just works, with no noticeable load generated.

Now, Oracle Linux 7 thinks differently. The above code run there generates between 8 and 15% CPU usage (on fairly recent Intel CPUs, but that shouldn’t matter). (CPU usage measured with the highly accurate method of running top and looking at what it tells me…)

Whyever.

Fixing

Ok, well, the code above isn’t all the nicest shell, actually. There is room for improvement. But beware, the older the bash, the less one can fix it.

  • So, first off, there are two useless uses of cat. Bash can do that for us; just use the $(< /PATH/TO/FILE ) way.
  • Oh, Bash5 knows the epoch directly, so we can replace the date call for the timestamp and use ${EPOCHSECONDS}.
  • Too bad Bash4 can’t do that. But hey, its builtin printf can help out: a nice TSTAMP=$(printf '%(%s)T\n' -1) works.
  • Unfortunately that needs Bash4.2 or later, not 4.1, and meh, we have a 4.1 system, so that one has to stay with the date call.

Taking that, we end up with 3 different possible versions, depending on the Bash on the system.

obtain5() {
  ## Purest bash version, Bash can tell us epochs directly
  echo ${IFACE}_tx.value ${EPOCHSECONDS}:$(</sys/class/net/${IFACE}/statistics/tx_bytes) >> ${cache}
  echo ${IFACE}_rx.value ${EPOCHSECONDS}:$(</sys/class/net/${IFACE}/statistics/rx_bytes) >> ${cache}
  # Sleep for the rest of the second
  sleep 0.$(printf '%04d' $((10000 - 10#$(date +%4N))))
}

obtain42() {
  ## Bash cant tell us epochs directly, but the builtin printf can
  TSTAMP=$(printf '%(%s)T\n' -1)
  echo ${IFACE}_tx.value ${TSTAMP}:$(</sys/class/net/${IFACE}/statistics/tx_bytes) >> ${cache}
  echo ${IFACE}_rx.value ${TSTAMP}:$(</sys/class/net/${IFACE}/statistics/rx_bytes) >> ${cache}
  # Sleep for the rest of the second
  sleep 0.$(printf '%04d' $((10000 - 10#$(date +%4N))))
}

obtain41() {
  ## Bash needs help from a tool to get epoch, means one exec() all the time
  TSTAMP=$(date +%s)
  echo ${IFACE}_tx.value ${TSTAMP}:$(</sys/class/net/${IFACE}/statistics/tx_bytes) >> ${cache}
  echo ${IFACE}_rx.value ${TSTAMP}:$(</sys/class/net/${IFACE}/statistics/rx_bytes) >> ${cache}
  # Sleep for the rest of the second
  sleep 0.$(printf '%04d' $((10000 - 10#$(date +%4N))))
}

run_acquire() {
   echo "$$" > ${pidfile}

   case ${BASH_VERSINFO[0]} in
     5) while :; do
          obtain5
        done
        ;;
     4) if [[ ${BASH_VERSINFO[1]} -ge 2 ]]; then
          while :; do
            obtain42
          done
        else
          while :; do
            obtain41
          done
        fi
        ;;
   esac
}

Does it help?

Oh yes, it does. Oracle Linux 7 appears to use Bash 4.2, so it uses obtain42, and hey, after removing one date and two cat calls it has a sane CPU usage of 0 (again, a highly accurate number generated from top…). Appears OL7 is doing heck-what-do-i-know extra when calling other tools, for whatever gains; removing that does help (who would have thought).

(Neither RedHat nor Oracle Linux has SELinux turned on, so that one shouldn’t bite. But it is clear OL7 is doing something extra for everything that bash spawns.)

19 January, 2022 08:56PM

January 17, 2022

Russ Allbery

DocKnot 7.00

The recent 6.01 release of my static web site generator was kind of a buggy mess, which uncovered a bunch of holes in my test suite and immediately turned up problems when I tried to use it to rebuild my actual web site. Most of the problems were Unicode-related; this release hopefully sorts out Unicode properly and handles it consistently.

Other bugs fixed include processing of old-style pointers in a spin input tree, several rather obvious bugs in the new docknot release command, and a few long-standing issues with docknot dist that should make its results more consistent and reliable.

I also got on a roll and finished the Path::Tiny transition in DocKnot, so now (nearly) all paths are internally represented as Path::Tiny objects. This meant changing some APIs, hence the version bump to 7.00.

For anyone who still does a lot of Perl, I highly recommend the Path::Tiny module. If you also write Python, you will be reminded (in a good way) of Python's pathlib module, which I now use whenever possible.
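
For the Python side of that comparison, a quick pathlib sketch (with hypothetical paths) showing the style of API both modules share:

from pathlib import Path

out = Path("output") / "notes"              # paths compose with the / operator
out.mkdir(parents=True, exist_ok=True)      # create the directory tree
page = out / "index.html"
page.write_text("<p>hello</p>\n")           # read/write without open() boilerplate
print(sorted(p.name for p in out.glob("*.html")))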

You can get the latest version of DocKnot from CPAN or from its distribution page.

17 January, 2022 09:41PM

Wouter Verhelst

Different types of Backups

In my previous post, I explained how I recently set up backups for my home server to be synced using Amazon's services. I received a (correct) comment on that by Iustin Pop which pointed out that while it is reasonably cheap to upload data into Amazon's offering, the reverse -- extracting data -- is not as cheap.

He is right, in that extracting data from S3 Glacier Deep Archive costs over an order of magnitude more than it costs to store it there on a monthly basis -- in my case, I expect to have to pay somewhere in the vicinity of 300-400 USD for a full restore. However, I do not consider this to be a major problem, as these backups only need to cover the rarer of the two types of backup cases.

There are two reasons why you should have backups.

The first is the most common one: "oops, I shouldn't have deleted that file". This happens reasonably often; people will occasionally delete or edit a file that they did not mean to, and then they will want to recover their data. At my first job, a significant part of my job was to handle recovery requests from users who had accidentally deleted a file that they still needed.

Ideally, backups to handle this type of situation are easily accessible to end users, and are performed reasonably frequently. A system that automatically creates and deletes filesystem snapshots (such as the zfsnap script for ZFS snapshots, which I use on my server) works well. The crucial bit here is to ensure that it is easier to copy an older version of a file than it is to start again from scratch -- if a user must file a support request that may or may not be answered within a day or so, it is likely they will not do so for a file they were working on for only half a day, which means they lose half a day of work in such a case. If, on the other hand, they can just go into the snapshots directory themselves and it takes them all of two minutes to copy their file, then they will also do that for files they only created half an hour ago, so they don't even lose half an hour of work and can get right back to it. This means that backup strategies to mitigate the "oops I lost a file" case ideally do not involve off-site file storage, and instead are performed online.

The second case is the much rarer one, but (when required) has the much bigger impact: "oops the building burned down". Variants of this can involve things like lightning strikes, thieves, earthquakes, and the like; in all cases, the point is that you want to be able to recover all your files, even if every piece of equipment you own is no longer usable.

That being the case, you will first need to replace that equipment, which is not going to be cheap, and it is also not going to be an overnight thing. In order to still be useful after you have lost all your equipment, the backups must also be stored off-site, and should preferably be offline backups, too. Since replacing your equipment is going to cost you time and money, it's fine if restoring the backups is going to take a while -- you can't really restore from backup any time soon anyway. And since you will lose a number of days of content that you can't create when you can only fall back on your off-site backups, it's fine if you also lose a few days of content that you will have to re-create.

All in all, the two types of backups have opposing requirements: "oops I lost a file" backups should be performed often and should be easily available; "oops I lost my building" backups should not be easily available, and are ideally done less often, so you don't pay a high amount of money for storage of your off-sites.

In my opinion, if you have good "lost my file" backups, then it's also fine if the recovery of your backups is a bit more expensive. You don't expect to ever have to pay for these; you may end up in a situation where you don't have a choice, and then you'll be happy that the choice is there, but as long as you can reasonably pay for the worst case scenario of a full restore, it's not a case you should be worried about much.

As such, and given that a full restore from Amazon Storage Gateway is going to be somewhere between 300 and 400 USD for my case -- a price I can afford, although it's not something I want to pay every day -- I don't think it's a major issue that extracting data is significantly more expensive than uploading data.

But of course, this is something everyone should consider for themselves...

17 January, 2022 03:43PM

Matthew Garrett

Boot Guard and PSB have user-hostile defaults

Compromising an OS without it being detectable is hard. Modern operating systems support the imposition of a security policy or the launch of some sort of monitoring agent sufficiently early in boot that even if you compromise the OS, you're probably going to have left some sort of detectable trace[1]. You can avoid this by attacking the lower layers - if you compromise the bootloader then it can just hotpatch a backdoor into the kernel before executing it, for instance.

This is avoided via one of two mechanisms. Measured boot (such as TPM-based Trusted Boot) makes a tamper-proof cryptographic record of what the system booted, with each component in turn creating a measurement of the next component in the boot chain. If a component is tampered with, its measurement will be different. This can be used to either prevent the release of a cryptographic secret if the boot chain is modified (for instance, using the TPM to encrypt the disk encryption key), or can be used to attest the boot state to another device which can tell you whether you're safe or not. The other approach is verified boot (such as UEFI Secure Boot), where each component in the boot chain verifies the next component before executing it. If the verification fails, execution halts.
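
As a toy illustration of the measured-boot half (this mirrors no particular TPM stack's API, it is just the hash-chaining idea), each stage measures the next one into a running register before handing over control, so the final value depends on every component that ran:

  import hashlib

  def extend(pcr: bytes, measurement: bytes) -> bytes:
      # A TPM "extend" replaces the register with H(old value || measurement),
      # so the result depends on the entire sequence of measurements.
      return hashlib.sha256(pcr + measurement).digest()

  # Pretend boot chain: each stage is measured before it is executed.
  boot_chain = [b"firmware image", b"bootloader", b"kernel", b"initramfs"]

  pcr = bytes(32)  # PCRs start out as all zeroes
  for component in boot_chain:
      pcr = extend(pcr, hashlib.sha256(component).digest())

  print("final PCR value:", pcr.hex())

Tamper with any one component and the final value changes, which is what lets sealed secrets or a remote verifier notice.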

In both cases, each component in the boot chain measures and/or verifies the next. But something needs to be the first link in this chain, and traditionally this was the system firmware. Which means you could tamper with the system firmware and subvert the entire process - either have the firmware patch the bootloader in RAM after measuring or verifying it, or just load a modified bootloader and lie about the measurements or ignore the verification. Attackers had already been targeting the firmware (Hacking Team had something along these lines, although this was pre-secure boot so just dropped a rootkit into the OS), and given a well-implemented measured and verified boot chain, the firmware becomes an even more attractive target.

Intel's Boot Guard and AMD's Platform Secure Boot attempt to solve this problem by moving the validation of the core system firmware to an (approximately) immutable environment. Intel's solution involves the Management Engine, a separate x86 core integrated into the motherboard chipset. The ME's boot ROM verifies a signature on its firmware before executing it, and once the ME is up it verifies that the system firmware's bootblock is signed using a public key that corresponds to a hash blown into one-time programmable fuses in the chipset. What happens next depends on policy - it can either prevent the system from booting, allow the system to boot to recover the firmware but automatically shut it down after a while, or flag the failure but allow the system to boot anyway. Most policies will also involve a measurement of the bootblock being pushed into the TPM.

AMD's Platform Secure Boot is slightly different. Rather than the root of trust living in the motherboard chipset, it's in AMD's Platform Security Processor which is incorporated directly onto the CPU die. Similar to Boot Guard, the PSP has ROM that verifies the PSP's own firmware, and then that firmware verifies the system firmware signature against a set of blown fuses in the CPU. If that fails, system boot is halted. I'm having trouble finding decent technical documentation about PSB, and what I have found doesn't mention measuring anything into the TPM - if this is the case, PSB only implements verified boot, not measured boot.

What's the practical upshot of this? The first is that you can't replace the system firmware with anything that doesn't have a valid signature, which effectively means you're locked into firmware the vendor chooses to sign. This prevents replacing the system firmware with either a replacement implementation (such as Coreboot) or a modified version of the original implementation (such as firmware that disables locking of CPU functionality or removes hardware allowlists). In this respect, enforcing system firmware verification works against the user rather than benefiting them.
Of course, it also prevents an attacker from doing the same thing, but while this is a real threat to some users, I think it's hard to say that it's a realistic threat for most users.

The problem is that vendors are shipping with Boot Guard and (increasingly) PSB enabled by default. In the AMD case this causes another problem - because the fuses are in the CPU itself, a CPU that's had PSB enabled is no longer compatible with any motherboards running firmware that wasn't signed with the same key. If a user wants to upgrade their system's CPU, they're effectively unable to sell the old one. But in both scenarios, the user's ability to control what their system is running is reduced.

As I said, the threat that these technologies seek to protect against is real. If you're a large company that handles a lot of sensitive data, you should probably worry about it. If you're a journalist or an activist dealing with governments that have a track record of targeting people like you, it should probably be part of your threat model. But otherwise, the probability of you being hit by a purely userland attack is so ludicrously high compared to you being targeted this way that it's just not a big deal.

I think there's a more reasonable tradeoff than where we've ended up. Tying things like disk encryption secrets to TPM state means that if the system firmware is measured into the TPM prior to being executed, we can at least detect that the firmware has been tampered with. In this case nothing prevents the firmware being modified, there's just a record in your TPM that it's no longer the same as it was when you encrypted the secret. So, here's what I'd suggest:

1) The default behaviour of technologies like Boot Guard or PSB should be to measure the firmware signing key and whether the firmware has a valid signature into PCR 7 (the TPM register that is also used to record which UEFI Secure Boot signing key is used to verify the bootloader).
2) If the PCR 7 value changes, the disk encryption key release will be blocked, and the user will be redirected to a key recovery process. This should include remote attestation, allowing the user to be informed that their firmware signing situation has changed.
3) Tooling should be provided to switch the policy from merely measuring to verifying, and users at meaningful risk of firmware-based attacks should be encouraged to make use of this tooling.

This would allow users to replace their system firmware at will, at the cost of having to re-seal their disk encryption keys against the new TPM measurements. It would provide enough information that, in the (unlikely for most users) scenario that their firmware has actually been modified without their knowledge, they can identify that. And it would allow users who are at high risk to switch to a higher security state, and for hardware that is explicitly intended to be resilient against attacks to have different defaults.
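
Sketched as pseudo-policy (the PCR contents and key handling here are simplified stand-ins, not how any shipping TPM stack actually implements sealing), the proposed default behaves something like this:

  import hashlib
  import secrets
  from typing import Optional

  # Hypothetical stand-ins for real TPM state and a real sealed secret.
  expected_pcr7 = hashlib.sha256(b"vendor signing key || signature valid").digest()
  disk_key = secrets.token_bytes(32)

  def release_disk_key(current_pcr7: bytes) -> Optional[bytes]:
      # Measure, don't block boot, but refuse to hand out the sealed secret
      # if the firmware signing situation has changed.
      if current_pcr7 == expected_pcr7:
          return disk_key
      print("PCR 7 mismatch: firmware signing key or signature state changed,")
      print("redirecting the user to key recovery and remote attestation")
      return None

  # Normal boot: the key is released.
  assert release_disk_key(expected_pcr7) == disk_key

  # Firmware replaced (e.g. with a user-built image): boot continues, key withheld.
  tampered = hashlib.sha256(b"differently-signed or unsigned firmware").digest()
  assert release_disk_key(tampered) is None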

This is frustratingly close to possible with Boot Guard, but I don't think it's quite there. Before you've blown the Boot Guard fuses, the Boot Guard policy can be read out of flash. This means that you can drop a Boot Guard configuration into flash telling the ME to measure the firmware but not prevent it from running. But there are two problems remaining:

1) The measurement is made into PCR 0, and PCR 0 changes every time your firmware is updated. That makes it a bad default for sealing encryption keys.
2) It doesn't look like the policy is measured before being enforced. This means that an attacker can simply reflash modified firmware with a policy that disables measurement and then make a fake measurement that makes it look like the firmware is ok.

Fixing this seems simple enough - the Boot Guard policy should always be measured, and measurements of the policy and the signing key should be made into a PCR other than PCR 0. If an attacker modified the policy, the PCR value would change. If an attacker modified the firmware without modifying the policy, the PCR value would also change. People who are at high risk would run an app that would blow the Boot Guard policy into fuses rather than just relying on the copy in flash, and enable verification as well as measurement. Now if an attacker tampers with the firmware, the system simply refuses to boot and the attacker doesn't get anything.

Things are harder on the AMD side. I can't find any indication that PSB supports measuring the firmware at all, which obviously makes this approach impossible. I'm somewhat surprised by that, and so wouldn't be surprised if it does do a measurement somewhere. If it doesn't, there's a rather more significant problem - if a system has a socketed CPU, and someone has sufficient physical access to replace the firmware, they can just swap out the CPU as well with one that doesn't have PSB enabled. Under normal circumstances the system firmware can detect this and prompt the user, but given that the attacker has just replaced the firmware we can assume that they'd do so with firmware that doesn't decide to tell the user what just happened. In the absence of better documentation, it's extremely hard to say that PSB actually provides meaningful security benefits.

So, overall: I think Boot Guard protects against a real-world attack that matters to a small but important set of targets. I think most of its benefits could be provided in a way that still gave users control over their system firmware, while also permitting high-risk targets to opt-in to stronger guarantees. Based on what's publicly documented about PSB, it's hard to say that it provides real-world security benefits for anyone at present. In both cases, what's actually shipping reduces the control people have over their systems, and should be considered user-hostile.

[1] Assuming that someone's both turning this on and actually looking at the data produced

17 January, 2022 04:37AM

January 16, 2022

Chris Lamb

Favourite films of 2021

In my four most recent posts, I went over the memoirs and biographies, the non-fiction, the fiction and the 'classic' novels that I enjoyed reading the most in 2021. But in the very last of my 2021 roundup posts, I'll be going over some of my favourite movies. (Saying that, these are perhaps less of my 'favourite films' than the ones worth remarking on — after all, nobody needs to hear that The Godfather is a good movie.)

It's probably helpful to mention that I took a self-directed course in film history in 2021, based around the first volume of Roger Ebert's The Great Movies. This collection of 100-odd movie essays aims to “make a tour of the landmarks of the first century of cinema,” and I watched all but a handful before the year was out. I am slowly making my way through volume two in 2022. This tome was tremendously useful, and not simply due to the background context that Ebert added to each film: it also brought me into contact with films I would hardly have come across through other means. Would I have ever discovered the sly comedy of Trouble in Paradise (1932) or the touching proto-realism of L'Atalante (1934) any other way? It also helped me to 'get around' to watching films I may have put off watching forever — the influential Battleship Potemkin (1925), for instance, and the ur-epic Lawrence of Arabia (1962) spring to mind here.

Choosing a 'worst' film is perhaps more difficult than choosing the best. There are first those that left me completely dry (Ready or Not, Written on the Wind, etc.), and those that were simply poorly executed. And there are those that failed to meet their own high opinions of themselves, such as the 'made for Reddit' Tenet (2020) or the inscrutable Vanilla Sky (2001) — the latter being an almost perfect example of late-20th century cultural exhaustion.

But I must save my most severe judgement for those films where I took a visceral dislike to how their subjects were portrayed. The sexually problematic Sixteen Candles (1984) and the pseudo-Catholic vigilantism of The Boondock Saints (1999) both spring to mind here, the latter of which combines so many things I dislike into such a short running time that I'd need an entire essay to adequately express how much I disliked it.

§

Dogtooth (2009)

A father, a mother, a brother and two sisters live in a large and affluent house behind a very high wall and an always-locked gate. Only the father ever leaves the property, driving to the factory that he happens to own. Dogtooth goes far beyond any allusion to Josef Fritzl's cellar, though, as the children's education is a grotesque parody of home-schooling. Here, the parents deliberately teach their children the wrong meaning of words (e.g. a yellow flower is called a 'zombie'), all of which renders the outside world utterly meaningless and unreadable, and makes its very existence completely mystifying. It is this creepy strangeness within a 'regular' family unit in Dogtooth that is both socially and epistemically horrific, and I'll say nothing here of its sexual elements as well.

Despite its cold, inscrutable and deadpan surreality, Dogtooth invites all manner of potential interpretations. Is this film about the artificiality of the nuclear family that the West insists is the benchmark of normality? Or is it, as I prefer to believe, something more visceral altogether: an allegory for the various forms of ontological violence wrought by fascism, as well as a sobering nod towards some of fascism's inherent appeals? (Perhaps it is both. In 1972, French poststructuralists Gilles Deleuze and Félix Guattari wrote Anti-Oedipus, which plays with the idea of the family unit as a metaphor for the authoritarian state.) The Greek-language Dogtooth, elegantly shot, thankfully provides no easy answers.

§

Holy Motors (2012)

There is an infamous scene in Un Chien Andalou, the 1929 film collaboration between Luis Buñuel and famed artist Salvador Dalí. A young woman is cornered in her own apartment by a threatening man, and she reaches for a tennis racquet in self-defence. But the man suddenly picks up two nearby ropes and drags into the frame two large grand pianos... each leaden with a dead donkey, a stone tablet, a pumpkin and a bewildered priest.

This bizarre sketch serves as a better introduction to Leos Carax's Holy Motors than any elementary outline of its plot, which ostensibly follows 24 hours in the life of a man who must play a number of extremely diverse roles around Paris... all for no apparent reason. (And is he even a man?) Surrealism as an art movement gets a pretty bad rap these days, and perhaps justifiably so. But Holy Motors and Un Chien Andalou serve as a good reminder that surrealism can be, well, 'good, actually'. And if not quite high art, Holy Motors at least demonstrates that surrealism can still be unnerving and hilariously funny. Indeed, recalling the whimsy of the plot to a close friend, the tears of laughter came unbidden to my eyes once again. ("And then the limousines...!")

Still, it is unclear how Holy Motors truly refreshes surrealism for the twenty-first century. Surrealism was, in part, a reaction to the mechanical and unfeeling brutality of World War I and ultimately sought to release the creative potential of the unconscious mind. Holy Motors cannot be responding to another continental conflagration, and so it appears to me to be some kind of commentary on the roles we exhibit in an era of 'post-postmodernity': a sketch on our age of performative authenticity, perhaps, or an idle doodle on the psychosocial function of work.

Or perhaps not. After all, this film was produced in a time that offers the near-universal availability of mind-altering substances, and this certainly changes the context in which this film was both created and, how can I put it, intended to be watched.

§

Manchester by the Sea (2016)

An absolutely devastating portrayal of a character who is unable to forgive himself and is hesitant to engage with anyone ever again. It features a near-ideal balance between portraying unrecoverable anguish and tender warmth, and is paradoxically grandiose in its subtle intimacy. The mechanics of life led me to watch this lying on a bed in a chain hotel by Heathrow Airport, and if this colourless circumstance blunted the film's emotional impact on me, I am probably thankful for it. Indeed, I find myself reduced in this review to fatuously recalling my favourite interactions instead of providing any real commentary. You could write a whole essay about one particular incident: its surfaces, subtexts and angles... all despite nothing of any substance ever being communicated. Truly stunning.

§

McCabe & Mrs. Miller (1971)

Roger Ebert called this movie “one of the saddest films I have ever seen, filled with a yearning for love and home that will not ever come.” But whilst it is difficult to disagree with his sentiment, Ebert's choice of “sad” is somehow not quite the right word. Indeed, I've long regretted that our dictionaries don't have more nuanced blends of tragedy and sadness; perhaps the Ancient Greeks can loan us some.

Nevertheless, the plot of this film is of a gambler and a prostitute who become business partners in a new and remote mining town called Presbyterian Church. However, as their town and enterprise booms, it comes to the attention of a large mining corporation who want to bully or buy their way into the action. What makes this film stand out is not the plot itself, however, but its mood and tone — the town and its inhabitants seem to be thrown together out of raw lumber, covered alternately in mud or frozen ice, and their days (and their personalities) are both short and dark in equal measure.

As a brief aside, if you haven't seen a Robert Altman film before, this has all the trappings of being a good introduction. As Ebert went on to observe: “This is not the kind of movie where the characters are introduced. They are all already here.” Furthermore, we can see some of Altman's trademark conversations that overlap, a superb handling of ensemble casts, and a quietly subversive view of the tyranny of 'genre'... and the latter in a time when the appetite for revisionist portrayals of the West was not very strong. All of these 'Altmanian' trademarks can be ordered in much stronger measures in his later films: in particular, his comedy-drama Nashville (1975) has 24 main characters, and my jejune interpretation of Gosford Park (2001) is that it is purposefully designed to poke fun at those who take a reductionist view of 'genre', or at least at the audience's expectations. (In this case, an Edwardian-era English murder mystery in the style of Agatha Christie, but where no real murder or detection really takes place.)

On the other hand, McCabe & Mrs. Miller is actually a poor introduction to Altman. The story is told in a suitably deliberate and slow tempo, and the two stars of the film are shown thoroughly defrocked of any 'star status', in both the visual and moral dimensions. All of these traits are, however, this film's strength, adding up to a credible, fascinating and riveting portrayal of the old West.

§

Detour (1945)

Detour was filmed in less than a week, and it's difficult to decide — out of the actors and the screenplay — which is its weakest point.... Yet it still somehow seemed to drag me in.

The plot revolves around luckless Al, who is hitchhiking to California. Al gets a lift from a man called Haskell who quickly falls down dead from a heart attack. Al hastily buries the body and takes Haskell's money, car and identification, fearing that the police will believe he murdered him. An unstable element is soon introduced in the guise of Vera, who, through a set of coincidences that stretches credulity, knows that this 'new' Haskell (ie. Al pretending to be him) is not who he seems. Vera then attaches herself to Al in order to blackmail him, and the world starts to spin out of his control.

It must be understood that none of this is executed very well. Rather, what makes Detour so interesting to watch is that its 'errors' lend a distinctively creepy and unnatural hue to the film. Indeed, in the early twentieth century, Sigmund Freud used the word unheimlich to describe the experience of something that is not simply mysterious, but creepy in a strangely familiar way. This is almost the perfect description of watching Detour — its eerie nature means that we not only frequently second-guess where the film is going, but are often uncertain whether we are watching the usual objective perspective offered by cinema.

In particular, are all the ham-fisted segues, stilted dialogue and inscrutable character motivations actually a product of Al inventing a story for the viewer? Did he murder Haskell after all, despite the film 'showing' us that Haskell died of natural causes? In other words, are we watching what Al wants us to believe? Regardless of the answers to these questions, the film succeeds precisely because of its accidental or inadvertent choices, so it is an implicit reminder that seeking the director's original intention in any piece of art is a complete mirage. Detour is certainly not a good film, but it just might be a great one. (It is a short film too, and, out of copyright, it is available online for free.)

§

Safe (1995)

Safe is a subtly disturbing film about an upper-middle-class housewife who begins to complain about vague symptoms of illness. Initially claiming that she “doesn't feel right,” Carol starts to have unexplained headaches, a dry cough and nosebleeds, and eventually begins to have trouble breathing. Carol's family doctor treats her concerns with little care, and suggests to her husband that she see a psychiatrist.

Yet Carol's episodes soon escalate. For example, as a 'homemaker' and with nothing else to occupy her, Carol orders a new couch for a party. But when the store delivers the wrong one (although it is not altogether clear that they did), Carol has a near breakdown. Unsure where to turn, Carol consults an 'allergist', who tells her she has "Environmental Illness," and she eventually checks herself into a new-age commune filled with alternative therapies.

On the surface, Safe is thus a film about the increasing amount of pesticides and chemicals in our lives, something that was clearly felt far more viscerally in the 1990s. But it is also a film about how the lack of genuine healthcare for women must be seen as a critical factor in the rise of crank medicine. (Indeed, it made for something of an uncomfortable watch during the coronavirus lockdown.) More interestingly, however, Safe gently-yet-critically examines the psychosocial causes that may be aggravating Carol's illnesses, including her vacant marriage, her hollow friends and the 'empty calorie' stimulus of suburbia. None of this should be especially new to anyone: the gendered Victorian term 'hysterical' is often all but spoken throughout this film, and perhaps from the very invention of modern medicine, women's symptoms have regularly been minimised or outright dismissed. (Hilary Mantel's 2003 memoir, Giving Up the Ghost, is especially harrowing on this.)

As I said at the start of this review, the film is subtle in its messaging. Just to take one example from many, the sound of the cars is always just a fraction too loud: there's a scene where a group is eating dinner with a road in the background, and the total effect can be seen as representing the toxic fumes of modernity invading our social lives and health. I won't spoil the conclusion of this quietly devastating film, but don't expect a happy ending.

§

The Driver (1978)

Critics grossly misunderstood The Driver when it was first released. They interpreted the cold and unemotional affect of the characters as a lack of developmental depth, rather than as representing their dissociation from the society around them. This reading was encouraged by the fact that the principal actors aren't given real names and are instead known simply by their archetypes: 'The Driver', 'The Detective', 'The Player' and so on. This sort of quasi-Jungian erudition is common in many crime films today (Reservoir Dogs, Kill Bill, Layer Cake, Fight Club), so the critics' misconceptions were entirely reasonable in 1978.

The plot of The Driver involves the eponymous Driver, a noted getaway driver for robberies in Los Angeles. His exceptional talent has prevented him from being captured thus far, so the Detective attempts to catch the Driver by pardoning another gang if they help convict the Driver via a set-up robbery. To give himself an edge, however, The Driver seeks help from the femme fatale 'Player' in order to mislead the Detective.

If this all sounds eerily familiar, you would not be far wrong. The film was essentially remade by Nicolas Winding Refn as Drive (2011) and by Edgar Wright in his 2017 Baby Driver. Yet The Driver offers something that these neon-noir variants do not. In particular, the car chases around Los Angeles are some of the most captivating I've seen: they aren't thrilling in the sense of tyre squeals, explosions and flying boxes, but rather the vehicles come across like wild animals hunting one another. This feels especially so when the police are hunting The Driver, which feels less like a low-stakes game of cat and mouse than a pack of feral animals working together — a gang who will tear apart their prey if they find him. In contrast to the undercar neon glow of the Fast & Furious franchise, the urban-realist backdrop of The Driver's LA metropolis contributes to a sincere feeling of artistic fidelity as well.

To be sure, most of this is present in the truly-excellent Drive, where the chase scenes do really communicate a credible sense of stakes. But the substitution of The Driver's grit with Drive's soft neon tilts it slightly towards that common affliction of crime movies: style over substance. Nevertheless, I can highly recommend watching The Driver and Drive together, as it can tell you a lot about the disconnected socioeconomic practices of the 1980s compared to the 2010s. More than that, however, the pseudo-1980s synthwave soundtrack of Drive captures something crucial to analysing the world of today. In particular, these 'sounds from the past filtered through the present' bring to mind the increasing role of nostalgia for lost futures in the culture of today, where temporality and pop culture references are almost-exclusively citational and commemorational.

§

The Souvenir (2019)

The ostensible outline of this quietly understated film follows a shy but ambitious film student who falls into an emotionally fraught relationship with a charismatic but untrustworthy older man. But that doesn't quite cover the plot at all, for not only is The Souvenir a film about a young artist who is inspired, derailed and ultimately strengthened by a toxic relationship, it is also partly a coming-of-age drama, a subtle portrait of class and, finally, a film about the making of a film.

Still, one of the geniuses of this truly heartbreaking movie is that none of these many elements crowds out the other. It never, ever feels rushed. Indeed, there are many scenes where the camera simply 'sits there' and quietly observes what is going on. Other films might smother themselves through references to 18th-century oil paintings, but The Souvenir somehow evades this too. And there's a certain ring of credibility to the story as well, no doubt in part due to the fact it is based on director Joanna Hogg's own experiences at film school. A beautifully observed and multi-layered film; I'll be happy if the sequel is one-half as good.

§

The Wrestler (2008)

Randy 'The Ram' Robinson is long past his prime, but he is still rarin' to go in the local pro-wrestling circuit. Yet after a brutal beating that seriously threatens his health, Randy hangs up his tights and pursues a serious relationship... and even tries to reconnect with his estranged daughter. But Randy can't resist the lure of the ring, and readies himself for a comeback.

The stage is thus set for Darren Aronofsky's The Wrestler, which is essentially about what drives Randy back to the ring. To be sure, Randy derives from wrestling not just much of his money but also his 'fitness', self-image, self-esteem and self-worth. Oh, it's no use insisting that wrestling is fake, for the sport is, needless to say, Randy's identity; it's not for nothing that this film is called The Wrestler.

In a number of ways, The Sound of Metal (2019) is both a reaction to (and a quiet remake of) The Wrestler, if only because both movies utilise 'cool' professions to explore such questions of identity. But perhaps it is simply when The Wrestler was produced that makes it the superior film. Indeed, the role of time feels very important for The Wrestler. In the first instance, time is clearly taking its toll on Randy's body, but I felt it more strongly in the sense that this was very much a pre-2008 film, released on the cliff-edge of the global financial crisis and the concomitant precarity of the 2010s.

Indeed, it is curious to consider that you couldn't make The Wrestler today, although not because the relationship to work has changed in any fundamental way. (Indeed, isn't it somewhat depressing to realise that, since the start of the pandemic and putting the 'work from home' trend to one side, we now require even more people to wreck their bodies and mental health to cover their bills?) No, what I mean to say here is that, post-2016, you cannot portray wrestling on-screen without, how can I put it, unwelcome connotations. All of which then reminds me of Minari's notorious red hat...

But I digress. The Wrestler is a grittily stark, darkly humorous look into the life of a desperate man and a sorrowful world, all through one tragic profession.

§

Thief (1981)

Frank is an expert professional safecracker and specialises in high-profile diamond heists. He plans to use his ill-gotten gains to retire from crime and build a life for himself with a wife and kids, so he signs on with a top gangster for one last big score. This, of course, could be the plot to any number of heist movies, but Thief does something different. Similar to The Wrestler and The Driver (see above) and a number of other films that I watched this year, Thief seems to be saying something about our relationship to work and family in modernity and postmodernity.

Indeed, the 'heist film', we are told, is an understudied genre, but part of the pleasure of watching these films is said to arise from how they portray our desired relationship to work. In particular, Frank's desire to pull off that last big job feels less about the money it would bring him than a displacement of (or proxy for) some deep-down desire to have a family, or indeed any relationship at all. Because in theory, of course, Frank could enter into a fulfilling long-term relationship right away, without stealing millions of dollars in diamonds... but that's kinda the entire point: Frank needing just one more theft is an excuse to not pursue a relationship and put it off indefinitely in favour of 'work'. (And being Federal crimes, it also means Frank cannot put down meaningful roots in a community.) All this is communicated extremely subtly in the justly-lauded lowkey diner scene, by far the best scene in the movie.

The visual aesthetic of Thief is as if you set The Warriors (1979) in a similarly-filthy Chicago, with the Xenophon-inspired plot of The Warriors replaced with an almost deliberate lack of plot development... and the allure of The Warriors' fantastical criminal gangs (with their alluringly well-defined social identities) substituted by a bunch of amoral individuals with no solidarity beyond the immediate moment. A tale of our time, perhaps.

I should warn you that the ending of Thief is famously weak, but this is a gritty, intelligent and strangely credible heist movie before you get there.

§

Uncut Gems (2019)

The most exhausting film I've seen in years; the cinematic equivalent of four cups of double espresso, I didn't even bother trying to sleep after downing Uncut Gems late one night. Directed by the Safdie brothers, it often felt like I was watching two films that had been made at the same time. (Or do I mean two films at 2X speed?)

No, whatever clumsy metaphor you choose to adopt, the unavoidable effect of this film's finely-tuned chaos is an uncompromising and anxiety-inducing piece of cinema. The plot follows Howard as a man lost to his countless vices — mostly gambling with a significant side hustle in adultery, but you get the distinct impression he would be happy with anything that will give him another high. A true junkie's junkie, you might say. You know right from the beginning it's going to end in some kind of disaster, the only question remaining is precisely how and what.

Portrayed by an (almost unrecognisable) Adam Sandler, there's an uncanny sense of distance in the emotional chasm between 'Sandler-as-junkie' and 'Sandler-as-regular-star-of-goofy-comedies'. Yet instead of being distracting and reducing the film's affect, this possibly-deliberate intertextuality somehow adds to the masterfully-controlled mayhem. My heart races just at the memory. Oof.

§

Woman in the Dunes (1964)

I ended up watching three films that feature sand this year: Denis Villeneuve's Dune (2021), Lawrence of Arabia (1962) and Woman in the Dunes. But it is this last 1964 film by Hiroshi Teshigahara that will stick in my mind in the years to come. Sure, there is none of the Medician intrigue of Dune or the Super Panavision-70 of Lawrence of Arabia (or its quasi-orientalist score, itself likely stolen from Anton Bruckner's 6th Symphony), but Woman in the Dunes doesn't have to assert its confidence so boldly, and it reveals the enormity of its plot slowly and deliberately instead. Woman in the Dunes never rushes to get to the film's central dilemma, and it uncovers its terror in little hints and insights, all whilst establishing the daily rhythm of life.

Woman in the Dunes has something of the same uncanny horror as Dogtooth (see above), as well as its broad range of potential interpretations. Both films permit a wide array of readings, without resorting to being deliberately obscurantist or just plain random — it is perhaps for this reason that I enjoyed them so much. It is true that asking 'So what does the sand mean?' sounds tediously sophomoric shorn of any context, but it somehow applies to this thoughtfully self-contained piece of cinema.

§

A Quiet Place (2018)

Although A Quiet Place was not actually one of the best films I saw this year, I'm including it here as it is certainly one of the better 'mainstream' Hollywood franchises I came across. Not only is the film very ably constructed, it also engages on a visceral level: it is rare that I can empathise with the peril of conventional horror movies (I usually prefer to focus on their cultural and political aesthetics), but I did here.

The conceit of this particular post-apocalyptic world is that a family is forced to live in almost complete silence while hiding from creatures that hunt by sound alone. Still, A Quiet Place engages on an intellectual level too, and this probably works in tandem with the purely horrific elements to make it stick in your mind. In particular, and to my mind at least, A Quiet Place is, below the surface, a deeply American conservative film: it exalts the family structure and a certain kind of sacrifice for your family. (The music often had a passacaglia-like strain too, forming a tombeau for America.) Moreover, you survive in this dystopia by staying quiet — that is to say, by staying stoic — suggesting that in the wake of any conflict that might beset the world, the best thing to do is to keep quiet. Even communicating with your loved ones can be deadly to both of you, so do not emote, acquiesce quietly to your fate, and don't, whatever you do, speak up. (Or join a union.)

I could go on, but A Quiet Place is more than this. It's taut and brief, and despite cinema being an increasingly visual medium, it encourages its audience to develop a new relationship with sound.

16 January, 2022 06:44PM

Wouter Verhelst

Backing up my home server with Bacula and Amazon Storage Gateway

I have a home server.

Initially conceived and sized so I could digitize my (rather sizeable) DVD collection, I started using it for other things; I added a few play VMs on it, started using it as a destination for the deja-dup-based backups of my laptop and the Time Machine-based ones of the various Macs in the house, and used it as the primary location of all the photos I've taken with my cameras over the years (currently taking up somewhere around 500G) as well as those that were taken at our wedding (another 100G). To add to that, I've copied the data that my wife had on various older laptops and external hard drives onto this home server as well, so that we don't lose the data should something happen to one or more of these bits of older hardware.

Needless to say, the server was running full, so a few months ago I replaced the 4x2T hard drives that I originally put in the server with 4x6T ones, and there was much rejoicing.

But then I started considering what I was doing. Originally, the intent was for the server to contain DVD rips of my collection; if I were to lose the server, I could always re-rip the collection and recover that way (unless something happened that caused me to lose both at the same time, of course, but I consider that sufficiently unlikely that I don't want to worry about it). Much of the new data on the server, however, cannot be recovered like that; if the server dies, I lose my photos forever, with no way of recovering them. Obviously that can't be okay.

So I started looking at options to create backups of my data, preferably in ways that make it easily doable for me to automate the backups -- because backups that have to be initiated are backups that will be forgotten, and backups that are forgotten are backups that don't exist. So let's not try that.

When I was still self-employed in Belgium and running a consultancy business, I sold a number of lower-end tape libraries for which I then configured bacula, and I preferred a solution that would be similar to that without costing an arm and a leg. I did have a look at a few second-hand tape libraries, but even second hand these are still way outside what I can budget for this kind of thing, so that was out too.

After looking at a few solutions that seemed very hackish and would require quite a bit of handholding (which I don't think is a good idea), I remembered that a few years ago, I had a look at the Amazon Storage Gateway for a customer. This gateway provides a virtual tape library with 10 drives and 3200 slots (half of which are import/export slots) over iSCSI. The idea is that you install the VM on a local machine, you connect it to your Amazon account, you connect your backup software to it over iSCSI, and then it syncs the data that you write to Amazon S3, with the ability to archive data to S3 Glacier or S3 Glacier Deep Archive. I didn't end up using it at the time because it required a VMWare virtualization infrastructure (which I'm not interested in), but I found out that these days, they also provide VM images for Linux KVM-based virtual machines (amongst others), so that changes things significantly.

After making a few calculations, I figured out that for the amount of data that I would need to back up, I would require a monthly budget of somewhere between 10 and 20 USD if the bulk of the data would be on S3 Glacier Deep Archive. This is well within my means, so I gave it a try.
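
For what it's worth, the arithmetic behind that estimate is roughly the following; the per-GB prices are the approximate early-2022 us-east-1 list prices as I understand them, and the data volumes are illustrative round numbers, so treat this as a sketch rather than a quote:

  # Back-of-the-envelope monthly storage cost; prices and volumes are assumptions.
  DEEP_ARCHIVE_PER_GB = 0.00099  # USD per GB-month, S3 Glacier Deep Archive
  S3_STANDARD_PER_GB = 0.023     # USD per GB-month, recent data still in S3

  archived_gb = 6000  # the bulk of the backups, archived to Deep Archive
  staging_gb = 200    # the most recent tapes, not yet archived

  monthly = archived_gb * DEEP_ARCHIVE_PER_GB + staging_gb * S3_STANDARD_PER_GB
  print(f"estimated storage cost: ~{monthly:.2f} USD/month")
  # roughly 10.5 USD/month, before request and retrieval charges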

The VM's technical requirements state that you need to assign four vCPUs and 16GiB of RAM, which just so happens to be the exact amount of RAM and CPU that my physical home server has. Obviously we can't do that. I tried getting away with 4GiB and 2 vCPUs, but that didn't work; the backup failed out after about 500G out of 2T had been written, due to the VM running out of resources. On the VM's console I found complaints that it required more memory, and I saw it mention something in the vicinity of 7GiB instead, so I decided to try again, this time with 8GiB of RAM rather than 4. This worked, and the backup was successful.

As far as bacula is concerned, the tape library is just a (very big...) normal tape library, and I got data throughput of about 30M/s while the VM's upload buffer hadn't run full yet, with things slowing down to pretty much my Internet line speed when it had. With those speeds, Bacula finished the backup successfully in "1 day 6 hours 43 mins 45 secs", although the storage gateway was still uploading things to S3 Glacier for a few hours after that.

All in all, this seems like a viable backup solution for large(r) amounts of data, although I haven't yet tried to perform a restore.

16 January, 2022 09:06AM

Russell Coker

SSD Endurance

I previously wrote about the issue of swap potentially breaking SSD [1]. My conclusion was that swap wouldn’t be a problem as no normally operating systems that I run had swap using any significant fraction of total disk writes. In that post the most writes I could see was 128GB written per day on a 120G Intel SSD (writing the entire device once a day).

My post about swap and SSD was based on the assumption that you could get many thousands of writes to the entire device, which was incorrect. Here’s a background on the terminology from WD [2]. So in the case of the 120G Intel SSD I was doing over 1 DWPD (Drive Writes Per Day), which is in the middle of the range of SSD capability; Intel doesn’t specify the DWPD or TBW (Terabytes Written) for that device.

The most expensive and high end NVMe device sold by my local computer store is the Samsung 980 Pro which has a warranty of 150TBW for the 250G device and 600TBW for the 1TB device [3]. That means that the system which used to have an Intel SSD would have exceeded the warranty in 3 years if it had a 250G device.

My current workstation has been up for just over 7 days and has averaged 110GB written per day. It has some light VM use and the occasional kernel compile, a fairly typical developer workstation. Its storage is 2*Crucial 1TB NVMe devices in a BTRFS RAID-1; the NVMe devices are the old series of Crucial ones and are rated for 200TBW, which means that they can be expected to last for 5 years under the current load. This isn’t a real problem for me as the performance of those devices is lower than I hoped for, so I will buy faster ones before they are 5yo anyway.

My home server (and my wife’s workstation) is averaging 325GB per day on the SSDs used for the RAID-1 BTRFS filesystem for root and for most of the frequently-written data (including VMs). The SSDs are 500G Samsung 850 EVOs [4] which are rated at 150TBW, which means just over a year of expected lifetime. The SSDs are much more than a year old; I think Samsung stopped selling them more than a year ago. Between the 2 SSDs SMART reports 18 uncorrectable errors and “btrfs device stats” reports 55 errors on one of them. I’m not about to immediately replace them, but it appears that they are well past their prime.
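
The arithmetic behind these lifetime estimates is simple; here it is once, as a small Python helper using the figures from this post:

  def years_until_tbw(tbw_rating_tb: float, gb_written_per_day: float) -> float:
      # Expected time until the rated Terabytes Written figure is reached.
      return (tbw_rating_tb * 1000 / gb_written_per_day) / 365

  # Crucial 1TB NVMe (200TBW) at the workstation's ~110GB/day:
  print(f"{years_until_tbw(200, 110):.1f} years")  # ~5.0 years

  # Samsung 850 EVO 500G (150TBW) at the home server's ~325GB/day:
  print(f"{years_until_tbw(150, 325):.1f} years")  # ~1.3 years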

The server which runs my blog (among many other things) is averaging over 1TB written per day. It currently has a RAID-1 of hard drives for all storage, but its previous incarnation (which probably had about the same amount of writes) had a RAID-1 of “enterprise” SSDs for the most written data. After a few years of running like that (and some time running with someone else’s load before it) the SSDs became extremely slow (sustained writes of 15MB/s) and started getting errors. So that’s a pair of SSDs that were burned out.

Conclusion

The amounts of data being written are steadily increasing. Recent machines with more RAM can decrease storage usage in some situations but that doesn’t compare to the increased use of checksummed and logged filesystems, VMs, databases for local storage, and other things that multiply writes. The amount of writes allowed under warranty isn’t increasing much and there are new technologies for larger SSD storage that decrease the DWPD rating of the underlying hardware.

For the systems I own it seems that they are all going to exceed the rated TBW for the SSDs before I have other reasons to replace them, and they aren’t particularly high usage systems. A mail server for a large number of users would hit it much earlier.

RAID of SSDs is a really good thing. Replacement of SSDs is something that should be planned for and a way of swapping SSDs to less important uses is also good (my parents have some SSDs that are too small for my current use but which work well for them). Another thing to consider is that if you have a server with spare drive bays you could put some extra SSDs in to spread the wear among a larger RAID-10 array. Instead of having a 2*SSD BTRFS RAID-1 for a server you could have 6*SSD to get a 3* longer lifetime than a regular RAID-1 before the SSDs wear out (BTRFS supports this sort of thing).
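
The 3* factor falls straight out of how BTRFS RAID-1/RAID-10 places data: every block lands on 2 of the N devices, so each device only absorbs 2/N of the total write volume. A quick sketch of that (assuming writes spread evenly across the array, which in practice they roughly do):

  def per_device_write_share(num_devices: int, copies: int = 2) -> float:
      # BTRFS RAID-1/RAID-10 keeps two copies of each block, spread across
      # all devices, so each device receives copies/num_devices of the writes.
      return copies / num_devices

  print(per_device_write_share(2))  # 1.0  -> each SSD sees all the data
  print(per_device_write_share(6))  # 0.33 -> a third of the wear, ~3* the lifetime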

Based on these calculations and the small number of errors I’ve seen on my home server I’ll add a 480G SSD I have lying around to the array to spread the load and keep it running for a while longer.

16 January, 2022 05:33AM by etbe

Russ Allbery

Review: The Brightest Fell

Review: The Brightest Fell, by Seanan McGuire

Series: October Daye #11
Publisher: DAW
Copyright: 2017
ISBN: 0-698-18352-5
Format: Kindle
Pages: 353

This is the eleventh book in the October Daye urban fantasy series, not counting various novellas and side stories. You really cannot start here, particularly given how many ties this book has to the rest of the series.

I would like to claim there's some sort of plan or strategy in how I read long series, but there are just a lot of books to read and then I get distracted and three years have gone by. The advantage of those pauses, at least for writing reviews, is that I return to the series with fresh eyes and more points of comparison. My first thought this time around was "oh, these books aren't that well written, are they," followed shortly thereafter by staying up past midnight reading just one more chapter.

Plot summaries are essentially impossible this deep into a series, when even the names of the involved characters can be a bit of a spoiler. What I can say is that we finally get the long-awaited confrontation between Toby and her mother, although it comes in an unexpected (and unsatisfying) form. This fills in a few of the gaps in Toby's childhood, although there's not much there we didn't already know. It fills in considerably more details about the rest of Toby's family, most notably her pure-blood sister.

The writing is indeed not great. This series is showing some of the signs I've seen in other authors (Mercedes Lackey, for instance) who wrote too many books per year to do each of them justice. I have complained before about McGuire's tendency to reuse the same basic plot structure, and this instance seemed particularly egregious. The book opens with Toby enjoying herself and her found family, feeling like they can finally relax. Then something horrible happens to people she cares about, forcing her to go solve the problem. This in theory requires her to work out some sort of puzzle, but in practice is fairly linear and obvious because, although I love Toby as a character, she can't puzzle her way out of a wet sack. Everything is (mostly) fixed in the end, but there's a high cost to pay, and everyone ends the book with more trauma.

The best books of this series are the ones where McGuire manages to break with this formula. This is not one of them. The plot is literally on magical rails, since The Brightest Fell skips even pretending that Toby is an actual detective (although it establishes that she's apparently still working as one in the human world, a detail that I find baffling) and gives her a plot compass that tells her where to go. I don't really mind this since I read this series for emotional catharsis rather than Toby's ingenuity, but alas that's mostly missing here as well. There is a resolution of sorts, but it's the partial and conditional kind that doesn't include awful people getting their just deserts.

This is also not a good series entry for world-building. McGuire has apparently been dropping hints for this plot back at least as far as Ashes of Honor. I like that sort of long-term texture to series like this, but the unfortunate impact on this book is a lot of revisiting of previous settings and very little in the way of new world-building. The bit with the pixies was very good; I wanted more of that, not the trip to an Ashes of Honor setting to pick up a loose end, or yet another significant scene in Borderland Books.

As an aside, I wish authors would not put real people into their books as characters, even when it's with permission as I'm sure it was here. It's understandable to write a prominent local business into a story as part of the local color (although even then I would rather it not be a significant setting in the story), but having the actual owner and staff show up, even in brief cameos, feels creepy and weird to me. It also comes with some serious risks because real people are not characters under the author's control. (All the content warnings for that link, which is a news story from three years after this book was published.)

So, with all those complaints, why did I stay up late reading just one more chapter? Part of the answer is that McGuire writes very grabby books, at least for me. Toby is a full-speed-ahead character who is constantly making things happen, and although the writing in this book had more than the usual amount of throat-clearing and rehashing of the same internal monologue, the plot still moved along at a reasonable clip. Another part of the answer is that I am all-in on these characters: I like them, I want them to be happy, and I want to know what's going to happen next. It helps that McGuire has slowly added characters over the course of a long series and given most of them a chance to shine. It helps even more that I like all of them as people, and I like the style of banter that McGuire writes. Also, significant screen time for the Luidaeg is never a bad thing.

I think this was the weakest entry in the series in a while. It wrapped up some loose ends that I wasn't that interested in wrapping up, introduced a new conflict that it doesn't resolve, spent a bunch of time with a highly unpleasant character I didn't enjoy reading about, didn't break much new world-building ground, and needed way more faerie court politics. But some of the banter was excellent, the pixies and the Luidaeg were great, and I still care a lot about these characters. I am definitely still reading.

Followed by Night and Silence.

Continuing a pattern from Once Broken Faith, the ebook version of The Brightest Fell includes a bonus novella. (I'm not sure if it's also present in the print version.)

"Of Things Unknown": As is usual for the short fiction in this series, this is a side story from the perspective of someone other than Toby. In this case, that's April O'Leary, first introduced all the way back in A Local Habitation, and the novella focuses on loose ends from that novel. Loose ends are apparently the theme of this book.

This was... fine. I like April, I enjoyed reading a story from her perspective, and I'm always curious to see how Toby looks from the outside. I thought the plot was strained and the resolution a bit too easy and painless, and I was not entirely convinced by April's internal thought processes. It felt like McGuire left some potential for greater plot complications on the table here, and I found it hard to shake the impression that this story was patching an error that McGuire felt she'd made in the much earlier novel. But it was nice to have an unambiguously happy ending after the more conditional ending of the main story. (6)

Rating: 6 out of 10

16 January, 2022 03:06AM

DocKnot 6.01

This release of my static site generator and software release manager finishes incorporating the last piece of my old release script that I was still using: copying a new software release into a software distribution archive tree, updating symlinks, updating the version database used to generate my web pages, and archiving the old version.

I also added a new docknot update-spin command that updates an input tree for the spin static site generator, fixing any deprecations or changes in the input format. Currently, all this does is convert the old-style *.rpod pointer files to new-style *.spin pointers.

This release also has a few other minor bug fixes, including for an embarrassing bug that required docknot spin be run from a package source tree because it tried to load per-package metadata (even though it doesn't use that data).

You can get the latest release from CPAN or from the DocKnot distribution page.

16 January, 2022 01:34AM

January 14, 2022

Debian Social Team

Some site updates

  • Pleroma has been updated to version 2.4.1. We also suffered some downtime during the 11th of January. Upgrading to the latest version fixed our issues.
  • Peertube has been upgraded to version 4.0.0.
  • Jitsi Meet has been upgraded to version 2.0.6726.
  • Mjolnir has been upgraded to 1.2.1.
  • Our upgrade to bullseye is complete, we haven’t encountered any problems upgrading to bullseye \o/.

14 January, 2022 06:01PM by highvoltage

Norbert Preining

Future of “my” packages in Debian

After having been (again) demoted (timed perfectly to my round birthday!) based on flimsy arguments, I have been forced to rethink the level of contribution I want to do for Debian. Considering in particular that I have switched my main desktop to dual-boot into Arch Linux (all on the same btrfs fs with subvolumes, great!) and have run Arch now for several days exclusively, I think it is time to review the packages I am somehow responsible for (full list of packages).

After about 20 years in Debian, time to send off quite some stuff that has accumulated over time.

KDE/Plasma, frameworks, Gears, and related packages

All these packages are group maintained, so there is not much to worry about. Furthermore, a few new faces have joined the team and are actively working on the packages, although mostly on Qt6. I guess that with me not taking action, frameworks, gears, and plasma will fall behind over time (frameworks: Debian 5.88 versus current 5.90, gears: Debian 21.08 versus current 21.12, plasma uptodate at the moment).

With respect to my packages on OBS, they will probably also go stale over time. Using Arch nowadays I lack the development tools necessary to build Debian packages, and above all, the motivation.

I am sorry for all those who have learned to rely on my OBS packages over the last years, bringing modern and uptodate KDE/Plasma to Debian/stable, please direct your complaints at the responsible entities in Debian.

Cinnamon

As I have written already here, I have reduced my involvement quite a lot, and nowadays Fabio and Joshua are doing the work. But neither of them is even a DM (AFAIR) and I am the only one doing uploads (I got DM upload permissions for it). But I am not sure how long I will continue doing this. This also means that in the near future, Cinnamon will also go stale.

TeX related packages

Hilmar has DM upload permissions and is very actively caring for the packages, so I don’t see any source of concern here. New packages will need to find a new uploader, though. With myself also being part of upstream, I can surely help out in the future with difficult problems.

Calibre and related packages

Yokota-san (another DM I have sponsored) has DM upload permissions and is very actively caring for the packages, so also here there is not much of concern.

Onedrive

This is already badly outdated, and I recommend using the OBS builds which are current and provide binaries for Ubuntu and Debian for various versions.

ROCm

Here fortunately a new generation of developers has taken over maintenance and everything is going smoothly, much better than I could have done, yeah to that!

Qalculate related packages

These are group maintained, but unfortunately nobody else but me has touched the repos for quite some time. I fear that the packages will go stale rather soon.

isync/mbsync

I have recently salvaged this package, and use it daily, but I guess it needs to be orphaned sooner or later.

CafeOBJ

While I am also part of upstream here, I guess it will be orphaned.

Julia

Julia is group maintained, but unfortunately nobody else but me has touched the repo for quite some time, and we are already far behind the normal releases (and julia got removed from testing). It will go stale/orphaned; I recommend installing upstream binaries.

python-mechanize

Another package that is group maintained in the Python team, but with only me as uploader I guess it will go stale and effectively be orphaned soon.

xxhash

It has already been orphaned.

qpdfview

No upstream development, so not much to do, but will be orphaned, too.


14 January, 2022 02:17AM by Norbert Preining

Dirk Eddelbuettel

Rcpp 1.0.8: Updated, Strict Headers

The Rcpp team is thrilled to share the news of the newest release 1.0.8 of Rcpp which hit CRAN today, and has already been uploaded to Debian as well. Windows and macOS builds should appear at CRAN in the next few days. This release continues with the six-month cycle started with release 1.0.5 in July 2020. As a reminder, interim ‘dev’ or ‘rc’ releases will always be available in the Rcpp drat repo; this cycle there were once again seven (!!) – times two as we also tested the modified header (more below). These rolling releases tend to work just as well, and are also fully tested against all reverse-dependencies.

Rcpp has become the most popular way of enhancing R with C or C++ code. Right now, around 2478 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 242 in BioConductor.

This release finally brings a change we have worked on quite a bit over the last few months. The idea of enforcing the setting of STRICT_R_HEADERS was proposed years ago in 2016 and again in 2018. But making such a change against a widely-deployed code base has repercussions, and we were not ready then. Last April, this was revisited in issue #1158. Over the course of numerous lengthy runs of tests of a changed Rcpp package against (essentially) all reverse-dependencies (i.e. packages which use Rcpp) we identified ninety-four packages in total which needed a change. We provided either a patch we emailed, or a GitHub pull request, to all ninety-four. And we are happy to say that eighty cases were resolved via a new CRAN upload, with seven more having merged the pull request but not yet uploaded.

Hence, we could make the case to CRAN (who were always CC’ed on the monthly ‘nag’ emails we sent to maintainers of packages needing a change) that an upload was warranted. And after a brief period for their checks and inspection, our January 11 release of Rcpp 1.0.8 arrived on CRAN on January 13.

So with that, a big and heartfelt Thank You! to all eighty maintainers for updating their packages to permit this change at the Rcpp end, to CRAN for the extra checking, and to everybody else who I bugged with the numerous emails and updates to the seemingly never-ending issue #1158. We all got this done, and that is a Good Thing (TM).

Other than the aforementioned change, which now automatically sets STRICT_R_HEADERS (unless one opts out, which one can), a number of nice pull requests by several contributors are included in this release:

  • Iñaki generalized use of finalizers for external pointers in #1180
  • Kevin ensured include paths are always quoted in #1189
  • Dirk added new headers to allow a more fine-grained choice of Rcpp feature for faster builds in #1191
  • Travers Ching extended the function signature generator to allow for a default R argument in #1184 and #1187
  • Dirk extended documentation, removed old example code, updated references and refreshed CI setup in several PRs (see below)

The full list of details follows.

Changes in Rcpp release version 1.0.8 (2022-01-11)

  • Changes in Rcpp API:

    • STRICT_R_HEADERS is now enabled by default, see extensive discussion in #1158 closing #898.

    • A new #define allows default setting of finalizer calls for external pointers (Iñaki in #1180 closing #1108).

    • Rcpp:::CxxFlags() now quotes the include path generated, (Kevin in #1189 closing #1188).

    • New header files Rcpp/Light, Rcpp/Lighter, Rcpp/Lightest and default Rcpp/Rcpp for fine-grained access to features (and compilation time) (Dirk #1191 addressing #1168).

  • Changes in Rcpp Attributes:

    • A new option signature allows customization of function signatures (Travers Ching in #1184 and #1187 fixing #1182)
  • Changes in Rcpp Documentation:

    • The Rcpp FAQ has a new entry on how not to grow a vector (Dirk in #1167).

    • Some long-spurious calls to RNGScope have been removed from examples (Dirk in #1173 closing #1172).

    • DOI reference in the bibtex files have been updated per JSS request (Dirk in #1186).

  • Changes in Rcpp Deployment:

    • Some continuous integration components have been updated (Dirk in #1174, #1181, and #1190).

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under the rcpp tag at StackOverflow which also allows searching among the (currently) 2822 previous questions.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

14 January, 2022 01:03AM

Reproducible Builds (diffoscope)

diffoscope 200 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 200. This version includes the following changes:

* Even if a Sphinx .inv inventory file is labelled "The remainder of this
  file is compressed using zlib", it might not actually be. In this case,
  don't traceback, and simply return the original content.
  (Closes: reproducible-builds/diffoscope#299)
* Update "X has been modified after NT_GNU_BUILD_ID has been applied" message
  to, for instance, not duplicating the full filename in the primary
  diffoscope's output.

You can find out more by visiting the project homepage.

14 January, 2022 12:00AM

January 13, 2022

Bits from Debian

New Debian Developers and Maintainers (November and December 2021)

The following contributors got their Debian Developer accounts in the last two months:

  • Douglas Andrew Torrance (dtorrance)
  • Mark Lee Garrett (lee)

The following contributors were added as Debian Maintainers in the last two months:

  • Lukas Matthias Märdian
  • Paulo Roberto Alves de Oliveira
  • Sergio Almeida Cipriano Junior
  • Julien Lamy
  • Kristian Nielsen
  • Jeremy Paul Arnold Sowden
  • Jussi Tapio Pakkanen
  • Marius Gripsgard
  • Martin Budaj
  • Peymaneh
  • Tommi Petteri Höynälänmaa

Congratulations!

13 January, 2022 04:00PM by Jean-Pierre Giraud

Daniel Lange

Leveling the playing field for non-native speakers

Wordle game

screenshot of bash, grep and pipes

Update

24.01.2022: What I love about the community is the playful creativity that inspires a game like Wordle and that in turn inspires others to create fun tools around it:

Robert Reichel has reverse-engineered the Wordle application, so in case you want to play tomorrow's word today... you can. Or have that one-guess "Genius" solution experience.

JP Fosterson created a Wordle helper that is very much the Python version of my grep-foo above. In case you play regularly and can use a hand.
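
For context, the grep-foo referenced here presumably looked something like the following (a minimal sketch; the word list path and the example constraints are illustrative, not taken from the original post):

# Five-letter candidates containing "e", avoiding the ruled-out letters a, r and t,
# with "o" confirmed in the third position (assumes a word list such as the one
# shipped by the wamerican package):
grep -E '^[a-z]{5}$' /usr/share/dict/words | grep e | grep -v '[art]' | grep '^..o..$'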

And Tom Lockwood wrote a Wordle solver also in Python. He blogged about it and ... is pondering to rewrite things in Rust:

I’ve decided to explore Rust for this, and so far what was taking 1GB of RAM in Python is taking, literally 1MB in Rust!

Welcome to 2022.

13 January, 2022 09:00AM by Daniel Lange

January 12, 2022

Michael Prokop

Revisiting 2021

Uhm yeah, so this shirt didn’t age well. :) Mainly to recall what happened, I’m once again revisiting my previous year (previous edition: 2020).

2021 was quite challenging overall. It started with four weeks of distance learning at school. Luckily at least at school things got back to "some kind of normal" afterwards. The lockdowns turned out to be an excellent opportunity for practising Geocaching though, and that’s what I started to do with my family. It’s a great way to grab some fresh air, get to know new areas, and spend time with family and friends – I plan to continue doing this. :)

We bought a family season ticket for Freibäder (open-air baths) in Graz; this turned out to be a great investment – I very much enjoyed the open-air swimming with family, as well as going for swimming laps on my own, and plan to do the same in 2022. Due to the lockdowns and the pandemic, the weekly Badminton sessions sadly didn’t really take place, so I pushed towards the above-mentioned outdoor swimming and also some running; with my family we managed to do some cycling, inline skating and even practiced some boulder climbing.

For obvious reasons plenty of concerts I was looking forward to didn’t take place. With my parents we at least managed to attend a concert performance of Puccini’s Tosca with Jonas Kaufmann at Schloßbergbühne Kasematten/Graz, and with the kids we saw "Robin Hood" in Oper Graz and "Pippi Langstrumpf" at the Studiobühne of Oper Graz. The lack of concerts and rehearsals once again and still severely impacts my playing the drums, including at HTU BigBand Graz. :-/

Grml-wise we managed to publish release 2021.07, codename JauKerl. Debian-wise we got version 11 AKA bullseye released as new stable release in August.

For 2021 I planned to and also managed to minimize buying (new) physical stuff, except for books and other reading stuff. Speaking of reading, 2021 was nice — I managed to finish more than 100 books (see “Mein Lesejahr 2021“), and I’d like to keep the reading pace.

Now let’s hope for better times in 2022!

12 January, 2022 05:30PM by mika

Steinar H. Gunderson

Training apps

I've been using various training apps (and their associated web sites) since 2010 now, forward-porting data to give me twelve years of logs. (My primary migration path has been CardioTrainer → Endomondo → Strava.) However, it strikes me that they're just becoming worse and worse, and I think I've figured out why: What I want is a training site with some social functions, but what companies are creating are social networks. Not social networks about training; just social networks.

To be a bit more concrete: I want something that's essentially a database. I want to quickly search for workouts in a given area and of a given length, and then bring up key information, compare, contrast, get proper graphs (not something where you can't see the difference between 3:00/km and 4:00/km!), and so on. (There's a long, long list of features and bugs I'd like to get fixed, but I won't list them all.)

But Strava is laser-focused on what's happened recently; there's a stream with my own workouts and my friends', just like a social network, and that's pretty much the main focus (and they have not even tried to solve the stream problem; if I have a friend that's too active, I don't see anything else). I'd need to pay a very steep price (roughly $110/year, the same price as a used GPS watch!) to even get a calendar; without that, I need to go back through a super-slow pagination UI 20 workouts at a time to even see something older.

Garmin Connect is somewhat better here; at least I can query on length and such. (Not so strange; Garmin is in the business of selling devices, not making social networks.) But it's very oriented around one specific watch brand, and it's far from perfect either. My big issue is that nobody's even trying, it seems. But I guess there's simply no money in that.

12 January, 2022 05:27PM

January 11, 2022

Ritesh Raj Sarraf

ThinkPad AMD Debian

After a hiatus of 6 years, it was nice to be back with the ThinkPad. This blog post briefly touches upon my impressions with the current generation ThinkPad T14 Gen2 AMD variant.

ThinkPad T14 Gen2 AMD

Lenovo

It took 8 weeks to get my hands on the machine. Given the pandemic, restrictions and uncertainties, I'm not sure if I should call it an on-time delivery. This was a CTO (Customise-To-Order) machine, so it was nice to get rid of things I didn’t really care about or use much. On the other hand, it also meant I could save some power. It also came out comparatively cheaper overall.

  • No fingerprint reader
  • No Touch screen

There’s still parts where Lenovo could improve. Or less frustate a customer. I don’t understand why a company would provide a full customization option on their portal, while at the same time, not provide an explicit option to choose the make/model of the hardware one wants. Lenovo deliberately chooses to not show/specify which WiFi adapter one could choose. So, as I suspected, I ended up with a MEDIATEK Corp. Device 7961 wifi adapter.

AMD

For the first time in my computing life, I’m now using AMD at the core. I was pretty frustrated with annoying Intel Graphics bugs, so I decided to take the plunge and give AMD/ATI a shot, knowing that the radeon driver does have decent support. So far, on the graphics side of things, I’m glad that things look bright. The stock in-kernel radeon driver has been working perfectly for my needs, and I haven’t had to tinker even once in my 30 days of use so far.

On the overall system performance, I have not done any benchmarks, nor do I want to. But on the whole, system performance is smooth.

Power/Thermal

This is where things need more improvement on the AMD side. This AMD laptop draws a terrible amount of power in suspend mode. And it isn’t just this machine; the previous T14 Gen1 has similar problems. I’m not sure if this is a generic ThinkPad problem or an AMD-specific problem. But coming from the Dell XPS 13 9370 Intel, this does draw a lot more power, so much so that I chose to use hibernation instead.

Similarly, on the thermal side, this machine doesn’t cool down as well as the Dell XPS Intel one. On an idle machine, its temperatures are comparatively higher. Looking at powertop reports, it shows an average consumption of around 10 watts of power even while idle.

I’m hoping these are Linux ingeration issues and that Lenovo/AMD will improve things in the coming months. But given the user feedback on the ThinkPad T14 Gen1 thread, it may just be wishful thinking.

Linux

The overall hardware support has been surprisingly decent. The MediaTek WiFi driver had some glitches but with Linux 5.15+, things have considerably improved. And I hope the trend will continue with forthcoming Linux releases. My previous device driver experience with MediaTek wasn’t good but I took the plunge, considering that in the worst scenario I’d have the option to swap the card.

There’s a lot of marketing about Linux + Intel. But I took a jibe with Linux + AMD. There are glitches but nothing so far that has been a dealbreaker. If anything, I wish Lenovo/AMD would seriously work on the power/thermal issues.

Migration

Other than what’s mentioned above, I haven’t had any serious issues. I may have had some rare occassional hangs but they’ve been so infrequent that I haven’t spent time to investigate those.

Upon receiving the machine, my biggest task was to switch my current workstation setup from the Dell XPS to the Lenovo ThinkPad. I’ve been using btrfs for some time now, and over the years I have built up my own practice for structuring it: provisioning [sub]volumes based on use case, such as keeping separate subvols for cache/temporary data, copy-on-write data, swap, etc. I wish these things could be simplified, either on the btrfs tooling side or in some different tool on top of it.

Below is a filtered list of the subvols created over the years that were worth moving to the new machine.

rrs@priyasi:~$ cat btrfs-volume-layout 
ID 550 gen 19166 top level 5 path home/foo/.cache
ID 552 gen 1522688 top level 5 path home/rrs
ID 553 gen 1522688 top level 552 path home/rrs/.cache
ID 555 gen 1426323 top level 552 path home/rrs/rrs-home/Libvirt-Images
ID 618 gen 1522672 top level 5 path var/spool/news
ID 634 gen 1522670 top level 5 path var/tmp
ID 635 gen 1522688 top level 5 path var/log
ID 639 gen 1522226 top level 5 path var/cache
ID 992 gen 1522670 top level 5 path disk-tmp
ID 1018 gen 1522688 top level 552 path home/rrs/NoBackup
ID 1196 gen 1522671 top level 5 path etc
ID 23721 gen 775692 top level 5 path swap
18:54 ♒ ॐ ♅ ♄ ⛢     ☺ 😄    

btrfs send/receive

This did come in handy, but I sorely missed some features. Maybe they aren’t there, or maybe they are and I didn’t look closely enough. Over the years, different attributes were set on different subvols, and over time I forget what was set where. But from a migration point of view, it’d be nice to be able to say, “Take this volume with all its attributes”. I didn’t find that functionality in send/receive.

There’s get/set-property which I noticed later but by then it was late. So some sort of tooling, ideally something like btrfs migrate or somesuch would be nicer.

In the file system world, we already have nice tools to take care of similar scenarios. Like with rsync, I can request it to carry all file attributes.

Also, IIRC, send/receive works only on read-only volumes. So there’s more work one needs to do:

  1. create ro vol
  2. send
  3. receive
  4. don’t forget to set rw property
  5. And then somehow find out the other properties set on each individual subvol and [re]apply them on the destination

I wish all this could be condensed into a sub-command.
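
As a very rough sketch of what such a sub-command could wrap (the subvolume paths below are placeholders from my layout above, and error handling is omitted), the manual equivalent is roughly:

SRC=/media/user/TOSHIBA/Migrate/snapshot-home_rrs   # read-only snapshot to migrate
DST=/media/user/BTRFSROOT                           # destination filesystem

sudo btrfs property list -ts "$SRC"                              # note the properties set on the source
sudo btrfs send "$SRC" | sudo btrfs receive "$DST"               # transfer the snapshot
sudo btrfs property set -ts "$DST/$(basename "$SRC")" ro false   # make the copy writable again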

For my own sake, for this migration, the steps used were:

user@debian:~$ for volume in `sudo btrfs sub list /media/user/TOSHIBA/Migrate/ | cut -d ' ' -f9 | grep -v ROOTVOL | grep -v etc | grep -v btrbk`; do echo $volume; sudo btrfs send /media/user/TOSHIBA/$volume | sudo btrfs receive /media/user/BTRFSROOT/ ; done
Migrate/snapshot_disk-tmp
At subvol /media/user/TOSHIBA/Migrate/snapshot_disk-tmp
At subvol snapshot_disk-tmp
Migrate/snapshot-home_foo_.cache
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_foo_.cache
At subvol snapshot-home_foo_.cache
Migrate/snapshot-home_rrs
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_rrs
At subvol snapshot-home_rrs
Migrate/snapshot-home_rrs_.cache
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_rrs_.cache
At subvol snapshot-home_rrs_.cache
ERROR: crc32 mismatch in command
Migrate/snapshot-home_rrs_rrs-home_Libvirt-Images
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_rrs_rrs-home_Libvirt-Images
At subvol snapshot-home_rrs_rrs-home_Libvirt-Images
ERROR: crc32 mismatch in command
Migrate/snapshot-var_spool_news
At subvol /media/user/TOSHIBA/Migrate/snapshot-var_spool_news
At subvol snapshot-var_spool_news
Migrate/snapshot-var_lib_machines
At subvol /media/user/TOSHIBA/Migrate/snapshot-var_lib_machines
At subvol snapshot-var_lib_machines
Migrate/snapshot-var_lib_machines_DebianSidTemplate
..... snipped .....

And then follow up with:

user@debian:~$ for volume in `sudo btrfs sub list /media/user/BTRFSROOT/ | cut -d ' ' -f9`; do echo $volume; sudo btrfs property set -ts /media/user/BTRFSROOT/$volume ro false; done
ROOTVOL
ERROR: Could not open: No such file or directory
etc
snapshot_disk-tmp
snapshot-home_foo_.cache
snapshot-home_rrs
snapshot-var_spool_news
snapshot-var_lib_machines
snapshot-var_lib_machines_DebianSidTemplate
snapshot-var_lib_machines_DebSidArmhf
snapshot-var_lib_machines_DebianJessieTemplate
snapshot-var_tmp
snapshot-var_log
snapshot-var_cache
snapshot-disk-tmp

And then finally, renaming everything to its proper path:

user@debian:/media/user/BTRFSROOT$ for x in snapshot*; do vol=$(echo $x | cut -d '-' -f2 | sed -e "s|_|/|g"); echo $x $vol; sudo mv $x $vol; done
snapshot-var_lib_machines var/lib/machines
snapshot-var_lib_machines_Apertisv2020ospackTargetARMHF var/lib/machines/Apertisv2020ospackTargetARMHF
snapshot-var_lib_machines_Apertisv2021ospackTargetARM64 var/lib/machines/Apertisv2021ospackTargetARM64
snapshot-var_lib_machines_Apertisv2022dev3ospackTargetARMHF var/lib/machines/Apertisv2022dev3ospackTargetARMHF
snapshot-var_lib_machines_BusterArm64 var/lib/machines/BusterArm64
snapshot-var_lib_machines_DebianBusterTemplate var/lib/machines/DebianBusterTemplate
snapshot-var_lib_machines_DebianJessieTemplate var/lib/machines/DebianJessieTemplate
snapshot-var_lib_machines_DebianSidTemplate var/lib/machines/DebianSidTemplate
snapshot-var_lib_machines_DebianSidTemplate_var_lib_portables var/lib/machines/DebianSidTemplate/var/lib/portables
snapshot-var_lib_machines_DebSidArm64 var/lib/machines/DebSidArm64
snapshot-var_lib_machines_DebSidArmhf var/lib/machines/DebSidArmhf
snapshot-var_lib_machines_DebSidMips var/lib/machines/DebSidMips
snapshot-var_lib_machines_JenkinsApertis var/lib/machines/JenkinsApertis
snapshot-var_lib_machines_v2019 var/lib/machines/v2019
snapshot-var_lib_machines_v2019LinuxSupport var/lib/machines/v2019LinuxSupport
snapshot-var_lib_machines_v2020 var/lib/machines/v2020
snapshot-var_lib_machines_v2021dev3Slim var/lib/machines/v2021dev3Slim
snapshot-var_lib_machines_v2021dev3SlimTarget var/lib/machines/v2021dev3SlimTarget
snapshot-var_lib_machines_v2022dev2OspackMinimal var/lib/machines/v2022dev2OspackMinimal
snapshot-var_lib_portables var/lib/portables
snapshot-var_log var/log
snapshot-var_spool_news var/spool/news
snapshot-var_tmp var/tmp

snapper

Entirely independent of this, but indirectly related: I use snapper as my snapshotting tool. It worked perfectly on my previous machine. While everything got migrated, the only thing that fell apart was snapper; it just wouldn’t start or run properly. The funny thing is that I simply removed the snapper configs and reinitialized with the exact same config again, and voilà, snapper was happy.

Conclusion

That was pretty much it: the above, plus migrating /boot and then a chroot to install the boot loader. At some point I’d like to explore other boot options, but given that it is such a non-essential task, it is low on the list.

The good part was that I booted into my new machine with my exact workstation setup as it was, all the way down to the user cache and the desktop session. So that part was nice.

But I surely think there’s room for a better migration experience here, if not directly as btrfs migrate, then maybe as an independent tool. The problem is that such a tool is going to be used once every few years, so I didn’t find the motivation to write one. But this surely would be a good use case for the distribution vendors.

11 January, 2022 02:07PM by Ritesh Raj Sarraf ([email protected])

Russ Allbery

Review: Hench

Review: Hench, by Natalie Zina Walschots

Publisher: William Morrow
Copyright: September 2020
ISBN: 0-06-297859-4
Format: Kindle
Pages: 403

Anna Tromedlov is a hench, which means she does boring things for terrible people for money. Supervillains need a lot of labor to keep their bases and criminal organizations running, and they get that labor the same way everyone else does: through temporary agencies. Anna does spreadsheets, preferably from home on her couch.

On-site work was terrifying and she tried to avoid it, but the lure of a long-term contract was too strong. The Electric Eel, despite being a creepy sleazeball, seemed to be a manageable problem. He needed some support at a press conference, which turns out to be code for being a diversity token in front of the camera, but all she should have to do is stand there.

That's how Anna ends up holding the mind control device to the head of the mayor's kid when the superheroes attack, followed shortly by being thrown across the room by Supercollider.

Left with a complex fracture of her leg that will take months to heal, a layoff notice and a fruit basket from Electric Eel's company, and a vaguely menacing hospital conversation with the police (including Supercollider in a transparent disguise) in which it's made clear to her that she is mistaken about Supercollider's hand-print on her thigh, Anna starts wondering just how much damage superheroes have done. The answer, when analyzed using the framework for natural disasters, is astonishingly high. Anna's resulting obsession with adding up the numbers leads to her starting a blog, the Injury Report, with a growing cult following. That, in turn, leads to a new job and a sponsor: the mysterious supervillain Leviathan.

To review this book properly, I need to talk about Watchmen.

One of the things that makes superheroes interesting culturally is the straightforwardness of their foundational appeal. The archetypal superhero story is an id story: an almost pure power fantasy aimed at teenage boys. Like other pulp mass media, they reflect the prevailing cultural myths of the era in which they're told. World War II superheroes are mostly all-American boy scouts who punch Nazis. 1960s superheroes are a more complex mix of outsider misfits with a moral code and sarcastic but earnestly ethical do-gooders. The superhero genre is vast, with numerous reinterpretations, deconstructions, and alternate perspectives, but its ur-story is a good versus evil struggle of individual action, in which exceptional people use their powers for good to defeat nefarious villains.

Watchmen was not the first internal critique of the genre, but it was the one that everyone read in the 1980s and 1990s. It takes direct aim at that moral binary. The superheroes in Watchmen are not paragons of virtue (some of them are truly horrible people), and they have just as much messy entanglement with the world as the rest of us. It was superheroes re-imagined for the post-Vietnam, post-Watergate era, for the end of the Cold War when we were realizing how many lies about morality we had been told. But it still put superheroes and their struggles with morality at the center of the story.

Hench is a superhero story for the modern neoliberal world of reality TV and power inequality in the way that Watchmen was a superhero story for the Iran-Contra era and the end of the Cold War.

Whether our heroes have feet of clay is no longer a question. Today, a better question is whether the official heroes, the ones that are celebrated as triumphs of individual achievement, are anything but clay. Hench doesn't bother asking whether superheroes have fallen short of their ideal; that answer is obvious. What Hench asks instead is a question familiar to those living in a world full of televangelists, climate denialism, manipulative advertising, and Facebook: are superheroes anything more than a self-perpetuating scam? Has the good superheroes supposedly do ever outweighed the collateral damage? Do they care in the slightest about the people they're supposedly protecting? Or is the whole system of superheroes and supervillains a performance for an audience, one that chews up bystanders and spits them out mangled while delivering simplistic and unquestioned official morality?

This sounds like a deeply cynical premise, but Hench is not a cynical book. It is cynical about superheroes, which is not the same thing. The brilliance of Walschots's approach is that Anna has a foot in both worlds. She works for a supervillain and, over the course of the book, gains access to real power within the world of superheroic battles. But she's also an ordinary person with ordinary problems: not enough money, rocky friendships, deep anger at the injustices of the world and the way people like her are discarded, and now a disability and PTSD. Walschots perfectly balances the tension between those worlds and maintains that tension straight to the end of the book. From the supervillain world, Anna draws support, resources, and a mission, but all of the hope, true morality, and heart of this book comes from the ordinary side.

If you had the infrastructure of a supervillain at your disposal, what would you do with it?

Anna's answer is to treat superheroes as a destructive force like climate change, and to do whatever she can to drive them out of the business and thus reduce their impact on the world. The tool she uses for that is psychological warfare: make them so miserable that they'll snap and do something too catastrophic to be covered up. And the raw material for that psychological warfare is data.

That's the foot in the supervillain world. In descriptions of this book, her skills with data are often called her superpower. That's not exactly wrong, but the reason why she gains power and respect is only partly because of her data skills. Anna lives by the morality of the ordinary people world: you look out for your friends, you treat your co-workers with respect as long as they're not assholes, and you try to make life a bit better for the people around you. When Leviathan gives her the opportunity to put together a team, she finds people with skills she admires, funnels work to people who are good at it, and worries about the team dynamics. She treats the other ordinary employees of a supervillain as people, with lives and personalities and emotions and worth. She wins their respect.

Then she uses their combined skills to destroy superhero lives.

I was fascinated by the moral complexity in this book. Anna and her team do villainous things by the morality of the superheroic world (and, honestly, by the morality of most readers), including some things that result in people's deaths. By the end of the book, one could argue that Anna has been driven by revenge into becoming an unusual sort of supervillain. And yet, she treats the people around her so much better than either the heroes or the villains do. Anna is fiercely moral in all the ordinary person ways, and that leads directly to her becoming a villain in the superhero frame. Hench doesn't resolve that conflict; it just leaves it on the page for the reader to ponder.

The best part about this book is that it's absurdly grabby, unpredictable, and full of narrative momentum. Walschots's pacing kept me up past midnight a couple of times and derailed other weekend plans so that I could keep reading. I had no idea where the plot was going even at the 80% mark. The ending is ambiguous and a bit uncomfortable, just like the morality throughout the book, but I liked it the more I thought about it.

One caveat, unfortunately: Hench has some very graphic descriptions of violence and medical procedures, and there's an extended torture sequence with some incredibly gruesome body horror that I thought went on far too long and was unnecessary to the plot. If you're a bit squeamish like I am, there are some places where you'll want to skim, including one sequence that's annoyingly intermixed with important story developments.

Otherwise, though, this is a truly excellent book. It has a memorable protagonist with a great first-person voice, an epic character arc of empowerment and revenge, a timely take on the superhero genre that uses it for sharp critique of neoliberal governance and reality TV morality, a fascinatingly ambiguous and unsettled moral stance, a gripping and unpredictable plot, and some thoroughly enjoyable competence porn. I had put off reading it because I was worried that it would be too cynical or dark, but apart from the unnecessary torture scene, it's not at all. Highly recommended.

Rating: 9 out of 10

11 January, 2022 02:56AM

January 10, 2022

Louis-Philippe Véronneau

Grading using the Wacom Intuos S

I've been teaching economics for a few semesters already and, slowly but surely, I'm starting to get the hang of it. Having to deal with teaching remotely hasn't been easy though and I'm really hoping the winter semester will be in-person again.

Although I worked way too much last semester[1], I somehow managed to transition to using a graphics tablet. I bought a Wacom Intuos S tablet (model CTL-4100) in late August 2021 and overall, I have been very happy with it. Wacom Canada offers a small discount for teachers and I ended up paying 115 CAD (~90 USD) for the tablet, an overall very reasonable price.

Unsurprisingly, the Wacom support on Linux is very good and my tablet worked out of the box. The only real problem I had was that, by default, the tablet sometimes boots up in Android mode, making it unusable. This is easily solved by pressing down on the pad's first and last buttons for a few seconds, until the LED turns white.
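
For completeness, the xf86-input-wacom tools can be used to confirm that the tablet is detected and, if desired, map it to a single monitor rather than the whole desktop. The device and output names below are examples and will differ per setup:

xsetwacom --list devices                                      # should list the Intuos pad and stylus
xsetwacom set "Wacom Intuos S Pen stylus" MapToOutput HDMI-1  # example: restrict the tablet to one screen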

The included stylus came with hard plastic nibs, but I find them too slippery. I eventually purchased hard felt nibs, which increase the friction and make for a more paper-like experience. They are a little less durable, but I've written quite a bit and still haven't gone through a single one yet.

Learning curve

Learning how to use a graphics tablet took me at least a few weeks! When writing on a sheet of paper, the eyes see what the hand writes directly. This is not the case when using a graphics tablet: you are writing on a surface and seeing the result on your screen, a completely different surface. This dissociation takes a bit of practice to master, but after going through more than 300 pages of notes, it now feels perfectly normal.

Here is a side-by-side comparison of my very average hand-writing[2]:

  1. on paper
  2. using the tablet, the first week
  3. using the tablet, after a couple of months

Comparison of my writing, on paper, using the tablet and using the tablet after a few weeks

I still prefer the result of writing on paper, but I think this is mostly due to me not using the pressure sensitivity feature. The support in xournal wasn't great, but now that I've tried it in xournalpp (more on this below), I think I will be enabling it in the future. The result on paper is also more consistent, but I trust my skills will improve over time.

Pressure sensitivity on vs off

Use case

The first use case I have for the tablet is grading papers. I've been asking my students to submit their papers via Moodle for a few semesters already, but until now, I was grading them using PDF comments. The experience wasn't great[3] and was rather slow compared to grading physical copies.

I'm also a somewhat old-school teacher: I refuse to teach using slides. Death by PowerPoint is real. I write on the blackboard a lot[4] and I find it much easier to prepare my notes by hand than by typing them, as the end result is closer to what I actually end up writing down on the board.

Writing notes by hand on sheets of paper is a chore too, especially when you revisit the same material regularly. Being able to handwrite digital notes gives me a lot more flexibility and it's been great.

So far, I have been using xournal to write notes and grade papers, and although it is OK, it has a bunch of quirks I dislike. I was waiting for xournalpp to be packaged in Debian, and it now is[5]! I'm looking forward to using it next semester.

Towards a better computer monitor

I have also been feeling the age of my current computer monitor. I am currently using an old 32" 1080p TV from LG and up until now, I had been able to deal with the drawbacks. The colors are pretty bad and 1080p for such a large display isn't great, but I got used to it.

What I really noticed when I started using my graphics tablet was the input lag. It's bad enough that there's a clear jello effect when writing and it eventually gives me a headache. It's so bad I usually prefer to work on my laptop, which has a nicer but noticeably smaller panel.

I'm currently looking to replace this aging TV[6] with something more modern. I have been holding out, since I would like to buy something that will last me another 10 years if possible. Sadly, 32" high refresh rate 4K monitors aren't exactly there yet and I haven't found anything matching my criteria. I would probably also need a new GPU, something that is not easy to come by these days.


  1. I worked at two colleges at the same time, teaching 2 different classes (one of which I was giving for the first time...) to 6 groups in total. I averaged more than 60h per week for sure. 

  2. Yes, I only write in small caps. Students love it, as it's much easier to read on the blackboard. 

  3. Although most PDF readers support displaying comments, some of my more clueless students still had trouble seeing them and I had to play tech support more than I wanted. 

  4. Unsurprisingly, my students also love it. One of the most common pieces of feedback I get at the end of the semester is that they hate slides too and are very happy I'm one of the few teachers who writes on the board. 

  5. Many thanks to Barak A. Pearlmutter for maintaining this package. 

  6. It dates back to 2010, when my mom replaced our old CRT with a flat screen. FullHD TVs were getting affordable and I wasn't sad to see our tiny 20-something-inch TV go. I eventually ended up with the LG flatscreen a few years later when I moved out into my first apartment and my mom got something better. 

10 January, 2022 05:00AM by Louis-Philippe Véronneau

January 09, 2022

Dirk Eddelbuettel

Rblpapi 0.3.13: Some Fixes and Documentation

A new version, now at 0.3.13, of the Rblpapi package just arrived at CRAN. Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg (but note that a valid Bloomberg license and installation is required).

This is the thirteenth release since the package first appeared on CRAN in 2016. It comprises PRs from three different contributors (with special thanks once again to Michael Kerber), extends tests and documentation, and extends two function interfaces to control explicitly whether returned lists of length one should be simplified.

The list of changes follow below.

Changes in Rblpapi version 0.3.13 (2022-01-09)

  • Add a test for bds (Michael Kerber in #352)

  • Add simplify argument (and option) to bdh and bds (Dirk in #354 closing #353, #351)

  • Improve documentation for bsearch (John in #357 closing #356)

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

09 January, 2022 11:07PM

Russell Coker

Video Conferencing (LCA)

I’ve just done a tech check for my LCA lecture. I had initially planned to do what I had done before and use my phone for recording audio and video and my PC for other stuff. The problem is that I wanted to get an external microphone going and plugging in a USB microphone turned off the speaker in the phone (it seemed to direct audio to a non-existent USB audio output). I tried using bluetooth headphones with the USB microphone and that didn’t work. Eventually a viable option seemed to be using USB headphones on my PC with the phone for camera and microphone. Then it turned out that my phone (Huawei Mate 10 Pro) didn’t support resolutions higher than VGA with Chrome (it didn’t have the “advanced” settings menu to select resolution), this is probably an issue of Android build features. So the best option is to use a webcam on the PC, I was recommended a Logitech C922 but OfficeWorks only has a Logitech C920 which is apparently OK.

The free connection test from freeconference.com [1] is good for testing out how your browser works for videoconferencing. It tests each feature separately and is easy to run.

After buying the C920 webcam I found that it sometimes worked and sometimes caused a kernel panic like the following (partial panic log included for the benefit of people Googling this Logitech C920 problem):

[95457.805417] BUG: kernel NULL pointer dereference, address: 0000000000000000
[95457.805424] #PF: supervisor read access in kernel mode
[95457.805426] #PF: error_code(0x0000) - not-present page
[95457.805429] PGD 0 P4D 0 
[95457.805431] Oops: 0000 [#1] SMP PTI
[95457.805435] CPU: 2 PID: 75486 Comm: v4l2src0:src Not tainted 5.15.0-2-amd64 #1  Debian 5.15.5-2
[95457.805438] Hardware name: HP ProLiant ML110 Gen9/ProLiant ML110 Gen9, BIOS P99 02/17/2017
[95457.805440] RIP: 0010:usb_ifnum_to_if+0x3a/0x50 [usbcore]
...
[95457.805481] Call Trace:
[95457.805484]  
[95457.805485]  usb_hcd_alloc_bandwidth+0x23d/0x360 [usbcore]
[95457.805507]  usb_set_interface+0x127/0x350 [usbcore]
[95457.805525]  uvc_video_start_transfer+0x19c/0x4f0 [uvcvideo]
[95457.805532]  uvc_video_start_streaming+0x7b/0xd0 [uvcvideo]
[95457.805538]  uvc_start_streaming+0x2d/0xf0 [uvcvideo]
[95457.805543]  vb2_start_streaming+0x63/0x100 [videobuf2_common]
[95457.805550]  vb2_core_streamon+0x54/0xb0 [videobuf2_common]
[95457.805555]  uvc_queue_streamon+0x2a/0x40 [uvcvideo]
[95457.805560]  uvc_ioctl_streamon+0x3a/0x60 [uvcvideo]
[95457.805566]  __video_do_ioctl+0x39b/0x3d0 [videodev]

It turns out that Ubuntu Launchpad bug #1827452 has great information on this problem [2]. Apparently if the device decides it doesn’t have enough power then it will reconnect and get a different USB bus device number and this often happens when the kernel is initialising it. There’s a race condition in the kernel code in which the code to initialise the device won’t realise that the device has been detached and will dereference a NULL pointer and then mess up other things in USB device management. The end result for me is that all USB devices become unusable in this situation, commands like “lsusb” hang, and a regular shutdown/reboot hangs because it can’t kill the user session because something is blocked on USB.

One of the comments on the Launchpad bug is that a powered USB hub can alleviate the problem while a USB extension cable (which I had been using) can exacerbate it. Officeworks currently advertises only one powered USB hub; it’s described as “USB 3” but also “maximum speed 480 Mbps” (USB 2 speed). So basically they are selling a USB 2 hub for 4* the price that USB 2 hubs used to sell for.

When debugging this I used the “cheese” webcam utility program and ran it in a KVM virtual machine. The KVM parameters “-device qemu-xhci -usb -device usb-host,hostbus=1,hostaddr=2” (where 1 and 2 are replaced by the Bus and Device numbers from “lsusb”) allow the USB device to be passed through to the VM. Doing this meant that I didn’t have to reboot my PC every time a webcam test failed.
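
For reference, a full invocation along those lines might look like the following (the memory size and disk image are placeholders, and the hostbus/hostaddr values need to be re-checked with “lsusb” whenever the webcam re-enumerates):

lsusb | grep -i logitech    # find the current Bus/Device numbers for the webcam
kvm -m 2048 -drive file=test.qcow2,if=virtio \
  -device qemu-xhci -usb -device usb-host,hostbus=1,hostaddr=2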

For audio I’m using the Sades Wand gaming headset I wrote about previously [3].

09 January, 2022 07:20AM by etbe

François Marier

Removing an alias/domain from a Let's Encrypt certificate managed by certbot

I recently got an error during a certbot renewal:

Challenge failed for domain echo.fmarier.org
Failed to renew certificate jabber-gw.fmarier.org with error: Some challenges have failed.
The following renewals failed:
  /etc/letsencrypt/live/jabber-gw.fmarier.org/fullchain.pem (failure)
1 renew failure(s), 0 parse failure(s)

due to the fact that I had removed the DNS entry for echo.fmarier.org.

I tried to find a way to remove that name from the certificate before renewing it, but it seems like the only way to do it is to create a new certificate without that alternative name.

First, I looked for the domains included in the certificate:

$ certbot certificates
...
  Certificate Name: jabber-gw.fmarier.org
    Serial Number: 31485424904a33fb2ab43ab174b4b146512
    Key Type: RSA
    Domains: jabber-gw.fmarier.org echo.fmarier.org fmarier.org
    Expiry Date: 2022-01-04 05:28:57+00:00 (VALID: 29 days)
    Certificate Path: /etc/letsencrypt/live/jabber-gw.fmarier.org/fullchain.pem
    Private Key Path: /etc/letsencrypt/live/jabber-gw.fmarier.org/privkey.pem

Then, I deleted the existing certificate:

$ certbot delete jabber-gw.fmarier.org

and finally created a new certificate with all other names except for the obsolete one:

$ certbot certonly -d jabber-gw.fmarier.org -d fmarier.org --duplicate
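
Depending on the certbot version, an alternative that avoids the explicit delete is to reuse the existing certificate name and simply list the reduced set of domains; certbot should then ask for confirmation before dropping the removed name (a hedged sketch rather than a tested recipe):

$ certbot certonly --cert-name jabber-gw.fmarier.org -d jabber-gw.fmarier.org -d fmarier.org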

09 January, 2022 06:00AM

Matthew Garrett

Pluton is not (currently) a threat to software freedom

At CES this week, Lenovo announced that their new Z-series laptops would ship with AMD processors that incorporate Microsoft's Pluton security chip. There's a fair degree of cynicism around whether Microsoft have the interests of the industry as a whole at heart or not, so unsurprisingly people have voiced concerns about Pluton allowing for platform lock-in and future devices no longer booting non-Windows operating systems. Based on what we currently know, I think those concerns are understandable but misplaced.

But first it's helpful to know what Pluton actually is, and that's hard because Microsoft haven't actually provided much in the way of technical detail. The best I've found is a discussion of Pluton in the context of Azure Sphere, Microsoft's IoT security platform. This, in association with the block diagrams on page 12 and 13 of this slidedeck, suggest that Pluton is a general purpose security processor in a similar vein to Google's Titan chip. It has a relatively low powered CPU core, an RNG, and various hardware cryptography engines - there's nothing terribly surprising here, and it's pretty much the same set of components that you'd find in a standard Trusted Platform Module of the sort shipped in pretty much every modern x86 PC. But unlike Titan, Pluton seems to have been designed with the explicit goal of being incorporated into other chips, rather than being a standalone component. In the Azure Sphere case, we see it directly incorporated into a Mediatek chip. In the Xbox Series devices, it's incorporated into the SoC. And now, we're seeing it arrive on general purpose AMD CPUs.

Microsoft's announcement says that Pluton can be shipped in three configurations: as the Trusted Platform Module; as a security processor used for non-TPM scenarios like platform resiliency; or OEMs can choose to ship with Pluton turned off. What we're likely to see to begin with is the former - Pluton will run firmware that exposes a Trusted Computing Group compatible TPM interface. This is almost identical to the status quo. Microsoft have required that all Windows certified hardware ship with a TPM for years now, but for cost reasons this is often not in the form of a separate hardware component. Instead, both Intel and AMD provide support for running the TPM stack on a component separate from the main execution cores on the system - for Intel, this TPM code runs on the Management Engine integrated into the chipset, and for AMD on the Platform Security Processor that's integrated into the CPU package itself.

So in this respect, Pluton changes very little; the only difference is that the TPM code is running on hardware dedicated to that purpose, rather than alongside other code. Importantly, in this mode Pluton will not do anything unless the system firmware or OS ask it to. Pluton cannot independently block the execution of any other code - it knows nothing about the code the CPU is executing unless explicitly told about it. What the OS can certainly do is ask Pluton to verify a signature before executing code, but the OS could also just verify that signature itself. Windows can already be configured to reject software that doesn't have a valid signature. If Microsoft wanted to enforce that they could just change the default today, there's no need to wait until everyone has hardware with Pluton built-in.

The two things that seem to cause people concerns are remote attestation and the fact that Microsoft will be able to ship firmware updates to Pluton via Windows Update. I've written about remote attestation before, so won't go into too many details here, but the short summary is that it's a mechanism that allows your system to prove to a remote site that it booted a specific set of code. What's important to note here is that the TPM (Pluton, in the scenario we're talking about) can't do this on its own - remote attestation can only be triggered with the aid of the operating system. Microsoft's Device Health Attestation is an example of remote attestation in action, and the technology definitely allows remote sites to refuse to grant you access unless you booted a specific set of software. But there are two important things to note here: first, remote attestation cannot prevent you from booting whatever software you want, and second, as evidenced by Microsoft already having a remote attestation product, you don't need Pluton to do this! Remote attestation has been possible since TPMs started shipping over two decades ago.

The other concern is Microsoft having control over the firmware updates. The context here is that TPMs are not magically free of bugs, and sometimes these can have security consequences. One example is Infineon TPMs producing weak RSA keys, a vulnerability that could be rectified by a firmware update to the TPM. Unfortunately these updates had to be issued by the device manufacturer rather than Infineon being able to do so directly. This meant users had to wait for their vendor to get around to shipping an update, something that might not happen at all if the machine was sufficiently old. From a security perspective, being able to ship firmware updates for the TPM without them having to go through the device manufacturer is a huge win.

Microsoft's obviously in a position to ship a firmware update that modifies the TPM's behaviour - there would be no technical barrier to them shipping code that resulted in the TPM just handing out your disk encryption secret on demand. But Microsoft already control the operating system, so they already have your disk encryption secret. There's no need for them to backdoor the TPM to give them something that the TPM's happy to give them anyway. If you don't trust Microsoft then you probably shouldn't be running Windows, and if you're not running Windows Microsoft can't update the firmware on your TPM.

So, as of now, Pluton running firmware that makes it look like a TPM just isn't a terribly interesting change to where we are already. It can't block you running software (either apps or operating systems). It doesn't enable any new privacy concerns. There's no mechanism for Microsoft to forcibly push updates to it if you're not running Windows.

Could this change in future? Potentially. Microsoft mention another use-case for Pluton "as a security processor used for non-TPM scenarios like platform resiliency", but don't go into any more detail. At this point, we don't know the full set of capabilities that Pluton has. Can it DMA? Could it play a role in firmware authentication? There are scenarios where, in theory, a component such as Pluton could be used in ways that would make it more difficult to run arbitrary code. It would be reassuring to hear more about what the non-TPM scenarios are expected to look like and what capabilities Pluton actually has.

But let's not lose sight of something more fundamental here. If Microsoft wanted to block free operating systems from new hardware, they could simply mandate that vendors remove the ability to disable secure boot or modify the key databases. If Microsoft wanted to prevent users from being able to run arbitrary applications, they could just ship an update to Windows that enforced signing requirements. If they want to be hostile to free software, they don't need Pluton to do it.

(Edit: it's been pointed out that I kind of gloss over the fact that remote attestation is a potential threat to free software, as it theoretically allows sites to block access based on which OS you're running. There's various reasons I don't think this is realistic - one is that there's just way too much variability in measurements for it to be practical to write a policy that's strict enough to offer useful guarantees without also blocking a number of legitimate users, and the other is that you can just pass the request through to a machine that is running the appropriate software and have it attest for you. The fact that nobody has actually bothered to use remote attestation for this purpose even though most consumer systems already ship with TPMs suggests that people generally agree with me on that)

09 January, 2022 12:59AM

January 08, 2022

Jonathan Dowland

2021 in Fiction

Cover for *This is How You Lose the Time War*
Cover for *Robot*
Cover for *The Glass Hotel*

Following on from last year's round-up of my reading, here's a look at the fiction I enjoyed in 2021.

I managed to read 42 books in 2021, up from 31 last year. That's partly to do with buying an ereader: 33/36% of my reading (by pages/by books) was ebooks. I think this demonstrates that ebooks have mostly complemented paper books for me, rather than replacing them.

My book of the year (although it was published in 2019) was This is How You Lose the Time War by Amal El-Mohtar and Max Gladstone: A short epistolary love story between warring time travellers and quite unlike anything else I've read for a long time. Other notables were The Glass Hotel by Emily St John Mandel and Robot by Adam Wiśniewski-Snerg.

The biggest disappointment for me was The Ministry for the Future by Kim Stanley Robinson (KSR), which I haven't even finished. I love KSR's writing: I've written about him many times on this blog, at least in 2002, 2006 and 2009, and I think I've read every other novel he's published and most of his short stories. But this one was too much of something for me. He's described this novel as the end-point of a particular journey and approach to writing he's taken, which I was relieved to learn; assuming he writes any more novels (and I really hope that he does), they will likely be in a different "mode".

My "new author discovery" for 2021 was Chris Beckett: I tore through Two Tribes and America City before promptly buying all his other work. He fits roughly into the same bracket as Adam Roberts and Christopher Priest, two of my other favourite authors.

5 of the books I read (12%) were from my "backlog" of already-purchased physical books. I'd like to try and reduce my Backlog further so I hope to push this figure up next year.

I made a small effort to read more diverse authors this year. 24% of the books I read (by book count and page count) were by women. 15% by page count were (loosely) BAME (19% by book count). Again I'd like to increase these numbers modestly in 2022.

Unlike 2020, I didn't complete any short story collections in 2021! This is partly because there was only one issue of Interzone published in all of 2021, a double-issue which I haven't yet finished. This is probably a sad data point in terms of Interzone's continued existence, but it's not dead yet.

08 January, 2022 09:32PM

John Goerzen

Make the Internet Yours Again With an Instant Mesh Network

I’m going to lead with the technical punch line, and then explain it:

Yggdrasil Network is an opportunistic mesh that can be deployed privately or as part of a global-scale network. Each node gets a stable IPv6 address (or even an entire /64) that is derived from its public key and is bound to that node as long as the node wants it (of course, it can generate a new keypair anytime) and is valid wherever the node joins the mesh. All traffic is end-to-end encrypted.

Yggdrasil will automatically discover peers on a LAN via broadcast beacons, and requires zero configuration to peer in such a way. It can also run as an overlay network atop the public Internet. Public peers serve as places to join the global network, and since it’s a mesh, if one device on your LAN joins the global network, the others will automatically have visibility on it also, thanks to the mesh routing.

It neatly solves a lot of problems of portability (my ssh sessions stay live as I move networks, for instance), VPN (incoming ports aren’t required since local nodes can connect to a public peer via an outbound connection), security, and so forth.

Now on to the explanation:

The Tyranny of IP rigidity

Every device on the Internet, at one time, had its own globally-unique IP address. This number was its identifier to the world; with an IP address, you can connect to any machine anywhere. Even now, when you connect to a computer to download a webpage or send a message, under the hood, your computer is talking to the other one by IP address.

Only, now it’s hard to get one. The Internet protocol we all grew up with, version 4 (IPv4), didn’t have enough addresses for the explosive growth we’ve seen. Internet providers and IT departments had to use a trick called NAT (Network Address Translation) to give you a sort of fake IP address, so they could put hundreds or thousands of devices behind a single public one. That, plus the mobility of devices — changing IPs whenever they change locations — has meant that a fundamental rule of the old Internet is now broken:

Every participant is an equal peer. (Well, not any more.)

Nowadays, you can’t host your own website from your phone. Or share files from your house. (Without, that is, the use of some third-party service that locks you down and acts as an intermediary.)

Back in the 90s, I worked at a university, and I, like every other employee, had a PC on my desk with an unfirewalled public IP. I installed a webserver, and poof – instant website. Nowadays, running a website from home is just about impossible. You may not have a public IP, and if you do, it likely changes from time to time. And even then, your ISP probably blocks you from running servers on it.

In short, you have to buy your way into the resources to participate on the Internet.

I wrote about these problems in more detail in my article Recovering Our Lost Free Will Online.

Enter Yggdrasil

I already gave away the punch line at the top. But what does all that mean?

  • Every device that participates gets an IP address that is fully live on the Yggdrasil network.
  • You can host a website, or a mail server, or whatever you like with your Yggdrasil IP.
  • Encryption and authentication are smaller (though not nonexistent) worries thanks to the built-in end-to-end encryption.
  • You can travel the globe, and your IP will follow you: onto a plane, from continent to continent, wherever. Yggdrasil will find you.
  • I’ve set up /etc/hosts on my laptop to use the Yggdrasil IPs for other machines on my LAN. Now I can just “ssh foo” and it will work — from home, from a coffee shop, from a 4G tether, wherever. Now, other tools like tinc can do this, obviously. And I could stop there; I could have a completely closed, private Yggdrasil network.

    Or, I can join the global Yggdrasil network. Each device, in addition to accepting peers it finds on the LAN, can also be configured to establish outbound peering connections or accept inbound ones over the Internet. Put a public peer or two in your configuration and you’ve joined the global network. Most people will probably want to do that on every device (because why not?), but you could also do that from just one device on your LAN. Again, there’s no need to explicitly build routes via it; your other machines on the LAN will discover the route’s existence and use it.

    This is one of many projects that are working to democratize and decentralize the Internet. So far, it has been quite successful, growing to over 2000 nodes. It is the direct successor to the earlier cjdns/Hyperboria and BATMAN networks, and aims to be a proof of concept and a viable tool for global expansion.

    Finally, think about how much easier development is when you don’t have to necessarily worry about TLS complexity in every single application. When you don’t have to worry about port forwarding and firewall penetration. It’s what the Internet should be.
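
    Returning to the peering configuration mentioned above: here is a minimal sketch of what the relevant part of the Yggdrasil config file (typically /etc/yggdrasil.conf) can look like. The peer address below is a placeholder rather than a real public peer, and the generated file contains many more fields (keys, listen addresses and so on):

    {
      # Add one or more public peers here to join the global network;
      # on a LAN alone, this can stay empty and multicast discovery does the rest.
      Peers: [
        "tls://peer.example.org:443"
      ]
    }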

    08 January, 2022 03:57AM by John Goerzen

    Ayoyimika Ajibade

    Nodejs 16 and Webpack 5 transition in Debian🍥

    What is Debian 🍥 ?

    Debian, also known as Debian GNU/Linux, is a free and open-source operating system (OS) currently based on the Linux kernel or the FreeBSD kernel, developed by the community-supported Debian Project; efforts are also in place to provide Debian for other kernels, primarily the Hurd.

    Fun fact about Debian 😁💃💃

    • Debian was the first Linux distribution to include a package management system for easy installation and removal of software. It was also the first Linux distribution that could be upgraded without requiring reinstallation.

    • To protect your system against “Trojan horses” and other malevolent software, Debian's servers verify that uploaded packages come from their registered Debian maintainers.

    • Debian comes with over 59000 packages as of this writing (precompiled software that is bundled up in a nice format for easy installation on your machine), a package manager (APT), and other utilities that make it possible to manage thousands of packages on thousands of computers as easily as installing a single application. All of it is FREE!

    • Debian is also the basis for many other distributions, most notably Ubuntu

    What is Webpack ?

    Webpack is a static module bundler for modern JavaScript applications. When webpack processes your application, it internally builds a dependency graph from one or more entry points and then combines every module your project needs into one or more bundles, which are static assets to serve your content from.

    What is nodejs ?

    Node.js is an open-source, cross-platform runtime environment built on Chrome's V8 JavaScript engine for easily building fast and scalable network and server-side applications. Here, JavaScript code is no longer limited to the traditional method of running in the web browser.

    What does Transitioning mean in Debian?

    Transitioning is a concept in Debian about maintaining only one version of a library like webpack or nodejs. There is a bottleneck, as other libraries and applications may not support the version we have in Debian, so we have to port that software to the new version. For example, node-mini-css-extract-plugin, node-mermaid and many other packages use webpack. In buster we had webpack4 and in bullseye we want to update it to webpack5. node-mini-css-extract-plugin already supports webpack5, but others like node-mermaid don't support it yet. So either we wait or we help those projects to update their webpack version. Check out this chat between my mentor and a community member on the transitioning of rails6

    Getting Started with Creating or Updating packages in Debian

    To be able to create or maintain packages suitable for uploading to Debian you must be in a sid/unstable environment or distribution. See recommended instructions on how to setup Debian Sid via this link

    See link on how to debianize a new package

    See link for brief steps on how to update a package to its new upstream version. For more detailed content on the whys and hows of updating a package to its new upstream version visit here

    Note💡 In updating to the new upstream version we have to watch out for breaking changes caused by both minor and major updates. As per https://semver.org, major updates (e.g. if the current version is 2.3.4, then 3.0 is a major update) of libraries with versions greater than 1.0, and minor updates (e.g. if the current version is 0.10, then 0.11 is a minor update) of libraries with versions less than 1.0, can have breaking changes

    The overall flow of webpack5 and nodejs16 transitioning in Debian

    After grasping the fundamental process and flow of how to update a package, you are well on your way to transitioning🚀🚀. Transitioning webpack or nodejs involves building and testing the packages that depend on them, called reverse-dependencies. These reverse dependencies are built and tested against the new version, which is usually uploaded to the experimental distribution first. If the reverse dependencies build and test successfully, both the reverse dependencies and the dependency itself (in this case nodejs or webpack) are then uploaded to the unstable/sid distribution for further processing. A rough sketch of one such rebuild follows below.
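
    As an illustration only (the package names are examples, and the Debian wiki transition pages describe the full workflow): in a sid environment you might install the new versions from experimental, rebuild one reverse dependency, and run its tests:

    # pull webpack 5 / nodejs 16 from experimental into a sid chroot or VM
    sudo apt install -t experimental webpack nodejs

    # rebuild a reverse dependency against them
    apt-get source node-mini-css-extract-plugin
    cd node-mini-css-extract-plugin-*/
    dpkg-buildpackage -us -uc

    # run its autopkgtests directly on the current system
    autopkgtest ../node-mini-css-extract-plugin_*.changes -- null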

    The major guidelines to follow while transitioning are

    • Find a list of reverse dependencies to fix

    • See if new upstream versions of reverse dependencies are available that support the transitioning version

    • Check whether those new upstream versions of reverse dependencies actually work with the transitioning version

    • Report bugs found while rebuilding and testing reverse dependencies in Debian

    • Forward bugs found while rebuilding and testing reverse dependencies upstream

    • Fix or update packages and forward patches upstream

    After a successful transitioning phase, users of the Debian OS have access to the latest (as well as older) versions of these packages via apt, based on their preferences. That brings the benefit of more features, bug fixes and updated security patches from those packages. All of this is possible thanks to the community of amazing people💗🔥

    08 January, 2022 03:30AM by Ayoyimika Ajibade

    January 07, 2022

    Ingo Juergensmann

    Moving my repositories from Github to Codeberg.org

    Some weeks ago I moved my repositories from Github (evil, Microsoft, blabla) to Codeberg. Codeberg is a non-profit organisation located in Germany. When you really dislike Microsoft products, it is a somewhat natural reaction (at least for me) to move away from Github, which was bought by Microsoft, to a more independent service provider for hosting source code. Another nice thing about Codeberg is that it offers a migration tool from Github to Codeberg. Additionally, Codeberg is also on Mastodon. If you are looking for a good service to host your git repositories and want to move away from Github as well, please give Codeberg a try.

    So, please update your git settings from https://github.com/ingoj to https://codeberg.org/Windfluechter (or the specific repo).

    07 January, 2022 10:50PM by ij

    Ayoyimika Ajibade

    Everyone Struggles

    Starting anything new always has in it an element of uncertainty, doubt, fear, and struggle to forge ahead. This has been my current situation as an Outreachy intern working on the transition to nodejs16 and webpack5, which is about updating all packages that depend on nodejs14 and webpack4 to work well with the updated versions of nodejs16 and webpack5 in the Debian operating system. Juicy right!😋

    As a software developer, struggling to grasp both basic and advanced knowledge of a concept can seem daunting. Much like learning anything new, you can be overwhelmed knowing there is a whole lot of other new concepts, tools, processes and languages you have to learn that are linked to what you are currently learning, while you are still struggling to grasp its fundamental idea. Yet imbued in any struggle to get a solution to a problem is where innovation and invention lie, and our learning improves as we dive into fact-finding, forming a hypothesis after a series of tests and ultimately proffering a solution.

    Some of my struggles as an intern with Debian have been a lack of shell scripting skills, as that is one of the core languages to understand in order to navigate your way around maintaining packages for Debian. Funny enough, another struggle is having just an intermediate knowledge of the JavaScript programming language, since arguably a basic knowledge of JavaScript is necessary for building and testing JavaScript packages in Debian, and my core language is Python. The good thing is that the more I keep at it, the faster those struggles reduce.

    Now to the fun part! Having a community of developers who have been through the struggling phase is divine, as they make your learning experience much easier. My mentors and other community members have made learning to package modules for Debian much easier, as all hands are on deck to help out with our challenges. I remember how wonderful it felt when my first contribution got merged, and I became more encouraged to update more packages. This helped me a lot in the contribution stage for Debian, as I became more familiar with how the system worked. I’m super grateful to my mentors and co-interns as they are always there to assist me.

    How I Navigate my way through my struggles

    I guess the first thing about any challenge is to be aware of it and admit your limitations in particular knowledge, then you move on to creatively seek solutions by asking for help from those who know the way. Voila! Now comes the part where you have to take up their solutions, ideas and opinions and make them work for your particular scenario, which is a skill set that all software developers must have.

    Going through documentation has immensely helped solve my problems much faster and build new knowledge, as I get the fundamental idea of why and how things work. I also try to break each concept down into steps, achieve my goals for each step, then build all solutions in each step together, surfing the internet to find solutions also has a huge benefit.

    Vocabulary terms Used in Debian 🍥

    1. uscan => a tool to identify and download upstream source code from the repository, also compressing it into the required format.

    2. apt => a package manager to manage packages in Debian, similar to pip in python, npm in javascript.

    3. stretch/buster/bullseye/bookworm/sid => Debian release codenames: oldoldstable, Debian 9 (stretch) - the release before the previous stable release; oldstable, Debian 10 (buster) - the previous stable release; stable, Debian 11 (bullseye) - the current stable release; testing, Debian 12 (bookworm) - the next-generation stable release; and unstable (sid) - the unstable development release where new or updated packages are introduced. To understand more, see the Debian release cycle documentation.

    4. reverse-rebuild => is building all modules that depend on a package in Debian while building the main package.

    5. lintian => A helper tool used to check for inconsistencies and errors in a Debian Package based on Debian standards.

    6. pkg-js-tools => A collection of tools to aid packaging Node modules in Debian.

    7. dpkg-buildpackage => A command to build upstream code in an unclean chroot or environment.

    8. quilt => A patch creation and management automation script. quilt helps manage a series of patches that a Debian package maintainer needs to be applied to upstream source when building the package.

    9. autopkgtest => a script used to test an installed binary package using the source package's tests

    10. RFS => (Request For Sponsorship) Working in the Debian ecosystem includes two roles: a Debian Maintainer, with restricted rights and privileges, or a Debian Developer, with full rights and privileges such as uploading to the Debian archive. As a new contributor or a Debian Maintainer you can file an RFS so that your work can be uploaded to the Debian archive by a Debian Developer, much like having your contribution accepted 😸

    There are so many terms and tools you have to get accustomed to, but they are easy to understand and use, as plenty of frequently updated wiki documentation is available to guide you through, plus a whole lot of community members you can ask questions of.
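
    To give a feel for how a few of these tools fit together, here is a rough sketch of updating an existing package (the package name is only a placeholder):

    # download the new upstream release declared in debian/watch
    uscan --verbose

    # build the package
    dpkg-buildpackage -us -uc

    # check the result for policy issues
    lintian ../node-example_2.0.0-1_amd64.changes

    # run the package's tests, here directly on the current system
    autopkgtest ../node-example_2.0.0-1_amd64.changes -- null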

    “strength and growth come only through continuous effort and struggle.” — Napoleon Hill

    07 January, 2022 08:02AM by Ayoyimika Ajibade

    Reproducible Builds (diffoscope)

    diffoscope 199 released

    The diffoscope maintainers are pleased to announce the release of diffoscope version 199. This version includes the following changes:

    [ Chris Lamb ]
    * Support both variants of "odt2txt", including the one provided by unoconv.
      (Closes: reproducible-builds/diffoscope#298)
    
    [ Jelle van der Waa ]
    * Add external tool reference on Arch Linux for xb-tool.
    

    You can find out more by visiting the project homepage.

    07 January, 2022 12:00AM

    January 06, 2022

    Jacob Adams

    Linux Hibernation Documentation

    Recently I’ve been curious about how hibernation works on Linux, as it’s an interesting interaction between hardware and software. There are some notes in the Arch wiki and the kernel documentation (as well as some kernel documentation on debugging hibernation and on sleep states more generally), and of course the ACPI Specification

    The Formal Definition

    ACPI (Advanced Configuration and Power Interface) is, according to the spec, “an architecture-independent power management and configuration framework that forms a subsystem within the host OS” which defines “a hardware register set to define power states.”

    ACPI defines four global system states: G0 (working/on), G1 (sleeping), G2 (soft off), and G3 (mechanical off) [1]. Within G1 there are 4 sleep states, numbered S1 through S4. There are also S0 and S5, which are equivalent to G0 and G2 respectively [2].

    Sleep

    According to the spec, the ACPI S1-S4 states all do the same thing from the operating system’s perspective, but each saves progressively more power, so the operating system is expected to pick the deepest of these states when entering sleep. However, most operating systems [3] distinguish between S1-S3, which are typically referred to as sleep or suspend, and S4, which is typically referred to as hibernation.

    S1: CPU Stop and Cache Wipe

    The CPU caches are wiped and then the CPU is stopped, which the spec notes is equivalent to the WBINVD instruction followed by the STPCLK signal on x86. However, nothing is powered off.

    S2: Processor Power off

    The system stops the processor and most system clocks (except the real time clock), then powers off the processor. Upon waking, the processor will not continue what it was doing before, but instead use its reset vector [4].

    S3: Suspend/Sleep (Suspend-to-RAM)

    Mostly equivalent to S2, but hardware ensures that only memory and whatever other hardware memory requires are powered.

    S4: Hibernate (Suspend-to-Disk)

    In this state, all hardware is completely powered off and an image of the system is written to disk, to be restored from upon reapplying power. Writing the system image to disk can be handled by the operating system if supported, or by the firmware.

    Linux Sleep States

    Linux has its own set of sleep states which mostly correspond with ACPI states.
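
    On a running system these states can be inspected through sysfs. A quick sketch (the paths are the standard kernel interfaces; the example outputs are illustrative and will differ per machine):

    # available sleep states: "freeze" is suspend-to-idle, "standby" is S1,
    # "mem" is suspend-to-RAM and "disk" is hibernation
    cat /sys/power/state
    # e.g.: freeze mem disk

    # what "mem" means on this machine (s2idle vs deep/S3)
    cat /sys/power/mem_sleep
    # e.g.: s2idle [deep]

    # what happens after the hibernation image has been written
    # ("suspend" here corresponds to the hybrid suspend described below)
    cat /sys/power/disk
    # e.g.: [platform] shutdown reboot suspend test_resume

    # entering a state is done by writing to /sys/power/state (as root), e.g.
    # echo disk > /sys/power/state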

    Suspend-to-Idle

    This is a software only sleep that puts all hardware into the lowest power state it can, suspends timekeeping, and freezes userspace processes.

    All userspace and some kernel threads [5], except those tagged with PF_NOFREEZE, are frozen before the system enters a sleep state. Frozen tasks are sent to the __refrigerator(), where they set TASK_UNINTERRUPTIBLE and PF_FROZEN and infinitely loop until PF_FROZEN is unset [6].

    This prevents these tasks from doing anything during the imaging process. Any userspace process running on a different CPU while the kernel is trying to create a memory image would cause havoc. This is also done because any filesystem changes made during this would be lost and could cause the filesystem and its related in-memory structures to become inconsistent. Also, creating a hibernation image requires about 50% of memory free, so no tasks should be allocating memory, which freezing also prevents.

    Standby

    This is equivalent to ACPI S1.

    Suspend-to-RAM

    This is equivalent to ACPI S3.

    Hibernation

    Hibernation is mostly equivalent to ACPI S4 but does not require S4, only requiring “low-level code for resuming the system to be present for the underlying CPU architecture” according to the Linux sleep state docs.

    To hibernate, everything is stopped and the kernel takes a snapshot of memory. Then, the system writes out the memory image to disk. Finally, the system either enters S4 or turns off completely.

    When the system restores power it boots a new kernel, which looks for a hibernation image and loads it into memory. It then overwrites itself with the hibernation image and jumps to a resume area of the original kernel [7]. The resumed kernel restores the system to its previous state and resumes all processes.

    Hybrid Suspend

    Hybrid suspend does not correspond to an official ACPI state, but instead is effectively a combination of S3 and S4. The system writes out a hibernation image, but then enters suspend-to-RAM. If the system wakes up from suspend it will discard the hibernation image, but if the system loses power it can safely restore from the hibernation image.

    1. The difference between soft and mechanical off is that mechanical off is “entered and left by a mechanical means (for example, turning off the system’s power through the movement of a large red switch)” 

    2. It’s unclear to me why G and S states overlap like this. I assume this is a relic of an older spec that only had S states, but I have not as yet found any evidence of this. If someone has any information on this, please let me know and I’ll update this footnote. 

    3. Of the operating systems I know of that support ACPI sleep states (I checked Windows, Mac, Linux, and the three BSDs [8]), only MacOS does not allow the user to deliberately enable hibernation, instead supporting a hybrid suspend it calls safe sleep.

    4. “The reset vector of a processor is the default location where, upon a reset, the processor will go to find the first instruction to execute. In other words, the reset vector is a pointer or address where the processor should always begin its execution. This first instruction typically branches to the system initialization code.” Xiaocong Fan, Real-Time Embedded Systems, 2015 

    5. All kernel threads are tagged with PF_NOFREEZE by default, so they must specifically opt-in to task freezing. 

    6. This is not from the docs, but from kernel/freezer.c which also notes “Refrigerator is place where frozen processes are stored :-).” 

    7. This is the operation that requires “special architecture-specific low-level code”. 

    8. Interestingly NetBSD has a setting to enable hibernation, but does not actually support hibernation 

    06 January, 2022 12:00AM

    January 05, 2022

    Reproducible Builds

    Reproducible Builds in December 2021

    Welcome to the December 2021 report from the Reproducible Builds project! In these reports, we try and summarise what we have been up to over the past month, as well as what else has been occurring in the world of software supply-chain security.

    As a quick recap of what reproducible builds is trying to address, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries. The motivation behind the reproducible builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised. As always, if you would like to contribute to the project, please get in touch with us directly or visit the Contribute page on our website.


    Early in December, Julien Voisin blogged about setting up a rebuilderd instance in order to reproduce Tails images. Working on previous work from 2018, Julien has now set up a public-facing instance which is providing build attestations.

    As Julien dryly notes in his post, “Currently, this isn’t really super-useful to anyone, except maybe some Tails developers who want to check that the release manager didn’t backdoor the released image.” Naturally, we would contend — sincerely — that this is indeed useful.


    The secure/anonymous Tor browser now supports reproducible source releases. According to the project’s changelog, version 0.4.7.3-alpha of Tor can now build reproducible tarballs via the make dist-reprod command. This issue was tracked via Tor issue #26299.


    Fabian Keil posted a question to our mailing list this month asking how they might analyse differences in images produced with the FreeBSD and ElectroBSD’s mkimg and makefs commands:

    After rebasing ElectroBSD from FreeBSD stable/11 to stable/12
    I recently noticed that the "memstick" images are unfortunately
    still not 100% reproducible.

    Fabian’s original post generated a short back-and-forth with Chris Lamb regarding how diffoscope might be able to support the particular format of images generated by this command set.


    diffoscope

    diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 195, 196, 197 and 198 to Debian, as well as made the following changes:

    • Support showing Ordering differences only within .dsc field values. []
    • Add support for ‘XMLb’ files. []
    • Also add, for example, /usr/lib/x86_64-linux-gnu to our local binary search path. []
    • Support OCaml versions 4.11, 4.12 and 4.13. []
    • Drop some unnecessary has_same_content_as logging calls. []
    • Replace token variable with an anonymously-named variable instead to remove extra lines. []
    • Don’t use the runtime platform’s native endianness when unpacking .pyc files. This fixes test failures on big-endian machines. []

    Mattia Rizzolo also made a number of changes to diffoscope this month as well, such as:

    • Also recognize GnuCash files as XML. []
    • Support the pgpdump PGP packet visualiser version 0.34. []
    • Ignore the new Lintian tag binary-with-bad-dynamic-table. []
    • Fix the Enhances field in debian/control. []

    Finally, Brent Spillner fixed the version detection for Black ‘uncompromising code formatter’ [], Jelle van der Waa added an external tool reference for Arch Linux [] and Roland Clobus added support for reporting when the GNU_BUILD_ID field has been modified []. Thank you for your contributions!


    Distribution work

    In Debian this month, 70 reviews of packages were added, 27 were updated and 41 were removed, adding to our database of knowledge about specific issues. A number of issue types were created as well.

    strip-nondeterminism version 1.13.0-1 was uploaded to Debian unstable by Holger Levsen. It included contributions already covered in previous months as well as new ones from Mattia Rizzolo, particularly that the dh_strip_nondeterminism Debian integration interface uses the new get_non_binnmu_date_epoch() utility when available: this is important to ensure that strip-nondeterminism does not break some kinds of binNMUs.


    In the world of openSUSE, however, Bernhard M. Wiedemann posted his monthly reproducible builds status report.


    In NixOS, work towards the longer-term goal of making the graphical installation image reproducible is ongoing. For example, Artturin made the gnome-desktop package reproducible.


    Upstream patches

    The Reproducible Builds project attempts to fix as many currently-unreproducible packages as possible. In December, we wrote a large number of such patches.


    Testing framework

    The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:

    • Holger Levsen:

      • Run the Debian scheduler less often. []
      • Fix the name of the Debian ‘testing’ suite name. []
      • Detect builds that are rescheduling due to problems with the diffoscope container. []
      • No longer special-case particular machines having a different /boot partition size. []
      • Automatically fix failed apt-daily and apt-daily-upgrade services [], failed e2scrub_all.service & user@ systemd units [][] as well as ‘generic’ build failures [].
      • Simplify a script to powercycle arm64 architecture nodes hosted at/by codethink.co.uk. []
      • Detect if the udd-mirror.debian.net service is down. []
      • Various miscellaneous node maintenance. [][]
    • Roland Clobus (Debian ‘live’ image generation):

      • If the latest snapshot is not complete yet, try to use the previous snapshot instead. []
      • Minor: whitespace correction + comment correction. []
      • Use unique folders and reports for each Debian version. []
      • Turn off debugging. []
      • Add a better error description for incorrect/missing arguments. []
      • Report non-reproducible issues in Debian sid images. []

    Lastly, Mattia Rizzolo updated the automatic logfile parsing rules in a number of ways (eg. to ignore a warning about the Python setuptools deprecation) [][] and Vagrant Cascadian adjusted the config for the Squid caching proxy on a node. []



    If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via our IRC channel or mailing list.

    05 January, 2022 02:44PM

    Jonathan Wiltshire

    Continuing adventures of the mystery cable

    My 4×2 has been in action again, trying to find the remainder of the mystery sometimes-4mm/sometimes-2.5mm/sometimes-1.5mm cable. It finally appeared in the tiniest gap possible between back wall and joist.

    As we had suspected by tracing everything else, the junction is an unfused union of all three cable types with a 230V 32A circuit breaker on one end and a light switch on the other. So in the event of fault current at the kitchen lights, the 1.5mm cable is definitely going to burn out and almost certainly set fire to the flat roof. Delightful.

    It is no longer like this.

    More hunting for the mystery cable (bonus glimpse of one of the new light fittings)
    The 4×2 ceiling removal device in action
    At last, quarry in sight!
    Mystery Cable Junction Box of Fire and Doom ™?
    All the terrifying branches removed and ready to go back into the ceiling for now, until the next room is refurbished

    05 January, 2022 01:39PM by Jonathan

    hackergotchi for Thomas Lange

    Thomas Lange

    FAI.me service now support backports for Debian 11 (bullseye)

    The FAI.me service for creating customized installation and cloud images now supports a backports kernel for the stable release Debian 11 (aka bullseye). If you enable the backports option, you will currently get kernel 5.14. This will help you if you have newer hardware that is not supported by the default kernel 5.10. The backports option is also still available for the images when using the old Debian 10 (buster) release.

    The URL of the FAI.me service is

    https://fai-project.org/FAIme/

    FAI.me

    05 January, 2022 11:46AM

    January 04, 2022

    Russell Coker

    Terrorists Inspired by Fiction

    The Tom Clancy book Debt of Honor published in August 1994 first introduced the concept of a heavy passenger aircraft being used as a weapon by terrorists against a well defended building. In April 1994 there was an attempt to hijack and deliberately crash FedEx flight 705. It’s possible for a book to be changed 4 months before publication, but it seems unlikely that a significant plot point in a series of books was changed in such a small amount of time, so it’s likely that Tom Clancy got the idea first. There have been other variations on that theme, such as the Yokosuka MXY-7 kamikaze flying bomb (known by the Allies as “Baka”, which is Japanese for idiot). But Tom Clancy seemed to pioneer the idea of a commercial passenger jet being subverted for the purpose of ground attack.

    7 years after Tom Clancy’s book was published, the 9/11 hijackings happened.

    The TV series Black Mirror first aired in 2011, and the first episode was about terrorists kidnapping a princess and demanding that the UK PM perform an indecent act with a pig for her release. While the plot was a little extreme (the entire series is extreme) the basic concept of sexual extortion based on terrorist acts is something that could be done in real life, and if terrorists were inspired by this they are taking longer than expected to do it.

    Most democracies seem to end up with two major parties that are closely matched. Even if a government was strict about not negotiating with terrorists it seems likely that terrorists demanding that a politician perform an unusual sex act on TV would change things, supporters would be divided into groups that support and oppose negotiating. Discussions wouldn’t be as civil as when the negotiation involves money or freeing prisoners. If an election result was perceived to have been influenced by such terrorism then supporters of the side that lost would claim it to be unfair and reject the result. If the goal of terrorists was to cause chaos then that would be one way of achieving it, and they have had over 10 years to consider this possibility.

    Are we overdue for a terror attack inspired by Black Mirror?

    04 January, 2022 11:00PM by etbe

    Jelmer Vernooij

    Personal Streaming Audio Server

    For a while now, I’ve been looking for a good way to stream music from my home music collection on my phone.

    There are quite a few options for music servers that support streaming. However, Android apps that can stream music from one of those servers tend to be unmaintained, clunky or slow (or more than one of those).

    It is possible to use something that runs in a web server, but that means no offline caching - which can be quite convenient in spots without connectivity, such as the Underground or other random bits of London with poor cell coverage.

    Server

    Most music servers today support some form of the subsonic API.

    I’ve tried a couple, with mixed results:

    • supysonic; Python. Slow. Ran into some issues with subsonic clients. No real web UI.
    • gonic; Go. Works well & fast enough. Minimal web UI, i.e. no ability to play music from a browser.
    • airsonic; Java. Last in a chain of (abandoned) forks. More effort to get to work, and resource intensive.

    Eventually, I’ve settled on Navidrome. It’s got a couple of things going for it:

    • Good subsonic implementation that worked with all the Android apps I used it with.
    • Great Web UI for use in a browser

    I run Navidrome in Kubernetes. It’s surprisingly easy to get going. Here’s the deployment I’m using:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
     name: navidrome
    spec:
     replicas: 1
     selector:
       matchLabels:
         app: navidrome
     template:
       metadata:
         labels:
           app: navidrome
       spec:
         containers:
           - name: navidrome
             image: deluan/navidrome:latest
             imagePullPolicy: Always
             resources:
               limits:
                 cpu: ".5"
                 memory: "2Gi"
               requests:
                 cpu: "0.1"
                 memory: "10M"
             ports:
               - containerPort: 4533
             volumeMounts:
               - name: navidrome-data-volume
                 mountPath: /data
               - name: navidrome-music-volume
                 mountPath: /music
             env:
               - name: ND_SCANSCHEDULE
                 value: 1h
               - name: ND_LOGLEVEL
                 value: info
               - name: ND_SESSIONTIMEOUT
                 value: 24h
               - name: ND_BASEURL
                 value: /navidrome
             livenessProbe:
                httpGet:
                  path: /navidrome/app
                  port: 4533
                initialDelaySeconds: 30
                periodSeconds: 3
                timeoutSeconds: 90
         volumes:
            - name: navidrome-data-volume
              hostPath:
               path: /srv/navidrome
               type: Directory
            - name: navidrome-music-volume
              hostPath:
                path: /srv/media/music
                type: Directory
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: navidrome
    spec:
      ports:
        - port: 4533
          name: web
      selector:
        app: navidrome
      type: ClusterIP
    

    At the moment, this deployment is still tied to the machine with my music on it since it relies on hostPath volumes, but I’m planning to move that to ceph in the future.

    I then expose this service on /navidrome on my private domain (here replaced with example.com) using an Ingress:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: navidrome
    spec:
      ingressClassName: nginx
      rules:
      - host: example.com
        http:
          paths:
          - backend:
              service:
                name: navidrome
                port:
                  name: web
            path: /navidrome(/|$)(.*)
            pathType: Prefix
    

    Client

    On the desktop, I usually just use navidrome’s web interface. Clementine’s support for subsonic is also okay. sublime-music is meant to be a music player specifically for Subsonic, but I’ve not really found it stable enough for day-to-day usage.

    There are various Android clients for Subsonic, but I’ve only really considered the Open Source ones that are hosted on F-Droid. Most of those are abandoned, but D-Sub works pretty well - as does my preferred option, Subtracks.

    04 January, 2022 06:00PM by Jelmer Vernooij

    hackergotchi for Jonathan McDowell

    Jonathan McDowell

    Upgrading from a CC2531 to a CC2538 Zigbee coordinator

    Previously I set up a CC2531 as a Zigbee coordinator for my home automation. This has turned out to be a good move, with the 4 gang wireless switch being particularly useful. However the range of the CC2531 is fairly poor; it has a simple PCB antenna. It’s also a very basic device. I set about trying to improve the range and scalability and settled upon a CC2538 + CC2592 device, which features an MMCX antenna connector. This device also has the advantage that it’s ARM based, which I’m hopeful means I might be able to build some firmware myself using a standard GCC toolchain.

    For now I fetched the JetHome firmware from https://github.com/jethome-ru/zigbee-firmware/tree/master/ti/coordinator/cc2538_cc2592 (JH_2538_2592_ZNP_UART_20211222.hex) - while it’s possible to do USB directly with the CC2538 my board doesn’t have those bits so going the external USB UART route is easier.

    The device had some existing firmware on it, so I needed to erase this to force a drop into the boot loader. That means soldering up the JTAG pins and hooking it up to my Bus Pirate for OpenOCD goodness.

    OpenOCD config
    source [find interface/buspirate.cfg]
    
    buspirate_port /dev/ttyUSB1
    buspirate_mode normal
    buspirate_vreg 1
    buspirate_pullup 0
    
    transport select jtag
    
    source [find target/cc2538.cfg]
    
    Steps to erase
    $ telnet localhost 4444
    Trying ::1...
    Trying 127.0.0.1...
    Connected to localhost.
    Escape character is '^]'.
    Open On-Chip Debugger
    > mww 0x400D300C 0x7F800
    > mww 0x400D3008 0x0205
    > shutdown
    shutdown command invoked
    Connection closed by foreign host.
    

    At that point I can switch to the UART connection (on PA0 + PA1) and flash using cc2538-bsl:

    $ git clone https://github.com/JelmerT/cc2538-bsl.git
    $ cc2538-bsl/cc2538-bsl.py -p /dev/ttyUSB1 -e -w -v ~/JH_2538_2592_ZNP_UART_20211222.hex
    Opening port /dev/ttyUSB1, baud 500000
    Reading data from /home/noodles/JH_2538_2592_ZNP_UART_20211222.hex
    Firmware file: Intel Hex
    Connecting to target...
    CC2538 PG2.0: 512KB Flash, 32KB SRAM, CCFG at 0x0027FFD4
    Primary IEEE Address: 00:12:4B:00:22:22:22:22
        Performing mass erase
    Erasing 524288 bytes starting at address 0x00200000
        Erase done
    Writing 524256 bytes starting at address 0x00200000
    Write 232 bytes at 0x0027FEF88
        Write done
    Verifying by comparing CRC32 calculations.
        Verified (match: 0x74f2b0a1)
    

    I then wanted to migrate from the old device to the new without having to repair everything. So I shut down Home Assistant and backed up the CC2531 network information using zigpy-znp (which is already installed for Home Assistant):

    python3 -m zigpy_znp.tools.network_backup /dev/zigbee > cc2531-network.json
    

    I copied the backup to cc2538-network.json and modified the coordinator_ieee to be the new device’s MAC address (rather than end up with 2 devices claiming the same MAC if/when I reuse the CC2531) and did:

    python3 -m zigpy_znp.tools.network_restore --input cc2538-network.json /dev/ttyUSB1
    

    The old CC2531 needed to be unplugged first, otherwise I got a RuntimeError: Network formation refused, RF environment is likely too noisy. Temporarily unscrew the antenna or shield the coordinator with metal until a network is formed. error.

    After that I updated my udev rules to map the CC2538 to /dev/zigbee and restarted Home Assistant. To my surprise it came up and detected the existing devices without any extra effort on my part. However that resulted in 2 coordinators being shown in the visualisation, with the old one turning up as unk_manufacturer. Fixing that involved editing /etc/homeassistant/.storage/core.device_registry and removing the entry which had the old MAC address, removing the device entry in /etc/homeassistant/.storage/zha.storage for the old MAC and then finally firing up sqlite to modify the Zigbee database:

    $ sqlite3 /etc/homeassistant/zigbee.db
    SQLite version 3.34.1 2021-01-20 14:10:07
    Enter ".help" for usage hints.
    sqlite> DELETE FROM devices_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
    sqlite> DELETE FROM endpoints_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
    sqlite> DELETE FROM in_clusters_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
    sqlite> DELETE FROM neighbors_v6 WHERE ieee = '00:12:4b:00:11:11:11:11' OR device_ieee = '00:12:4b:00:11:11:11:11';
    sqlite> DELETE FROM node_descriptors_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
    sqlite> DELETE FROM out_clusters_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
    sqlite> .quit
    

    So far it all seems a bit happier than with the CC2531; I’ve been able to pair a light bulb that was previously detected but would not integrate, which suggests the range is improved.
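
    For reference, the udev rule that maps the adapter to /dev/zigbee is something along these lines. This is only a guess at the shape of such a rule rather than the exact one used here, and the vendor/product IDs shown are for a common CP210x USB UART bridge; they need to match whatever serial chip is actually on the board:

    # /etc/udev/rules.d/99-zigbee.rules (illustrative)
    SUBSYSTEM=="tty", ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="ea60", SYMLINK+="zigbee"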

    (This post another in the set of “things I should write down so I can just grep my own website when I forget what I did to do foo”.)

    04 January, 2022 03:50PM

    Russell Coker

    Big Smart TVs

    Recently a relative who owned a 50″ Plasma TV asked me for advice on getting a new TV. Looking at the options, all the TVs seem to be smart TVs (running Android with built-in support for YouTube and Netflix) and most of them seem to be 4K resolution. 4K doesn’t provide much benefit now as most people don’t have Blu-ray players and discs, there aren’t a lot of 4K YouTube videos, and most streaming services don’t offer 4K resolution. But as 4K doesn’t cost much more it doesn’t make sense not to get it.

    I gave my relative a list of good options from Kogan (the Australian company that has the cheapest consumer electronics) and they chose a 65″ 4K Smart TV from Kogan. That only cost $709 plus delivery which is reasonably affordable for something that will presumably last for a long time and be used by many people.

    Netflix on a web browser won’t do more than FullHD resolution unless you use Edge on Windows 10. But Netflix on the smart TV has a row advertising 4K shows, which indicates that 4K is supported. There are some 4K videos on YouTube but not a lot at this time.

    Size

    It turns out that 65″ is very big. It didn’t fit on the table that had been used for the 50″ Plasma TV.

    Rtings.com has a good article about TV size vs distance [1]. According to their calculations if you want to sit 2 meters away from a TV and have a 30 degree field of view (recommended for “mixed” use) then a 45″ TV is ideal.

    According to their calculations on pixel sizes, if you have a FullHD display (or the common modern case a FullHD signal displayed on a 4K monitor) that is between 1.8 and 2.5 meters away from you then a 45″ TV is the largest that will be useful. To take proper advantage of a monitor larger than 45″ at a distance of 2 meters you need a 4K signal. If you have a 4K signal then you can get best results by having a 45″ monitor less than 1.8 meters away from you. As most TV watching involves less than 3 people it shouldn’t be inconvenient to be less than 1.8 meters away from the TV.

    The 65″ TV weighs 21Kg according to the specs, that isn’t a huge amount for something small, but for something as large and inconvenient as a 65″ TV it’s impossible for one person to safely move. Kogan sells 43″ TVs that weigh 6Kg, that’s something that most adults could move with one hand. I think that a medium size TV that can be easily moved to a convenient location would probably give an equivalent viewing result to an extremely large TV that can’t be moved at all. I currently have a 40″ LCD TV, the only reason I have that is because a friend didn’t need it; the previous 32″ TV that I used was adequate for my needs. Most of my TV viewing is on a 28″ monitor, which I find adequate for 2 or 3 people. So I generally wouldn’t recommend a 65″ TV for anyone.

    Android for TVs

    Android wasn’t designed for TVs and doesn’t work that well on them. Having buttons on the remote for Netflix and YouTube is handy, but it would be nice if there were programmable buttons for other commonly used apps or a way to switch between the last few apps (like ALT-TAB on a PC).

    One good feature of Android for TV is that it can display a set of rows of shows (similar to the Netflix method of displaying) where each row is from a different app. The apps I’ve installed on that TV which support the row view are Netflix, YouTube, YouTube Music, ABC iView (that’s Australian ABC), 7plus, 9now, and SBS on Demand. That’s nice, now we just need channel 10’s app to support that to have coverage for all Australian free TV stations in the Android TV interface.

    Conclusion

    It’s a nice TV and it generally works well. Android is OK for TV use but far from great. It is running Android version 9, maybe a newer version of Android works better on TVs.

    It’s too large for reasonable people to use in a home. I’ve seen smaller TVs used for 20 people in an office in a video conference. It’s cheap enough that most people can afford it, but it’s easier and more convenient to have something smaller and lighter.

    04 January, 2022 11:37AM by etbe

    January 03, 2022

    Paul Wise

    FLOSS Activities December 2021

    Focus

    This month I didn't have any particular focus. I just worked on issues in my info bubble.

    Changes

    Issues

    Review

    • Spam: reported 166 Debian mailing list posts
    • Patches: reviewed libpst upstream patches
    • Debian packages: sponsored nsis, memtest86+
    • Debian wiki: RecentChanges for the month
    • Debian BTS usertags: changes for the month
    • Debian screenshots:

    Administration

    • libpst: setup GitHub presence, migrate from hg to git, requested details from bug reporters
    • plac: cleaned up git repo anomalies
    • Debian BTS: unarchive/reopen/triage bugs for reintroduced packages: stardict, node-carto
    • Debian wiki: unblock IP addresses, approve accounts

    Communication

    • Respond to queries from Debian users and contributors on the mailing lists and IRC

    Sponsors

    The purple-discord, python-plac, sptag, smart-open, libpst, memtest86+, oci-python-sdk work was sponsored. All other work was done on a volunteer basis.

    03 January, 2022 11:35PM

    Ian Jackson

    Debian’s approach to Rust - Dependency handling

    tl;dr: Faithfully following upstream semver, in Debian package dependencies, is a bad idea.

    Introduction

    I have been involved in Debian for a very long time. And I’ve been working with Rust for a few years now. Late last year I had cause to try to work on Rust things within Debian.

    When I did, I found it very difficult. The Debian Rust Team were very helpful. However, the workflow and tooling require very large amounts of manual clerical work - work which it is almost impossible to do correctly since the information required does not exist. I had wanted to package a fairly straightforward program I had written in Rust, partly as a learning exercise. But, unfortunately, after I got stuck in, it looked to me like the effort would be wildly greater than I was prepared for, so I gave up.

    Since then I’ve been thinking about what I learned about how Rust is packaged in Debian. I think I can see how to fix some of the problems. Although I don’t want to go charging in and try to tell everyone how to do things, I felt I ought at least to write up my ideas. Hence this blog post, which may become the first of a series.

    This post is going to be about semver handling. I see problems with other aspects of dependency handling and source code management and traceability as well, and of course if my ideas find favour in principle, there are a lot of details that need to be worked out, including some kind of transition plan.

    How Debian packages Rust, and build vs runtime dependencies

    Today I will be discussing almost entirely build-dependencies; Rust doesn’t (yet?) support dynamic linking, so built Rust binaries don’t have Rusty dependencies.

    However, things are a bit confusing because even the Debian “binary” packages for Rust libraries contain pure source code. So for a Rust library package, “building” the Debian binary package from the Debian source package does not involve running the Rust compiler; it’s just file-copying and format conversion. The library’s Rust dependencies do not need to be installed on the “build” machine for this.

    So I’m mostly going to be talking about Depends fields, which are Debian’s way of talking about runtime dependencies, even though they are used only at build-time. The way this works is that some ultimate leaf package (which is supposed to produce actual executable code) Build-Depends on the libraries it needs, and those Depends on their under-libraries, so that everything needed is installed.
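
    As a purely invented illustration of that chain (the package names are made up): a leaf package’s debian/control might contain

    Source: my-rust-tool
    Build-Depends: debhelper-compat (= 13), cargo, librust-foo-dev

    while the library package it pulls in has

    Package: librust-foo-dev
    Depends: librust-bar-dev, librust-baz-dev

    so installing the leaf package’s Build-Depends drags the whole tree of Rust library packages (which, as noted above, contain only source code) onto the build machine.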

    What do dependencies mean and what are they for anyway?

    In systems where packages declare dependencies on other packages, it generally becomes necessary to support “versioned” dependencies. In all but the most simple systems, this involves an ordering (or similar) on version numbers and a way for a package A to specify that it depends on certain versions of B.

    Both Debian and Rust have this. Rust upstream crates have version numbers and can specify their dependencies according to semver. Debian’s dependency system can represent that.

    So it was natural for the designers of the scheme for packaging Rust code in Debian to simply translate the Rust version dependencies to Debian ones. However, while the two dependency schemes seem equivalent in the abstract, their concrete real-world semantics are totally different.

    These different package management systems have different practices and different meanings for dependencies. (Interestingly, the Python world also has debates about the meaning and proper use of dependency versions.)

    The epistemological problem

    Consider some package A which is known to depend on B. In general, it is not trivial to know which versions of B will be satisfactory. I.e., whether a new B, with potentially-breaking changes, will actually break A.

    Sometimes tooling can be used which calculates this (eg, the Debian shlibdeps system for runtime dependencies) but this is unusual - especially for build-time dependencies. Which versions of B are OK can normally only be discovered by a human consideration of changelogs etc., or by having a computer try particular combinations.

    Few ecosystems with dependencies, in the Free Software community at least, make an attempt to precisely calculate the versions of B that are actually required to build some A. So it turns out that there are three cases for a particular combination of A and B: it is believed to work; it is known not to work; and: it is not known whether it will work.

    And, I am not aware of any dependency system that has an explicit machine-readable representation for the “unknown” state, so that they can say something like “A is known to depend on B; versions of B before v1 are known to break; version v2 is known to work”. (Sometimes statements like that can be found in human-readable docs.)

    That leaves two possibilities for the semantics of a dependency A depends B, version(s) V..W: Precise: A will definitely work if B matches V..W, and Optimistic: We have no reason to think B breaks with any of V..W.

    At first sight the latter does not seem useful, since how would the package manager find a working combination? Taking Debian as an example, which uses optimistic version dependencies, the answer is as follows: The primary information about what package versions to use is not only the dependencies, but mostly in which Debian release is being targeted. (Other systems using optimistic version dependencies could use the date of the build, i.e. use only packages that are “current”.)

    People involved in version management
      Precise: Package developers, downstream developers/users.
      Optimistic: Package developers, downstream developers/users, distribution QA and release managers.

    Package developers declare versions V and dependency ranges V..W so that
      Precise: It definitely works.
      Optimistic: A wide range of B can satisfy the declared requirement.

    The principal version data used by the package manager
      Precise: Only dependency versions.
      Optimistic: Contextual, eg, Releases - set(s) of packages available.

    Version dependencies are for
      Precise: Selecting working combinations (out of all that ever existed).
      Optimistic: Sequencing (ordering) of updates; QA.

    Expected use pattern by a downstream
      Precise: Downstream can combine any declared-good combination.
      Optimistic: Use a particular release of the whole system. Mixing-and-matching requires additional QA and remedial work.

    Downstreams are protected from breakage by
      Precise: Pessimistically updating versions and dependencies whenever anything might go wrong.
      Optimistic: Whole-release QA.

    A substantial deployment will typically contain
      Precise: Multiple versions of many packages.
      Optimistic: A single version of each package, except where there are actual incompatibilities which are too hard to fix.

    Package updates are driven by
      Precise: Top-down - the depending package updates the declared metadata.
      Optimistic: Bottom-up - the depended-on package is updated in the repository for the work-in-progress release.

    So, while Rust and Debian have systems that look superficially similar, they contain fundamentally different kinds of information. Simply representing the Rust versions directly into Debian doesn’t work.

    What is currently done by the Debian Rust Team is to manually patch the dependency specifications, to relax them. This is very labour-intensive, and there is little automation supporting either decisionmaking or actually applying the resulting changes.

    What to do

    Desired end goal

    To update a Rust package in Debian, that many things depend on, one need simply update that package.

    Debian’s sophisticated build and CI infrastructure will try building all the reverse-dependencies against the new version. Packages that actually fail against the new dependency are flagged as suffering from release-critical problems.

    Debian Rust developers then update those other packages too. If the problems turn out to be too difficult, it is possible to roll back.

    If a problem with a depending package is not resolved in a timely fashion, priority is given to updating core packages, and the depending package falls by the wayside (since it is empirically unmaintainable, given available effort).

    There is no routine manual patching of dependency metadata (or of anything else).

    Radical proposal

    Debian should not precisely follow upstream Rust semver dependency information. Instead, Debian should optimistically try the combinations of packages that we want to have. The resulting breakages will be discovered by automated QA; they will have to be fixed by manual intervention of some kind, but usually, simply updating the depending package will be sufficient.

    This no longer ensures (unlike the upstream Rust scheme) that the result is expected to build and work if the dependencies are satisfied. But as discussed, we don’t really need that property in Debian. More important is the new property we gain: that we are able to mix and match versions that we find work in practice, without a great deal of manual effort.

    Or to put it another way, in Debian we should do as a Rust upstream maintainer does when they do the regular “update dependencies for new semvers” task: we should update everything, see what breaks, and fix those.

    (In theory a Rust upstream package maintainer is supposed to do some additional checks or something. But the practices are not standardised and any checks one does almost never reveal anything untoward, so in practice I think many Rust upstreams just update and see what happens. The Rust upstream community has other mechanisms - often, reactive ones - to deal with any problems. Debian should subscribe to those same information sources, eg RustSec.)

    Nobbling cargo

    Somehow, when cargo is run to build Rust things against these Debian packages, cargo’s dependency system will have to be overridden so that the version of the package that is actually selected by Debian’s package manager is used by cargo without complaint.

    We probably don’t want to change the Rust version numbers of Debian Rust library packages, so this should be done by either presenting cargo with an automatically-massaged Cargo.toml where the dependency version restrictions are relaxed, or by using a modified version of cargo which has special option(s) to relax certain dependencies.
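
    As a purely illustrative sketch of the first option - and emphatically not the Debian Rust team's actual tooling - the massaging could be as crude as rewriting the version requirements in Cargo.toml so that only the major version remains as a lower bound. The naive, hypothetical script below shows the idea; a real tool would parse the TOML properly rather than rewriting lines.

      #!/usr/bin/env python3
      # Hypothetical illustration only: rewrite the version requirements in a
      # Cargo.toml so that cargo will accept whatever crate version Debian's
      # package manager actually installed.
      import re
      import sys

      def relax(requirement: str) -> str:
          """Turn a requirement like '1.2.3', '^1.2' or '~0.7' into '>=MAJOR',
          keeping only the major version as an (optimistic) lower bound."""
          m = re.match(r"[\^~=]?\s*(\d+)", requirement)
          return f">={m.group(1)}" if m else requirement

      def massage(cargo_toml: str) -> str:
          out, in_deps = [], False
          for line in cargo_toml.splitlines():
              if line.lstrip().startswith("["):
                  # [dependencies], [dev-dependencies], [build-dependencies], ...
                  in_deps = "dependencies" in line
              elif in_deps:
                  # Relax the first quoted string on the line, which for a
                  # dependency entry is the version requirement.
                  line = re.sub(r'"([^"]*)"',
                                lambda m: '"' + relax(m.group(1)) + '"',
                                line, count=1)
              out.append(line)
          return "\n".join(out) + "\n"

      if __name__ == "__main__":
          sys.stdout.write(massage(sys.stdin.read()))

    Whether something like this runs at package build time, or the relaxation is handled by a patched cargo, is exactly the design choice the paragraph above leaves open.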

    Handling breakage

    Rust packages in Debian should already be provided with autopkgtests so that ci.debian.net will detect build breakages. Build breakages will stop the updated dependency from migrating to the work-in-progress release, Debian testing.

    To resolve this, and allow forward progress, we will usually upload a new version of the dependency containing an appropriate Breaks, and either file an RC bug against the depending package, or update it. This can be done after the upload of the base package.

    Thus, resolution of breakage due to incompatibilities will be done collaboratively within the Debian archive, rather than ad-hoc locally. And it can be done without blocking.

    My proposal prioritises the ability to make progress in the core, over stability and in particular over retaining leaf packages. This is not Debian’s usual approach but given the Rust ecosystem’s practical attitudes to API design, versioning, etc., I think the instability will be manageable. In practice fixing leaf packages is not usually really that hard, but it’s still work, and the question is what happens if the work doesn’t get done. After all, we are always short of effort - and we probably still will be, even if we get rid of the makework clerical work of patching dependency versions everywhere (so that usually no work is needed on depending packages).

    Exceptions to the one-version rule

    There will have to be some packages that we need to keep multiple versions of. We won’t want to update every depending package manually when this happens. Instead, we’ll probably want to set a version number split: rdepends which want version <X will get the old one.

    Details - a sketch

    I’m going to sketch out some of the details of a scheme I think would work. But I haven’t thought this through fully. This is still mostly at the handwaving stage. If my ideas find favour, we’ll have to do some detailed review and consider a whole bunch of edge cases I’m glossing over.

    The dependency specification consists of two halves: the depending .deb’s Depends (or, for a leaf package, Build-Depends), and the base .deb’s Version and perhaps Breaks and Provides.

    Even though libraries vastly outnumber leaf packages, we still want to avoid updating leaf Debian source packages simply to bump dependencies.

    Dependency encoding proposal

    Compared to the existing scheme, I suggest we implement the dependency relaxation by changing the depended-on package, rather than the depending one.

    So we retain roughly the existing semver translation for Depends fields. But we drop all local patching of dependency versions.

    Into every library source package we insert a new Debian-specific metadata file declaring the earliest version that we uploaded. When we translate a library source package to a .deb, the “binary” package build adds Provides for every previous version.
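
    To make that mechanism concrete, here is a small, hypothetical sketch of how such Provides could be computed from the declared earliest version. The librust-<crate>-<apiversion>-dev naming is only illustrative (the real scheme would be whatever the Debian Rust tooling generates), and crossing the 0.x/1.0 boundary is deliberately glossed over.

      # Hypothetical sketch of the proposed Provides generation.

      def previous_api_versions(earliest: str, current: str) -> list[str]:
          """Semver 'API versions' between the earliest version Debian uploaded
          and the current one: the major number for >=1.0 crates, 0.<minor>
          for 0.x crates.  (The 0.x -> 1.0 transition is glossed over.)"""
          def api(v: str) -> tuple[int, int]:
              major, minor = (int(x) for x in v.split(".")[:2])
              return major, minor

          e_major, e_minor = api(earliest)
          c_major, c_minor = api(current)
          if c_major == 0:
              return [f"0.{m}" for m in range(e_minor, c_minor)]
          return [str(m) for m in range(max(e_major, 1), c_major)]

      def provides_line(crate: str, earliest: str, current: str) -> str:
          names = [f"librust-{crate}-{v}-dev"
                   for v in previous_api_versions(earliest, current)]
          return "Provides: " + ", ".join(names)

      # e.g. a crate currently at 7.1.0, first uploaded to Debian as 5.0.0:
      print(provides_line("nom", "5.0.0", "7.1.0"))
      # -> Provides: librust-nom-5-dev, librust-nom-6-dev

    In this sketch, withdrawing the Provides for earlier APIs (as discussed below) would simply amount to raising the declared earliest version.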

    The effect is that when one updates a base package, the usual behaviour is to simply try to use it to satisfy everything that depends on that base package. The Debian CI will report the build or test failures of all the depending packages which the API changes broke.

    We will have a choice, then:

    Breakage handling - update broken depending packages individually

    If there are only a few packages that are broken, for each broken dependency, we add an appropriate Breaks to the base binary package. (The version field in the Breaks should be chosen narrowly, so that it is possible to resolve it without changing the major version of the dependency, eg by making a minor source change.)

    We can then do one of the following:

    • Update the dependency from upstream, to a version which works with the new base. (Assuming there is one.) This should be the usual response.

    • Fix the dependency source code so that it builds and works with the new base package. If this wasn’t just a backport of an upstream change, we should send our fix upstream. (We should prefer to update the whole package, rather than to backport an API adjustment.)

    • File an RC bug against the dependency (which will eventually trigger autoremoval), or preemptively ask for the Debian release managers to remove the dependency from the work-in-progress release.

    Breakage handling - declare new incompatible API in Debian

    If the API changes are widespread and many dependencies are affected, we should represent this by changing the in-Debian-source-package metadata to arrange for fewer Provides lines to be generated - withdrawing the Provides lines for earlier APIs.

    Hopefully examination of the upstream changelog will show what the main compat break is, and therefore tell us which Provides we still want to retain.

    This is like declaring Breaks for all the rdepends. We should do it if many rdepends are affected.

    Then, for each rdependency, we must choose one of the responses in the bullet points above. In practice this will often be a mass bug filing campaign, or large update campaign.

    Breakage handling - multiple versions

    Sometimes there will be a big API rewrite in some package, and we can’t easily update all of the rdependencies because the upstream ecosystem is fragmented and the work involved in reconciling it all is too substantial.

    When this happens we will bite the bullet and include multiple versions of the base package in Debian. The old version will become a new source package with a version number in its name.

    This is analogous to how key C/C++ libraries are handled.

    Downsides of this scheme

    The first obvious downside is that assembling some arbitrary set of Debian Rust library packages, that satisfy the dependencies declared by Debian, is no longer necessarily going to work. The combinations that Debian has tested - Debian releases - will work, though. And at least, any breakage will affect only people building Rust code using Debian-supplied libraries.

    Another less obvious problem is that because there is no such thing as Build-Breaks (in a Debian binary package), the per-package update scheme may result in no way to declare that a particular library update breaks the build of a particular leaf package. In other words, old source packages might no longer build when exposed to newer versions of their build-dependencies, taken from a newer Debian release. This is a thing that already happens in Debian, with source packages in other languages, though.

    Semver violation

    I am proposing that Debian should routinely compile Rust packages against dependencies in violation of the declared semver, and ship the results to Debian’s millions of users.

    This sounds quite alarming! But I think it will not in fact lead to shipping bad binaries, for the following reasons:

    The Rust community strongly values safety (in a broad sense) in its APIs. An API which is merely capable of insecure (or other seriously bad) use is generally considered to be wrong. For example, such situations are regarded as vulnerabilities by the RustSec project, even if there is no suggestion that any actually-broken caller source code exists, let alone that actually-broken compiled code is likely.

    The Rust community also values alerting programmers to problems. Nontrivial semantic changes to APIs are typically accompanied not merely by a semver bump, but also by changes to names or types, precisely to ensure that broken combinations of code do not compile.

    Or to look at it another way, in Debian we would simply be doing what many Rust upstream developers routinely do: bump the versions of their dependencies, and throw it at the wall and hope it sticks. We can mitigate the risks the same way a Rust upstream maintainer would: when updating a package we should of course review the upstream changelog for any gotchas. We should look at RustSec and other upstream ecosystem tracking and authorship information.

    Difficulties for another day

    As I said, I see some other issues with Rust in Debian.

    • I think the library “feature flag” encoding scheme is unnecessary. I hope to explain this in a future essay.

    • I found Debian’s approach to handling the source code for its Rust packages quite awkward; and, it has some troubling properties. Again, I hope to write about this later.

    • I get the impression that updating rustc in Debian is a very difficult process. I haven’t worked on this myself and I don’t feel qualified to have opinions about it. I hope others are thinking about how to make things easier.

    Thanks all for your attention!




    03 January, 2022 06:35PM

    Thorsten Alteholz

    My Debian Activities in December 2021

    FTP master

    This month I accepted 412 and rejected 44 packages. The overall number of packages that got accepted was 423.

    Debian LTS

    This was the ninetieth month in which I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

    This month my overall workload was 40h. During that time I did LTS and normal security uploads of:

    • [DLA 2846-1] raptor2 security update for one CVE
    • [DLA 2845-1] libsamplerate security update for one CVE
    • [DLA 2859-1] zziplib security update for one CVE
    • [DLA 2858-1] libzip security update for one CVE
    • [DLA 2869-1] xorg-server security update for three CVEs
    • [#1002912] for graphicsmagick in Buster
    • [debdiff] for sphinxearch/buster to maintainer and sec team
    • [debdiff] for zziplib/buster to maintainer
    • [debdiff] for zziplib/bullseye to maintainer
    • [debdiff] for raptor2/bullseye to maintainer

    I also started to work on libarchive.

    Further, I worked on packages in NEW on security-master. In order to process such packages faster, I added a notification for when work arrives there.

    Last but not least I did some days of frontdesk duties.

    Debian ELTS

    This month was the forty-second ELTS month.

    During my allocated time I uploaded:

    • ELA-527-1 for libsamplerate
    • ELA-528-1 for raptor2
    • ELA-529-1 for ufraw
    • ELA-532-1 for zziplib
    • ELA-534-1 for xorg-server

    Last but not least I did some days of frontdesk duties.

    Debian Astro

    Related to my previous article about fun with telescopes, I uploaded new versions or did source uploads for:

    Besides the indi-stuff I also uploaded

    Other stuff

    I celebrated Christmas :-).

    03 January, 2022 12:23PM by alteholz


    Joachim Breitner

    Telegram bots in Python made easy

    A while ago I set out to get some teenagers interested in programming, and thought about a good way to achieve that. A way that allows them to get started with very little friction, quickly build something that’s relevant to their current lives, and avoid frustration.

    They were old enough to have their own smartphone, and they were already happily chatting with their friends, using the Telegram messenger. I have already experimented a bit with writing bots for Telegram (e.g. @Umklappbot or @Kaleidogen), and it occurred to me that this might be a good starting point: Chat bot interactions have a very simple data model: message in, response out, all simple text. Much simpler than anything graphical or even web programming. In a way it combines the simplicity of the typical initial programming exercises on the command-line with the impact and relevance of web programming.

    But of course “real” bot programming is still too hard – installing a programming environment, setting up a server, deploying, dealing with access tokens, understanding the Telegram Bot API and mapping it to your programming language.

    The IDE

    So I built a browser-based Python programming environment for Telegram bots that takes care of all of that. You simply write a single Python function, click the “Deploy” button, and the bot is live. That’s it!

    This environment provides a much simpler “API” for the bots: Define a function like the following:

      def private_message(sender, text):
         return "Hello!"

    This gets called upon a message, and if it returns a String, that’s the response. That’s it! Not enough to build every kind of Telegram bot, but sufficient for many fun applications.

    A chatbot

    In fact, my nephew and niece used this to build a simple interactive fiction game, where the player says where they are going (“house”, “forest”, “lake”) and thus explores the story, and in the end kills the dragon. And my girlfriend created a shopping list bot that we are using “productively”.
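
    To give a flavour of what such a bot looks like, here is a small, made-up sketch in the spirit of that interactive fiction game. Only the private_message(sender, text) entry point is the environment’s documented API; the room table and texts are invented for illustration, and it is kept stateless so it makes no assumptions about how the environment persists data between messages.

      # A stateless toy "adventure" bot using the single-function API described
      # above.  The rooms and texts are made up for illustration.
      ROOMS = {
          "house": "You are in the house. You can go to the 'forest' or the 'lake'.",
          "lake": "The water is calm. You find a sword! Now try the 'forest'.",
          "forest": "Dark trees everywhere. A dragon sleeps here. With the sword "
                    "from the lake you slay it. The end! (Say 'house' to restart.)",
      }

      def private_message(sender, text):
          # Called for every private message; returning a string sends a reply.
          word = text.strip().lower()
          return ROOMS.get(word,
                           "Where do you want to go? Try 'house', 'forest' or 'lake'.")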

    If you are curious, you can follow the instructions to create your own bot. There you can also find the source code and instructions for hosting your own instance (on Amazon Web Services).

    Help with the project (e.g. improving the sandbox for running untrustworthy python code; making the front-end work better) is of course highly appreciated, too. The frontend is written in PureScript, and the backend in Python, building on Amazon lambda and Amazon DynamoDB.

    03 January, 2022 10:20AM by Joachim Breitner ([email protected])

    Birger Schacht

    Introducing carl

    For some time now I wanted to learn Rust, but I either didn’t have the time or couldn’t come up with a nice beginner project. Given that I recently found myself to be without a job and we had another lockdown in the part of the world I happen to live in, I decided to give that idea another go (no pun intended).

    There is apparently a trend to reimplement existing Unix tools in Rust (see exa, a ‘modern replacement for ls’, delta, a syntax highlighting pager for git, diff and grep output, bat, a ‘cat clone with wings’, zellij, a terminal workspace, ripgrep, a line-oriented search tool …). I looked around at what else was out there, but what I wasn’t able to find was an implementation of cal(1) in Rust (maybe I wasn’t looking hard enough; feel free to point out anything I might have overlooked). No cal in Rust, even though a calendar implementation would provide the potential to go over the top with terminal colors, which is also very important when writing reimplementations of older CLI tools! So I started writing and soon had a simple prototype of a cal reimplementation. A couple of weeks later I can now present carl, a cal implementation in Rust:

    default carl output

    The default output of carl is what you would expect from a commandline calendar tool. It prints the days of the current month and highlights the current day. Unlike the cal tools I tried, carl by default also prints days that are in the past in grey. Like cal it can also print three months if you use the -3 switch or the whole year if you use the -y switch.

    Colors

    You can use a theme file to change the colors carl uses for various dates. The name of the theme is set in carl’s configuration file (in XDG_CONFIG_HOME/.carl/config.toml or system-wide in XDG_CONFIG_DIRS/.carl/config.toml) using the theme setting - carl looks for the theme in the file ${themename}.theme in the configuration folders. A custom theme could for example look like this:

    custom carl output

    You can change the foreground color, background color and/or style of a date using a combination of various stylenames like BGRed, Bold and FGCyan. The README lists all possible stylenames. A list of date properties defines which dates the specific style should affect - date properties are for example CurrentDate, AfterCurrentDate or FirstDayOfMonth. Again, the README lists the existing date properties and I’m happy to implement more of them, if you have ideas.

    Ical support

    I also added the option to list .ical files in the configuration file. carl can display the dates from the ical file(s) with a separate style. There can either be a global style for all the dates from all the ical files by using the IsEvent date property or a separate style for the dates of a specific *.ical file.

    carl output with ical files

    Using the --agenda commandline switch carl also shows an agenda of dates from the ical files.

    Installation

    I have uploaded carl to crates.io, so you can install carl using cargo: cargo install carl. I also plan to upload it to Debian at some point. If you find bugs or have feature requests, please don’t hesitate to create issues.

    03 January, 2022 08:52AM

    January 02, 2022


    Ben Hutchings

    Debian LTS work, December 2021

    In December I was assigned 20 hours of work by Freexian's Debian LTS initiative. I worked 16 hours, and the remaining 4 hours cancelled out my over-work in November.

    I completed an update to the linux (4.9) package and issued DLA 2843-1.

    02 January, 2022 03:58PM

    January 01, 2022


    Chris Lamb

    Favourite books of 2021: Classics

    In my three most recent posts, I went over the memoirs and biographies, the non-fiction and the fiction I enjoyed in 2021. In this, the last of my 2021 book-related posts, I'll be going over my favourite classics.

    Of course, the difference between regular fiction and a 'classic' is an ambiguous, arbitrary and often-meaningless distinction: after all, what does it matter if Hemingway's The Old Man and the Sea (from 1951) is a classic or not? The term also smuggles in some of the ethnocentric gatekeeping encapsulated in the term 'Western canon' too. Nevertheless, the label of 'classic' has some utility for me in that it splits up the vast amount of fiction I read in two...

    Books that just missed the cut here include: Oscar Wilde's The Picture of Dorian Gray (moody and hilarious, but I cannot bring myself to include it due to the egregious antisemitism); Tolstoy's The Kreutzer Sonata (so angry! so funny!); and finally Notes from Underground by Fyodor Dostoevsky. Of significant note, though, would be the ghostly The Turn of the Screw by Henry James.

    §

    Heart of Darkness (1899)

    Joseph Conrad

    Heart of Darkness tells the story of Charles Marlow, a sailor who accepts an assignment from a Belgian trading company as a ferry-boat captain in the African interior, and the novella is widely regarded as a critique of European colonial rule in Africa. It was loosely remade by Francis Ford Coppola as Apocalypse Now (1979), and I started this book with the distinct possibility that this superb film adaptation would, for a rare treat, be 'better than the book'. However, Conrad demolished this idea of mine within two chapters, yet also elevated the film to a new level as well.

    This was chiefly due to how observant Conrad was of the universals that make up human nature. Some of his insight pertains to the barbarism of the colonialists, of course, but Conrad applies his shrewd acuity at the smaller level as well. Some of these quotes are justly famous: “Ah! but it was something to have at least a choice of nightmares,” for example, as well as the reference to a fastidiously turned-out colonial administrator who, with unimaginable horrors occurring mere yards from his tent, we learn “he was devoted to his books, which were in applepie order”. (It seems to me to be deliberately unclear whether his devotion arises from gross inhumanity, utter denial or some combination of the two.) Oh, and there's a favourite moment of mine when a character remarks that “It was very fine for a time, but after a bit I did get tired of resting.”

    Tired of resting! Yes, it's difficult to now say something original about a many-layered classic such as this, especially one that has been analysed from so many angles already; from a literary perspective at first, of course, but much later from a critical postcolonial perspective, such as in Chinua Achebe's noted 1975 lecture, An Image of Africa. Indeed, the history of criticism in the twentieth century of Heart of Darkness must surely parallel the social and political developments in the Western world. (On a highly related note, the much-cited non-fiction book King Leopold's Ghost is on my reading list for 2022.)

    I will therefore limit myself to saying that the boat physically falling apart as it journeys deeper into the Congo may be intended to represent that our idea of 'Western civilisation' ceases to function, both morally as well as physically, in this remote environment. And, whilst I'm probably not the first to notice the potential ambiguity, when Marlow lies to Kurtz's 'Intended [wife]' in the closing section in order to save her from being exposed to the truth about Kurtz (surely a metaphor about the ignorance of the West whilst also possibly incorporating some comment on gender?), the Intended replies: “I knew it.” For me, though, it is not beyond doubt that what the Intended 'knows' is that she knew that Marlow would lie to her: in other words, that the alleged ignorance of everyday folk in the colonial homeland is studied and deliberate. Compact and fairly easy-to-read, it is clear that Heart of Darkness rewards even the most rudimentary analysis.

    §

    Rebecca (1938)

    Daphne du Maurier

    Daphne du Maurier creates in Rebecca a credible and suffocating atmosphere in the shape of Manderley, a grand English mansion owned by aristocratic widower Maxim de Winter. Our unnamed narrator (a young woman seemingly naïve in the ways of the world) meets Max in Monte Carlo, and she soon becomes the second Mrs. de Winter. The tale takes a turn to the 'gothic', though, when it becomes apparent that the unemotional Max, as well as potentially Manderley itself, appears to be haunted by the memory of his late first wife, the titular Rebecca.

    Still, Rebecca is less of a story about supernatural ghosts than one about the things that can haunt our minds. For Max, this might be something around guilt; for our narrator, the class-centered fear that she will never fit in. Besides, Rebecca doesn't need an actual ghost when you have Manderley's overbearing housekeeper, Mrs Danvers, surely one of the creepiest characters in all of fiction. Either way, the conflict of a kind between the fears of the protagonists means that they never really connect with each other.

    The most obvious criticism of Rebecca is that the main character is unreasonably weak and cannot quite think or function on her own. (Isn't it curious that the trait of the male 'everyman' is a kind of physical clumsiness yet the female equivalent is shorthanded by being slightly slow?) But the naïvete of Rebecca's narrator makes her easier to relate to in a way, and it also makes the reader far more capable of empathising with her embarrassment. This is demonstrated best whilst she, in one of the best evocations of this particular anxiety I have yet come across, is gingerly creeping around Manderley and trying to avoid running into the butler.

    A surprise of sorts comes in the latter stages of the book, and this particular twist brings us into contact with a female character who is anything but 'credulous'. This revelation might even change your idea of who the main character of this book really is too. (Speaking of amateur literary criticism, I have many fan theories about Rebecca, including that Maxim de Winter's estate manager, Frank Crawley, is actually having an affair with Max, and also that Maxim may have a lot more involvement in Mrs Danvers' final act than he lets on.)

    An easily accessible novel (with a great-but-not-perfect 1940 adaptation by Alfred Hitchcock), Rebecca is a real indulgence.

    §

    A Clockwork Orange (1962)

    Anthony Burgess

    One of Stanley Kubrick's most prominent tricks was to use different visual languages in order to prevent the audience from immediately grasping the underlying story. In his 1975 Barry Lyndon, for instance, the intentionally sluggish pacing and elusive characters require significant digestion to fathom and appreciate, and the luminous and quasi-Renaissance splendour of the cinematography does its part to constantly distract the viewer from the film's greater meaning.

    This is very much the case in Kubrick's A Clockwork Orange as well — whilst it ostensibly appears to be about a Saturnalia of violence, the 'greater meaning' of A Clockwork Orange pertains to the Christian conception of free will; admittedly, a much drier idea to bother making a film around. This is all made much clearer when reading Anthony Burgess' 1962 original novel. Alex became a 'true Christian' through the experimental rehabilitation process, and even offers to literally turn the other cheek at one point. But as Alex had no choice to do so (and can no longer choose to commit violence), he is incapable of making a free moral choice. Thus, is he really a Man?

    Yet whilst the book's central concern is our conception of free will in modern societies, it also appears to be a repudiation of two conservative principles. Firstly, A Clockwork Orange demolishes the idea that 'high art' leads to morally virtuous citizens. After all, if you can do a “bit of the old ultra-violence” whilst listening to the “glorious 9th by old Ludvig van,” then so much for the oft-repeated claims that culture makes you better as a person. (This, at least, I already knew from personal experience.)

    The other repudiation in A Clockwork Orange is in regard to the pervasive idea that the countryside is a refuge from crime and sin. By contrast, we see the gang commit their most horrific violence in rural areas, and, later, Alex is taken to the countryside by his former droogs for a savage beating. Although this doesn't seem to quite fit the novel, this was actually an important point for Burgess to include: otherwise his book could easily be read as a commentary on the corrupting influence of urban spaces, rather than of modernity itself.

    The language of this book cannot escape comment here. Alex narrates most of the book in a language called Nadsat, a fractured slang constructed by Burgess based on Russian and Cockney rhyming slang. (The language is strange for only a few pages, I promise. And note that 'Alex' is a very common Russian name.) Using Nadsat has the effect of making the book feel distinctly alien, but it also prevents it from prematurely aging too. Indeed, it comes as a bit of a shock to realise that A Clockwork Orange was published in 1962, the same year as The Beatles released their first single, Love Me Do.

    I could probably say a whole lot more about this thoroughly engrossing book and its movie adaptation (eg. the meta-textual line in Kubrick's version: “It's funny how the colours of the real world only seem really real when you watch them on a screen...” appears verbatim in the textual original), but I'll leave it there. The book of A Clockwork Orange is not only worth the investment in the language, but is, again, somehow better than the film.

    §

    The Great Gatsby (1925)

    F. Scott Fitzgerald

    I'm actually being a little deceitful by including this book here: I cannot really say that The Great Gatsby was a 'favourite' read of the year, but its literary merit is so undeniable (and my respect for Fitzgerald's achievement is deep enough) that the experience was one of those pleasures you feel at seeing anything done well.

    Here you have a book so rich in symbolic meaning that you could easily confuse the experience with drinking Coke syrup undiluted. And a text that has made the difficulty and complexity of reading character a prominent theme of the novel, as well as a technical concern of the book itself. Yet at all times you have in your mind that The Great Gatsby is first and foremost a book about a man writing a book, and, therefore, about the construction of stories and myths.

    What is the myth being constructed in Gatsby? The usual answer today is that the book is really about the moral virtues of America. Or, rather, the lack thereof. Indeed, as James Boice wrote in 2016:

    Could Wilson have killed Gatsby any other way? Could he have ran him over, or poisoned him, or attacked him with a knife? Not at all—this an American story, the quintessential one, so Gatsby could have only died the quintessential American death.

    The “quintessential American death” is, of course, being killed with a gun.

    Whatever your own analysis, The Great Gatsby is not only magnificently written, but it is captivating to the point where references intrude many months later. For instance, when reading something about Disney's 'princess culture', I was reminded of when Daisy says of her daughter: “I hope she'll be a fool—that's the best thing a girl can be in this world, a beautiful little fool”. Or the billboard with the eyes of 'Doctor T. J. Eckleburg'. Or the fact that the books in Gatsby's library have never been read (so what is 'Owl Eyes' doing there during the party?!). And the only plain room in Gatsby's great house is his bedroom... Okay, fine, I must have been deluding myself: I love this novel.

    01 January, 2022 06:38PM