February 25, 2023

Gregor Herrmann

demo video: dpt(1) in pkg-perl-tools

in the Debian Perl Group we are maintaining a lot of packages (around 4000 at the time of writing). this also means that we are spending some time on improving our tools which allow us to handle this amount of packages in a reasonable time.

many of the tools are shipped in the pkg-perl-tools package since 2013, & lots of them are scripts which are called as subcommands of the dpt(1) wrapper script.

in the last years I got the impression that not all team members are aware of all the useful tools, & that some more promotion might be called for. & last week I was in the mood for creating a short demo video to showcase how I use some dpt(1) subcommands when updating a package to a new upstream release. (even though I prefer text over videos myself :))

probably not a cinematographic masterpiece but as the feedback of a few viewers has been positive, I'm posting it here as well:

(direct link as planets ignore iframes …)

25 February, 2023 10:36PM

Jelmer Vernooij

Silver Platter Batch Mode

Background

Silver-Platter makes it easier to publish automated changes to repositories. However, in its default mode, the only option for reviewing changes before publishing them is to run in dry-run mode. This can be quite cumbersome if you have a lot of repositories.

A new “batch” mode now makes it possible to generate a large number of changes against different repositories using a script, review and optionally alter the diffs, and then publish them all (and potentially refresh them later if conflicts appear).

Example running pyupgrade

I’m using the pyupgrade example recipe that comes with silver-platter.

 ---
 name: pyupgrade
 command: 'pyupgrade --exit-zero-even-if-changed $(find -name "test_*.py")'
 mode: propose
 merge-request:
   commit-message: Upgrade Python code to a modern version

And a list of candidate repositories to process in candidates.yaml.

 ---
 - url: https://github.com/jelmer/dulwich
 - url: https://github.com/jelmer/xandikos

With these in place, the updated repositories can be created:

 $ svp batch generate --recipe=pyupgrade.yaml --candidates=candidates.yaml pyupgrade

The intermediate results

This will create a directory called pyupgrade, with a clone of each of the repositories.

$ ls pyupgrade
batch.yaml  dulwich  xandikos

$ cd pyupgrade/dulwich
$ git log
commit 931f9ffb26e9143c56f20e0b85e6ddb0a8eee2eb (HEAD -> master)
Author: Jelmer Vernooij <[email protected]>
Date:   Sat Feb 25 22:28:12 2023 +0000

Run pyupgrade
diff --git a/dulwich/tests/compat/test_client.py b/dulwich/tests/compat/test_client.py
index 02ab6c0a..9b0661ed 100644
--- a/dulwich/tests/compat/test_client.py
+++ b/dulwich/tests/compat/test_client.py
@@ -628,7 +628,7 @@ class HTTPGitServer(http.server.HTTPServer):
         self.server_name = "localhost"

     def get_url(self):
-        return "http://{}:{}/".format(self.server_name, self.server_port)
+        return f"http://{self.server_name}:{self.server_port}/"


 class DulwichHttpClientTest(CompatTestCase, DulwichClientTestBase):
...

There is also a file called batch.yaml that describes the pending changes:

name: pyupgrade
work:
- url: https://github.com/dulwich/dulwich
  name: dulwich
  description: Upgrade to modern Python statements
  commit-message: Run pyupgrade
  mode: propose
- url: https://github.com/jelmer/xandikos
  name: xandikos
  description: Upgrade to modern Python statements
  commit-message: Run pyupgrade
  mode: propose
recipe: ../pyupgrade.yaml

At this point the changes can be reviewed, and batch.yaml edited as the user sees fit - they can remove entries that don’t appear to be correct, edit the metadata for the merge requests, etc. It’s also possible to make changes to the clones.

Once you’re happy, publish the results:

$ svp batch publish pyupgrade

This will publish all the changes, using the mode and parameters specified in batch.yaml.

batch.yaml is automatically stripped of any entries in work that have fully landed, i.e. where the pull request has been merged or where the changes were pushed to the origin.

To check up on the status of your changes, run svp batch status:

$ svp batch status pyupgrade

To refresh any merge proposals that may have become out of date, simply run publish again:

svp batch publish pyupgrade

25 February, 2023 09:44PM by Jelmer Vernooij

Petter Reinholdtsen

OpenSnitch available in Debian Sid and Bookworm

Thanks to the efforts of the OpenSnitch lead developer Gustavo Iñiguez Goya, who allowed me to sponsor the upload, the interactive application firewall OpenSnitch is now available in Debian Testing, soon to become the next stable release of Debian.

This is a package which sets up a network firewall on one or more machines, controlled by a graphical user interface that asks the user if a program should be allowed to connect to the local network or the Internet. If some background daemon is trying to dial home, it can be blocked from doing so with a simple mouse click, or, by default, simply by not doing anything when the GUI question dialog pops up. A list of all programs discovered using the network is provided in the GUI, giving the user an overview of how the machine(s)' programs use the network.

OpenSnitch was uploaded for NEW processing about a month ago, and I had little hope of it getting accepted and shaping up in time for the package freeze, but the Debian ftpmasters proved to be amazingly quick at checking out the package and it was accepted into the archive about a week after the first upload. It is now team maintained under the Go language team umbrella. A few fixes to the default setup are only in Sid, and should migrate to Testing/Bookworm in a week.

During testing I ran into an issue with Minecraft server broadcasts disappearing, which was quickly resolved by the developer with a patch and a proposed configuration change. I've been told this was caused by the Debian package's default use of /proc/ information to track down kernel status, instead of the newer eBPF module that can be used. The reason is simply that upstream and I have failed to find a way to build the eBPF modules for OpenSnitch without a completely configured Linux kernel source tree, which as far as we can tell is unavailable as a build dependency in Debian. So far we have tried, unsuccessfully, to use the kernel-headers package. It would be great if someone could provide some clues on how to build eBPF modules on build daemons in Debian, possibly without the full kernel source.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

25 February, 2023 07:30PM

Holger Levsen

20230225-Debian-Reunion-Hamburg-2023

Debian Reunion Hamburg 2023 from May 23 to 30

As in the last years there will be a Debian Reunion Hamburg 2023 event taking place at the same location as previous years, from May 23rd until the 30th (with the 29th being a public holiday in Germany and elsewhere).

This is just a short announcement to get the word out, that this event will happen, so you can ponder and prepare attending. The wiki page has more information and some fine folks have even already registered! Announcements on the appropriate mailinglists will follow soon.

And once again, a few things still need to be sorted out, e.g. a call for papers and a call for sponsors. Also this year I'd like to distribute the work across more shoulders, especially dealing with accommodation (there are 34 beds available on-site), accommodation payments and finances in general.

If you want to help with any of that or have questions about the event, please reach out via #debconf-hamburg on irc.oftc.net or via the debconf-hamburg mailinglist.

I'm very much looking forward to meeting some of you once again and getting to know some others for the first time! Yay.

25 February, 2023 04:19PM

February 24, 2023

Dirk Eddelbuettel

ttdo 0.0.9 on CRAN: Small Update

A new minor release of our ttdo package arrived on CRAN a few days ago. The ttdo package extends the excellent (and very minimal / zero depends) unit testing package tinytest by Mark van der Loo with the very clever and well-done diffobj package by Brodie Gaslam to give us test results with visual diffs (as shown in the screenshot below) which seemingly is so compelling an idea that it eventually got copied by another package which shall remain unnamed…

ttdo screenshot

This release adds a versioned dependency on the just-released tinytest version 1.4.1. As we extend tinytest (for use in the autograder we deploy within the lovely PrairieLearn framework) by consuming its code, we have to update in sync.

There were no other code changes in the package besides the usual maintenance of badges and continuous integration setup.

As usual, the NEWS entry follows.

Changes in ttdo version 0.0.9 (2023-02-21)

  • Minor cleanup in README.md

  • Minor continuous integration update

  • Updated (versioned) depends on tinytest to 1.4.1

My CRANberries provides the usual summary of changes to the previous version. Please use the GitHub repo and its issues for any questions.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

24 February, 2023 10:33PM

Scarlett Gately Moore

Snowstorms, Kittens and Shattered dreams

Icy morning, Witch Wells AZ

Long ago I applied for my dream job at a company I have wanted to work for since its beginning, and I wasn’t ready technically. Fast forward to now: I am ready! A big thank you goes out to Blue Systems for that. So I went out, found the perfect role, and started the application process. The process was months long but was going very well; the interviews went well and I passed the technical with flying colors. I got to the end, where the hiring lead told me he was submitting my offer… I was so excited, so much so that I told my husband and parents “I got the job!” I know, I jinxed myself there. Soon I received the “There was a problem” message. One obscure assessment called GIA came back not so good. I remember that day: we were in the middle of a long series of winter storms, and when I took the test, my kitten decided right then it was me time. I couldn’t very well throw her out into the snowstorm, so I continued on the best I could. It is my fault; it clearly states to be distraction free. So I spoke again to the hiring lead and we both felt that with my experience and technical knowledge and abilities we could still move forward. I still had hope. After some time passed, I asked for an update and got the dreaded rejection. I am told it wasn’t just the GIA, but that I am not a good overall fit for the company. In one fell swoop my dreams are dashed and final, for this and all roles within that company. I wasn’t given a reason either. I am devastated, heartbroken, and shocked. I get along with everyone, I exceed the technical requirements, and I work well in the community. Dream door closed.

I will not let this get me down. I am moving on. I will find my place where I ‘fit in’.

With that said, I no longer have the will, passion, or drive to work on snaps. I will leave instructions with Jonathon as to what needs to be done to move forward. The good news is my core22 kde-neon extension was merged into upstream snapcraft, so whoever takes over will have a much easier time knocking them out. @kubuntu-council: I will do whatever it takes to pay back the money for the hardware you provided me to do snaps; I am truly sorry about this.

What does my future hold? I will still continue with my Debian efforts. In fact, I have ventured out from the KDE umbrella and joined the go-team. I am finalizing my packaging for

https://github.com/charmbracelet/gum

and it’s dependencies: roff, mango, mango-kong. I had my first golang patch for a failing test and have submitted it upstream. I will upload these to experimental while the freeze is on.

I will be moving all the libraries in the mycroft team to the python umbrella as they are useful for other things and mycroft is no more.

During the holidays I was tinkering around with selenium UI testing and stumbled on some accessibility issues within KDE, so I think this is a good place for me to dive into for my KDE contributions.

I have been approached to collaborate with OpenOS on a few things, time permitting I will see what I can do there.

I have a possible gig to do some websites, while I move forward in my job hunt.

I will not give up! I will find my place where I ‘fit in’.

Meanwhile, I must ask for donations to get us by. Anything helps, thank you for your consideration.

https://gofund.me/a9c36b87

24 February, 2023 02:12PM by sgmoore

February 23, 2023

Steve Kemp

A quick hack for Emacs

As I've mentioned in the past I keep a work-log, or work-diary, recording my activities every day.

I have a bunch of standard things that I record, but one thing that often ends up happening is that I make references to external bug trackers, be they Jira, Bugzilla, or something else.

Today I hacked up a simple emacs minor-mode for converting these references to hyperlinks, automatically, via the use of regular expressions.

Given this configuration:

(setq linkifier-patterns '(
          ("\\\<XXX-[0-9]+\\\>" "https://jira.example.com/browse/%s")
          ("\\\<BUG-[0-9]+\\\>" "https://bugzilla.example.com/show?id=%s")))

When the minor-mode is active any literal text that matches one of the patterns, for example "XXX-1234", will suddenly become a clickable button that opens Jira, and BUG-1234 will become a clickable button that opens the appropriate bug in Bugzilla.

There's no rewriting of the content, this is just a bit of magic that changes the display of the text (i.e. I'm using a button/text-property).

Since I mostly write in org-mode I could have written my text like so:

[[jira:XXX-1234][XXX-1234]]

But that feels like an ugly thing to do, and that style of links wouldn't work outside org-files anyway. That said it's a useful approach if you're only using org-mode, and the setup is simple:

(add-to-list 'org-link-abbrev-alist
    '("jira" . "http://jira.example.com/browse/%s"))

Anyway, cute hack. Useful too.

23 February, 2023 08:30PM

Paul Tagliamonte

Announcing hz.tools

Interested in future updates? Follow me on mastodon at @[email protected]. Posts about hz.tools will be tagged #hztools.

If you're on the Fediverse, I'd very much appreciate boosts on my announcement toot!

Ever since 2019, I’ve been learning about how radios work, and trying to learn about using them “the hard way” – by writing as much of the stack as is practical (for some value of practical) myself. I wrote my first “Hello World” in 2018, which was a simple FM radio player, which used librtlsdr to read in an IQ stream, did some filtering, and played the real valued audio stream via pulseaudio. Over 4 years this has slowly grown through persistence, lots of questions to too many friends to thank (although I will try), and the eternal patience of my wife hearing about radios nonstop – for years – into a number of Go repos that can do quite a bit, and support a handful of radios.

I’ve resisted making the repos public not out of embarrassment or a desire to keep secrets, but rather out of an attempt to keep myself free of any maintenance obligations to users – so that I could freely break my own API, add and remove API surface as I saw fit. The worst case would be to have this project feel like work, and I can imagine that happening if I end up frustrated by PRs that are “getting ahead of me” – solving problems I didn’t yet know about, or bugs I didn’t understand the fix for.

As my rate of changes to the most central dependencies has slowed, I’ve begun to entertain the idea of publishing them. After a bit of back and forth, I’ve decided it’s time to make a number of them public, and to start working on them in the open, as I’ve built up a bit of knowledge in the space, and I feel confident that the repo doesn’t contain overt lies. That’s not to say it doesn’t contain lies, but those lies are likely hidden and lurking in the dark. Beware.

That being said, it shouldn’t be a surprise to say I’ve not published everything yet – for the same reasons as above. I plan to open repos as the rate of changes slows and I understand the problems the library solves well enough – or if the project “dead ends” and I’ve stopped learning.

Intention behind hz.tools

It’s my sincere hope that my repos help to make Software Defined Radio (SDR) code a bit easier to understand, and serve as an understandable framework to learn with. It’s a large codebase, but one that is possible to sit down and understand because, well, it was written by a single person. Frankly, I’m also not productive enough in my free time in the middle of the night and on weekends and holidays to create a codebase that’s too large to understand, I hope!

I remain wary of this project turning into work, so my goal is to be very upfront about my boundaries, and the limits of what classes of contributions I’m interested in seeing.

Here’s some goals of open sourcing these repos:

  • I do want this library to be used to learn with. Please go through it all and use it to learn about radios and how software can control them!
  • I am interested in bugs if there’s a problem you discover. Such bugs are likely a great chance for me to fix something I’ve misunderstood or typoed.
  • I am interested in PRs fixing bugs you find. I may need a bit of a back and forth to fully understand the problem if I do not understand the bug and fix yet. I hope you may have some grace if it’s taking a long time.

Here’s a list of some anti-goals of open sourcing these repos.

  • I do not want this library to become a critical dependency of an important project, since I do not have the time to deal with the maintenance burden. Putting me in that position is going to make me very uncomfortable.
  • I am not interested in feature requests; the features have grown as I’ve hit problems, and I’m not interested in building or maintaining features for features’ sake. The API surface should be exposed enough to allow others to experiment with such things out-of-tree.
  • I’m not interested in clever code replacing clear code without a very compelling reason.
  • I use GNU/Linux (specifically Debian), and from time to time I’ve made sure that my code runs on OpenBSD too. Platforms beyond that will likely not be supported at the expense of either of those two. I’ll take fixes for bugs that fix a problem on another platform, but not damage the code to work around issues / lack of features on other platforms (like Windows).

I’m not saying all this to be a jerk; I do it to make sure I can continue on my journey to learn about how radios work without my full time job becoming maintaining a radio framework single-handedly for other people to use – even if it means I need to close PRs or bugs without merging them or fixing the issue.

With all that out of the way, I’m very happy to announce that the repos are now public under github.com/hztools.

Should you use this?

Probably not. The intent here is not to provide a general purpose Go SDR framework for everyone to build on, although I am keenly aware it looks and feels like it, since that’s what it is to me. This is a learning project, so for any use beyond joining me in learning you should use something like GNU Radio or a similar framework that has a community behind it.

In fact, I suspect most contributors ought to be contributing to GNU Radio, and not this project. If I can encourage people to do so, contribute to GNU Radio! Nothing makes me happier than seeing GNU Radio continue to be the go-to, and well supported. Consider donating to GNU Radio!

hz.tools/rf - Frequency types

The hz.tools/rf library contains the abstract concept of frequency, and some very basic helpers to interact with frequency ranges (such as helpers to deal with frequency ranges, or frequency range math) as well as frequencies and some very basic conversions (to meters, etc) and parsers (to parse values like 10MHz). This ensures that all the hz.tools libraries have a shared understanding of Frequencies, a standard way of representing ranges of Frequencies, and the ability to handle the IO boundary with things like CLI arguments, JSON or YAML.

The git repo can be found at github.com/hztools/go-rf, and is importable as hz.tools/rf.

// Parse a frequency using hz.tools/rf.ParseHz, and print it to stdout.
freq := rf.MustParseHz("-10kHz")
fmt.Printf("Frequency: %s\n", freq+rf.MHz)
// Prints: 'Frequency: 990kHz'

// Return the Intersection between two RF ranges, and print
// it to stdout.
r1 := rf.Range{rf.KHz, rf.MHz}
r2 := rf.Range{rf.Hz(10), rf.KHz * 100}
fmt.Printf("Range: %s\n", r1.Intersection(r2))
// Prints: Range: 1000Hz->100kHz

These can be used to represent tons of things - ranges can be used for things like the tunable range of an SDR, the bandpass of a filter or the frequencies that correspond to a bin of an FFT, while frequencies can be used for things such as frequency offsets or the tuned center frequency.

hz.tools/sdr - SDR I/O and IQ Types

This… is the big one. This library represents the majority of the shared types and bindings, and is likely the most useful place to look at when learning about the IO boundary between a program and an SDR.

The git repo can be found at github.com/hztools/go-sdr, and is importable as hz.tools/sdr.

This library is designed to look like (and in some cases, mirror) the Go io idioms so that this library feels as idiomatic as it can, so that Go builtins interact with IQ in a way that’s possible to reason about, and to avoid reinventing the wheel by designing new API surface. While some of the API looks like (and is even called) the same thing as a similar function in io, the implementation is usually a lot more naive, and may have unexpected sharp edges such as concurrency issues or performance problems.

The following IQ types are implemented using the sdr.Samples interface. The hz.tools/sdr package contains helpers for conversion between types, and some basic manipulation of IQ streams.

IQ Format                          hz.tools Name    Underlying Go Type
Interleaved uint8 (rtl-sdr)        sdr.SamplesU8    [][2]uint8
Interleaved int8 (hackrf, uhd)     sdr.SamplesI8    [][2]int8
Interleaved int16 (pluto, uhd)     sdr.SamplesI16   [][2]int16
Interleaved float32 (airspy, uhd)  sdr.SamplesC64   []complex64

The following SDRs have implemented drivers in-tree.

SDR           Format       RX/TX   State
rtl           u8           RX      Good
HackRF        i8           RX/TX   Good
PlutoSDR      i16          RX/TX   Good
rtl kerberos  u8           RX      Old
uhd           i16/c64/i8   RX/TX   Good
airspyhf      c64          RX      Exp

The following major packages and subpackages exist at the time of writing:

Import                      What is it?
hz.tools/sdr                Core IQ types, supporting types and implementations that interact with the byte boundary
hz.tools/sdr/rtl            sdr.Receiver implementation using librtlsdr.
hz.tools/sdr/rtl/kerberos   Helpers to enable coherent RX using the Kerberos SDR.
hz.tools/sdr/rtl/e4k        Helpers to interact with the E4000 RTL-SDR dongle.
hz.tools/sdr/fft            Interfaces for performing an FFT, which are implemented by other packages.
hz.tools/sdr/rtltcp         sdr.Receiver implementation for rtl_tcp servers.
hz.tools/sdr/pluto          sdr.Transceiver implementation for the PlutoSDR using libiio.
hz.tools/sdr/uhd            sdr.Transceiver implementation for UHD radios, specifically the B210 and B200mini.
hz.tools/sdr/hackrf         sdr.Transceiver implementation for the HackRF using libhackrf.
hz.tools/sdr/mock           Mock SDR for testing purposes.
hz.tools/sdr/airspyhf       sdr.Receiver implementation for the AirspyHF+ Discovery with libairspyhf.
hz.tools/sdr/internal/simd  SIMD helpers for IQ operations, written in Go ASM. This isn’t the best to learn from, and it contains pure Go implementations alongside.
hz.tools/sdr/stream         Common Reader/Writer helpers that operate on IQ streams.

hz.tools/fftw - hz.tools/sdr/fft implementation

The hz.tools/fftw package contains bindings to libfftw3 to implement the hz.tools/sdr/fft.Planner type to transform between the time and frequency domain.

The git repo can be found at github.com/hztools/go-fftw, and is importable as hz.tools/fftw.

This is the default throughout most of my codebase, although that default is only expressed at the “leaf” package – libraries should not be hardcoding the use of this library in favor of taking an fft.Planner, unless it’s used as part of testing. There are a bunch of ways to do an FFT out there, things like clFFT or a pure-go FFT implementation could be plugged in depending on what’s being solved for.

hz.tools/{fm,am} - analog audio demodulation and modulation

The hz.tools/fm and hz.tools/am packages contain demodulators for AM analog radio, and FM analog radio. This code is a bit old, so it has a lot of room for cleanup, but it’ll do a very basic demodulation of IQ to audio.

The git repos can be found at github.com/hztools/go-fm and github.com/hztools/go-am, and are importable as hz.tools/fm and hz.tools/am.

As a bonus, the hz.tools/fm package also contains a modulator, which has been tested “on the air” and with some of my handheld radios. This code is a bit old, since the hz.tools/fm code is effectively the first IQ processing code I’d ever written, but it still runs and I run it from time to time.

// Basic sketch for playing FM radio using a reader stream from
// an SDR or other IQ stream.

bandwidth := 150 * rf.KHz
reader, err = stream.ConvertReader(reader, sdr.SampleFormatC64)
if err != nil {
    ...
}
demod, err := fm.Demodulate(reader, fm.DemodulatorConfig{
    Deviation:  bandwidth / 2,
    Downsample: 8, // some value here depending on sample rate
    Planner:    fftw.Plan,
})
if err != nil {
    ...
}
speaker, err := pulseaudio.NewWriter(pulseaudio.Config{
    Format:     pulseaudio.SampleFormatFloat32NE,
    Rate:       demod.SampleRate(),
    AppName:    "rf",
    StreamName: "fm",
    Channels:   1,
    SinkName:   "",
})
if err != nil {
    ...
}
buf := make([]float32, 1024*64)
for {
    i, err := demod.Read(buf)
    if err != nil {
        ...
    }
    if i == 0 {
        panic("...")
    }
    if err := speaker.Write(buf[:i]); err != nil {
        ...
    }
}

hz.tools/rfcap - byte serialization for IQ data

The hz.tools/rfcap package is the reference implementation of the rfcap “spec”, and is how I store IQ captures locally, and how I send them across a byte boundary.

The git repo can be found at github.com/hztools/go-rfcap, and is importable as hz.tools/rfcap.

If you’re interested in storing IQ in a way others can use, the better approach is to use SigMF. rfcap exists for cases like using UNIX pipes to move IQ around, through APIs, or when I send IQ data through an OS socket, to ensure the sample format (and other metadata) is communicated with it.

rfcap has a number of limitations, for instance, it can not express a change in frequency or sample rate during the capture, since the header is fixed at the beginning of the file.

23 February, 2023 02:00AM

February 22, 2023

Dirk Eddelbuettel

RcppArmadillo 0.12.0.1.0 on CRAN: New Upstream, New Features

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1042 other packages on CRAN, downloaded 28.1 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 513 times according to Google Scholar.

This release brings a new upstream release 12.0.1. We found a small regression with the 12.0.0 release when we tested prior to a CRAN upload. Conrad very promptly fixed this with a literal one-liner and made it 12.0.1 which we wrapped up as 0.12.0.1.0. Subsequent testing revealed no issues for us, and CRAN autoprocessed it as I tweeted earlier. This is actually quite impressive given the over 1000 CRAN packages using it, all of which got tested again by CRAN. All this is testament to the rigour, as well as the well-oiled process, at the repository. Our thanks go to the tireless maintainers!

The release actually has a rather nice set of changes (detailed below) to which we added one robustification thanks to Kevin.

The full set of changes follows. We include the previous changeset as we may have skipped the usual blog post here.

Changes in RcppArmadillo version 0.12.0.1.0 (2023-02-20)

  • Upgraded to Armadillo release 12.0.1 (Cortisol Profusion)

    • faster fft() and ifft() via optional use of FFTW3

    • faster min() and max()

    • faster index_min() and index_max()

    • added .col_as_mat() and .row_as_mat() which return matrix representation of cube column and cube row

    • added csv_opts::strict option to loading CSV files to interpret missing values as NaN

    • added check_for_zeros option to form 4 of sparse matrix batch constructors

    • inv() and inv_sympd() with options inv_opts::no_ugly or inv_opts::allow_approx now use a scaled threshold similar to pinv()

    • set_cout_stream() and set_cerr_stream() are now no-ops; instead use the options ARMA_WARN_LEVEL, or ARMA_COUT_STREAM, or ARMA_CERR_STREAM

    • fix regression (mis-compilation) in shift() function (reported by us in #409)

  • The include directory order is now more robust (Kevin Ushey in #407 addressing #406)

Changes in RcppArmadillo version 0.11.4.4.0 (2023-02-09)

  • Upgraded to Armadillo release 11.4.4 (Ship of Theseus)

    • extended pow() with various forms of element-wise power operations

    • added find_nan() to find indices of NaN elements

    • faster handling of compound expressions by sum()

  • The package no longer sets a compilation standard, or propagates one in the generated packages as R ensures C++11 on all non-ancient versions

  • The CITATION file was updated to the current format

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 February, 2023 10:34PM

February 21, 2023

Billy Warren

A straight Guide to Salsa CI - A Debian Continuous Integration tool

I won’t waste your time with introductions. The title says it all so let’s jump right in. I’ll give you as many links as possible so that this article stays as short as possible.

So first, what is Salsa? Salsa is the name of a GitLab instance that is used by Debian teams to manage Debian packages and also to collaborate on development. If you have used GitLab before, the Salsa platform is not any different. To get a feel for it, it is available at https://salsa.debian.org. Still want to know more? Find more information in the wiki. Intrigued to the point of getting started? Set up your account by following this information.

Secondly, what is Salsa CI? Like many other large projects with different contributors and strict maintenance, Debian is no different. This Linux distribution is made up of many packages which need to follow a certain standard and structure for the purposes of compatibility, scalability and maintainability. Salsa CI is a continuous integration tool that checks just that. I hope that is precise and satisfying 😊.

I would have ended here, but since our focus is the Salsa CI tool, let me go a little deeper and wider; you could also make great use of your time when I provide more information. Salsa CI was developed to continuously check the health of Debian packages before they are uploaded to the archive, by running a series of CI/CD jobs. The jobs are run against images that are already set up, uploaded and updated regularly, to reduce build time.

A screenshot showing the Salsa CI jobs.

The use of Salsa CI has become more and more prominent ever since its inception. “The Salsa CI pipeline has become popular (used by ~8k projects, from MariaDB to the Linux kernel packaging), and it is even the base for more complex CI pipelines used by other Linux flavours.” The issue is that the more popular it becomes, the more efficient it has to get, and the greater the need to make the build time as short as possible. This happens by iterating and testing out different tools during different stages of the pipeline to find the best industrial tool. This is one of the priorities for anyone who develops for or maintains Salsa CI.

So that is how ‘deep’ I can go for now.

But wait, what if you want to contribute?
If you have working knowledge of bash, git, CI and Python, plus some knowledge of building Debian packages, it could be easy for you to figure out where the components are and how they interact with each other. What if you don't have that knowledge? Then that is where the fun comes in.

Getting started on making a meaningful contribution to Salsa CI needs passion and discipline more than anything; the expertise comes later, and slowly. I have contributed to Salsa CI even without high-level expertise and knowledge of some of the tools. When I started contributing to Salsa CI I didn't even know what a Debian package is, or that the tool I was trying to navigate is being used by prominent software teams. But it is the challenge I set for myself that has, as of now, enabled me to work on a crucial part of the whole continuous integration. Wanna know what it is?

I am, as at the time of writing this article, integrating sbuild into Salsa CI to replace dpkg-buildpackage. This in turn will help to reduce the build time by getting rid of some jobs, hence making the CI work faster. Cool, right?

Contributing to such a significant project can be a little challenging at the start but when you realize how important the piece you are working on is, you suddenly fall in love with it and want to follow through so that you can also be part of the large community that helps to make this world a better place in obscure ways.

So why don’t you check out some of the Salsa CI open issues and see if you’d be interested in improving it?

21 February, 2023 01:33PM by Billy Warren

Freexian Collaborators

Monthly report about Debian Long Term Support, January 2023 (by Anton Gladky)

Like each month, have a look at the work funded by Freexian’s Debian LTS offering. This is the first monthly report in 2023.

Debian LTS contributors

In January, 17 contributors were paid to work on Debian LTS, which is possibly the highest number of active contributors per month! Their reports are available:

  • Abhijith PA did 0.0h (out of 3.0h assigned and 11.0h from previous period), thus carrying over 14.0h to the next month.
  • Adrian Bunk did 26.25h (out of 26.25h assigned).
  • Anton Gladky did 11.5h (out of 8.0h assigned and 7.0h from previous period), thus carrying over 3.5h to the next month.
  • Ben Hutchings did 8.0h (out of 24.0h assigned), thus carrying over 16.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Emilio Pozuelo Monfort did 8.0h (out of 0h assigned and 43.0h from previous period), thus carrying over 35.0h to the next month.
  • Guilhem Moulin did 20.0h (out of 17.5h assigned and 2.5h from previous period).
  • Helmut Grohne did 10.0h (out of 15.0h assigned), thus carrying over 5.0h to the next month.
  • Lee Garrett did 7.5h (out of 20.0h assigned), thus carrying over 12.5h to the next month.
  • Markus Koschany did 26.25h (out of 26.25h assigned).
  • Ola Lundqvist did 4.5h (out of 10.0h assigned and 6.0h from previous period), thus carrying over 11.5h to the next month.
  • Roberto C. Sánchez did 3.75h (out of 18.75h assigned and 7.5h from previous period), thus carrying over 22.5h to the next month.
  • Stefano Rivera did 4.5h (out of 0h assigned and 32.5h from previous period), thus carrying over 28.0h to the next month.
  • Sylvain Beucler did 23.5h (out of 0h assigned and 38.5h from previous period), thus carrying over 15.0h to the next month.
  • Thorsten Alteholz did 14.0h (out of 10.0h assigned and 4.0h from previous period).
  • Tobias Frost did 19.0h (out of 19.0h assigned).
  • Utkarsh Gupta did 43.25h (out of 26.25h assigned and 17.0h from previous period).

Evolution of the situation

Furthermore, we released 46 DLAs in January, which resolved 146 CVEs. We are working diligently to reduce the number of packages listed in dla-needed.txt, and currently, we have 55 packages listed.

We are constantly growing and seeking new contributors. If you are a Debian Developer and want to join the LTS team, please contact us.

Thanks to our sponsors

Sponsors that joined recently are in bold.

21 February, 2023 12:00AM by Anton Gladky

February 20, 2023

Jonathan McDowell

Fixing mobile viewing

It was brought to my attention recently that the mobile viewing experience of this blog was not exactly what I’d hope for. In my poor defence I proof read on my desktop and the only time I see my posts on mobile is via FreshRSS. Also my UX ability sucks.

Anyway. I’ve updated the “theme” to a more recent version of minima and tried to make sure I haven’t broken it all in the process (I did break tagging, but then I fixed it again). I double checked the generated feed to confirm it was the same (other than some re-tagging I did), so hopefully I haven’t flooded anyone’s feed.

Hopefully I can go back to ignoring the underlying blog engine for another 5+ years. If not I’ll have to take a closer look at Enrico’s staticsite.

20 February, 2023 07:09PM

February 19, 2023

Russell Coker

New 18 Core CPU and NVMe

I just got an E5-2696 v3 CPU for my ML110 Gen9 home workstation; this has a Passmark score of 23326, which is almost 3 times faster than the E5-2620 v4, which rated 9224. Previously it took over 40 minutes of real time to compile a 6.10 kernel that was based on the Debian kernel configuration; now it takes 14 minutes of real time, 202 minutes of user time, and 37 minutes of system CPU time. That’s a definite benefit of having a faster CPU: I don’t often compile kernels, but when I do I don’t want to wait 40+ minutes for a result. I also expanded the system from 96G of RAM to 128G; most of the time I don’t need so much RAM, but it’s better to have too much than too little, particularly as my friend got me a good deal on RAM. The extra RAM might have helped improve performance too; going from 6/8 DIMM slots full to 8/8 might help the CPU balance access.

That series of HP machines has a plastic mounting bracket for the CPU; see this video about the HP Proliant Smart Socket for details [1]. I was working on this with a friend who has the same model of HP server as I do: after buying myself a system I was so happy with it that I bought another of the same model when I saw it going for a good price, and then sold it to my friend when I realised that I had too many tower servers at home. It turns out that getting the same model of computer as a friend is a really good strategy, so then you can work together to solve problems with it. My friend’s first idea was to try and buy new clips for the new CPUs (which would have delayed things and cost more money), but Reddit and some blog posts suggested that you can just skip the smart-socket guide clip, and when the chip was resting in the socket it felt secure, as the protrusions on the sides of the socket fit firmly enough into the notches in the CPU to prevent it moving far enough to short a connection. Testing on 2 systems showed that you don’t need the clip. As an aside, it would be nice if Intel made every CPU that fits a particular socket have the same physical dimensions, so clips and heatsinks can work well on all CPUs.

The TDP of the new CPU is 145W and the old one was 85W. One would hope that in a server class system that wouldn’t make a lot of difference but unfortunately the difference was significant. Previously I could have the system running 7/8 cores with BOINC 24*7 and I wouldn’t notice the fans being louder. It is possible that 100% CPU use on a hot day might make the fans sound louder if I didn’t have an air-conditioner on that was loud enough to drown them out, but the noteworthy fact is that with the previous CPU the system fans were a minor annoyance. Now if I have 16 cores running BOINC it’s quite loud, the sort of noise that makes most people avoid using tower servers as workstations! I’ve found that if I limit it to 4 or 5 cores then the system is about as quiet as it was before. As a rough approximation I can use as much CPU power as before without making the fans louder but if I use more CPU power than was previously available it gets noisy.

I also got some new NVMe devices: I was previously using 2*Crucial 1TB P1 NVMes in a BTRFS RAID-1 and now I have 2*Crucial 1TB P3 NVMes (where P1 is the slowest Crucial offering, P3 is better and more expensive, P5 is even better, etc). When doing the BTRFS migrations to move my workstation to new NVMe devices and my server to the old NVMe devices I found that the P3 series seem to have a limit of about 70MB/s for sustained random writes and the P1 series is about 35MB/s. Apparently with the cheaper NVMe devices they slow down if you do lots of random writes; it’s a pity that all the review articles talking about GB/s speeds don’t mention this. To see how bad the reviews are, Google some reviews of these SSDs; you will find a couple of comment threads on places like Reddit about them slowing down with lots of writes, and lots of review articles on well known sites that don’t mention it. Generally I’d recommend not upgrading from P1 to P3 NVMe devices; the benefit isn’t enough to cover the effort. For every capacity of NVMe devices the most expensive devices cost more than twice as much as the cheapest devices, and sometimes it will be worth the money. Getting the most expensive device won’t guarantee great performance, but getting cheap devices will guarantee that it’s slow.

It seems that CPU development isn’t progressing as well as it used to: the CPU I just bought was released in 2015 and scored 23,343 according to Passmark [2]. The most expensive Intel CPU on offer at my local computer store is the i9-13900K, which was released this year and scores 62,914 [3]. One might say that CPUs designed for servers are different from ones designed for desktop PCs, but the i9 in question has a “TDP Up” of 253W which is too big for the PSU I have! According to the HP web site the new ML110 Gen10 servers aren’t sold with a CPU as fast as the E5-2696 v3! In the period from 1988 to about 2015 every year there were new CPUs with new capabilities that were worth an upgrade. Now for the last 8 years or so there hasn’t been much improvement at all. Buy a new PC for better USB ports or something, not for a faster CPU!

19 February, 2023 12:13PM by etbe

February 17, 2023

Enrico Zini

Monitoring a heart rate monitor

I bought myself a cheap wearable Bluetooth LE heart rate monitor in order to play with it, and this is a simple Python script to monitor it and plot data.

Bluetooth LE

I was surprised that these things seem decently interoperable.

You can use hcitool to scan for devices:

hcitool lescan

You can then use gatttool to connect to devices and poke at them interactively from a command line.

Bluetooth LE from Python

There is a nice library called Bleak which is also packaged in Debian. It's modern Python with asyncio and works beautifully!
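
As a flavour of what that looks like, here is a minimal sketch (not the actual script from this post) that subscribes to the standard GATT Heart Rate Measurement characteristic with Bleak and prints the values; the device address is a placeholder you would replace with the one found by the scan:

#!/usr/bin/env python3
# Minimal sketch: print heart rate notifications from a BLE monitor using Bleak.
import asyncio
from bleak import BleakClient

ADDRESS = "AA:BB:CC:DD:EE:FF"  # placeholder: the address found with `hcitool lescan`
HR_MEASUREMENT = "00002a37-0000-1000-8000-00805f9b34fb"  # Heart Rate Measurement (0x2A37)

def on_hr(_sender, data: bytearray):
    # Byte 0 is a flags field; bit 0 says whether the value is uint8 or uint16.
    bpm = int.from_bytes(data[1:3], "little") if data[0] & 0x01 else data[1]
    print(f"{bpm} bpm")

async def main():
    async with BleakClient(ADDRESS) as client:
        await client.start_notify(HR_MEASUREMENT, on_hr)
        await asyncio.sleep(60)  # log for a minute
        await client.stop_notify(HR_MEASUREMENT)

asyncio.run(main())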

Heart rate monitors

Things I learnt:

How about a proper fitness tracker?

I found OpenTracks, also on F-Droid, which seems nice

Why script it from a desktop computer?

The question is: why not?

A fitness tracker on a phone is useful, but there are lots of silly things one can do from one's computer that one can't do from a phone. A heart rate monitor is, after all, one more input device, and there are never enough input devices!

There are so many extremely important use cases that seem entirely unexplored:

  • Log your heart rate with your git commits!
  • Add your heart rate as a header in your emails!
  • Correlate heart rate information with your work activity tracker to find out what tasks stress you the most!
  • Sync ping intervals with your own heartbeat, so you get faster replies when you're more anxious!
  • Configure workrave to block your keyboard if you get too excited, to improve the quality of your mailing list contributions!
  • You can monitor the monitor script of the heart rate monitor that monitors you! Forget buffalo, be your monitor monitor monitor monitor monitor monitor monitor monitor...

17 February, 2023 10:22PM

Jonathan McDowell

First impressions of the VisionFive 2

VisionFive 2 packaging

Back in September last year I chose to back the StarFive VisionFive 2 on Kickstarter. I don’t have a particular use in mind for it, but I felt it was one of the first RISC-V systems that were relatively capable (mentally I have it as somewhere between a Raspberry Pi 3 + a Pi 4). In particular it’s a quad 1.5GHz 64-bit RISC-V core with 8G RAM, USB3, GigE ethernet and a single M.2 PCIe slot. More than ample as a personal machine for playing around with RISC-V and doing local builds. I ended up paying £67 for the Early Bird variant (dual GigE ethernet rather than 1 x 100Mb and 1 x GigE). A couple of weeks ago I got an email with a tracking number and last week it finally turned up.

Being impatient the first thing I did was plug it into a monitor, connect up a keyboard, and power it on. Nothing except some flashing lights. Looking at the boot selector DIP switches suggested it was configured to boot from UART, so I flipped them to (what I thought was) the flash setting. It wasn’t - turns out the “ON” marking on the switches represents logic 0 and it was correctly setup when I got it. I went to read the documentation which talked about writing an image to a MicroSD card, but also had details of the UART connection. Wanting to make sure the device was at least doing something before I actually tried an OS on it I hooked up a USB/serial dongle and powered the board up again. Success! U-Boot appeared and I could interact with it.

I went to the VisionFive2 Debian page and proceeded to torrent the Image-69 image, writing it to a MicroSD card and inserting it in the slot on the bottom of the board. It booted fine. I can’t even tell you what graphical environment it booted up because I don’t remember; it worked fine though (at 1080p, I’ve seen reports that 4K screens will make it croak).

Poking around the image revealed that it’s built off a snapshot.debian.org snapshot from 20220616T194833Z, which is a little dated at this point but I understand the rationale behind picking something that works and sticking with it. The kernel is of course a vendor special, based on 5.15.0. Further investigation revealed that the entire X/graphics stack is living in /usr/local, which isn’t overly surprising; it’s Imagination based. I was pleasantly surprised to discover there is work to upstream the Imagination support, but I’m not planning to run the board with a monitor attached so it’s not a high priority for me.

Having discovered all that I decided to see how well a “clean” Debian unstable install from Debian Ports would go. I had a spare Intel Optane lying around (it’s a stupid 22110 M.2 which is too long for any machine I own), so I put it in the slot on the bottom of the board. To my surprise it Just Worked and was detected ok:

# lspci
0000:00:00.0 PCI bridge: PLDA XpressRich-AXI Ref Design (rev 02)
0000:01:00.0 USB controller: VIA Technologies, Inc. VL805/806 xHCI USB 3.0 Controller (rev 01)
0001:00:00.0 PCI bridge: PLDA XpressRich-AXI Ref Design (rev 02)
0001:01:00.0 Non-Volatile memory controller: Intel Corporation NVMe Datacenter SSD [Optane]

I created a single partition with an ext4 filesystem (initially tried btrfs, but the StarFive kernel doesn’t support it), and kicked off a debootstrap with:

# mkfs -t ext4 /dev/nvme0n1p1
# mount /dev/nvme0n1p1 /mnt
# debootstrap --keyring=/etc/apt/trusted.gpg.d/debian-ports-archive-2023.gpg \
	unstable /mnt https://deb.debian.org/debian-ports

The u-boot setup has a convoluted set of vendor scripts that eventually ends up reading a /boot/extlinux/extlinux.conf config from /dev/mmcblk1p2, so I added an additional entry there using the StarFive kernel but pointing to the NVMe device for /. Made sure to set a root password (not that I’ve been bitten by that before, too many times), and rebooted. Success! Well. Sort of. I hit a bunch of problems with having a getty running on ttyS0 as well as one running on hvc0. The second turns out to be a console device from the RISC-V SBI. I did a systemctl mask serial-getty@hvc0.service which made things a bit happier, but I was still seeing odd behaviour and output. Turned out I needed to rebuild the initramfs as well; the StarFive one was using Plymouth and doing some other stuff that seemed to be confusing matters. An update-initramfs -k 5.15.0-starfive -c built me a new one and everything was happy.
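
For illustration, the kind of additional extlinux.conf entry described above might look roughly like this; the label, the kernel/initrd paths and the console argument are placeholders that have to match what the boot partition actually contains (the stock entries may also carry a device-tree line that would need to be copied), so treat it as a sketch rather than the actual entry:

# illustrative entry only - label, paths and arguments are placeholders
label debian-unstable-nvme
    kernel /vmlinuz-5.15.0-starfive
    initrd /initrd.img-5.15.0-starfive
    append root=/dev/nvme0n1p1 rw console=ttyS0,115200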

Next problem; the StarFive kernel doesn’t have IPv6 support. StarFive are good citizens and make their 5.15 kernel tree available, so I grabbed it, fed it the existing config, and tweaked some options (including adding IPV6 and SECCOMP, which chrony wanted). Slight hiccup when it turned out trying to do things like make sound modular caused it to fail to compile, and having to backport the fix that allowed the use of GCC 12 (as present in sid), but it got there. So I got cocky and tried to update it to the latest 5.15.94. A few manual merge fixups (which I may or may not have got right, but it compiles and boots for me), and success. Timings:

$ time make -j 4 bindeb-pkg
… [linux-image-5.15.94-00787-g1fbe8ac32aa8]
real	37m0.134s
user	117m27.392s
sys	6m49.804s

On the subject of kernels I am pleased to note that there are efforts to upstream the VisionFive 2 support, with what appears to be multiple members of StarFive engaging in multiple patch submission rounds. It’s really great to see this and I look forward to being able to run an unmodified mainline kernel on my board.

Niggles? I have a few. The provided u-boot doesn’t have NVMe support enabled, so at present I need to keep a MicroSD card to boot off, even though root is on an SSD. I’m also seeing some errors in dmesg from the SSD:

[155933.434038] nvme nvme0: I/O 436 QID 4 timeout, completion polled
[156173.351166] nvme nvme0: I/O 48 QID 3 timeout, completion polled
[156346.228993] nvme nvme0: I/O 108 QID 3 timeout, completion polled

It doesn’t seem to cause any actual issues, and it could be the SSD, the 5.15 kernel or an actual hardware thing - I’ll keep an eye on it (I will probably end up with a different SSD that actually fits, so that’ll provide another data point).

More annoying is the temperature the CPU seems to run at. There’s no heatsink or fan, just the metal heatspreader on top of the CPU, and in normal idle operation it sits at around 60°C. Compiling a kernel it hit 90°C before I stopped the job and sorted out some additional cooling in the form of a desk fan, which kept it at just over 30°C.

Bare VisionFive 2 SBC board with a small desk fan pointed at it

I haven’t seen any actual stability problems, but I wouldn’t want to run for any length of time like that. I’ve ordered a heatsink and also realised that the board supports a Raspberry Pi style PoE “Hat”, so I’ve got one of those that includes a fan ordered (I am a complete convert to PoE especially for small systems like this).

With the desk fan setup I’ve been able to run the board for extended periods under load (I did a full recompile of the Debian 6.1.12-1 kernel package and it took about 10 hours). The M.2 slot is unfortunately only a single PCIe v2 lane, and my testing topped out at about 180MB/s. IIRC that is about half what the slot should be capable of, and less than a 10th of what the SSD can do. Ethernet testing with iPerf3 sustained about 941Mb/s, so basically maxing out the port. The board as a whole isn’t going to set any speed records, but it’s perfectly usable, and pretty impressive for the price point.

On the Debian side I’ve not hit any surprises. There’s work going on to move RISC-V to a proper release architecture, and I’m hoping to be able to help out with that, but the version of unstable I installed from the ports infrastructure has looked just like any other Debian install. Which is what you want. And that pretty much sums up my overall experience of the VisionFive 2; it’s not noticeably different than any other single board computer. That’s a good thing, FWIW, and once the kernel support lands properly upstream (it’ll be post 6.3 at least it seems) it’ll be a boring mainline supported platform that just happens to be RISC-V.

17 February, 2023 06:06PM

Reproducible Builds (diffoscope)

diffoscope 236 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 236. This version includes the following changes:

[ FC Stegerman ]
* Update code to match latest version of Black. (Closes: #1031433)

[ Chris Lamb ]
* Require at least Black version 23.1.0 to run the internal Black tests.
* Update copyright years.

You can find out more by visiting the project homepage.

17 February, 2023 12:00AM

Valhalla's Things

git status Side Effects

Posted on February 17, 2023

TIL, from a conversation with friends¹, that git status can indeed have side effects, of some sort.

By default, running git status causes a background refresh of the index to happen, which holds the write lock on the repository.

In theory, if somebody is really unlucky, this could break some script / process that is also trying to work on the repo at the same time, especially on a huge repository where git status takes a significant time, rather than the usual fraction of a second².

There is a way to prevent this, by running git status --no-optional-locks (https://git-scm.com/docs/git#Documentation/git.txt---no-optional-locks) or by setting GIT_OPTIONAL_LOCKS to 0, as writing the updated index is just an optimization and git knows it can be avoided.
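
Spelled out as commands, the two equivalent ways to run a status without the optional index-refresh lock are:

git status --no-optional-locks
GIT_OPTIONAL_LOCKS=0 git status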

I don’t think there are many chances to actually stumble on this in the real life, but I’m writing this down so that if I ever do I have an easy way to remember what happened and find the solution.


  1. I won’t name names or provide details to protect the innocents (and the guilty), but thanks to all of the people involved in the conversations who helped find the answer.↩︎

  2. Related, but unrelated TIL: there is a place called Secondo (second), near Venice, but it’s already a frazione (fraction / subdivision of municipality).↩︎

17 February, 2023 12:00AM

February 16, 2023

Scarlett Gately Moore

KDE Snaps, Security updates, Debian Freeze

Icy morning, Witch Wells AZ

Much like our trees, Debian is now in freeze stage for Bookworm. I am still working on packages locally until development opens up again. My main focus is getting mycroft packages updated to the new fork at https://github.com/orgs/OpenVoiceOS/repositories.

On the KDE Snaps side of things:

My PPA is not going well. There is a problem in Focal where qhelpgenerator-qt5 is a missing dependency, HOWEVER it is there… as shown here: https://launchpad.net/ubuntu/focal/+package/qhelpgenerator-qt5 (a fun circular dependency for qtbase). It also builds fine in a focal chroot. I have tried copying packages, source recipe builds, and adding another PPA with successful builds, to no avail. My PPA does not seem to use its own packages, or universe for that matter. I have tried all of the dependency settings available and nothing changes. If any of you Launchpad experts out there want to help me out, or point me in the right direction, it would be appreciated!

This PPA is mandatory for our core20 snaps moving forward, as they currently have no security updates, and I refuse to have my name on security riddled snaps.

As for the kde-neon extension for core22, I have fixed most of the tests and Sergio is wrapping it up, thank you!!!!

I am still looking for work!!! Please reach out if you, or anyone you know is looking for my skill set.

Once again, I ask for you to please consider a donation. We managed to get the bills paid this month ( Big thank you! ) but, March is quickly approaching. The biggest thing is Phone/Internet so I can keep working on things and Job hunt. Thank you so much for your support!

https://gofund.me/a9c36b87

16 February, 2023 07:17PM by sgmoore

hackergotchi for Gunnar Wolf

Gunnar Wolf

We are GREAT at handling multimedia!

I have mentioned several times in this blog, as well as by other communication means, that I am very happy with the laptop I bought (used) about a year and a half ago: an ARM-based Lenovo Yoga C630.

Yes, I knew from the very beginning that using this laptop would pose a challenge to me in many ways, as full hardware support for ARM laptops is nowhere near as easy as for plain boring x86 systems. But the advantages far outweigh the inconvenience (i.e. the hoops I had to jump through to handle video-out when I started teaching presentially, which are fortunately a thing of the past now).

Anyway — This post is not about my laptop.

Back in 2018, I was honored to be appointed as a member of the Debian Technical Committee. Of course, that meant (due to the very clear and clever point 6.2.7.1 of the Debian Constitution) that my tenure in the Committee (as well as Niko Tyni’s) finished on January 1, 2023. We were invited to take part in a Jitsi call as a last meeting, as well as to welcome Matthew Garrett to the Committee.

Of course, I arranged things so I would be calling from my desktop system at work (for which I have an old, terrible webcam — but as long as I don't need to control screen sharing too finely, it mostly works). Out of eight people in the call, two had complete or quite crippling failures with their multimedia setup, and one had a frozen image (at least as far as I could tell).

So… Yes, Debian is indeed good and easy and simple and reliable for most nontechnical users using standard tools.

But… I guess that we power users enjoy tweaking our setup to our precise particular liking. Or that we just don’t care about frivolities such as having a working multimedia setup.

Or I don’t know what happens.

But the fact that close to half of the Technical Committee, which should consist of Debian Developers who know their way around technical obstacles, cannot get a working multimedia setup for a simple, easy WebRTC call (even after a pandemic that made us all work via teleconferencing solutions on a daily basis!) is just… Beautiful 😉

16 February, 2023 07:12PM

February 15, 2023

hackergotchi for Marco d'Itri

Marco d'Itri

I replaced grub with systemd-boot

To be able to investigate and work on the measured boot features I have switched from grub to systemd-boot (sd-boot).

This initial step is optional, but it is useful because this way /etc/kernel/cmdline will become the new place where the kernel command line can be configured:

. /etc/default/grub
echo "root=/dev/mapper/root $GRUB_CMDLINE_LINUX $GRUB_CMDLINE_LINUX_DEFAULT" > /etc/kernel/cmdline

Do not forget to set the correct root file system there, because initramfs-tools does not support discovering it at boot time using the Discoverable Partitions Specification.

The installation has not been automated yet out of an abundance of caution (but we will work on it soon), so after installing the package I had to run bootctl to install sd-boot in the ESP and enable it in the UEFI boot sequence, and then I created boot loader entries for the kernels already installed on the system:

apt install systemd-boot

bootctl install
for kernel in /boot/vmlinuz-*; do
  kernel-install add "${kernel#*/vmlinuz-}" "$kernel"
done

I like to show the boot menu by default, at least until I am more familiar with sd-boot:

bootctl set-timeout 4

Since other UEFI binaries can be easily chainloaded, I am also going to keep around grub for a while, just to be sure:

cat <<END > /boot/efi/loader/entries/grub.conf
title Grub
linux /EFI/debian/grubx64.efi
END

At this point sd-boot works, but I still had to enable secure boot. So far sd-boot has not been signed with a Debian key known to the shim bootloader, so I needed to create a Machine Owner Key (MOK), enroll it in UEFI and then use it to sign everything.

I dislike the complexity of mokutil and the other related programs, so after removing it and the boot shim I have decided to use sbctl instead. With it I easily created new keys, enrolled them in the EFI key store and then signed everything:

sbctl create-keys
sbctl enroll-keys
for file in /boot/efi/*/*/linux /boot/efi/EFI/*/*.efi; do
  ./sbctl sign -s $file
done

Since there is no sbctl package yet I need to make sure that kernels installed in the future will also be signed automatically, so I have created a trivial script in /etc/kernel/install.d/ which automatically runs sbctl sign -s or sbctl remove-file.
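
Not the actual script, but a minimal sketch of such a hook (the file name is arbitrary; kernel-install passes the command, the kernel version and the entry directory to its plugins, and the kernel ending up as <entry dir>/linux matches the paths signed above):

#!/bin/sh
# minimal sketch of /etc/kernel/install.d/zz-sbctl.install (hypothetical name)
# kernel-install calls plugins as: <plugin> add|remove <kernel-version> <entry-dir> [<kernel-image> ...]
COMMAND="$1"
ENTRY_DIR="$3"

case "$COMMAND" in
  add)
    # assumption: the kernel has already been copied to <entry-dir>/linux
    sbctl sign -s "$ENTRY_DIR/linux"
    ;;
  remove)
    sbctl remove-file "$ENTRY_DIR/linux"
    ;;
esac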

The Debian wiki SecureBoot page documents how to do this with mokutil and sbsigntool, but I think that sbctl is much friendlier.

As a bonus, I have also added to the boot menu the excellent Debian-based GRML live distribution. Since sd-boot is not capable of loopback-mounting CD-ROM images like grub, I first had to extract the kernel and initramfs and copy them to the ESP:

mount -o loop /boot/grml/grml64-full_2022.11.iso /mnt/
mkdir /boot/efi/grml/
cp /mnt/boot/grml64full/* /boot/efi/grml/
umount /mnt/

cat <<END > /boot/efi/loader/entries/grml.conf
title GRML
linux /grml/vmlinuz
initrd /grml/initrd.img
options boot=live bootid=grml64full202211 findiso=/grml/grml64-full_2022.11.iso live-media-path=/live/grml64-full net.ifnames=0 
END

As expected, after a reboot bootctl reports the new security features:

System:
      Firmware: UEFI 2.70 (Lenovo 0.4496)
 Firmware Arch: x64
   Secure Boot: enabled (user)
  TPM2 Support: yes
  Boot into FW: supported

Current Boot Loader:
      Product: systemd-boot 252.5-2
     Features:  Boot counting
                Menu timeout control
                One-shot menu timeout control
                Default entry control
                One-shot entry control
                Support for XBOOTLDR partition
                Support for passing random seed to OS
                Load drop-in drivers
                Support Type #1 sort-key field
                Support @saved pseudo-entry
                Support Type #1 devicetree field
                Boot loader sets ESP information
          ESP: /dev/disk/by-partuuid/1b767f8e-70fa-5a48-b444-cfe5c272d66e
         File: └─/EFI/systemd/systemd-bootx64.efi

...

Relevant documentation:

15 February, 2023 01:45PM

Lukas Märdian

Netplan v0.106 is now available

I’m happy to announce that Netplan version 0.106 is now available on GitHub and is soon to be deployed into an Ubuntu/Debian/Fedora installation near you! Six months and 65 commits after the previous version, this release is brought to you by 4 free software contributors from around the globe.

Highlights

Highlights of this release include the new netplan status command, which queries your system for IP addresses, routes, DNS information, etc… in addition to the Netplan backend renderer (NetworkManager/networkd) in use and the relevant Netplan YAML configuration ID. It displays all this in a nicely formatted way (or alternatively in machine readable YAML/JSON format).

Furthermore, we implemented a clean libnetplan API which can be used by external tools to parse Netplan configuration, migrated away from non-inclusive language (PR#303) and improved the overall Netplan documentation. Another change that should be noted, is that the match.macaddress stanza now only matches on PermanentMACAddress= on the systemd-networkd backend, as has been the case on the NetworkManager backend ever since (see PR#278 for background information on this slight change in behavior).

Changelog

Bug fixes:

15 February, 2023 01:41PM by slyon

Valhalla's Things

My experience with a PinePhone

Posted on February 15, 2023

I’ve had and used1 a PinePhone for quite some time now, and a shiny new blog sounds like a good time to do a review of my experience.

TL;DR: I love it, but my use cases may not be very typical.

While I've had a mobile phone from an early age (my parents made me carry one for emergencies before it was usual for my peers) I've never used a typical smartphone (android / iPhone / those other proprietary things) because I can't trust them not to be designed to work against me (data collection, ads, tricking users into micro payments, and other antipatterns of proprietary software design), and bringing them to sanity as most of the people I know do is too much effort for my tastes.

Instead, as a phone I keep using an old nokia featurephone2 which is reliable, only requires charging once a week, and can easily survive falling from hand height to the ground, even if thrown 3.

And then I’ve been carrying a variety of other devices to do other computer-like tasks; earlier it was just a laptop, or a netbook, a Pandora (all of which used a dongle to connect to mobile network internet) then I tried a phone with FirefoxOS (it could have been better) and now the PinePhone has taken their place, at least when I’m not carrying a laptop anyway.

So, the tasks I use the PinePhone for are mostly:

  1. sending and receiving xmpp messages, with no need for notifications (when somebody needs to tell me something where urgency is required, they know to use the other phone, either with a call or an sms);
  2. tethering an internet connection to the laptop;
  3. reading djvu scans of old books while standing in a queue or something;
  4. checking something on the internet when I’m not close to a real computer (i.e. one with a keyboard and a big screen);
  5. running a timer while heat-setting fabric paint with an iron (and reading a djvu book at the same time — yes, this is a very specific task, but it has happened multiple times already :D );
  6. running a calculator with unit conversions;
  7. running the odd command line program;
  8. taking pictures, especially those that I want to send soon (I often also carry a DSLR camera, but I tend to wait a few days before I download them from the card);
  9. map related things.

3 and 5 work perfectly well, no issues there. 1, 2 and 4 usually work just fine, except for the fact that sometimes while the phone is suspended it forgets about being a phone, and needs to be restarted to turn the modem back on. It’s not a big deal while using the phone, I just need to check before I try to use it after a few hours.

For 1, I also had to take care to install dino-im from experimental, as up to now the features required to fit the interface in a mobile screen aren’t available in the official release, but I believe that this has just been fixed.

Somewhat related to 4, I’ve also installed kiwix and the dumps of wiktionary and wikivoyage, but I haven’t had a chance to travel much, so I’m not really using it.

For 6 I’m quite happy with qalculate, the GUI version of qalc (which is what I use on my laptop), even if it has a few minor interface issues, and 7 of course works as well as it can, given the limitations of a small screen and virtual keyboard.

8 is, let us say, problematic. The camera on the PinePhone is peculiar, only works with one specific piece of software, and even there the quality of the pictures is, well… low-fi, vintagey pictures are a look and that's a specific artistic choice, right? Thanks to the hard work of the megapixels maintainer the quality has improved a lot, and these days it is usable, but there are still limits (no webcam in the browser, no recording of videos).

9 is really bad. A few times I can remember getting a GPS fix. A few times in many months, and now and then I keep trying, to see if a miracle has happened, but usually I only get a vague position from wifi data (which isn’t great, when walking through less densely populated areas).

I’ve seen another PinePhone running gpsd and getting data from an external GPS receiver via bluetooth, and if I really needed it I may seriously consider that solution.

Also, the apps available in mobian aren’t great either, even when compared to running tangogps on an OpenMoko with pre-downloaded maps (I mean! I don’t think my expectations are too high!).

I’ve heard that PureMaps is quite good as a software, but a bit of a PITA to package for Debian, and I really hope that one day there will be a linux-first mobile device with good GPS hardware, so that people will be encouraged to fix the software side.

Thankfully, I don’t usually need GPS and navigation software; when I’m driving into places I don’t know I usually have a human navigator, and when walking into places I can do with just a static map (either printed on paper or on the PinePhone), maybe some pre-calculated route from the OSM website and looking at street names to find out where I am.

Overall, for my use cases the PinePhone works just fine and is a useful addition to the things I always carry with me, and I don't feel the pressing need to get an android phone. I don't think it's ready as a daily driver for everybody, but I think that depending on one's needs it's worth asking around (I'd recommend doing so on the fediverse with a #PinePhone and #mobian hashtag), as there is a non-zero chance that it may be a good fit for you.


  1. which, if I’m not mistaken, is often not implied by the fact of owning it :D↩︎

  2. for very low values of feature: it doesn’t have any kind of internet access, and there are only 3 games, one of which is snake.↩︎

  3. don’t ask.↩︎

15 February, 2023 12:00AM

February 14, 2023

hackergotchi for Holger Levsen

Holger Levsen

20230214-i-love-osuosl

I love free software and I ❤️ OSUOSL

So in December 2018 I was approached, somewhat out of the blue, by someone from OSUOSL who offered eight servers to the Reproducible Builds project. As these machines had 32 cores and 144 GB of RAM each (plus 3 TB on a single HDD), and they also offered free hosting, I very happily said yes.

And since then I'm a very happy Oregon State University Open Source Lab user, and these days we're switching the setup to different machines, which is another story to tell some other time...! ;-)

My point now is: since 2018 I got to know OSUOSL and every year I like them more. They are super friendly and reliable (a working ticket system and a great IRC channel), and they offer help in various ways, be it with DNS names (and renamings...), finding new hardware suited to our needs, or whatever else we come up with. They are really dedicated to helping free software projects and I'm grateful to have had the privilege of enjoying this for more than four years now.

To quote https://osuosl.org/:

"The Open Source Lab is a nonprofit organization working for the advancement of open source technologies.

The lab, in partnership with the School of Electrical Engineering and Computer Science at Oregon State University, provides hosting for more than 160 projects, including those of worldwide leaders like the Apache Software Foundation, the Linux Foundation and Drupal. Together, the OSL’s hosted sites deliver nearly 430 terabytes of information to people around the world every month. The most active organization of its kind, the OSL offers world-class hosting services, professional software development and on-the-ground training for promising students interested in open source management and programming."

So, IOW, I'm just seeing a very tiny tip of the iceberg of their awesome work. Check out https://osuosl.org/communities/ to see what I mean with very tiny tip. Search for "Debian" and "Reproducible" on that page :)

Thank you, everybody at OSUOSL! You rock and make a big difference for many projects! ❤️

14 February, 2023 06:15PM

February 13, 2023

hackergotchi for Jonathan Dowland

Jonathan Dowland

A visit to Prusa Labs


In September I was in Czechia for a Red Hat event. I ended up travelling via Prague, and had an unexpected extra day due to an airline strike causing my flight home to be cancelled. I took the opportunity to visit Prusa's offices/factory/Lab, and it was amazing!

The Prusa team were all busy getting ready for the Prague Maker Faire that was happening the day afterwards.1

On arriving at the street which houses Prusa's Lab and Office buildings, the first thing that hit me was the smell. I find the melted-plastic smell of FDM printing (with PLA, at least) quite pleasant, and this was a super-condensed version of that, pumping out of their ground-floor windows. I started at the reception area on the ground floor. Outside reception there's a lovely sculpture representing the history of the development of the MK3S+.

The Reception

SLS Farm in the former Hack lab

History of the MK3S+


At the reception you have a small waiting area with shelves of demonstration prints and some spools of Prusament. From here, our kind guide first took us to a region on the ground floor that used to be (prior to COVID times) the public maker/hack lab. The lab contained two modest farms of printers: one of their flagship FDM printer, the MK3S+, and another of their SLS resin printers. A close up of some example resin models is pictured above. The bicycle was tiny: about the size of a thumbnail.

A Historic display

Bespoke QE equipment

The MK3S+ Farm



Moi in the Farm

2KG orange PLA spools for the Farm

The rest of the ground floor area was full of heavy machinery and prototyping equipment. Onwards we went to the upstairs floors.

Upstairs, past a nice graphic of Prusa's historic products, we visited the assembly and QE testing areas. They have a very organised system of parts buckets and some thorough QE processes, including some bespoke equipment that produces the "receipt" of tests and calibration that they provide for you in the box when you buy a 3D printer.

After that, we visited the production Farm: a large room full of MK3S+ printers churning out parts for other printers. The noise was remarkable. The printers were running custom firmware to continually print the part they had been set for. Some of the printers were designated for printing with ASA: they were colour-coded (yellow controller surround) and within boxed regions to prevent the fumes causing problems. (picture at the top)

Outside the room sat pallets of orange PETG plastic for the printer farm on 2KG spools (not a size they sell to the public just yet).

The final part of the trip was outside, to the real farm: Prusa have a smallholding with Alpacas at the rear of their estate. Whilst we visited it, Josef Prusa himself turned up (in a snazzy looking custom colour Tesla) to feed the animals, say hello and pose for a picture.

Overall, it was a fantastic visit. I'm very grateful to Air France for cancelling my flight home, and to Prusa Labs (in particular Lukáš) for allowing me to come and say hello!


  1. I managed to squeeze that in too on my way to my rescheduled flight, although it was a rush visit and I don't have much to say or show from it. Suffice to say that it was lovely and bittersweet since the UK Maker Faire used to be hosted in my fair home city of Newcastle before they stopped.

13 February, 2023 10:12AM


Valhalla's Things

Cernit Sets for the Royal Game of UR

Posted on February 13, 2023

Some months ago I stumbled on the video where Irving Finkel teaches Tom Scott how to play the Royal Game of Ur, and my takeaway was:

  1. Irving Finkel is Gandalf or something;
  2. the game sounded quite fun!;

so I did the almost sensible thing, quickly drew a board with inkscape, printed it on 160 g/m² paper and used my piecepack pieces to try a few games.

two copies of a game board made of plain squares: a 3 × 4 squares area at the top, a 3 × 2 area at the bottom, connected by a 1 × 2 corridor in the middle.

I say almost sensible, because rather than drawing the rosettes with inkscape I decided to carve a rubber stamp and use that to print them on the board (which is why the svgs on this page are missing them: if you print them you’ll have to add the rosettes in some way).

And if I had been a sensible person, that’s where I would have stopped, since that’s perfectly enough to play games and find out that it actually is quite a fun game, and one of our staples.

As some of you probably know, I’m not a sensible person.

I also have quite a few blocks and half-blocks of cernit, and one day, after I had used some, my hands were still moving and accidentally made some pyramidal dice and a handful of tokens.

Royal game of Ur pieces in marbled grey and white plastic: the tokens are small coins in one colour with a small circle of the other colour in the middle, the dice are tetrahedrons in one colour with two points marked in the other colour.

And after baking and trying them I liked them, but they had not been planned in any way, and they were a bit too small for the board, so the next time I was using cernit I tried to make a new set.

And while I was doing that I tried a new shape for the dice, as coins marked with a dot in the middle of one of the sides, because I don’t really like tetrahedral dice.

A set of red and green tokens, like the ones above, plus tetrahedron dice and four more coins with a dot of a different colour just on one side. Everything is on top of a board that folds up.

And now, I realized this wasn’t going to be my last set, and urgently felt the need for some container to keep them in and avoid missing pieces.

(Yes, in the picture above one piece was already missing. While taking it I didn't realize it, and neither did I when picking up everything to put it away, which would have let me get the missing piece and store it safely together with the rest of the set. It must have been hiding in plain sight nearby, but I will never know where.)

Anyway, back to Inkscape, and to a board printed on scrap paper that I tried to fold up until I came up with a layout that folded up in a small drawer, and then I added a case to wrap around it to keep it closed.

A white box, about 2.5 cm × 2.5 cm × 7.5 cm; a drawer is sliding out of one small end.

The drawer from the box above, extracted to show it's made of a folded game of Ur board and contains a set with tokens and dice.

I played around with the case until it was big enough to actually slide around the folded board, and this is the result, ready to be printed out on A4 paper, cut, folded and glued. (This takes most of the sheet, and I'm not sure that the case would still fit around the board/drawer if printed with scaling, so if you want to print it on Letter paper I'd recommend moving the pieces around.)

two copies of the game board above, plus two cut / fold / glue boxes

Now, the only problem left was that green isn’t really my colour, and while I did like the stone effect of this set, I wasn’t exactly pleased by the colour scheme. (why did I do it this way in the first place? probably because I was trying to use up old cernit blocks before opening new ones.)

So, the only possible way out was to make yet another set, right?

A set of red and grey tokens, tetrahedron dice, coins with one side marked with a dot that are square-ish rather than circular and four lozenge-shaped coins with each side of a different colour.

I still used stone effect cernit, but this time in a red/grey scheme that I knew I would like more, and while I was doing it I tried a few improvements on the randomization devices.

The tetrahedral dice are still the same: they work, it’s what they use in the replica sets, so I keep making them even if they’re not my first choice.

I’ve changed the coins to make them almost square for two reasons, however: one is that the round one tended to roll away into inconvenient places when throwing them with emphasis, and the other one is to make it easier to recognise them from the tokens with no need to flip each one around before starting the game.

The lozenges were a bit of a failure, instead. They work fine when thrown, but I don’t think that there is a self-evident way to decide which side should be counted, and the only intuitive way I can think of (count the ones in the player’s colour) would be unbalanced.

Speaking of balance issues: of course the hand-modelled dice and coins aren’t perfectly balanced but:

  • they don’t feel obviously unbalanced;
  • both players use the same set, so any subtle unbalance isn’t going to affect the chance of winning in an uneven way.

Maybe one day I will find a way to easily roll them a statistically significant number of times, collect the data and analyze it to find out how imbalanced they are, but that's not going to happen with manual data collection, and I'm not really ready to go down the yak-shaving-filled road of automating it.

To wrap up: is it going to be the last set I make for the Royal Game of Ur? lol. Is it going to be the last cernit set I make this month? definitely yes, I now have one I’m happy with, I’m routinely playing with it and I’m currently doing other crafts rather than cernit.

13 February, 2023 12:00AM

February 12, 2023

Russell Coker

Intel vs AMD

In response to a post about my latest laptop I had someone ask why I chose an Intel CPU. I’ve been a fan of the Thinkpad series of laptops since the 90s. They have always seemed well constructed (given the constraints of being light etc) and had a good feature set. Also I really like the TrackPoint. I’ve been a fan of the smaller Thinkpads since I got an X-301 from e-waste [1] and the X1-Carbon series is the latest and greatest line of small Thinkpads.

AMD makes some nice laptop CPUs which appear to have low power use and good performance particularly for smaller numbers of threads, it seems that generally AMD CPUs are designed for fewer cores with higher performance per core which is good for laptops. But Lenovo only makes the Thinkpad Carbon X1 series with Intel CPUs so choosing that model of laptop means choosing Intel. It could be that for some combination of size, TDP, speed, etc Intel just happens to beat AMD for all the times when Lenovo was designing a new motherboard for the Carbon X1. But it seems more likely that Intel has been lobbying Lenovo for this. It would be nice if there was an anti-trust investigation into Intel, everyone who’s involved in the computer industry knows of some of the anti-competitive things that they have done.

Also it would be nice if Lenovo started shipping laptops with ARM CPUs across their entire range. But for the moment I guess I have to keep buying laptops with Intel CPUs.

12 February, 2023 05:31AM by etbe

T320 iDRAC Failure and new HP Z640

The Dell T320

Almost 2 years ago I made a Dell PowerEdge T320 my home server [1]. It was a decent upgrade from the PowerEdge T110 II that I had used previously. One problem with that system was that I needed more RAM, and the PowerEdge T1xx series uses unbuffered ECC RAM, which is unreasonably expensive, as well as the DIMMs tending to be smaller (no Load Reduced DIMMs) and there only being 4 slots. As I had bought two T320s I put all the RAM in a single server, getting a total of 96G, and then put some cheap DIMMs in the other one and sold it with 48G.

The T320 has all the server reliability features including hot-swap redundant PSUs and hot-swap hard drives. One thing it doesn’t have redundancy on is the motherboard management system known as iDRAC. 3 days ago my suburb had a power outage and when power came back on the T320 gave an error message about a failure to initialise the iDRAC and put all the fans on maximum speed, which is extremely loud. When a T320 is running in a room that’s not particularly hot and it doesn’t have SAS disks it’s a very quiet server, one of the quietest I’ve ever owned. When it goes into emergency cooling mode due to iDRAC failure it’s loud enough to be heard from the other end of the house with doors closed in between.

Googling this failure gave a few possible answers. One was for some combination of booting with the iDRAC button held down, turning off for a while and booting with the iDRAC button held down again, etc (this didn't work). One was for putting an iDRAC firmware file on the SD card so iDRAC could automatically load it (which I tested even though I didn't have the flashing LED which indicates that it is likely to work, but it didn't do anything). The last was to enable the serial console and configure the iDRAC to load new firmware via TFTP, but I didn't get an iDRAC message from the serial console, just the regular BIOS stuff.

So it looks like I’ll have to sell the T320 for parts or find someone who wants to run it in it’s current form. Currently to boot it I have to press F1 a few times to bypass BIOS messages (someone on the Internet reported making a device to key-jam F1). Then when it boots it’s unreasonably loud, but apparently if you are really keen you can buy fans that have temperature sensors to control their own speed and bypass the motherboard control.

I’d appreciate any advice on how to get this going. At this stage I’m not going to go back to it but if I can get it working properly I can sell it for a decent price.

The HP Z640

I’ve replaced the T320 with a HP Z640 workstation with 32G of RAM which I had recently bought to play with Stable Diffusion. There were hundreds of Z640 workstations with NVidia Quadro M6000 GPUs going on eBay for under $400 each, it looked like a company that did a lot of ML work had either gone bankrupt or upgraded all their employees systems. The price for the systems was surprisingly cheap, at regular eBay prices it seems that the GPU and the RAM go for about the same price as the system. It turned out that Stable Diffusion didn’t like the video card in my setup for unknown reasons but also that the E5-1650v3 CPU could render an image in 15 minutes which is fast enough to test it out but not fast enough for serious use. I had been planning to blog about that.

When I bought the T320 server the DDR3 Registered ECC RAM it uses cost about $100 for 8*8G DIMMs, with 16G DIMMs being much more expensive. Now the DDR4 Registered ECC RAM used by my Z640 goes for about $120 for 2*16G DIMMs. In the near future I’ll upgrade that system to 64G of RAM. It’s disappointing that the Z640 only has 4 DIMM sockets per CPU so if you get a single-CPU version (as I did) and don’t get the really expensive Load Reduced RAM then you are limited to 64G. So the supposed capacity benefit of going from DDR3 to DDR4 doesn’t seem to apply to this upgrade.

The Z640 I got has 4 bays for hot-swap SAS/SATA 2.5″ SSD/HDDs and 2 internal bays for 3.5″ hard drives. The T320 has 8*3.5″ hot swap bays and I had 3 hard drives in them in a BTRFS RAID-10 configuration. Currently I've got one hard drive attached via USB but that's obviously not a long-term solution. The 3 hard drives are 4TB; they have worked since 4TB was a good size. I have a spare 8TB disk so I could buy a second ($179 for a shingled HDD) to make an 8TB RAID-1 array. The other option is to pay $369 for a 4TB SSD (or $389 for a 4TB NVMe + $10 for the PCIe card) to keep the 3 device RAID-10. As tempting as 4TB SSDs are I'll probably get a cheap 8TB disk, which will take capacity from 6TB to 8TB, and I could use some extra 4TB disks for backups.

I haven’t played with the AMT/MEBX features on this system, I presume that they will work the same way as AMT/MEBX on the HP Z420 I’ve used previously [2].

Update:

HP has free updates for the BIOS etc available here [3]. Unfortunately it seems to require loading a kernel module supplied by HP to do this. This is a bad thing: kernel code that isn't in the mainline kernel is either of poor quality or isn't licensed correctly.

I had to change my monitoring system to alert on temperatures over 100% of the "high" range, while on the T320 I had it set at 95% of "high" and never got warnings. This is disappointing: enterprise class gear running in a reasonably cool environment (ambient temperature of about 22C) should be able to run all CPU cores at full performance without hitting 95% of the "high" temperature level.

12 February, 2023 02:10AM by etbe

February 11, 2023

Vincent Bernat

Hacking the Geberit Sigma 70 flush plate

My toilet is equipped with a Geberit Sigma 70 flush plate. The sales pitch for this hydraulic-assisted device praises the “ingenious mount that acts like a rocker switch.” In practice, the flush is very capricious and has a very high failure rate. Avoid this type of mechanism! Prefer a fully mechanical version like the Geberit Sigma 20.

After several plumbers, exchanges with Geberit’s technical department, and the expensive replacement of the entire mechanism, I was still getting a failure rate of over 50% for the small flush. I finally managed to decrease this rate to 5% by applying two 8 mm silicone bumpers on the back of the plate. Their locations are indicated by red circles on the picture below:

Geberit Sigma 70 flush plate. Top: the mechanism that converts the mechanical press into a hydraulic impulse. Bottom: the back of the plate with the two places where to apply the bumpers.
Geberit Sigma 70 flush plate. Above: the mechanism installed on the wall. Below, the back of the glass plate. In red, the two places where to apply the silicone bumpers.

Expect to pay about 5 € and as many minutes for this operation.

11 February, 2023 09:22PM by Vincent Bernat

February 10, 2023

hackergotchi for Jonathan Dowland

Jonathan Dowland

HLedger, 1 year on

It's been a year since I started exploring HLedger, and I'm still going. The rollover to 2023 was an opportunity to revisit my approach.

Some time ago I stumbled across Dmitry Astapov's HLedger notes (fully-fledged hledger, which I briefly mentioned in eventual consistency) and decided to adopt some of its ideas.

new year, new journal

First up, Astapov encourages starting a new journal file for a new calendar year. I do this for other, accounting-adjacent files as a matter of course, and I did it for my GNUCash files prior to adopting HLedger. But the reason for those is a general suspicion that a simple mistake with that software could irrevocably corrupt my data. I'm much more confident with HLedger, so rolling over at year's end isn't necessary for that. But there are other advantages. A quick, obvious one is that you can get rid of old accounts (such as expense accounts tied to a particular project, now completed).

one journal per import

In the first year, I periodically imported account data via CSV exports of transactions and HLedger's (excellent) CSV import system. I imported all the transactions, once each, into a single, large journal file.

Astapov instead advocates for creating a separate journal for each CSV that you wish to import, and keeping the CSV around, leaving you with a 1:1 mapping of CSV:journal. Then use HLedger's "include" mechanism to pull them all into the main journal.

With the former approach, where the CSV data was imported precisely once, it was only exposed to your import rules once. The workflow ended up being: import transactions; notice some that you could have matched with import rules and auto-coded; write the rule for the next time. With Astapov's approach, you can re-generate the journal from the CSV at any point in the future with an updated set of import rules.
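
As a sketch of what that ends up looking like (the paths here are made up), the main journal then consists mostly of include directives; if I recall the docs correctly, hledger also accepts glob patterns here:

; 2023.journal: the year's top-level journal
include import/opening/2023.journal
include import/jon/amex/*.journal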

tracking dependencies

Now we get onto the job of driving the generation of all these derivative journal files. Astapov has built a sophisticated system using Haskell's "Shake", with which I'm not yet familiar, but for my sins I'm quite adept at (GNU-flavoured) UNIX Make, so I started building with that. An example rule:

import/jon/amex/%.journal: import/jon/amex/%.csv rules/amex.csv.rules
    rm -f $(@D)/.latest.$*.csv $@
    hledger import --rules-file rules/amex.csv.rules -f $@ $<

This captures the dependency between the journal and the underlying CSV, but also on the relevant rules file; if I modify that, and this target is run in the future, all dependent journals should be re-generated.1

opening balances

It's all fine and well starting over in a new year, and I might be generous to forgive debts, but I can't count on others to do the same. We need to carry over some balance information from one year to the next. Astapov has a more complex (or perhaps featureful) scheme for this involving a custom Haskell program, but I bodged something with a pair of make targets:

import/opening/2023.csv: 2022.journal
    mkdir -p import/opening
    hledger bal -f $< \
                $(list_of_accounts_I_want_to_carry_over) \
        -O csv -N > $@

import/opening/2023.journal: import/opening/2023.csv rules/opening.rules
    rm -f $(@D)/.latest.2023.csv $@
    hledger import --rules-file rules/opening.rules \
        -f $@ $<

I think this could be golfed into a year-generic rule with a little more work. The nice thing about this approach is that the opening balances for a given year might change, if adjustments are made in prior years. They shouldn't, for real accounts, but very well could for more "virtual" liabilities. (including: deciding to write off debts.)
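
One possible shape for that year-generic rule, using GNU Make's secondary expansion so the prerequisite can name the previous year's journal (an untested sketch, mirroring the targets above):

.SECONDEXPANSION:
import/opening/%.csv: $$(shell expr $$* - 1).journal
    mkdir -p import/opening
    hledger bal -f $< \
                $(list_of_accounts_I_want_to_carry_over) \
        -O csv -N > $@

import/opening/%.journal: import/opening/%.csv rules/opening.rules
    rm -f $(@D)/.latest.$*.csv $@
    hledger import --rules-file rules/opening.rules \
        -f $@ $<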

run lots of reports

Astapov advocates for running lots of reports, and automatically. There's a really obvious advantage of that to me: there's no chance anyone except me will actually interact with HLedger itself. For family finances, I need reports to be able to discuss anything with my wife.

Extending my make rules to run reports is trivial. I've gone for HTML reports for the most part, as they're the easiest on the eye. Unfortunately the most useful report to discuss (at least at the moment) would be a list of transactions in a given expense category, and the register/aregister commands did not support HTML as an output format. I submitted my first HLedger patch to add HTML output support to aregister: https://github.com/simonmichael/hledger/pull/2000

addressing the virtual posting problem

I wrote in my original hledger blog post that I had to resort to unbalanced virtual postings in order to record both a liability between my personal cash and family, as well as categorise the spend. I still haven't found a nice way around that.

But I suspect having broken out the journal into lots of other journals paves the way to a better solution to the above.

The form of a solution I am thinking of is: some scheme whereby the two destination accounts are combined together; perhaps, choose one as a primary and encode the other information in sub-accounts under that. For example, repeating the example from my hledger blog post:

2022-01-02 ZTL*RELISH
    family:liabilities:creditcard      £ -3.00
    family:dues:jon                     £ 3.00
    (jon:expenses:snacks)               £ 3.00

This could become

2022-01-02 ZTL*RELISH
    family:liabilities:creditcard      £ -3.00
    family:liabilities:jon:snacks

(I note this is very similar to a solution proposed to me by someone responding on twitter).

The next step is to recognise that sometimes when looking at the data I care about one aspect, and at other times the other, but rarely both. So for the case where I'm thinking about family finances, I could use account aliases to effectively flatten out the expense category portion and ignore it.

On the other hand, when I'm concerned about how I've spent my personal cash and not about how much I owe the family account, I could use aliases to do the opposite: rewrite away the family:liabilities:jon prefix and combine the transactions with the regular jon:expenses account hierarchy.
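
As a concrete sketch of that second case (untested, using hledger's regular-expression alias form with the account names from the example above):

; fold the family liability sub-accounts back into my personal expense hierarchy
alias /^family:liabilities:jon:/ = jon:expenses:

The family-finances view would be the mirror image, something like alias /^family:liabilities:jon:.*/ = family:liabilities:jon, which hides the expense-category detail.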

(this is all speculative: I need to actually try this.)

catching errors after an import

When I import the transactions for a given real bank account, I check the final balance against another source: usually a bank statement, to make sure they agree. I wasn't using any of the myriad methods to make sure that this remains true later on, and so there was the risk that I make an edit to something and accidentally remove a transaction that contributed to that number, and not notice (until the next import).

The CSV data my bank gives me for accounts (not for credit cards) also includes a 'resulting balance' field. It was therefore trivial to extend the CSV import rules to add balance assertions to the transactions that are generated. This catches the problem.
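
The relevant part of the rules file is tiny; a sketch, with a made-up column name (in practice the balance value also needs a currency, via a currency rule or in the data itself):

# name the CSV columns, then turn the bank's running balance
# into a balance assertion on the generated transaction
fields   date, description, amount, resulting_balance
balance1 %resulting_balance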

There are a couple of warts with balance assertions on every such transaction: for example, dealing with the duplicate transaction for paying a credit card: one from the bank statement, one from the credit card. Removing one of the two is sufficient to correct the account balances but sometimes they don't agree on the transaction date, or the transactions within a given day are sorted slightly differently by HLedger than by the bank. The simple solution is to just manually delete one or two assertions: there remain plenty more for assurance.

going forward

I've only scratched the surface of the suggestions in Astapov's "full fledged HLedger" notes. I'm up to step 2 of 14. I'm expecting to return to it once the changes I've made have bedded in a little bit.

I suppose I could anonymize and share the framework (Makefile etc) that I am using if anyone was interested. It would take some work, though, so I don't know when I'd get around to it.


  1. the rm …latest… bit is to clear up some state-tracking files that HLedger writes to avoid importing duplicate transactions. In this case, I know better than HLedger.

10 February, 2023 09:11PM

Antoine Beaupré

Picking a USB-C hub and charger

Dear lazy web, help me pick the right hardware to make my shiny new laptop work better. I want a new USB-C dock and travel power supply.

Background

I need advice on hardware, because my current setup in the office doesn't work so well. My new Framework laptop has four (4!) USB-C ports which is great, but it only has those ports (there's a combo jack, but I don't use it because it's noisy). So right now I have the following setup:

  • HDMI: monitor one
  • HDMI: monitor two
  • USB-A: Yubikey
  • USB-C: USB-C hub, which has:
    • RJ-45 network
    • USB-A keyboard
    • USB-A mouse
    • USB-A headset

... and I'm missing a USB-C port for power! So I get into this annoying situation where I need to actually unplug the USB-A Yubikey, unplug the USB-A expansion card, plug in the power for a while so it can charge, and then do that in reverse when I need the Yubikey again (which is: often).

Another option I have is to unplug the headset, but I often need both the headset and the Yubikey at once. I also have a pair of earbuds that work in the combo jack, but, again, they are noticeably noisy.

So this doesn't work.

I'm thinking I should get a USB-C Dock of some sort. The Framework forum has a long list of supported docks in a "megathread", but I figured people here might have their own experience with docks and laptop/dock setups.

So what USB-C dock should I get?

Should I consider changing to a big monitor with a built-in USB-C dock and power?

Ideally, I'd like to just walk into the office, put the laptop down, insert a single USB-C cable and be done with it. Does that even work with Wayland? I have read reports of DisplayLink not working in Sway specifically... does that apply to any multi-monitor over a single USB-C cable setup?

Oh, and what about travel options? Do you have a fancy small form factor USB-C power charger that you really like?

Current ideas

Here are the devices I'm considering right now...

USB chargers

The spec here is at least 65W USB-C with international plugs.

I particularly like the TOFU, but I am not sure it will deliver... I found that weird little thing through this Twitter post from Benedict Reuschling, from this blog post, from 2.5 admins episode 127 (phew!).

Update: I ordered a TOFU power station today (2023-02-20). I'm a little concerned about the power output (45W instead of 65W for the Framework charger), but I suspect it will not actually be a problem while traveling, since the laptop will keep its charge during the day and will charge at night... The device shipped 3 days later, with an estimated delivery time of 12 to 15 days, so expect this thing to take its sweet time landing on your desk.

USB Docks

Specification:

  • must have 2 or more USB-A ports (3 is ideal, otherwise i need a new headset adapter)
  • at least one USB-C port, preferably more
  • works in Linux
  • 2 display support (or one big monitor?), ideally 2x4k for future-proofing, HDMI or Display-Port, ideally also with double USB-C/Thunderbolt for future-proofing
  • all on one USB-C wire would be nice
  • power delivery over the USB-C cable
  • not too big, preferably

Note that I move from 4 USB-A ports down to 2 or 3 because I can change the USB-A cable on my keyboard for USB-C. But that means I need a slot for a USB-C port on the dock of course. I also could live with one less USB-A cable if I find a combo jack adapter, but that would mean a noisy experience.

Options found so far:

  • ThinkPad universal dock (40ay0090us): 300$USD, 65-100W, combo jack, 3x USB3.1, 2x USB2.0, 1x USB-C, 2x Display Port, 1x HDMI Port, 1x Gigabit Ethernet

  • Caldigit docks are apparently good, and the USB-C HDMI Dock seems like a good candidate (not on sale in their Canada shop), but leaves me wondering whether I want to keep my old analog monitors around or instead get proper monitors with USB-C inputs, and use something like the Thunderbolt Element hub (230$USD). Update: I wrote Caldigit and they don't seem to have any dock that would work for me; they suggest the TS3 plus, which only has a single DP connector (!?). The USB-C HDMI dock is actually discontinued, and they mentioned that they do have trouble with Linux in general.

  • I was also recommended OWC docks. Update: their website is a mess, and live chat has confirmed they do not actually have any device that fits the requirement of two HDMI/DP outputs.

  • Anker also has docks (e.g. the Anker 568 USB-C Docking Station 11-in-1 looks nice, but holy moly, 300$USD...). Also, Anker docks are not all equal; I've heard reports of some of them being bad. Update: I reached out to Anker to clarify whether or not their docks will work on Linux and to advise on which dock to use, and their response is that they "do not recommend you use our items with Linux system". So I guess that settles it with Anker.

  • Cable Matters are promising, and their "USB-C Docking Station with Dual 4K HDMI and 80W Charging for Windows Computers" might just actually work. It was out of stock on their website and Amazon, but after reaching out to their support by email, they pointed out a product page that works in Canada.

Also: this post from Big Mess Of Wires has me worried about whether anything will work at all. It's where I had the Cable Matters reference however...

Update: I ordered this dock from Cable Matters via Amazon (reluctantly). It promises “Linux” support and checked all the boxes for me (4x USB-A, audio, network, 2xHDMI).

It kind of works? I tested the USB-A ports, charging, networking, and the HDMI ports, and all worked the first time. But! When I disconnect and reconnect the hub, the HDMI ports stop working. It’s quite infuriating, especially since there’s very little diagnostics available. It’s unclear how the devices show up on my computer; I can’t even tell which device provides the HDMI connectors in lsusb.

I’ve also seen the USB keyboard drop keypresses, which is also ... not fun. I suspect foul play inside Sway.

And yeah, those things are costly! This one goes for 300$ a pop, not great.

Your turn!

So what's your desktop setup like? Do you have docks? a laptop? a desktop? did you build it yourself?

Did you solder a USB-C port in the back of your neck and interface directly with the matrix and there's no spoon?

Do you have a 4k monitor? Two? An 8k monitor that curves around your head in a fully immersive display? Do you work on an Oculus Rift and only interface with the world through 3D virtual reality, including terminal emulators?

Thanks in advance!

10 February, 2023 08:09PM

Reproducible Builds (diffoscope)

diffoscope 235 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 235. This version includes the following changes:

[ Akihiro Suda ]
* Update .gitlab-ci.yml to push versioned tags to the container registry.
  (Closes: reproducible-builds/diffoscope!119)

[ Chris Lamb ]
* Fix compatibility with PyPDF2. (Closes: reproducible-builds/diffoscope#331)
* Fix compatibility with ImageMagick 7.1.
  (Closes: reproducible-builds/diffoscope#330)

[ Daniel Kahn Gillmor ]
* Update from PyPDF2 to pypdf. (Closes: #1029741, #1029742)

[ FC Stegerman ]
* Add support for Android resources.arsc files.
  (Closes: reproducible-builds/diffoscope!116)
* Add support for dexdump. (Closes: reproducible-builds/diffoscope#134)
* Improve DexFile's FILE_TYPE_RE and add FILE_TYPE_HEADER_PREFIX, and remove
  "Dalvik dex file" from ApkFile's FILE_TYPE_RE as well.

[ Efraim Flashner ]
* Update external tool for isoinfo on guix.
  (Closes: reproducible-builds/diffoscope!124)

You can find out more by visiting the project homepage.

10 February, 2023 12:00AM

February 09, 2023

hackergotchi for Jonathan McDowell

Jonathan McDowell

Building a read-only Debian root setup: Part 2

This is the second part of how I build a read-only root setup for my router. You might want to read part 1 first, which covers the initial boot and general overview of how I tie the pieces together. This post will describe how I build the squashfs image that forms the main filesystem.

Most of the build is driven from a script, make-router, which I’ll dissect below. It’s highly tailored to my needs, and this is a fairly lengthy post, but hopefully the steps I describe prove useful to anyone trying to do something similar.

Breakdown of make-router
#!/bin/bash

# Either rb3011 (arm) or rb5009 (arm64)
#HOSTNAME="rb3011"
HOSTNAME="rb5009"

if [ "x${HOSTNAME}" == "xrb3011" ]; then
	ARCH=armhf
elif [ "x${HOSTNAME}" == "xrb5009" ]; then
	ARCH=arm64
else
	echo "Unknown host: ${HOSTNAME}"
	exit 1
fi


It’s a bash script, and I allow building for either my RB3011 or RB5009, which means a different architecture (32 vs 64 bit). I run this script on my Pi 4 which means I don’t have to mess about with QemuUserEmulation.


BASE_DIR=$(dirname $0)
IMAGE_FILE=$(mktemp --tmpdir router.${ARCH}.XXXXXXXXXX.img)
MOUNT_POINT=$(mktemp -p /mnt -d router.${ARCH}.XXXXXXXXXX)

# Build and mount an ext4 image file to put the root file system in
dd if=/dev/zero bs=1 count=0 seek=1G of=${IMAGE_FILE}
mkfs -t ext4 ${IMAGE_FILE}
mount -o loop ${IMAGE_FILE} ${MOUNT_POINT}


I build the image in a loopback ext4 file on tmpfs (my Pi4 is the 8G model), which makes things a bit faster.
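
(If /tmp isn’t already a tmpfs on the build machine, something along these lines does the job first; the 2G size is an arbitrary choice with headroom over the 1G image:)

mount -t tmpfs -o size=2G tmpfs /tmp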


# Add dpkg excludes
mkdir -p ${MOUNT_POINT}/etc/dpkg/dpkg.cfg.d/
cat <<EOF > ${MOUNT_POINT}/etc/dpkg/dpkg.cfg.d/path-excludes
# Exclude docs
path-exclude=/usr/share/doc/*

# Only locale we want is English
path-exclude=/usr/share/locale/*
path-include=/usr/share/locale/en*/*
path-include=/usr/share/locale/locale.alias

# No man pages
path-exclude=/usr/share/man/*
EOF


Create a dpkg excludes config to drop docs, man pages and most locales before we even start the bootstrap.


# Setup fstab + mtab
echo "# Empty fstab as root is pre-mounted" > ${MOUNT_POINT}/etc/fstab
ln -s ../proc/self/mounts ${MOUNT_POINT}/etc/mtab

# Setup hostname
echo ${HOSTNAME} > ${MOUNT_POINT}/etc/hostname

# Add the root SSH keys
mkdir -p ${MOUNT_POINT}/root/.ssh/
cat <<EOF > ${MOUNT_POINT}/root/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAv8NkUeVdsVdegS+JT9qwFwiHEgcC9sBwnv6RjpH6I4d3im4LOaPOatzneMTZlH8Gird+H4nzluciBr63hxmcFjZVW7dl6mxlNX2t/wKvV0loxtEmHMoI7VMCnrWD0PyvwJ8qqNu9cANoYriZRhRCsBi27qPNvI741zEpXN8QQs7D3sfe4GSft9yQplfJkSldN+2qJHvd0AHKxRdD+XTxv1Ot26+ZoF3MJ9MqtK+FS+fD9/ESLxMlOpHD7ltvCRol3u7YoaUo2HJ+u31l0uwPZTqkPNS9fkmeCYEE0oXlwvUTLIbMnLbc7NKiLgniG8XaT0RYHtOnoc2l2UnTvH5qsQ== [email protected]
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDQb9+qFemcwKhey3+eTh5lxp+3sgZXW2HQQEZMt9hPvVXk+MiiNMx9WUzxPJnwXqlmmVdKsq+AvjA0i505Pp8fIj5DdUBpSqpLghmzpnGuob7SSwXYj+352hjD52UC4S0KMKbIaUpklADgsCbtzhYYc4WoO8F7kK63tS5qa1XSZwwRwPbYOWBcNocfr9oXCVWD9ismO8Y0l75G6EyW8UmwYAohDaV83pvJxQerYyYXBGZGY8FNjqVoOGMRBTUcLj/QTo0CDQvMtsEoWeCd0xKLZ3gjiH3UrknkaPra557/TWymQ8Oh15aPFTr5FvKgAlmZaaM0tP71SOGmx7GpCsP4jZD1Xj/7QMTAkLXb+Ou6yUOVM9J4qebdnmF2RGbf1bwo7xSIX6gAYaYgdnppuxqZX1wyAy+A2Hie4tUjMHKJ6OoFwBsV1sl+3FobrPn6IuulRCzsq2aLqLey+PHxuNAYdSKo7nIDB3qCCPwHlDK52WooSuuMidX4ujTUw7LDTia9FxAawudblxbrvfTbg3DsiDBAOAIdBV37HOAKu3VmvYSPyqT80DEy8KFmUpCEau59DID9VERkG6PWPVMiQnqgW2Agn1miOBZeIQV8PFjenAySxjzrNfb4VY/i/kK9nIhXn92CAu4nl6D+VUlw+IpQ8PZlWlvVxAtLonpjxr9OTw== noodles@yubikey
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8UHj4IpfqUcGE4cTvLB0d2xmATSUzqtxW6ZhGbZxvQDKJesVW6HunrJ4NFTQuQJYgOXY/o82qBpkEKqaJMEFHTCjcaj3M6DIaxpiRfQfs0nhtzDB6zPiZn9Suxb0s5Qr4sTWd6iI9da72z3hp9QHNAu4vpa4MSNE+al3UfUisUf4l8TaBYKwQcduCE0z2n2FTi3QzmlkOgH4MgyqBBEaqx1tq7Zcln0P0TYZXFtrxVyoqBBIoIEqYxmFIQP887W50wQka95dBGqjtV+d8IbrQ4pB55qTxMd91L+F8n8A6nhQe7DckjS0Xdla52b9RXNXoobhtvx9K2prisagsHT noodles@cup
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK6iGog3WbNhrmrkglNjVO8/B6m7mN6q1tMm1sXjLxQa+F86ETTLiXNeFQVKCHYrk8f7hK0d2uxwgj6Ixy9k0Cw= noodles@sevai
EOF


Setup fstab, the hostname and SSH keys for root.


# Bootstrap our install
debootstrap \
	--arch=${ARCH} \
	--include=collectd-core,conntrack,dnsmasq,ethtool,iperf3,kexec-tools,mosquitto,mtd-utils,mtr-tiny,ppp,tcpdump,rng-tools5,ssh,watchdog,wget \
	--exclude=dmidecode,isc-dhcp-client,isc-dhcp-common,makedev,nano \
	bullseye ${MOUNT_POINT} https://deb.debian.org/debian/


Actually do the debootstrap step, including a bunch of extra packages that we want.


# Install mqtt-arp
cp ${BASE_DIR}/debs/mqtt-arp_1_${ARCH}.deb ${MOUNT_POINT}/tmp
chroot ${MOUNT_POINT} dpkg -i /tmp/mqtt-arp_1_${ARCH}.deb
rm ${MOUNT_POINT}/tmp/mqtt-arp_1_${ARCH}.deb

# Frob the mqtt-arp config so it starts after mosquitto
sed -i -e 's/After=.*/After=mosquitto.service/' ${MOUNT_POINT}/lib/systemd/system/mqtt-arp.service


I haven’t uploaded mqtt-arp to Debian, so I install a locally built package, and ensure it starts after mosquitto (the MQTT broker), given they’re running on the same host.


# Frob watchdog so it starts earlier than multi-user
sed -i -e 's/After=.*/After=basic.target/' ${MOUNT_POINT}/lib/systemd/system/watchdog.service

# Make sure the watchdog is poking the device file
sed -i -e 's/^#watchdog-device/watchdog-device/' ${MOUNT_POINT}/etc/watchdog.conf


watchdog timeouts were particularly an issue on the RB3011, where the default timeout didn’t give enough time to reach multiuser mode before it would reset the router. Not helpful, so alter the config to start it earlier (and make sure it’s configured to actually kick the device file).


# Clean up docs + locales
rm -r ${MOUNT_POINT}/usr/share/doc/*
rm -r ${MOUNT_POINT}/usr/share/man/*
for dir in ${MOUNT_POINT}/usr/share/locale/*/; do
	if [ "${dir}" != "${MOUNT_POINT}/usr/share/locale/en/" ]; then
		rm -r ${dir}
	fi
done


Clean up any docs etc that ended up installed.


# Set root password to root
echo "root:root" | chroot ${MOUNT_POINT} chpasswd


The only login method is an SSH key to the root account, though I suppose this allows for someone to execute a privilege escalation from a daemon user, so I should probably randomise this. It does need to be known, though, so it’s possible to log in via the serial console for debugging.


# Add security to sources.list + update
echo "deb https://security.debian.org/debian-security bullseye-security main" >> ${MOUNT_POINT}/etc/apt/sources.list
chroot ${MOUNT_POINT} apt update
chroot ${MOUNT_POINT} apt -y full-upgrade
chroot ${MOUNT_POINT} apt clean

# Cleanup the APT lists
rm ${MOUNT_POINT}/var/lib/apt/lists/www.*
rm ${MOUNT_POINT}/var/lib/apt/lists/security.*


Pull in any security updates, then clean out the APT lists rather than polluting the image with them.


# Disable the daily APT timer
rm ${MOUNT_POINT}/etc/systemd/system/timers.target.wants/apt-daily.timer

# Disable daily dpkg backup
cat <<EOF > ${MOUNT_POINT}/etc/cron.daily/dpkg
#!/bin/sh

# Don't do the daily dpkg backup
exit 0
EOF

# We don't want a persistent systemd journal
rmdir ${MOUNT_POINT}/var/log/journal


None of these make sense on a router.


# Enable nftables
ln -s /lib/systemd/system/nftables.service \
	${MOUNT_POINT}/etc/systemd/system/sysinit.target.wants/nftables.service


Ensure we have firewalling enabled automatically.


# Add systemd-coredump + systemd-timesync user / group
echo "systemd-timesync:x:998:" >> ${MOUNT_POINT}/etc/group
echo "systemd-coredump:x:999:" >> ${MOUNT_POINT}/etc/group
echo "systemd-timesync:!*::" >> ${MOUNT_POINT}/etc/gshadow
echo "systemd-coredump:!*::" >> ${MOUNT_POINT}/etc/gshadow
echo "systemd-timesync:x:998:998:systemd Time Synchronization:/:/usr/sbin/nologin" >> ${MOUNT_POINT}/etc/passwd
echo "systemd-coredump:x:999:999:systemd Core Dumper:/:/usr/sbin/nologin" >> ${MOUNT_POINT}/etc/passwd
echo "systemd-timesync:!*:47358::::::" >> ${MOUNT_POINT}/etc/shadow
echo "systemd-coredump:!*:47358::::::" >> ${MOUNT_POINT}/etc/shadow

# Create /etc/.pwd.lock, otherwise it'll end up in the overlay
touch ${MOUNT_POINT}/etc/.pwd.lock
chmod 600 ${MOUNT_POINT}/etc/.pwd.lock


Create a number of users that will otherwise get created at boot, and a lock file that will otherwise get created anyway.


# Copy config files
cp --recursive --preserve=mode,timestamps ${BASE_DIR}/etc/* ${MOUNT_POINT}/etc/
cp --recursive --preserve=mode,timestamps ${BASE_DIR}/etc-${ARCH}/* ${MOUNT_POINT}/etc/
chroot ${MOUNT_POINT} chown mosquitto /etc/mosquitto/mosquitto.users
chroot ${MOUNT_POINT} chown mosquitto /etc/ssl/mqtt.home.key


There are config files that are easier to replace wholesale, some of which are specific to the hardware (e.g. related to network interfaces). See below for some more details.


# Build symlinks into flash for boot / modules
ln -s /mnt/flash/lib/modules ${MOUNT_POINT}/lib/modules
rmdir ${MOUNT_POINT}/boot
ln -s /mnt/flash/boot ${MOUNT_POINT}/boot


The kernel + its modules live outside the squashfs image, on the USB flash drive that the image lives on. That makes for easier kernel upgrades.


# Put our git revision into os-release
echo -n "GIT_VERSION=" >> ${MOUNT_POINT}/etc/os-release
(cd ${BASE_DIR} ; git describe --tags) >> ${MOUNT_POINT}/etc/os-release


Always helpful to be able to check the image itself for what it was built from.


# Add some stuff to root's .bashrc
cat << EOF >> ${MOUNT_POINT}/root/.bashrc
alias ls='ls -F --color=auto'
eval "\$(dircolors)"

case "\$TERM" in
xterm*|rxvt*)
	PS1="\\[\\e]0;\\u@\\h: \\w\a\\]\$PS1"
	;;
*)
	;;
esac
EOF


Just some niceties for when I do end up logging in.


# Build the squashfs
mksquashfs ${MOUNT_POINT} /tmp/router.${ARCH}.squashfs \
	-comp xz


Actually build the squashfs image.


# Save the installed package list off
chroot ${MOUNT_POINT} dpkg --get-selections > /tmp/wip-installed-packages


Save off the installed package list. This was particularly useful when trying to replicate the existing router setup and making sure I had all the important packages installed. It doesn’t really serve a purpose now.

In terms of the config files I copy into /etc, shared across both routers are the following:

Breakdown of shared config
  • apt config (disable recommends, periodic updates):
    • apt/apt.conf.d/10periodic, apt/apt.conf.d/local-recommends
  • Adding a default, empty, locale:
    • default/locale
  • DNS/DHCP:
    • dnsmasq.conf, dnsmasq.d/dhcp-ranges, dnsmasq.d/static-ips
    • hosts, resolv.conf
  • Enabling IP forwarding:
    • sysctl.conf
  • Logs related:
    • logrotate.conf, rsyslog.conf
  • MQTT related:
    • mosquitto/mosquitto.users, mosquitto/conf.d/ssl.conf, mosquitto/conf.d/users.conf, mosquitto/mosquitto.acl, mosquitto/mosquitto.conf
    • mqtt-arp.conf
    • ssl/lets-encrypt-r3.crt, ssl/mqtt.home.key, ssl/mqtt.home.crt
  • PPP configuration:
    • ppp/ip-up.d/0000usepeerdns, ppp/ipv6-up.d/defaultroute, ppp/pap-secrets, ppp/chap-secrets
    • network/interfaces.d/pppoe-wan

The router specific config is mostly related to networking:

Breakdown of router specific config
  • Firewalling:
    • nftables.conf
  • Interfaces:
    • dnsmasq.d/interfaces
    • network/interfaces.d/eth0, network/interfaces.d/p1, network/interfaces.d/p2, network/interfaces.d/p7, network/interfaces.d/p8
  • PPP config (network interface piece):
    • ppp/peers/aquiss
  • SSH keys:
    • ssh/ssh_host_ecdsa_key, ssh/ssh_host_ed25519_key, ssh/ssh_host_rsa_key, ssh/ssh_host_ecdsa_key.pub, ssh/ssh_host_ed25519_key.pub, ssh/ssh_host_rsa_key.pub
  • Monitoring:
    • collectd/collectd.conf, collectd/collectd.conf.d/network.conf

09 February, 2023 10:13PM

Sam Hartman

Building Carthage with Carthage

This is the second in a series of blog posts introducing Carthage, an Infrastructure as Code framework I’ve been working on for the last four years. In this post we’ll talk about how we use Carthage to build the Carthage container images. We absolutely could have just used a Containerfile to do this; in fact I recently removed a hybrid solution that produced an artifact and then used a Containerfile to turn it into an OCI image. The biggest reason we don’t use a Containerfile is that we want to be able to reuse the same infrastructure (installed software and configuration) across multiple environments. For example CarthageServerRole, a reusable Carthage component that installs Carthage itself, is used in several places:

  1. on raw hardware when we’re using Carthage to drive a hypervisor
  2. As part of image building pipelines to build AMIs for Amazon Web Services
  3. Installed onto AWS instances built from the Debian AMI where we cannot use custom AMIs
  4. Installed onto KVM VMs
  5. As part of building the Carthage container images

So the biggest thing Carthage gives us is uniformity in how we set up infrastructure. We’ve found a number of disadvantages of Containerfiles as well:

  1. Containerfiles mix the disadvantages of imperative and declarative formats. Like a declarative format they have no explicit control logic. It seems like that would be good for introspecting and reasoning about Containers. But all you get is the base image and a set of commands to build a container. For reasoning about common things like whether a container has a particular vulnerability or can be distributed under a particular license, that’s not very useful. So we don’t get much valuable introspection out of the declarative aspects, and all too often we see Containerfiles generated by Makefiles or other multi-level build-systems to get more logic or control flow.

  2. Containerfiles have limited facility for doing things outside the container. The disadvantage of this is that you end up installing all the software you need to build the container into the container itself (or having a multi-level build system). But for example if I want to use Ansible to configure a container, the easiest way to do that is to actually install Ansible into the container itself, even though Ansible has a large dependency chain most of which we won’t need in the container. Yes, Ansible does have a number of connection methods including one for Buildah, but by the time you’re using that, you’re already using a multi-level build system and aren’t really just using a Containerfile.

Okay, so since we’re not going to just use a Containerfile, what do we do instead? We produce a CarthageLayout. A CarthageLayout is an object in the Carthage modeling language. The modeling language looks a lot like Python—in fact it’s even implemented using Python metaclasses and uses the Python parser. However, there are some key semantic differences and it may help to think of the modeling language as its own thing. Carthage layouts are typically contained in Carthage plugins. For example, the oci_images plugin is our focus today. Most of the work in that plugin is in layout.py, and the layout begins here:

class layout(CarthageLayout):
    add_provider(ConfigLayout)
    add_provider(carthage.ansible.ansible_log, str(_dir/"ansible.log"))

The add_provider calls are special, and we’ll discuss them in a future post. For now, think of them as assignments in a more complex namespace than simple identifiers. But the heart of this layout is the CarthageImage class:

    class CarthageImage(PodmanImageModel, carthage_base.CarthageServerRole):
        base_image = injector_access('from_scratch_debian')
        oci_image_tag = 'localhost/carthage:latest'
        oci_image_command = ['/bin/systemd']

Most of the work of our image is done by inheritance. We inherit from the CarthageServerRole from the carthage_base plugin collection. A role is a reusable set of infrastructure that can be attached directly to a MachineModel. By inheriting from this role, we request the installation of the Carthage software. The role also supports copying in various dependencies; for example when Carthage is used to manage a cluster of machines, the layout corresponding to the cluster can automatically be copied to all nodes in the cluster. We do not need this feature to build the container image. The CarthageImage class sets its base image. Currently we are using our own base Debian image that we build with debootstrap and then import as a container image. In the fairly near future, we’ll change that to:

        base_image = 'debian:bookworm'

That will simply use the Debian image from Dockerhub. We are building our own base image for historical reasons and need to confirm that everything works before switching over. By setting oci_image_tag we specify where in the local images the resulting image will be stored. We also specify that this image boots systemd. We actually do want to do a bit of work on top of CarthageServerRole specific to the container image. To do that we use a Carthage feature called a Customization. There are various types of customization. For example MachineCustomization runs a set of tasks on a Machine that is booted and on the network. When building images, the most common type of customization is a FilesystemCustomization. For these, we have access to the filesystem, and we have some way of running a command in the context of the filesystem. We don’t boot the filesystem as a machine unless we need to. (We might if the filesystem is a kvm VM or AWS instance for example). Carthage collects all the customizations in a role or image model. In the case of container image classes like PodmanImageModel, each customization is applied as an individual layer in the resulting container image.

Roles and customizations are both reusable infrastructure. Roles typically contain customizations. Roles operate at the modeling layer; you might introspect a machine’s model or an image’s model to see what functionality (roles) it provides. In contrast, customizations operate at the implementation layer. They do specific things like move files around, apply Ansible roles or similar.

Let’s take a look at the customization applied for the Carthage container image (full code):


        class customize_for_oci(FilesystemCustomization):

            @setup_task("Remove Software")
            async def remove_software(self):
                await self.run_command("apt", "-y", "purge",
                                       "exim4-base",
                                       )

            @setup_task("Install service")
            async def install_service(self):
               # installs and activates a systemd unit

Then to pull it all together, we simply run the layout:

sudo PYTHONPATH=$(pwd) python3 ./bin/carthage-runner ./oci_images build

In the next post, we will dig more into how to make infrastructure reusable.




09 February, 2023 08:43PM

February 08, 2023

hackergotchi for Chris Lamb

Chris Lamb

Most anticipated films of 2023

Very few highly-anticipated movies appear in January and February, as the bigger releases are timed so they can be considered for the Golden Globes in January and the Oscars in late February or early March, so film fans have the advantage of a few weeks after the New Year to collect their thoughts on the year ahead. In other words, I'm not actually late in outlining below the films I'm most looking forward to in 2023...

§

Barbie

No, seriously! If anyone can make a good film about a doll franchise, it's probably Greta Gerwig. Not only was Little Women (2019) more than admirable, the same could be definitely said for Lady Bird (2017). More importantly, I can't help feel she was the real 'Driver' behind Frances Ha (2012), one of the better modern takes on Claudia Weill's revelatory Girlfriends (1978). Still, whenever I remember that Barbie will be a film about a billion-dollar toy and media franchise with a nettlesome history, I recall I rubbished the "Facebook film" that turned into The Social Network (2010). Anyway, the trailer for Barbie is worth watching, if only because it seems like a parody of itself.

§

Blitz

It's difficult to overstate just how crucial the aerial bombing of London during World War II is to understanding the British psyche, despite it being a constructed phenomenon from the outset. Without wishing to underplay the deaths of over 40,000 civilians, Angus Calder pointed out in the 1990s that the modern mythology surrounding the event "did not evolve spontaneously; it was a propaganda construct directed as much at [then neutral] American opinion as at British." It will therefore be interesting to see how British—Grenadian—Trinidadian director Steve McQueen addresses a topic so essential to the British self-conception. (Remember the controversy in right-wing circles about the sole Indian soldier in Christopher Nolan's Dunkirk (2017)?) McQueen is perhaps best known for his 12 Years a Slave (2013), but he recently directed a six-part film anthology for the BBC which addressed the realities of post-Empire immigration to Britain, and this leads me to suspect he sees the Blitz and its surrounding mythology with a more critical perspective. But any attempt to complicate the story of World War II will be vigorously opposed in a way that will make the recent hullabaloo surrounding The Crown seem tame. All this is to say that the discourse surrounding this release may be as interesting as the film itself.

§

Dune, Part II

Coming out of the cinema after the first part of Denis Villeneuve's adaptation of Dune (2021), I was struck by the conception that it was less of a fresh adaptation of the 1965 novel by Frank Herbert than an attempt to rehabilitate David Lynch's 1984 version… and in a broader sense, it was also an attempt to reestablish the primacy of cinema over streaming TV and the myriad of other distractions in our lives. I must admit I'm not a huge fan of the original novel, finding within it a certain prurience regarding hereditary military regimes and writing about them with a certain sense of glee that belies a secret admiration for them... not to mention an eyebrow-raising allegory for the Middle East. Still, Dune, Part II is going to be a fantastic spectacle.

§

Ferrari

It'll be curious to see how this differs substantially from the recent Ford v Ferrari (2019), but given that Michael Mann's Heat (1995) so effectively re-energised the gangster/heist genre, I'm more than willing to kick the tires of this film about the founder of the eponymous car manufacturer. I'm in the minority for preferring Mann's Thief (1981) over Heat, in part because the former deals in more abstract themes, so I'd have perhaps preferred to look forward to a more conceptual film from Mann over a story about one specific guy.

§

How Do You Live

There are a few directors one can look forward to watching almost without qualification, and Hayao Miyazaki (My Neighbor Totoro, Kiki's Delivery Service, Princess Mononoke, Howl's Moving Castle, etc.) is one of them. And this is especially so given that The Wind Rises (2013) was meant to be the last collaboration between Miyazaki and Studio Ghibli. Let's hope he is able to come out of retirement in another ten years.

§

Indiana Jones and the Dial of Destiny

Given I had a strong dislike of Indiana Jones and the Kingdom of the Crystal Skull (2008), I seriously doubt I will enjoy anything this film has to show me, but with 1981's Raiders of the Lost Ark remaining one of my most treasured films (read my brief homage), I still feel a strong sense of obligation towards the Indiana Jones name, despite it feeling like the copper is being pulled out of the walls of this franchise today.

§

Kafka

I only know Polish filmmaker Agnieszka Holland through her Spoor (2017), an adaptation of Olga Tokarczuk's 2009 eco-crime novel Drive Your Plow Over the Bones of the Dead. I wasn't an unqualified fan of Spoor (nor the book on which it is based), but I am interested in Holland's take on the life of Czech author Franz Kafka, an author enmeshed with twentieth-century art and philosophy, especially that of central Europe. Holland has mentioned she intends to tell the story "as a kind of collage," and I can hope that it is an adventurous take on the over-furrowed biopic genre. Or perhaps Gregor Samsa will awake from uneasy dreams to find himself transformed in his bed into a huge verminous biopic.

§

The Killer

It'll be interesting to see what path David Fincher is taking today, especially after his puzzling and strangely cold Mank (2020) portraying the writing process behind Orson Welles' Citizen Kane (1941). The Killer is said to be a straight-to-Netflix thriller based on the graphic novel about a hired assassin, which makes me think of Fincher's Zodiac (2007), and, of course, Se7en (1995). I'm not as entranced by Fincher as I used to be, but any film with Michael Fassbender and Tilda Swinton (with a score by Trent Reznor) is always going to get my attention.

§

Killers of the Flower Moon

In Killers of the Flower Moon, Martin Scorsese directs an adaptation of a book about the FBI's investigation into a conspiracy to murder Osage tribe members in the early years of the twentieth century in order to deprive them of their oil-rich land. (The only thing more quintessentially American than apple pie is a conspiracy combined with a genocide.) Separate from learning more about this disquieting chapter of American history, I'd love to discover what attracted Scorsese to this particular story: he's one of the few top-level directors who have the ability to lucidly articulate their intentions and motivations.

§

Napoleon

It often strikes me that, despite all of his achievements and fame, it's somehow still possible to claim that Ridley Scott is relatively underrated compared to other directors working at the top level today. Besides that, though, I'm especially interested in this film, not least of all because I just read Tolstoy's War and Peace (read my recent review) and am working my way through the mind-boggling 431-minute Soviet TV adaptation, but also because several auteur filmmakers (including Stanley Kubrick) have tried to make a Napoleon epic… and failed.

§

Oppenheimer

In a way, a biopic about the scientist responsible for the atomic bomb and the Manhattan Project seems almost perfect material for Christopher Nolan. He can certainly rely on stars to queue up to be in his movies (Robert Downey Jr., Matt Damon, Kenneth Branagh, etc.), but whilst I'm certain it will be entertaining on many fronts, I fear it will fall into the well-established Nolan mould of yet another single man struggling with obsession, deception and guilt who is trying in vain to balance order and chaos in the world.

§

The Way of the Wind

Marked by philosophical and spiritual overtones, all of Terrence Malick's films are perfumed with themes of transcendence, nature and the inevitable conflict between instinct and reason. My particular favourite is his stunning Days of Heaven (1978), but The Thin Red Line (1998) and A Hidden Life (2019) also touched me in ways difficult to relate, and are among the few films about the Second World War that don't touch off my sensitivity about them (see my remarks about Blitz above). It is therefore somewhat Malickian that his next film will be a biblical drama about the life of Jesus. Given Malick's filmography, I suspect this will be far more subdued than William Wyler's 1959 Ben-Hur and significantly more equivocal in its conviction compared to Pier Paolo Pasolini's ardently progressive The Gospel According to St. Matthew (1964). However, little beyond that can be guessed, and the film may not even appear until 2024 or even 2025.

§

Zone of Interest

I was mesmerised by Jonathan Glazer's Under the Skin (2013), and there is much to admire in his borderline 'revisionist gangster' film Sexy Beast (2000), so I will definitely be on the lookout for this one. The only thing making me hesitate is that Zone of Interest is based on a book by Martin Amis about a romance set inside the Auschwitz concentration camp. I haven't read the book, but Amis has something of a history in his grappling with the history of the twentieth century, and he seems to do it in a way that never sits right with me. But if Paul Verhoeven's Starship Troopers (1997) proves anything at all, it's all in the adaptation.

08 February, 2023 11:42PM

Stephan Lachnit

Setting up fast Debian package builds using sbuild, mmdebstrap and apt-cacher-ng

In this post I will give a quick tutorial on how to set up fast Debian package builds using sbuild with mmdebstrap and apt-cacher-ng.

The usual tool for building Debian packages is dpkg-buildpackage, or a user-friendly wrapper like debuild, and while these are great tools, if you want to upload something to the Debian archive they lack the required separation from the system they are run on to ensure that your packaging also works on a different system. The usual candidate here is sbuild. But setting up a schroot is tedious and performance tuning can be annoying. There is an alternative backend for sbuild that promises to make everything simpler: unshare. In this tutorial I will show you how to set up sbuild with this backend.

In addition to the normal performance tweaking, caching downloaded packages can be a huge performance win when rebuilding packages. I do rebuilds quite often, mostly when a new dependency got introduced that I didn’t specify in debian/control yet, or when lintian notices something I can easily fix. So let’s begin with setting up this caching.

Setting up apt-cacher-ng

Install apt-cacher-ng:

sudo apt install apt-cacher-ng

A pop-up will appear; if you are unsure how to answer it, select no, as we don’t need it for this use case.

To enable apt-cacher-ng on your system, create /etc/apt/apt.conf.d/02proxy and insert:

Acquire::http::proxy "http://127.0.0.1:3142";
Acquire::https::proxy "DIRECT";

In /etc/apt-cacher-ng/acng.conf you can adjust the value of ExThreshold to hold packages for a shorter or longer duration. The length depends on your specific use case and resources. A longer threshold takes more disk space, while a short threshold like one day effectively only reduces the build time for rebuilds.
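
For illustration, the setting is a single line in that file; the 30-day value below is just an example, not a recommendation from this post:

# /etc/apt-cacher-ng/acng.conf (excerpt)
# keep downloaded package files for 30 days before they become eligible for expiration
ExThreshold: 30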

If you encounter weird issues on apt update at some point in the future, you can try to clean the apt-cacher-ng cache, which lives under /var/cache/apt-cacher-ng.

Setting up mmdebstrap

Install mmdebstrap:

sudo apt install mmdebstrap

We will create a small helper script to ease creating a chroot. Open ~/.local/bin/mmupdate and insert:

#!/bin/sh
mmdebstrap \
  --variant=buildd \
  --aptopt='Acquire::http::proxy "http://127.0.0.1:3142";' \
  --arch=amd64 \
  --components=main,contrib,non-free \
  unstable \
  ~/.cache/sbuild/unstable-amd64.tar.xz \
  http://deb.debian.org/debian

Notes:

  • aptopt enables apt-cacher-ng inside the chroot.
  • --arch sets the CPU architecture (see Debian Wiki).
  • --components sets the archive components; if you don’t want non-free packages you might want to remove some entries here.
  • unstable sets the Debian release, you can also set for example bookworm-backports here.
  • unstable-amd64.tar.xz is the output tarball containing the chroot, change accordingly to your pick of the CPU architecture and Debian release.
  • http://deb.debian.org/debian is the Debian mirror; you should set this to the same one you use in your /etc/apt/sources.list.

Make mmupdate executable and run it once:

chmod +x ~/.local/bin/mmupdate
mkdir -p ~/.cache/sbuild
~/.local/bin/mmupdate

If you execute mmupdate again you can see that the downloading stage is much faster thanks to apt-cacher-ng. For me the difference is from about 115s to about 95s. Your results may vary; this depends on the speed of your internet connection, Debian mirror and disk.

If you have used the schroot backend and sbuild-update before, you will probably notice that creating a new chroot with mmdebstrap is slower. It would be a bit annoying to do this manually before we start a new Debian packaging session, so let’s create a systemd service that does this for us.

First create a folder for user services:

mkdir -p ~/.config/systemd/user

Create ~/.config/systemd/user/mmupdate.service and add:

[Unit]
Description=Run mmupdate
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=%h/.local/bin/mmupdate

Start the service and test that it works:

systemctl --user daemon-reload
systemctl --user start mmupdate
systemctl --user status mmupdate

Create ~/.config/systemd/user/mmupdate.timer:

[Unit]
Description=Run mmupdate daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

Enable the timer:

systemctl --user enable mmupdate.timer

Now mmupdate will be run automatically every day. You can adjust the period if you think daily rebuilds are a bit excessive.

A neat advantage of periodic rebuilds is that they keep the base files in your apt-cacher-ng cache warm every time they run.

Setting up sbuild

Install sbuild and (optionally) autopkgtest:

sudo apt install --no-install-recommends sbuild autopkgtest

Create ~/.sbuildrc and insert:

# backend for using mmdebstrap chroots
$chroot_mode = 'unshare';

# build in tmpfs
$unshare_tmpdir_template = '/dev/shm/tmp.sbuild.XXXXXXXX';

# upgrade before starting build
$apt_update = 1;
$apt_upgrade = 1;

# build everything including source for source-only uploads
$build_arch_all = 1;
$build_arch_any = 1;
$build_source = 1;
$source_only_changes = 1;

# go to shell on failure instead of exiting
$external_commands = { "build-failed-commands" => [ [ '%SBUILD_SHELL' ] ] };

# always clean build dir, even on failure
$purge_build_directory = "always";

# run lintian
$run_lintian = 1;
$lintian_opts = [ '-i', '-I', '-E', '--pedantic' ];

# do not run piuparts
$run_piuparts = 0;

# run autopkgtest
$run_autopkgtest = 1;
$autopkgtest_root_args = '';
$autopkgtest_opts = [ '--apt-upgrade', '--', 'unshare', '--release', '%r', '--arch', '%a', '--prefix=/dev/shm/tmp.autopkgtest.' ];

# set uploader for correct signing
$uploader_name = 'Stephan Lachnit <[email protected]>';

You should adjust uploader_name. If you don’t want to run autopkgtest or lintian by default, you can also disable them here. Note that for packages that need a lot of space for building, you might want to comment out the unshare_tmpdir_template line to prevent an OOM build failure.

You can now build your Debian packages with the sbuild command :)

Finishing touches

You can add these variables to your ~/.bashrc as bonus (with adjusted name / email):

export DEBFULLNAME="<your_name>"
export DEBEMAIL="<your_email>"
export DEB_BUILD_OPTIONS="parallel=<threads>"

In particular adjust the value of parallel to ensure parallel builds.

If you are new to signing / uploading your package, first install the required tools:

sudo apt install devscripts dput-ng

Create ~/.devscripts and insert:

DEBSIGN_KEYID=<your_gpg_fingerprint>
USCAN_SYMLINK=rename

You can now sign the .changes file with:

debsign ../<pkgname_version_arch>.changes

And for source-only uploads with:

debsign -S ../<pkgname_version_arch>_source.changes

If you don’t introduce a new binary package, you always want to go with source-only changes.

You can now upload the package to Debian with:

dput ../<filename>.changes

Update February 22nd

Jochen Sprickerhof, who originally advised me to use the unshare backend, commented that one can also use --include=auto-apt-proxy instead of the --aptopt option in mmdebstrap to detect apt proxies automatically. He also let me know that it is possible to use autopkgtest on tmpfs (config in the blog post is updated) and added an entry on the sbuild wiki page on how to set up sbuild+unshare with ccache if you often need to build a large package.

Further, using --variant=apt and --include=build-essential will produce smaller build chroots if desired. Conversely, one can of course also use the --include option to include debhelper and lintian (or any other packages you like) to further decrease the setup time. However, staying with the buildd variant is a good choice for official uploads.

Resources for further reading

https://wiki.debian.org/sbuild
https://www.unix-ag.uni-kl.de/~bloch/acng/html/index.html
https://wiki.ubuntu.com/SimpleSbuild
https://wiki.archlinux.org/title/Systemd/Timers
https://manpages.debian.org/unstable/autopkgtest/autopkgtest-virt-unshare.1.en.html
Thanks for reading!

08 February, 2023 06:49PM

Thorsten Alteholz

My Debian Activities in January 2023

FTP master

This month I accepted 419 and rejected 46 packages. The overall number of packages that got accepted was 429. Looking at these numbers and comparing them to the previous month, one can see: the freeze is near. Everybody wants to get some packages into the archive and I hope nobody is disappointed.

Debian LTS

This was the hundred-and-third month in which I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 14h.

During that time I uploaded:

  • [DLA 3272-1] sudo (embargoed) security update for one CVE
  • [DLA 3286-1] tor security update for one CVE
  • [DLA 3290-1] libzen security update for one CVE
  • [libzen Bullseye] debdiff sent to maintainer
  • [DLA 3294-1] libarchive security update for one CVE

I also attended the monthly LTS meeting and did some days of frontdesk duties.

Debian ELTS

This month was the fifty-fourth ELTS month.

  • [ELA-772-1] sudo security update of Jessie and Stretch for one CVE
  • [ELA-781-1] libzen security update of Stretch for one CVE
  • [ELA-782-1] xorg-server security update of Jessie and Stretch for six CVEs
  • [ELA-790-1] libarchive security update of Jessie and Stretch for one CVE

Last but not least I did some days of frontdesk duties.

Debian Astro

This month I uploaded improved packages or new versions of:

I also uploaded new packages:

Debian IoT

This month I uploaded improved packages of:

Debian Printing

This month I uploaded new versions or improved packages of:

I also uploaded new packages:

08 February, 2023 06:45PM by alteholz

Antoine Beaupré

Major outage with Oricom uplink

The server that normally serves this page, all my email, and many more services was unavailable for about 24 hours. This post explains how and why.

What happened?

Starting February 2nd, I started seeing intermittent packet loss on the network. Every hour or so, the link would go down for one or two minutes, then come back up.

At first, I didn't think much of it because I was away and could blame the crappy wifi or the uplink I was using. But when I came into the office on Monday, the service was indeed seriously degraded. I could barely do videoconferencing calls as they would cut out after about half an hour.

I opened a ticket with my uplink, Oricom. They replied that it was an issue they couldn't fix on their end and would need someone on site to fix.

So, the next day (Tuesday, at around 10EST) I called Oricom again, and they made me do a full modem reset, which involves plugging a pin in a hole for 15 seconds on the Technicolor TC4400 cable modem. Then the link went down, and it didn't come back up at all.

Boom.

Oricom then escalated this to their upstream (Oricom is a reseller of Videotron, who has basically the monopoly on cable in Québec) which dispatched a tech. This tech, in turn, arrived some time after lunch and said the link worked fine and it was a hardware issue.

At this point, Oricom put a new modem in the mail and I started mitigation.

Mitigation

Website

The first thing I did, weirdly, was trying to rebuild this blog. I figured it should be pretty simple: install ikiwiki and hit rebuild. I knew I had some patches on ikiwiki to deploy, but surely those are not a deal breaker, right?

Nope. Turns out I wrote many plugins and those still don't ship with ikiwiki, despite having been sent upstream a while back, some years ago.

So I deployed the plugins inside the .ikiwiki directory of the site in the hope of making things a little more "standalone". Unfortunately, that didn't work either because the theme must be shipped in the system-wide location: I couldn't figure out where to put it to have it bundled with the main repository. At that point I mostly gave up because I had spent too much time on this and I had to do something about email otherwise it would start to bounce.

Email

So I made a new VM at Linode (thanks 2.5admins for the credits) to build a new mail server.

This wasn't the best idea, in retrospect, because it was really overkill: I started rebuilding the whole mail server from scratch.

Ideally, this would be in Puppet and I would just deploy the right profile and the server would be rebuilt. Unfortunately, that part of my infrastructure is not Puppetized and even if it would, well the Puppet server was also down so I would have had to bring that up first.

At first, I figured I would just make a secondary mail exchanger (MX), to spool mail for longer so that I wouldn't lose it. But I decided against that: I thought it was too hard to make a "proper" MX as it needs to also filter mail while avoiding backscatter. Might as well just build a whole new server! I had a copy of my full mail spool on my laptop, so I figured that was possible.

I mostly got this right: added a DKIM key, installed Postfix, Dovecot, OpenDKIM, OpenDMARC, glued it all together, and voilà, I had a mail server. Oh, and spampd. Oh, and I need the training data, oh, and this and... I wasn't done and it was time to sleep.

The mail server went online this morning, and started accepting mail. I tried syncing my laptop mail spool against it, but that failed because Dovecot generated new UIDs for the emails, and isync correctly failed to sync. I tried to copy the UIDs from the server in the office (which I had still access to locally), but that somehow didn't work either.

But at least the mail was getting delivered and stored properly. I even had the Sieve rules setup so it would get sorted properly too. Unfortunately, I didn't hook that up properly, so those didn't actually get sorted. Thankfully, Dovecot can re-filter emails with the sieve-filter command, so that was fixed later.

At this point, I started looking for other things to fix.

Web, again

I figured I was almost done with the website, might as well publish it. So I installed the Nginx Debian package, got a cert with certbot, and added the certs to the default configuration. I rsync'd my build in /var/www/html and boom, I had a website. The Goatcounter analytics were timing out, but that was easy to turn off.

Resolution

Almost at that exact moment, a bang on the door told me mail was here and I had the modem. I plugged it in and a few minutes later, marcos was back online.

So this was a lot (a lot!) of work for basically nothing. I could have just taken the day off and waited for the package to be delivered. It would definitely have been better to make a simpler mail exchanger to spool the mail to avoid losing it. And in fact, that's what I eventually ended up doing: I converted the Linode server into a mail relay to continue accepting mail while DNS propagates, but without having to sort the mail out of there...

Right now I have about 200 mails in a mailbox that I need to move back into marcos. Normally, this would just be a simple rsync, but because both servers have accepted mail simultaneously, it's going to be simpler to just move those exact mails on there. Because dovecot helpfully names delivered files with the hostname it's running on, it's easy to find those files and transfer them, basically:

rsync -v -n --files-from=<(ssh colette.anarc.at find Maildir -name '*colette*' ) colette.anarc.at: colette/
rsync -v -n --files-from=<(ssh colette.anarc.at find Maildir -name '*colette*' ) colette/ marcos.anarc.at:

Overall, the outage lasted about 24 hours, from 11:00EST (16:00UTC) on 2023-02-07 to the same time today.

Future work

I'll probably keep a mail relay to make those situations more manageable in the future. At first I thought that mail filtering would be a problem, but that happens post queue anyways and I don't bounce mail based on Spamassassin, so back-scatter shouldn't be an issue.

I basically need Postfix, OpenDMARC, and Postgrey. I'm not even sure I need OpenDKIM as the server won't process outgoing mail, so it doesn't need to sign anything, just check incoming signatures, which OpenDMARC can (probably?) do.
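
For illustration, a minimal sketch of the Postfix side of such a relay could look like the following; the domain and destination host are taken from the hostnames above, and the rest is an assumption rather than an exact description of my setup:

# /etc/postfix/main.cf (excerpt): act as a backup MX for the domain
relay_domains = anarc.at
# list the valid recipients so we don't accept and then bounce mail (backscatter)
relay_recipient_maps = hash:/etc/postfix/relay_recipients
# forward everything for the domain to the main server
transport_maps = hash:/etc/postfix/transport

# /etc/postfix/transport
anarc.at    smtp:[marcos.anarc.at]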

Thanks to everyone who supported me through this ordeal, you know who you are (and I'm happy to give credit here if you want to be deanonymized)!

08 February, 2023 05:45PM

February 07, 2023

Stephan Lachnit

Installing Debian on F2FS rootfs with deboostrap and systemd-boot

I recently got a new NVME drive. My plan was to create a fresh Debian install on an F2FS root partition with compression for maximum performance. As it turns out, this is not entirely trivial to accomplish. For one, the Debian installer does not support F2FS (here is my attempt to add it from 2021). And even if it did, grub does not support F2FS with the extra_attr flag that is required for compression support (at least as of grub 2.06).

Luckily, we can install Debian anyway with all these shiny new features if we go the manual route with debootstrap and use systemd-boot as the bootloader. We can break down the process into several steps:

  1. Creating the partition table
  2. Creating and mounting the root partition
  3. Bootstrapping with debootstrap
  4. Chrooting into the system
  5. Configure the base system
  6. Define static file system information
  7. Installing the kernel and bootloader
  8. Finishing touches

Warning: Playing around with partitions can easily result in data loss if you mess up! Make sure to double check your commands and create a data backup if you don’t feel confident about the process.

Creating the partition table

The first step is to create the GPT partition table on the new drive. There are several tools to do this; I recommend the ArchWiki page on this topic for details. For simplicity I just went with GParted since it has an easy GUI, but feel free to use any other tool. The layout should look like this:

Type       │ Partition      │ Suggested size
───────────┼────────────────┼───────────────
EFI        │ /dev/nvme0n1p1 │         512MiB
Linux swap │ /dev/nvme0n1p2 │           1GiB
Linux fs   │ /dev/nvme0n1p3 │      remainder

Notes:

  • The disk names are just an example and have to be adjusted for your system.
  • Don’t set disk labels, they don’t appear on the new install anyway and some UEFIs might not like it on your boot partition.
  • The size of the EFI partition can be smaller, in practice it’s unlikely that you need more than 300 MiB. However some UEFIs might be buggy and if you ever want to install an additional kernel or something like memtest86+ you will be happy to have the extra space.
  • The swap partition can be omitted, it is not strictly needed. If you need more swap for some reason you can also add more using a swap file later (see ArchWiki page). If you know you want to use suspend-to-RAM, you want to increase the size to something more than the size of your memory.
  • If you used GParted, create the EFI partition as FAT32 and set the esp flag. For the root partition use ext4 or F2FS if available.

Creating and mounting the root partition

To create the root partition, we need to install the f2fs-tools first:

sudo apt install f2fs-tools

Now we can create the file system with the correct flags:

mkfs.f2fs -O extra_attr,inode_checksum,sb_checksum,compression,encrypt /dev/nvme0n1p3

For details on the flags visit the ArchWiki page.

Next, we need to mount the partition with the correct flags. First, create a working directory:

mkdir bootstrap
cd bootstrap
mkdir root
export DFS=$(pwd)/root

Then we can mount the partition:

sudo mount -o compress_algorithm=zstd:6,compress_chksum,atgc,gc_merge,lazytime /dev/nvme0n1p3 $DFS

Again, for details on the mount options visit the above mentioned ArchWiki page.

Bootstrapping with debootstrap

First we need to install the debootstrap package:

sudo apt install debootstrap

Now we can do the bootstrapping:

debootstrap --arch=amd64 --components=main,contrib,non-free,non-free-firmware unstable $DFS http://deb.debian.org/debian

Notes:

  • --arch sets the CPU architecture (see Debian Wiki).
  • --components sets the archive components; if you don’t want non-free packages you might want to remove some entries here.
  • unstable is the Debian release, you might want to change that to testing or bookworm.
  • $DFS points to the mounting point of the root partition.
  • http://deb.debian.org/debian is the Debian mirror; you might want to set that to http://ftp.de.debian.org/debian or similar if you have a fast mirror in your area.

Chrooting into the system

Before we can chroot into the newly created system, we need to prepare and mount virtual kernel file systems. First create the directories:

sudo mkdir -p $DFS/dev $DFS/dev/pts $DFS/proc $DFS/sys $DFS/run $DFS/sys/firmware/efi/efivars $DFS/boot/efi

Then bind-mount the directories from your system to the mount point of the new system:

sudo mount -v -B /dev $DFS/dev
sudo mount -v -B /dev/pts $DFS/dev/pts
sudo mount -v -B /proc $DFS/proc
sudo mount -v -B /sys $DFS/sys
sudo mount -v -B /run $DFS/run
sudo mount -v -B /sys/firmware/efi/efivars $DFS/sys/firmware/efi/efivars

As a last step, we need to mount the EFI partition:

sudo mount -v -B /dev/nvme0n1p1 $DFS/boot/efi

Now we can chroot into new system:

sudo chroot $DFS /bin/bash

Configure the base system

The first step in the chroot is setting the locales. We need this since we might leak the locales from our base system into the chroot and if this happens we get a lot of annoying warnings.

export LC_ALL=C.UTF-8 LANG=C.UTF-8
apt install locales console-setup

Set your locales:

dpkg-reconfigure locales

Set your keyboard layout:

dpkg-reconfigure keyboard-configuration

Set your timezone:

dpkg-reconfigure tzdata

Now you have a fully functional Debian chroot! However, it is not bootable yet, so let’s fix that.

Define static file system information

The first step is to make sure the system mounts all partitions on startup with the correct mount flags. This is done in /etc/fstab (see ArchWiki page). Open the file and change its content to:

# file system                               mount point   type   options                                                            dump   pass

# NVME efi partition
UUID=XXXX-XXXX                              /boot/efi     vfat   umask=0077                                                         0      0

# NVME swap
UUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX   none          swap   sw                                                                 0      0

# NVME main partition
UUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX   /             f2fs   compress_algorithm=zstd:6,compress_chksum,atgc,gc_merge,lazytime   0      1

You need to fill in the UUIDs for the partitions. You can use

ls -lAph /dev/disk/by-uuid/

to match the UUIDs to the more readable disk name under /dev.

Installing the kernel and bootloader

First install the systemd-boot and efibootmgr packages:

apt install systemd-boot efibootmgr

Now we can install the bootloader:

bootctl install --path=/boot/efi

You can verify the procedure worked with

efibootmgr -v

The next step is to install the kernel, you can find a fitting image with:

apt search linux-image-*

In my case:

apt install linux-image-amd64

After the installation of the kernel, apt will add an entry for systemd-boot automatically. Neat!

However, since we are in a chroot, the current settings are not bootable. The first reason is the boot partition, which will likely be the one from your current system. To change that, navigate to /boot/efi/loader/entries; it should contain one config file. When you open this file, it should look something like this:

title      Debian GNU/Linux bookworm/sid
version    6.1.0-3-amd64
machine-id 2967cafb6420ce7a2b99030163e2ee6a
sort-key   debian
options    root=PARTUUID=f81d4fae-7dec-11d0-a765-00a0c91e6bf6 ro systemd.machine_id=2967cafb6420ce7a2b99030163e2ee6a
linux      /2967cafb6420ce7a2b99030163e2ee6a/6.1.0-3-amd64/linux
initrd     /2967cafb6420ce7a2b99030163e2ee6a/6.1.0-3-amd64/initrd.img-6.1.0-3-amd64

The PARTUUID needs to point to the partition equivalent to /dev/nvme0n1p3 on your system. You can use

ls -lAph /dev/disk/by-partuuid/

to match the PARTUUIDs to the more readable disk name under /dev.

The second problem is the ro flag in options, which tells the kernel to boot in read-only mode. The default is rw, so you can just remove the ro flag.

Once this is fixed, the new system should be bootable. You can change the boot order with:

efibootmgr --bootorder

However, before we reboot we might as well add a user and install some basic software.

Finishing touches

Add a user:

useradd -m -G sudo -s /usr/bin/bash -c 'Full Name' username

Debian provides a TUI to install a desktop environment. To open it, run:

tasksel

Now you can finally reboot into your new system:

reboot

Resources for further reading

https://ivanb.neocities.org/blogs/y2022/debootstrap
https://www.debian.org/releases/stable/amd64/apds03.en.html
https://www.addictivetips.com/ubuntu-linux-tips/set-up-systemd-boot-on-arch-linux/
https://p5r.uk/blog/2020/using-systemd-boot-on-debian-bullseye.html
https://www.linuxfromscratch.org/lfs/view/stable/chapter07/kernfs.html
Thanks for reading!

07 February, 2023 10:37PM

February 06, 2023

Vincent Bernat

Fast and dynamic encoding of Protocol Buffers in Go

Protocol Buffers are a popular choice for serializing structured data due to their compact size, fast processing speed, language independence, and compatibility. There exist other alternatives, including Cap’n Proto, CBOR, and Avro.

Usually, data structures are described in a proto definition file (.proto). The protoc compiler and a language-specific plugin convert it into code:

$ head flow-4.proto
syntax = "proto3";
package decoder;
option go_package = "akvorado/inlet/flow/decoder";

message FlowMessagev4 {

  uint64 TimeReceived = 2;
  uint32 SequenceNum = 3;
  uint64 SamplingRate = 4;
  uint32 FlowDirection = 5;
$ protoc -I=. --plugin=protoc-gen-go --go_out=module=akvorado:. flow-4.proto
$ head inlet/flow/decoder/flow-4.pb.go
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
//      protoc-gen-go v1.28.0
//      protoc        v3.21.12
// source: inlet/flow/data/schemas/flow-4.proto

package decoder

import (
        protoreflect "google.golang.org/protobuf/reflect/protoreflect"

Akvorado collects network flows using IPFIX or sFlow, decodes them with GoFlow2, encodes them to Protocol Buffers, and sends them to Kafka to be stored in a ClickHouse database. Collecting a new field, such as source and destination MAC addresses, requires modifications in multiple places, including the proto definition file and the ClickHouse migration code. Moreover, the cost is paid by all users.1 It would be nice to have an application-wide schema and let users enable or disable the fields they need.

While the main goal is flexibility, we do not want to sacrifice performance. On this front, this is quite a success: when upgrading from 1.6.4 to 1.7.1, the decoding and encoding performance almost doubled! 🤗

goos: linux
goarch: amd64
pkg: akvorado/inlet/flow
cpu: AMD Ryzen 5 5600X 6-Core Processor
                            │ initial.txt  │              final.txt              │
                            │    sec/op    │   sec/op     vs base                │
Netflow/with_encoding-12      12.963µ ± 2%   7.836µ ± 1%  -39.55% (p=0.000 n=10)
Sflow/with_encoding-12         19.37µ ± 1%   10.15µ ± 2%  -47.63% (p=0.000 n=10)

Faster Protocol Buffers encoding

I use the following code to benchmark both the decoding and encoding process. Initially, the Decode() method is a thin layer above GoFlow2 producer and stores the decoded data into the in-memory structure generated by protoc. Later, some of the data will be encoded directly during flow decoding. This is why we measure both the decoding and the encoding.2

func BenchmarkDecodeEncodeSflow(b *testing.B) {
    r := reporter.NewMock(b)
    sdecoder := sflow.New(r)
    data := helpers.ReadPcapPayload(b,
        filepath.Join("decoder", "sflow", "testdata", "data-1140.pcap"))

    for _, withEncoding := range []bool{true, false} {
        title := map[bool]string{
            true:  "with encoding",
            false: "without encoding",
        }[withEncoding]
        var got []*decoder.FlowMessage
        b.Run(title, func(b *testing.B) {
            for i := 0; i < b.N; i++ {
                got = sdecoder.Decode(decoder.RawFlow{
                    Payload: data,
                    Source: net.ParseIP("127.0.0.1"),
                })
                if withEncoding {
                    for _, flow := range got {
                        buf := []byte{}
                        buf = protowire.AppendVarint(buf, uint64(proto.Size(flow)))
                        proto.MarshalOptions{}.MarshalAppend(buf, flow)
                    }
                }
            }
        })
    }
}

The canonical Go implementation for Protocol Buffers, google.golang.org/protobuf is not the most efficient one. For a long time, people were relying on gogoprotobuf. However, the project is now deprecated. A good replacement is vtprotobuf.3

goos: linux
goarch: amd64
pkg: akvorado/inlet/flow
cpu: AMD Ryzen 5 5600X 6-Core Processor
                            │ initial.txt │             bench-2.txt             │
                            │   sec/op    │   sec/op     vs base                │
Netflow/with_encoding-12      12.96µ ± 2%   10.28µ ± 2%  -20.67% (p=0.000 n=10)
Netflow/without_encoding-12   8.935µ ± 2%   8.975µ ± 2%        ~ (p=0.143 n=10)
Sflow/with_encoding-12        19.37µ ± 1%   16.67µ ± 2%  -13.93% (p=0.000 n=10)
Sflow/without_encoding-12     14.62µ ± 3%   14.87µ ± 1%   +1.66% (p=0.007 n=10)

Dynamic Protocol Buffers encoding

We have our baseline. Let’s see how to encode our Protocol Buffers without a .proto file. The wire format is simple and relies a lot on variable-width integers.

Variable-width integers, or varints, are an efficient way of encoding unsigned integers using a variable number of bytes, from one to ten, with small values using fewer bytes. They work by splitting integers into 7-bit payloads and using the 8th bit as a continuation indicator, set to 1 for all payloads except the last.

Variable-width integers encoding in Protocol Buffers: conversion of 150 to a varint
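
As a small, self-contained illustration (not part of the original code), the protowire helpers used later in this post reproduce exactly the encoding shown above:

package main

import (
	"fmt"

	"google.golang.org/protobuf/encoding/protowire"
)

func main() {
	// 150 is 0b1001_0110: split into 7-bit groups starting from the low bits
	// (0010110, 0000001); every group except the last gets the continuation
	// bit set, which yields the two bytes 0x96 0x01.
	buf := protowire.AppendVarint(nil, 150)
	fmt.Printf("% x\n", buf) // prints: 96 01
}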

For our usage, we only need two types: variable-width integers and byte sequences. A byte sequence is encoded by prefixing it by its length as a varint. When a message is encoded, each key-value pair is turned into a record consisting of a field number, a wire type, and a payload. The field number and the wire type are encoded as a single variable-width integer called a tag.

Message encoded with Protocol Buffers: three varints, two sequences of bytes
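
To make the record layout concrete, here is a hypothetical sketch (the field numbers are arbitrary and not taken from Akvorado’s schema) that appends one varint record and one byte-sequence record:

package main

import (
	"fmt"

	"google.golang.org/protobuf/encoding/protowire"
)

func main() {
	buf := []byte{}

	// Record 1: tag for field number 3 with the varint wire type, then the value.
	buf = protowire.AppendTag(buf, 3, protowire.VarintType)
	buf = protowire.AppendVarint(buf, 150)

	// Record 2: tag for field number 4 with the bytes wire type; AppendBytes
	// writes the length prefix (as a varint) followed by the payload itself.
	buf = protowire.AppendTag(buf, 4, protowire.BytesType)
	buf = protowire.AppendBytes(buf, []byte("eth0"))

	fmt.Printf("% x\n", buf)
}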

We use the low-level functions from the protowire package to build the output buffer.

Our schema abstraction contains the appropriate information to encode a message (ProtobufIndex) and to generate a proto definition file (fields starting with Protobuf):

type Column struct {
    Key       ColumnKey
    Name      string
    Disabled  bool

    // […]
    // For protobuf.
    ProtobufIndex    protowire.Number
    ProtobufType     protoreflect.Kind // Uint64Kind, Uint32Kind, …
    ProtobufEnum     map[int]string
    ProtobufEnumName string
    ProtobufRepeated bool
}

We have a few helper methods around the protowire functions to directly encode the fields while decoding the flows. They skip disabled fields or non-repeated fields already encoded. Here is an excerpt of the sFlow decoder:

sch.ProtobufAppendVarint(bf, schema.ColumnBytes, uint64(recordData.Base.Length))
sch.ProtobufAppendVarint(bf, schema.ColumnProto, uint64(recordData.Base.Protocol))
sch.ProtobufAppendVarint(bf, schema.ColumnSrcPort, uint64(recordData.Base.SrcPort))
sch.ProtobufAppendVarint(bf, schema.ColumnDstPort, uint64(recordData.Base.DstPort))
sch.ProtobufAppendVarint(bf, schema.ColumnEType, helpers.ETypeIPv4)

For fields that are required later in the pipeline, like source and destination addresses, they are stored unencoded in a separate structure:

type FlowMessage struct {
    TimeReceived uint64
    SamplingRate uint32

    // For exporter classifier
    ExporterAddress netip.Addr

    // For interface classifier
    InIf  uint32
    OutIf uint32

    // For geolocation or BMP
    SrcAddr netip.Addr
    DstAddr netip.Addr
    NextHop netip.Addr

    // Core component may override them
    SrcAS     uint32
    DstAS     uint32
    GotASPath bool

    // protobuf is the protobuf representation for the information not contained above.
    protobuf      []byte
    protobufSet   bitset.BitSet
}

The protobuf slice holds encoded data. It is initialized with a capacity of 500 bytes to avoid resizing during encoding. There is also some reserved room at the beginning to be able to encode the total size as a variable-width integer. Upon finalizing encoding, the remaining fields are added and the message length is prefixed:

func (schema *Schema) ProtobufMarshal(bf *FlowMessage) []byte {
    schema.ProtobufAppendVarint(bf, ColumnTimeReceived, bf.TimeReceived)
    schema.ProtobufAppendVarint(bf, ColumnSamplingRate, uint64(bf.SamplingRate))
    schema.ProtobufAppendIP(bf, ColumnExporterAddress, bf.ExporterAddress)
    schema.ProtobufAppendVarint(bf, ColumnSrcAS, uint64(bf.SrcAS))
    schema.ProtobufAppendVarint(bf, ColumnDstAS, uint64(bf.DstAS))
    schema.ProtobufAppendIP(bf, ColumnSrcAddr, bf.SrcAddr)
    schema.ProtobufAppendIP(bf, ColumnDstAddr, bf.DstAddr)

    // Add length and move it as a prefix
    end := len(bf.protobuf)
    payloadLen := end - maxSizeVarint
    bf.protobuf = protowire.AppendVarint(bf.protobuf, uint64(payloadLen))
    sizeLen := len(bf.protobuf) - end
    result := bf.protobuf[maxSizeVarint-sizeLen : end]
    copy(result, bf.protobuf[end:end+sizeLen])

    return result
}

Minimizing allocations is critical for maintaining encoding performance. The benchmark tests should be run with the -benchmem flag to monitor allocation numbers. Each allocation incurs an additional cost to the garbage collector. The Go profiler is a valuable tool for identifying areas of code that can be optimized:

$ go test -run=__nothing__ -bench=Netflow/with_encoding \
>         -benchmem -cpuprofile profile.out \
>         akvorado/inlet/flow
goos: linux
goarch: amd64
pkg: akvorado/inlet/flow
cpu: AMD Ryzen 5 5600X 6-Core Processor
Netflow/with_encoding-12             143953              7955 ns/op            8256 B/op        134 allocs/op
PASS
ok      akvorado/inlet/flow     1.418s
$ go tool pprof profile.out
File: flow.test
Type: cpu
Time: Feb 4, 2023 at 8:12pm (CET)
Duration: 1.41s, Total samples = 2.08s (147.96%)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) web

After using the internal schema instead of code generated from the proto definition file, the performance improved. However, this comparison is not entirely fair as less information is being decoded and previously GoFlow2 was decoding to its own structure, which was then copied to our own version.

goos: linux
goarch: amd64
pkg: akvorado/inlet/flow
cpu: AMD Ryzen 5 5600X 6-Core Processor
                            │ bench-2.txt  │             bench-3.txt             │
                            │    sec/op    │   sec/op     vs base                │
Netflow/with_encoding-12      10.284µ ± 2%   7.758µ ± 3%  -24.56% (p=0.000 n=10)
Netflow/without_encoding-12    8.975µ ± 2%   7.304µ ± 2%  -18.61% (p=0.000 n=10)
Sflow/with_encoding-12         16.67µ ± 2%   14.26µ ± 1%  -14.50% (p=0.000 n=10)
Sflow/without_encoding-12      14.87µ ± 1%   13.56µ ± 2%   -8.80% (p=0.000 n=10)

As for testing, we use github.com/jhump/protoreflect: the protoparse package parses the proto definition file we generate and the dynamic package decodes the messages. Check the ProtobufDecode() method for more details.4

To get the final figures, I have also optimized the decoding in GoFlow2. It was relying heavily on binary.Read(). This function may use reflection in certain cases and each call allocates a byte array to read data. Replacing it with a more efficient version provides the following improvement:

goos: linux
goarch: amd64
pkg: akvorado/inlet/flow
cpu: AMD Ryzen 5 5600X 6-Core Processor
                            │ bench-3.txt  │             bench-4.txt             │
                            │    sec/op    │   sec/op     vs base                │
Netflow/with_encoding-12       7.758µ ± 3%   7.365µ ± 2%   -5.07% (p=0.000 n=10)
Netflow/without_encoding-12    7.304µ ± 2%   6.931µ ± 3%   -5.11% (p=0.000 n=10)
Sflow/with_encoding-12        14.256µ ± 1%   9.834µ ± 2%  -31.02% (p=0.000 n=10)
Sflow/without_encoding-12     13.559µ ± 2%   9.353µ ± 2%  -31.02% (p=0.000 n=10)
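
To give an idea of the kind of change involved, here is a small sketch, with a made-up header layout rather than the actual GoFlow2 structures, contrasting binary.Read() with direct big-endian accessors:

package decode

import (
    "bytes"
    "encoding/binary"
)

// header is a stand-in for a fixed-size, big-endian packet header.
type header struct {
    Version uint16
    Count   uint16
    Uptime  uint32
}

// parseWithRead uses binary.Read(): convenient, but it relies on reflection
// for struct targets and allocates a temporary buffer on every call.
func parseWithRead(buf []byte) (header, error) {
    var h header
    err := binary.Read(bytes.NewReader(buf), binary.BigEndian, &h)
    return h, err
}

// parseByHand reads the same fields directly from the slice: no reflection,
// no allocation. The caller must ensure len(buf) >= 8.
func parseByHand(buf []byte) header {
    return header{
        Version: binary.BigEndian.Uint16(buf[0:2]),
        Count:   binary.BigEndian.Uint16(buf[2:4]),
        Uptime:  binary.BigEndian.Uint32(buf[4:8]),
    }
}

The hand-written variant is more verbose but performs no reflection and no allocation on the hot path.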

It is now easier to collect new data and the inlet component is faster! 🚅

Notice

Some paragraphs were editorialized by ChatGPT, using “editorialize and keep it short” as a prompt. The result was proofread by a human for correctness. The main idea is that ChatGPT should be better at English than me.


  1. While empty fields are not serialized to Protocol Buffers, empty columns in ClickHouse take some space, even if they compress well. Moreover, unused fields are still decoded and they may clutter the interface. ↩︎

  2. There is a similar function using NetFlow. NetFlow and IPFIX protocols are less complex to decode than sFlow as they are using a simpler TLV structure. ↩︎

  3. vtprotobuf generates more optimized Go code by removing an abstraction layer. It directly generates the code encoding each field to bytes:

    if m.OutIfSpeed != 0 {
        i = encodeVarint(dAtA, i, uint64(m.OutIfSpeed))
        i--
        dAtA[i] = 0x6
        i--
        dAtA[i] = 0xd8
    }
    

    ↩︎

  4. There is also a protoprint package to generate proto definition file. I did not use it. ↩︎

06 February, 2023 08:58AM by Vincent Bernat

Reproducible Builds

Reproducible Builds in January 2023

Welcome to the first report for 2023 from the Reproducible Builds project!

In these reports we try and outline the most important things that we have been up to over the past month, as well as the most important things in/around the community. As a quick recap, the motivation behind the reproducible builds effort is to ensure no malicious flaws can be deliberately introduced during compilation and distribution of the software that we run on our devices. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.


News

In a curious turn of events, GitHub first announced this month that the checksums of various Git archives may be subject to change, specifically that because:

… the default compression for Git archives has recently changed. As result, archives downloaded from GitHub may have different checksums even though the contents are completely unchanged.

This change (which was brought up on our mailing list last October) would have had quite wide-ranging implications for anyone wishing to validate and verify downloaded archives using cryptographic signatures. However, GitHub reversed this decision, updating their original announcement with a message that “We are reverting this change for now. More details to follow.” It appears that this was informed in part by an in-depth discussion in the GitHub Community issue tracker.


The Bundesamt für Sicherheit in der Informationstechnik (BSI) (trans: ‘The Federal Office for Information Security’) is the agency in charge of managing computer and communication security for the German federal government. They recently produced a report that touches on attacks on software supply-chains (Supply-Chain-Angriff). (German PDF)


Contributor Seb35 updated our website to fix broken links to Tails’ Git repository [][], and Holger updated a large number of pages around our recent summit in Venice [][][][].


Noak Jönsson has written an interesting paper entitled The State of Software Diversity in the Software Supply Chain of Ethereum Clients. As the paper outlines:

In this report, the software supply chains of the most popular Ethereum clients are cataloged and analyzed. The dependency graphs of Ethereum clients developed in Go, Rust, and Java, are studied. These client are Geth, Prysm, OpenEthereum, Lighthouse, Besu, and Teku. To do so, their dependency graphs are transformed into a unified format. Quantitative metrics are used to depict the software supply chain of the blockchain. The results show a clear difference in the size of the software supply chain required for the execution layer and consensus layer of Ethereum.


Yongkui Han posted to our mailing list discussing making reproducible builds & GitBOM work together without gitBOM-ID embedding. GitBOM (now renamed to OmniBOR) is a project to “enable automatic, verifiable artifact resolution across today’s diverse software supply-chains” []. In addition, Fabian Keil wrote to us asking whether anyone in the community would be at Chemnitz Linux Days 2023, which is due to take place on 11th and 12th March (event info).

Separate to this, Akihiro Suda posted to our mailing list just after the end of the month with a status report of bit-for-bit reproducible Docker/OCI images. As Akihiro mentions in their post, they will be giving a talk at FOSDEM in the ‘Containers’ devroom titled Bit-for-bit reproducible builds with Dockerfile and that “my talk will also mention how to pin the apt/dnf/apk/pacman packages with my repro-get tool.”


The extremely popular Signal messenger app added upstream support for the SOURCE_DATE_EPOCH environment variable this month. This means that release tarballs of the Signal desktop client do not embed nondeterministic release information. [][]



Distribution work

F-Droid & Android

There was a very large number of changes in the F-Droid and wider Android ecosystem this month:

On January 15th, a blog post entitled Towards a reproducible F-Droid was published on the F-Droid website, outlining the reasons why “F-Droid signs published APKs with its own keys” and how reproducible builds allow using upstream developers’ keys instead. In particular:

In response to […] criticisms, we started encouraging new apps to enable reproducible builds. It turns out that reproducible builds are not so difficult to achieve for many apps. In the past few months we’ve gotten many more reproducible apps in F-Droid than before. Currently we can’t highlight which apps are reproducible in the client, so maybe you haven’t noticed that there are many new apps signed with upstream developers’ keys.

(There was a discussion about this post on Hacker News.)

In addition:

Debian

As mentioned in last month’s report, Vagrant Cascadian has been organising a series of online sprints in order to ‘clear the huge backlog of reproducible builds patches submitted’ by performing NMUs (Non-Maintainer Uploads). During January, a sprint took place on the 10th, resulting in the following uploads:

During this sprint, Holger Levsen filed Debian bug #1028615 to request that the tracker.debian.org service display results of reproducible rebuilds, not just reproducible ‘CI’ results.

Elsewhere in Debian, strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. This month, version 1.13.1-1 was uploaded to Debian unstable by Holger Levsen, including a fix by FC Stegerman (obfusk) to update a regular expression for the latest version of file(1) []. (#1028892)

Lastly, 65 reviews of Debian packages were added, 21 were updated and 35 were removed this month, adding to our knowledge about identified issues.

Other distributions

In other distributions:


diffoscope

diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb made the following changes to diffoscope, including preparing and uploading versions 231, 232, 233 and 234 to Debian:

  • No need for from __future__ import print_function import anymore. []
  • Comment and tidy the extras_require.json handling. []
  • Split inline Python code to generate test Recommends into a separate Python script. []
  • Update debian/tests/control after merging support for PyPDF support. []
  • Correctly catch segfaulting cd-iccdump binary. []
  • Drop some old debugging code. []
  • Allow ICC tests to (temporarily) fail. []

In addition, FC Stegerman (obfusk) made a number of changes, including:

  • Updating the test_text_proper_indentation test to support the latest version(s) of file(1). []
  • Use an extras_require.json file to store some build/release metadata, instead of accessing the internet. []
  • Updating an APK-related file(1) regular expression. []
  • On the diffoscope.org website, de-duplicate contributors by e-mail. []

Lastly, Sam James added support for PyPDF version 3 [] and Vagrant Cascadian updated a handful of tool references for GNU Guix. [][]


Upstream patches

The Reproducible Builds project attempts to fix as many currently-unreproducible packages as possible. This month, we wrote a large number of such patches, including:


Testing framework

The Reproducible Builds project operates a comprehensive testing framework at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In January, the following changes were made by Holger Levsen:

  • Node changes:

  • Debian-related changes:

    • Only keep diffoscope’s HTML output (i.e. no .json or .txt) for LTS suites and older in order to save disk space on the Jenkins host. []
    • Re-create pbuilder base less frequently for the stretch, bookworm and experimental suites. []
  • OpenWrt-related changes:

    • Add gcc-multilib to OPENWRT_HOST_PACKAGES and install it on the nodes that need it. []
    • Detect more problems in the health check when failing to build OpenWrt. []
  • Misc changes:

    • Update the chroot-run script to correctly manage /dev and /dev/pts. [][][]
    • Update the Jenkins ‘shell monitor’ script to collect disk stats less frequently [] and to include various directory stats. [][]
    • Update the ‘real’ year in the configuration in order to be able to detect whether a node is running in the future or not. []
    • Bump copyright years in the default page footer. []

In addition, Christian Marangi submitted a patch to build OpenWrt packages with the V=s flag to enable debugging. []


If you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. You can get in touch with us via:

06 February, 2023 12:37AM

February 05, 2023

James Valleroy

A look back at FreedomBox project in 2022

This post is very late, but better late than never! I want to take a look back at the work that was done on FreedomBox during 2022.

Several apps were added to FreedomBox in 2022. The email server app (that was developed by a Google Summer of Code student back in 2021) was finally made available to the general audience of FreedomBox users. You will find it under the name “Postfix/Dovecot”, which are the main services configured by this app.

Another app that was added is Janus, which has the description “video room”. It is called “video room” instead of “video conference” because the room itself is persistent. People can join the room or leave, but there isn’t a concept of “calling” or “ending the call”. Actually, Janus is a lightweight WebRTC server that can be used as a backend for many different types of applications. But as implemented currently, there is just the simple video room app. In the future, more advanced apps such as Jangouts may be packaged in Debian and made available to FreedomBox.

RSS-Bridge is an app that generates RSS feeds for websites that don’t provide their own (for example, YouTube). It can be used together with any RSS news feed reader application, such as TT-RSS which is also available in FreedomBox.

There is now a “Privacy” page in the System menu, which allows enabling or disabling the Debian “popularity-contest” tool. If enabled, it reports the Debian packages that are installed on the system. The results can be seen at https://popcon.debian.org, which currently shows over 400 FreedomBoxes are reporting data.

A major feature added to FreedomBox in 2022 is the ability to uninstall apps. This feature is still considered experimental (it won’t work for every app), but many issues have been fixed already. There is an option to take a backup of the app’s data before uninstalling. There is also now an “operations queue” in case the user starts multiple install or uninstall operations concurrently.

XEP-0363 (HTTP File Upload) has been enabled for Ejabberd Chat Server. This allows files to be transferred between XMPP clients that support this feature.

There were a number of security improvements to FreedomBox, such as adding fail2ban jails for Dovecot, Matrix Synapse, and WordPress. Firewall rules were added to ensure that authentication and authorization for services proxied through Apache web server cannot be bypassed by programs running locally on the system. Also, we are no longer using libpam-tmpdir to provide temporary folder isolation, because it causes issues for several packages. Instead we use systemd’s sandboxing features, which provide even better isolation for services.

Some things were removed in 2022. The ez-ipupdate package is no longer used for Dynamic DNS, since it is replaced by a Python implementation of GnuDIP. An option to restrict who can log in to the system was removed, due to various issues that arose from it. Instead there is an option to restrict who can login through SSH. The DNSSEC diagnostic test was removed, because it caused confusion for many users (although use of DNSSEC is still recommended).

Finally, some statistics. There were 31 releases in 2022 (including point releases). There were 68 unique contributors to the git repository; this includes code contributions and translations (but not contributions to the manual pages). In total, there were 980 commits to the git repository.

05 February, 2023 09:44PM by James Valleroy

hackergotchi for Mike Gabriel

Mike Gabriel

Call for translations: Lomiri / Ubuntu Touch 20.04

Prologue

For over a year now, Fre(i)e Software GmbH (my company) has been involved in Ubuntu Touch development. The development effort is currently handled by a mix of paid and voluntary developers/contributors under the umbrella of the UBports Foundation. We are approaching the official first release of Ubuntu Touch 20.04 at a rapid pace.

And, if you are a non-English native speaker, we'd like to ask you for help... Read below.

light+love
Mike (aka sunweaver at debian.org, Mastodon, IRC, etc.)

Internationalization (i18n) of Ubuntu Touch 20.04

The UBports team has moved most of the translation workflows for localizing Ubuntu Touch over to Hosted Weblate:

To contribute to the UBports projects, you need to register an account on Hosted Weblate.

The localization platform of all UBports / Lomiri components is sponsored by Hosted Weblate via their free hosting plan for Libre and Open Source Projects. Many thanks for providing this service.

Translating Lomiri

The translation components in the Lomiri project have already been set up and are ready for being updated by translators. Please expect some translation template changes for all those components to occur in the near future, but this should not hinder you from starting translation work right away.

Translating Lomiri will bring the best i18n experience to Ubuntu Touch 20.04 end users for the core libraries and the pre-installed (so called) Core Apps.

Translating Ubuntu Touch Apps

For App Developers (apps that are not among the Core Apps) we will now offer a translation slot under the UBports project on hosted.weblate.org:

If you are actively maintaining an Ubuntu Touch app, please ask for a translation component slot on hosted.weblate.org and we will set up your app's translation workflow for and with you. Using the translation service at Hosted Weblate is not a must for app developers, it's rather a service we offer to ease i18n work on Ubuntu Touch apps.

Who to contact?

To get translations for your app set up on Hosted Weblate, please get in touch with us on https://t.me/ubports and highlight @sunweaver (Mike Gabriel), @BetaBreak (Raoul Kramer), @cibersheep and @Danfro with your request.

05 February, 2023 07:50PM by sunweaver

hackergotchi for Jonathan Dowland

Jonathan Dowland

2022 in reading

In 2022 I read 34 books (-19% on last year).

In 2021 roughly a quarter of the books I read were written by women. I was determined to push that ratio in 2022, so I made an effort to try and only read books by women. I knew that I wouldn't manage that, but by trying to, I did get the ratio up to 58% (by page count).

I'm not sure what will happen in 2023. My to-read pile has some back-pressure from books by male authors I postponed reading in 2022 (in particular new works by Christopher Priest and Adam Roberts). It's possible the ratio will swing back the other way, which would mean it would not be worth repeating the experiment. At least if the ratio is the point of the exercise. But perhaps it isn't: perhaps the useful outcome is more qualitative than quantitative.

I tried to read some new (to me) authors. I really enjoyed Shirley Jackson (The Haunting of Hill House, We Have Always Lived In The Castle). I struggled with Angela Carter's Heroes and Villains, although I plan to return to her other work, in particular, The Bloody Chamber. I also got through Donna Tartt's The Secret History on the recommendation of a friend. I had to push through the first 15% or so but it turned out to be worth it.


Book covers: Shirley Jackson's 'We Have Always Lived in the Castle', Margaret Atwood's 'The Handmaid's Tale', Adam Roberts' 'The This', and Emily St. John Mandel's 'Sea of Tranquility'.

I finally read (and loved) The Handmaid's Tale, which I had never read despite loving Atwood.

My top non-fiction book was The Nanny State Made Me by Stuart Maconie. I still read far more fiction than non-fiction. Or perhaps I'm not keeping track of non-fiction as well. I feel non-fiction requires a different approach to reading: not necessarily linear; it's not always important to read the whole book; it's often important to re-read sections. It might not make sense to consider them in the same bracket.

My favourite novels this year were Sea of Tranquility by Emily St. John Mandel, a standalone sort-of sequel to The Glass Hotel but in a very different genre; and The This by Adam Roberts, which was equally remarkable. The This has an interesting narrative device in the first third where a stream of tweets is presented in parallel with the main text. This works well, and does a good job of capturing the figurative river of tweet-like stuff that is woven into our lives at the moment. But I can't help but wonder how they tackle that in the audiobook.

05 February, 2023 10:00AM

Russell Coker

Wayland in Bookworm

We are getting towards the freeze for Debian/Bookworm so the current state of packages isn’t going to change much before the release. Bugs will get fixed but missing features will mostly be missing until the next release.

Anarcat wrote an excellent blog post about using Wayland with the Sway window manager [1]. It seems pretty good if you like Sway, but I like KDE and plan to continue using it. Several of the important utility programs referenced by Anarcat won’t run with KDE/Wayland and give errors such as “Compositor doesn’t support wlr-output-management-unstable-v1”. One noteworthy thing about Wayland is that the window manager and the equivalent of the X server are the same program, so KDE has different Wayland code than Sway and doesn’t support some features. The lack of these features limits my ability to manage multiple displays and therefore makes KDE/Wayland unsuitable for many laptop uses. My work laptop runs Ubuntu 22.04 with KDE and wouldn’t correctly display on the pair of monitors on a USB-C dock that’s the standard desktop configuration where I work.

In my previous post about Wayland [2] I wrote about converting 2 of my systems to Wayland. Since then I had changed them back to X because of problems with supporting strange monitor configurations on laptops and also due to the KDE window manager crashing occasionally, which terminates the session in Wayland but merely requires restarting the window manager in X. More recently I had a problem with the GPU in my main workstation sometimes not being recognised by the system (reporting no PCIe device); when I got a new one I couldn’t get X to work with the error “Cannot run in framebuffer mode. Please specify busIDs for all framebuffer devices”, so I tried Wayland again. Now, in the later stages of the Bookworm development process, it seems that the problem with the KDE window manager crashing has been solved or mitigated, and there is a new problem of the plasmashell process crashing. As I can restart plasmashell without logging out, that’s much less annoying. So now my main workstation is running on Wayland with a slower GPU than I previously had while also giving a faster user experience, so Wayland is providing a definite performance benefit.

Maybe for Trixie (the next release of Debian after Bookworm) we should have a release goal of having full Wayland support in all the major GUI systems.

05 February, 2023 08:41AM by etbe

hackergotchi for C.J. Collier

C.J. Collier

IPv6 with CenturyLink Fiber

In case you want to know how to configure IPv6 using CenturyLink’s 6rd tunneling service.

There are four files to update with my script:

There are a couple of environment variables in /etc/default/centurylink-6rd that you will want to set. Mine looks like this:

LAN_IFACE="ens18,ens21,ens22"
HEADER_FILE=/etc/radvd.conf.header

The script will configure radvd to advertise v6 routes to a /64 of clients on each of the interfaces delimited by a comma in LAN_IFACE. Current interface limit is 9. I can patch something to go up to 0xff, but I don’t want to at this time.

If you have some static configuration that you want to preserve, place it into the file pointed to by ${HEADER_FILE} and it will be prepended to the generated /etc/radvd.conf file. The up script will remove the file and re-create it every time your ppp link comes up, so keep that in mind and don’t manually modify the file and expect it to persist. You’ve been warned :-)

So here’s the script. It’s also linked above if you want to curl it.

#!/bin/bash
#
# Copyright 2023 Collier Technologies LLC
# https://wp.c9h.org/cj/?p=1844
#

# # These variables are for the use of the scripts run by run-parts
# PPP_IFACE="$1"
# PPP_TTY="$2"
# PPP_SPEED="$3"
# PPP_LOCAL="$4"
# PPP_REMOTE="$5"
# PPP_IPPARAM="$6"

: ${PPP_IFACE:="ppp0"}
if [[ -z ${PPP_LOCAL} ]]; then
    PPP_LOCAL=$(curl -s https://icanhazip.com/)
fi

sixrd_prefix="2602::/24" # shell variable names cannot start with a digit

#printf "%x%x:%x%x\n" $(echo $PPP_LOCAL | tr . ' ')

# This configuration option came from CenturyLink:
# https://www.centurylink.com/home/help/internet/modems-and-routers/advanced-setup/enable-ipv6.html
border_router=205.171.2.64

TUNIFACE=6rdif
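
# 6rd derives the delegated IPv6 prefix by embedding the four octets of the
# public IPv4 address after the 2602::/24 prefix; the trailing byte selects
# a /64 subnet (00 for the tunnel itself, 01..09 for the LAN interfaces).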

first_ipv6_network=$(printf "2602:%02x:%02x%02x:%02x00::" $(echo ${PPP_LOCAL} | tr . ' '))

ip tunnel del ${TUNIFACE} 2>/dev/null
ip -l 5 -6 addr flush scope global dev ${IFACE}

ip tunnel add ${TUNIFACE} mode sit local ${PPP_LOCAL} remote ${border_router}
ip tunnel 6rd dev ${TUNIFACE} 6rd-prefix "${first_ipv6_network}0/64" 6rd-relay_prefix ${border_router}/32
ip link set up dev ${TUNIFACE}
ip -6 route add default dev ${TUNIFACE}

ip addr add ${first_ipv6_network}1/64 dev ${TUNIFACE}

rm /etc/radvd.conf
i=1
DEFAULT_FILE="/etc/default/centurylink-6rd"
if [[ -f ${DEFAULT_FILE} ]]; then
    source ${DEFAULT_FILE}
    if [[ -f ${HEADER_FILE} ]]; then
	cp ${HEADER_FILE} /etc/radvd.conf
    fi
else
    LAN_IFACE="setThis"
fi
   
for IFACE in $( echo ${LAN_IFACE} | tr , ' ' ) ; do
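    # Each LAN interface gets its own /64: ${i} becomes the subnet digit of
    # the 6rd-derived prefix.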

    ipv6_network=$(printf "2602:%02x:%02x%02x:%02x0${i}::" $(echo ${PPP_LOCAL} | tr . ' '))

    ip addr add ${ipv6_network}1/64 dev ${TUNIFACE}
    ip -6 route replace "${ipv6_network}/64" dev ${IFACE} metric 1

    echo "
interface ${IFACE} {
   AdvSendAdvert on;
   MinRtrAdvInterval 3;
   MaxRtrAdvInterval 10;
   AdvLinkMTU 1280;
   prefix ${ipv6_network}/64 {
     AdvOnLink on;
     AdvAutonomous on;
     AdvRouterAddr on;
     AdvValidLifetime 86400;
     AdvPreferredLifetime 86400;
   };
};
" >> /etc/radvd.conf

    let "i=i+1"
done

systemctl restart radvd

05 February, 2023 12:49AM by C.J. Collier

February 04, 2023

hackergotchi for Jonathan Dowland

Jonathan Dowland

FreedomBox

personal servers

Moxie Marlinspike, former CEO of Signal, wrote a very interesting blog post about "web3", the crypto-scam1. It's worth a read if you are interested in that stuff. This blog post, however, is not about crypto-scams; but I wanted to quote from the beginning of the article:

People don’t want to run their own servers, and never will. The premise for web1 was that everyone on the internet would be both a publisher and consumer of content as well as a publisher and consumer of infrastructure.

We’d all have our own web server with our own web site, our own mail server for our own email, our own finger server for our own status messages, our own chargen server for our own character generation. However – and I don’t think this can be emphasized enough – that is not what people want. People do not want to run their own servers.

What's interesting to me about this is I feel that he's right: the vast, vast majority of people almost certainly do not want to run their own servers. Yet, I decided to. I started renting a Linux virtual server2 close to 20 years ago3, but more recently, decided to build and run a home NAS, which was a critical decision for getting my personal data under control.

FreedomBox and Debian

I am almost entirely dormant within the Debian project these days, and that's unlikely to change in the near future, at least until I wrap up some other commitments. I do sometimes mull over what I would do within Debian, if/when I return to the fold. And one thing I could focus on, since I am running my own NAS, would be software support for that sort of thing.

FreedomBox is a project that bills itself as a private server for non-experts: in other words, it's almost exactly the thing that Marlinspike states people don't want. Nonetheless, it is an interesting project. And, it's a Debian Pure Blend: which is to say (quoting the previous link) a subset of Debian that is tailored to be used out-of-the-box in a particular situation or by a particular target group. So FreedomBox is a candidate project for me to get involved with, especially if (or, more sensibly, assuming that) I end up using some of it myself.

But, that's not the only possibility, especially after a really, really good conversation I had earlier today with old friends Neil McGovern and Chris Boot…


  1. crypto-scam is my characterisation, not Marlinspike's.
  2. hosting, amongst other things, the site you are reading
  3. The Linux virtual servers replaced an ancient beige Pentium that was running as an Internet server from my parent's house in the 3-4 years before that.

04 February, 2023 06:44PM

The Horror Show!

the boy from the chemist is here to see you, Kerry Stuart, 1993

I was passing through London on Friday and I had time to get to The Horror Show! Exhibit at Somerset House, over by Victoria embankment. I learned about the exhibition from Gazelle Twin’s website: she wrote music for the final part of the exhibit, with Maxine Peake delivering a monologue over the top.

I loved it. It was almost like David Keenan’s book England’s Hidden Reverse had been made into an exhibition. It’s divided into three themes: Monster, Ghost and Witch, although the themes are loose with lots of cross over and threads running throughout.

Thatcher (right)

Derek Jarman's Blue

The show is littered with artefacts from culturally significant works from a recently-bygone age. There’s a strong theme of hauntology. The artefacts that stuck out to me include some of Leigh Bowery’s costumes; the spitting image doll of Thatcher; the cover of a Radio Times featuring the cult BBC drama Threads; Nigel Kneale’s “the stone tape” VHS alongside more recent artefacts from Inside Number 9 and a MR James adaptation by Mark Gatiss (a clear thread of inspiration there); various bits relating to Derek Jarman including the complete “Blue” screening in a separate room; Mica Levi’s eerie score to “under the skin” playing in the “Witch” section, and various artefacts and references to the underground music group Coil. Too much to mention!

Having said that, the things which left the most lasting impression are some of the stand-alone works of art: the charity box boy model staring fractured and refracted through a glass door (above); the glossy drip of blood running down a wall; the performance piece on a Betamax tape; the self portrait of a drowned man; the final piece, "The Neon Hieroglyph".

Jonathan Jones at the Guardian liked it.

The show runs until the 19th February and is worth a visit if you can.

04 February, 2023 06:22PM

Tim Retout

AlmaLinux and SBOMs

At CentOS Connect yesterday, Jack Aboutboul and Javier Hernandez presented a talk about AlmaLinux and SBOMs [video], where they are exploring a novel supply-chain security effort in the RHEL ecosystem.

Now, I have unfortunately ignored the Red Hat ecosystem for a long time, so if you are in a similar position to me: CentOS used to produce debranded rebuilds of RHEL; but Red Hat changed the project round so that CentOS Stream now sits in between Fedora Rawhide and RHEL releases, allowing the wider community to try out/contribute to RHEL builds before their release. This is credited with making early RHEL point releases more stable, but left a gap in the market for debranded rebuilds of RHEL; AlmaLinux and Rocky Linux are two distributions that aim to fill that gap.

Alma are generating and publishing Software Bill of Materials (SBOM) files for every package; these are becoming a requirement for all software sold to the US federal government. What’s more, they are sending these SBOMs to a third party (CodeNotary) who store them in some sort of Merkle tree system to make it difficult for people to tamper with later. This should theoretically allow end users of the distribution to verify the supply chain of the packages they have installed?

I am currently unclear on the differences between CodeNotary/ImmuDB vs. Sigstore/Rekor, but there’s an SBOM devroom at FOSDEM tomorrow so maybe I’ll soon be learning that. This also makes me wonder if a Sigstore-based approach would be more likely to be adopted by Fedora/CentOS/RHEL, and whether someone should start a CentOS Software Supply Chain Security SIG to figure this out, or whether such an effort would need to live with the build system team to be properly integrated. It would be nice to understand the supply-chain story for CentOS and RHEL.

As I write this, I’m also reflecting that perhaps it would be helpful to explain what happens next in the SBOM consumption process; i.e. can this effort demonstrate tangible end user value, like enabling AlmaLinux to integrate with a vendor-neutral approach to vulnerability management? Aside from the value of being able to sell it to the US government!

04 February, 2023 04:37PM

February 03, 2023

Scarlett Gately Moore

KDE Snaps, snapcraft, Debian packages.

Sunset, Witch Wells, Arizona

Another busy week!

In the snap world, I have been busy trying to solve the problem of core20 snaps needing security updates now that focal is no longer supported in KDE Neon. So I have created a ppa at https://launchpad.net/~scarlettmoore/+archive/ubuntu/kf5-5.99-focal-updates/+packages

Which of course presents more work, as kf5 5.99.0 requires qt5 5.15.7. Sooo this is a WIP.

Snapcraft kde-neon-extension is moving along as I learn the python ways of formatting, and fixing some issues in my tests.

In the Debian world, I am sad to report Mycroft-AI has gone bust, however the packaging efforts are not in vain as the project has been forked to https://github.com/orgs/OpenVoiceOS/repositories and should be relatively easy to migrate.

I have spent some time verifying that the libappimage in buster is NOT vulnerable to CVE-2020-25265, as the code wasn’t introduced yet.

Skanpage and plasma-bigscreen both have source uploads, so they can migrate to testing and hopefully make it into bookworm!

As many of you know, I am seeking employment. I am a hard worker, that thrives on learning new things. I am a self starter, knowledge sponge, and eager to be an asset to < insert your company here > !

Meanwhile, as interview processes are much longer than I remember and the industry exploding in layoffs, I am coming up short on living expenses as my unemployment lingers on. Please consider donating to my gofundme. Thank you for your consideration.

I still have a ways to go to cover my bills this month, I will continue with my work until I cannot, I hate asking, but please consider a donation. Thank you!

GoFundMe

03 February, 2023 06:54PM by sgmoore

Ian Jackson

derive-adhoc: powerful pattern-based derive macros for Rust

tl;dr

Have you ever wished that you could write a new derive macro without having to mess with procedural macros?

Now you can!

derive-adhoc lets you write a #[derive] macro, using a template syntax which looks a lot like macro_rules!.

It’s still 0.x - so unstable, and maybe with sharp edges. We want feedback!

And, the documentation is still very terse. It doesn't omit anything, but it is severely lacking in examples, motivation, and so on. It will suit readers who enjoy dense reference material.

Read more... )


03 February, 2023 12:34AM

February 02, 2023

hackergotchi for Matt Brown

Matt Brown

2023 Writing Plan

To achieve my goal of publishing one high-quality piece of writing per week this year, I’ve put together a draft writing plan and a few organisational notes.

Please let me know what you think - what’s missing? what would you like to read more/less of from me?

I aim for each piece of writing to generate discussion, inspire further writing, and raise my visibility and profile with potential customers and peers. Some of the writing will be opinion, but I expect a majority of it will take a “learning by teaching” approach - aiming to explain and present useful information to the reader while helping me learn more!

Topic Backlog

The majority of my writing is going to fit into 4 series, allowing me to plan out a set of posts and narrative rather than having to come up with something novel to write about every week. To start with for Feb, my aim is to get an initial post in each series out the door. Long-term, it’s likely that the order of posts will reflect my work focus (e.g. if I’m spending a few weeks deep-diving into a particular product idea then expect more writing on that), but I will try and maintain some variety across the different series as well.

This backlog will be maintained as a living page at https://www.mattb.nz/w/queue.

Thoughts on SRE

This series of posts will be pitched primarily at potential consulting customers who want to understand how I approach the development and operations of distributed software systems. Initial topics to cover include:

  • What is SRE? My philosophy on how it relates to DevOps, Platform Engineering and various other “hot” terms.
  • How SRE scales up and down in size.
  • My approach to managing oncall responsibilities, toil and operational work.
  • How to grow an SRE team, including the common futility of SRE “transformations”.
  • Learning from incidents, postmortems, incident response, etc.

Business plan drafts

I have an ever-growing list of potential software opportunities and products which I think would be fun to build, but which generally don’t ever leave my head due to lack of time to develop the idea, or being unable to convince myself that there’s a viable business case or market for it.

I’d like to start sharing some very rudimentary business plan sketches for some of these ideas as a way of getting some feedback on my assessment of their potential. Whether that’s confirmation that it’s not worth pursuing, an expression of interest in the product, or potential partnership/collaboration opportunities - anything is better than the idea just sitting in my head.

Initial ideas include:

  • Business oriented Mastodon hosting.
  • PDF E-signing - e.g. A Docusign competitor, but with a local twist through RealMe or drivers license validation.
  • A framework to enable simple, performant per-tenant at-rest encryption for SaaS products - stop the data leaks.

Product development updates

For any product ideas that show merit and develop into a project, and particularly for the existing product ideas I’ve already committed to exploring, I plan to document my product investigation and market research findings as a way of structuring and driving my learning in the space.

To start with this will involve:

  • A series of explanatory posts diving into how NZ’s electricity system works with a particular focus on how operational data that will be critical to managing a more dynamic grid flows (or doesn’t flow!) today, and what opportunities or needs exist for generating, managing or distributing data that might be solvable with a software system I could build.
  • A series of product reviews and deep dives into existing farm management software and platforms in use by NZ farmers today, looking at the functionality they provide, how they integrate and generally testing the anecdotal feedback I have to date that they’re clunky, hard to use and not well integrated.
  • For co2mon.nz the focus will be less on market research and more on exploring potential distribution channels (e.g. direct advertising vs partnership with air conditioning suppliers) and pricing models (e.g. buy vs rent).

Debugging walk-throughs

Being able to debug and fix a system that you’re not intimately familiar with is valuable skill and something that I’ve always enjoyed, but it’s also a skill that I observe many engineers are uncomfortable with. There’s a set of techniques and processes that I’ve honed and developed over the years for doing this which I think make the task of debugging an unfamiliar system more approachable.

The idea is that each post will take a problem or situation I’ve encountered, from the initial symptom or problem report, and walk through the process of how to narrow down and identify the trigger or root cause of the behaviour. Along the way, discussing techniques used, their pros and cons. In addition to learning about the process of debugging itself, the aim is to illustrate lessons that can be applied when designing and building software systems that facilitate and improve our experiences in the operational stage of a systems lifecycle where debugging takes place.

Miscellaneous topics

In addition the regular series above, stand-alone posts on the other topics may include:

  • The pros/cons I see of bootstrapping a business vs taking VC or other funding.
  • Thoughts on remote work and hiring staff.
  • AI - a confessional on how I didn’t think it would progress in my lifetime, but maybe I was wrong.
  • Reflections on 15 years at Google and thoughts on subsequent events since my departure.
  • AWS vs GCP. Fight! Or with less click-bait, a level-headed comparison of the pros/cons I see in each platform.

Logistics

Discussion and comments

A large part of my motivation for writing regularly is to seek feedback and generate discussion on these topics. Typically this is done by including comment functionality within the website itself. I’ve decided not to do this - on-site commenting creates extra infrastructure to maintain, and limits the visibility and breadth of discussion to existing readers and followers.

To provide opportunities for comment and feedback I plan to share and post notification and summarised snippets of selected posts to various social media platforms. Links to these social media posts will be added to each piece of writing to provide a path for readers to engage and discuss further while enabling the discussion and visibility of the post to grow and extend beyond my direct followers and subscribers.

My current thinking is that I’ll distribute via the following platforms:

  • Mastodon @[email protected] - every post.
  • Twitter @xleem - selected posts. I’m trying to reduce Twitter usage in favour of Mastodon, but there’s no denying that it’s still where a significant number of people and discussions are happening.
  • LinkedIn - probably primarily for posts in the business plan series, and notable milestones in the product development process.

In all cases, my aim will be to post a short teaser or summary paragraph that poses a question or relays an interesting fact to give some immediate value and signal to readers as to whether they want to click through rather than simply spamming links into the feed.

Feedback

In addition to social media discussion I also plan to add a direct feedback path, particularly for readers who don’t have time or inclination to participate in written discussion, by providing a simple thumbs up/thumbs down feedback widget to the bottom of each post, including those delivered via RSS and email.

Organisation

To enable subscription to subsets of my writing (particularly for places like Planet Debian, etc where the more business focused content is likely to be off-topic), I plan to place each post into a set of categories:

  • Business
  • Technology
  • General

In addition to the categories, I’ll also use more free-form tags to group writing with linked themes or that falls within one of the series described above.

02 February, 2023 09:07PM

hackergotchi for Matthew Garrett

Matthew Garrett

Blocking free API access to Twitter doesn't stop abuse

In one week from now, Twitter will block free API access. This prevents anyone who has written interesting bot accounts, integrations, or tooling from accessing Twitter without paying for it. A whole number of fascinating accounts will cease functioning, people will no longer be able to use tools that interact with Twitter, and anyone using a free service to do things like find Twitter mutuals who have moved to Mastodon or to cross-post between Twitter and other services will be blocked.

There's a cynical interpretation to this, which is that despite firing 75% of the workforce Twitter is still not profitable and Elon is desperate to not have Twitter go bust and also not to have to tank even more of his Tesla stock to achieve that. But let's go with the less cynical interpretation, which is that API access to Twitter is something that enables bot accounts that make things worse for everyone. Except, well, why would a hostile bot account do that?

To interact with an API you generally need to present some sort of authentication token to the API to prove that you're allowed to access it. It's easy enough to restrict issuance of those tokens to people who pay for the service. But, uh, how do the apps work? They need to be able to communicate with the service to tell it to post tweets, retrieve them, and so on. And the simple answer to that is that they use some hardcoded authentication tokens. And while registering for an API token yourself identifies that you're not using an official client, using the tokens embedded in the clients makes it look like you are. If you want to make it look like you're a human, you're already using tokens ripped out of the official clients.

The Twitter client API keys are widely known. Anyone who's pretending to be a human is using those already and will be unaffected by the shutdown of the free API tier. Services like movetodon.org do get blocked. This isn't an anti-abuse choice. It's one that makes it harder to move to other services. It's one that blocks a bunch of the integrations and accounts that bring value to the platform. It's one that hurts people who follow the rules, without hurting the ones who don't. This isn't an anti-abuse choice, it's about trying to consolidate control of the platform.


02 February, 2023 10:21AM

John Goerzen

Using Yggdrasil As an Automatic Mesh Fabric to Connect All Your Docker Containers, VMs, and Servers

Sometimes you might want to run Docker containers on more than one host. Maybe you want to run some at one hosting facility, some at another, and so forth.

Maybe you’d like run VMs at various places, and let them talk to Docker containers and bare metal servers wherever they are.

And maybe you’d like to be able to easily migrate any of these from one provider to another.

There are all sorts of very complicated ways to set all this stuff up. But there’s also a simple one: Yggdrasil.

My blog post Make the Internet Yours Again With an Instant Mesh Network explains some of the possibilities of Yggdrasil in general terms. Here I want to show you how to use Yggdrasil to solve some of these issues more specifically. Because Yggdrasil is always encrypted, some of the security lifting is done for us.

Background

Often in Docker, we connect multiple containers to a single network that runs on a given host. That much is easy. Once you start talking about containers on multiple hosts, then you start adding layers and layers of complexity. Once you start talking multiple providers, maybe multiple continents, then the complexity can increase. And, if you want to integrate everything from bare metal servers to VMs into this – well, there are ways, but they’re not easy.

I’m a believer in the KISS principle. Let’s not make things complex when we don’t have to.

Enter Yggdrasil

As I’ve explained before, Yggdrasil can automatically form a global mesh network. This is pretty cool! As most people use it, they join it to the main Yggdrasil network. But Yggdrasil can be run entirely privately as well. You can run your own private mesh, and that’s what we’ll talk about here.

All we have to do is run Yggdrasil inside each container, VM, server, or whatever. We handle some basics of connectivity, and bam! Everything is host- and location-agnostic.

Setup in Docker

The installation of Yggdrasil on a regular system is pretty straightforward. Docker is a bit more complicated for several reasons:

  • It blocks IPv6 inside containers by default
  • The default set of permissions doesn’t permit you to set up tunnels inside a container
  • It doesn’t typically pass multicast (broadcast) packets

Normally, Yggdrasil could auto-discover peers on a LAN interface. However, aside from some esoteric Docker networking approaches, Docker doesn’t permit that. So my approach is going to be setting up one or more Yggdrasil “router” containers on a given Docker host. All the other containers talk directly to the “router” container and it’s all good.

Basic installation

In my Dockerfile, I have something like this:

FROM jgoerzen/debian-base-security:bullseye
RUN echo "deb http://deb.debian.org/debian bullseye-backports main" >> /etc/apt/sources.list && \
    apt-get --allow-releaseinfo-change update && \
    apt-get -y --no-install-recommends -t bullseye-backports install yggdrasil
...
COPY yggdrasil.conf /etc/yggdrasil/
RUN set -x; \
    chown root:yggdrasil /etc/yggdrasil/yggdrasil.conf && \
    chmod 0750 /etc/yggdrasil/yggdrasil.conf && \
    systemctl enable yggdrasil

The magic parameters to docker run to make Yggdrasil work are:

--cap-add=NET_ADMIN --sysctl net.ipv6.conf.all.disable_ipv6=0 --device=/dev/net/tun:/dev/net/tun

This example uses my docker-debian-base images, so if you use them as well, you’ll also need to add their parameters.

Note that it is NOT necessary to use --privileged. In fact, due to the network namespaces in use in Docker, this command does not let the container modify the host’s networking (unless you use --net=host, which I do not recommend).

The --sysctl parameter was the result of a lot of banging my head against the wall. Apparently Docker tries to disable IPv6 in the container by default. Annoying.

Configuration of the router container(s)

The idea is that the router node (or more than one, if you want redundancy) will be the only ones to have an open incoming port. Although the normal Yggdrasil case of directly detecting peers in a broadcast domain is more convenient and more robust, this can work pretty well too.

You can, of course, generate a template yggdrasil.conf with yggdrasil -genconf like usual. Some things to note for this one (a sketch pulling these together follows the list):

  • You’ll want to change Listen to something like Listen: ["tls://[::]:12345"] where 12345 is the port number you’ll be listening on.
  • You’ll want to disable the MulticastInterfaces entirely by just setting it to [] since it doesn’t work anyway.
  • If you expose the port to the Internet, you’ll certainly want to firewall it to only authorized peers. Setting AllowedPublicKeys is another useful step.
  • If you have more than one router container on a host, each of them will both Listen and act as a client to the others. See below.
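
Putting those notes together, a router-side yggdrasil.conf might look roughly like the following. This is only a sketch: 12345 is the example port from above, the key is a placeholder, and the remaining settings generated by yggdrasil -genconf (including keys) should be kept as-is.

{
  # Accept incoming peerings from the other containers on this host
  # (and, if exposed and firewalled, from router containers elsewhere).
  Listen: ["tls://[::]:12345"]

  # Multicast peer discovery does not work inside Docker, so disable it.
  MulticastInterfaces: []

  # Only accept peers whose public keys you have explicitly authorized.
  AllowedPublicKeys: ["<peer public key>"]

  # A stable interface name makes the firewalling described below easier.
  IfName: ygg0
}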

Configuration of the non-router nodes

Again, you can start with a simple configuration. Some notes here (again, a sketch follows the list):

  • You’ll want to set Peers to something like Peers: ["tls://routernode:12345"] where routernode is the Docker hostname of the router container, and 12345 is its port number as defined above. If you have more than one local router container, you can simply list them all here. Yggdrasil will then fail over nicely if any one of them go down.
  • Listen should be empty.
  • As above, MulticastInterfaces should be empty.
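
A similarly minimal sketch for a non-router container, with the same caveats (routernode and 12345 are the placeholder values used above):

{
  # Talk to the local router container(s); Yggdrasil fails over between them.
  Peers: ["tls://routernode:12345"]

  # Non-router nodes neither listen for incoming peerings nor multicast.
  Listen: []
  MulticastInterfaces: []
}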

Using the interfaces

At this point, you should be able to ping6 between your containers. If you have multiple hosts running Docker, you can simply set up the router nodes on each to connect to each other. Now you have direct, secure, container-to-container communication that is host-agnostic! You can also set up Yggdrasil on a bare metal server or VM using standard procedures and everything will just talk nicely!

Security notes

Yggdrasil’s mesh is aggressively greedy. It will peer with any node it can find (unless told otherwise) and will find a route to anywhere it can. There are two main ways to make sure your internal comms stay private: by restricting who can talk to your mesh, and by firewalling the Yggdrasil interface. Both can be used, and they can be used simultaneously.

By disabling multicast discovery, you eliminate the chance for random machines on the LAN to join the mesh. By making sure that you firewall off (outside of Yggdrasil) who can connect to a Yggdrasil node with a listening port, you can authorize only your own machines. And, by setting AllowedPublicKeys on the nodes with listening ports, you can authenticate the Yggdrasil peers. Note that part of the benefit of the Yggdrasil mesh is normally that you don’t have to propagate a configuration change to every participatory node – that’s a nice thing in general!

You can also run a firewall inside your container (I like firehol for this purpose) and aggressively firewall the IPs that are allowed to connect via the Yggdrasil interface. I like to set a stable interface name like ygg0 in yggdrasil.conf, and then it becomes pretty easy to firewall the services. The Docker parameters that allow Yggdrasil to run are also sufficient to run firehol.

Naming Yggdrasil peers

You probably don’t want to hard-code Yggdrasil IPs all over the place. There are a few solutions:

  • You could run an internal DNS service
  • You can do a bit of scripting around Docker’s --add-host command to add things to /etc/hosts

Other hints & conclusion

Here are some other helpful use cases:

  • If you are migrating between hosts, you could leave your reverse proxy up at both hosts, both pointing to the target containers over Yggdrasil. The targets will be automatically found from both sides of the migration while you wait for DNS caches to update and such.
  • This can make services integrate with local networks a lot more painlessly than they might otherwise.

This is just an idea. The point of Yggdrasil is expanding our ideas of what we can do with a network, so here’s one such expansion. Have fun!


Note: This post also has a permanent home on my website, where it may be periodically updated.

02 February, 2023 04:18AM by John Goerzen

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RInside 0.2.18 on CRAN: Maintenance

A new release 0.2.18 of RInside arrived on CRAN and in Debian today. This is the first release in ten months since the 0.2.17 release. RInside provides a set of convenience classes which facilitate embedding of R inside of C++ applications and programs, using the classes and functions provided by Rcpp.

This release brings a contributed change to how the internal REPL is called: Dominick found the current form more reliable when embedding R on Windows. We also updated a few other things around the package.

The list of changes since the last release:

Changes in RInside version 0.2.18 (2023-02-01)

  • The random number initialization was updated as in R.

  • The main REPL is now running via 'run_Rmainloop()'.

  • Small routine update to package and continuous integration.

My CRANberries also provides a short report with changes from the previous release. More information is on the RInside page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page, or to issues tickets at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

02 February, 2023 12:17AM

Valhalla's Things

How To Verify Debian's ARM Installer Images

Posted on February 2, 2023

Thanks to Vagrant on the debian-arm mailing list I’ve found that there is a chain of verifiability for the images usually used to install Debian on ARM devices.

It’s not trivial, so I’m writing it down for future reference when I’ll need it again.

  1. Download the images from https://ftp.debian.org/debian/dists/bullseye/main/installer-armhf/current/images/ (choose either hd-media or netboot, then SD-card-images and download the firmware.* file for your board as well as partition.img.gz).

  2. Download the checksums file https://ftp.debian.org/debian/dists/bullseye/main/installer-armhf/current/images/SHA256SUMS

  3. Download the Release file from https://ftp.debian.org/debian/dists/bullseye/ ; for convenience the InRelease

  4. Verify the Release file:

    gpg --no-default-keyring \
        --keyring /usr/share/keyrings/debian-archive-bullseye-stable.gpg \
        --verify InRelease
  5. Verify the checksums file:

    awk '/installer-armhf\/current\/images\/SHA256SUMS/ {print $1 "  SHA256SUMS"}' InRelease | tail -n 1 | sha256sum -c

    (I know, I probably can use awk instead of that tail, but it’s getting late and I want to publish this).

  6. Verify the actual files, for hd-media:

    grep hd-media SHA256SUMS \
    | sed 's#hd-media/SD-card-images/##' \
    | sha256sum -c \
    | grep -v "No such file or directory" \
    | grep -v "FAILED open or read" 2> /dev/null

    and for netboot:

    grep netboot SHA256SUMS \
    | sed 's#netboot/SD-card-images/##' \
    | sha256sum -c \
    | grep -v "No such file or directory" \
    | grep -v "FAILED open or read" 2> /dev/null

    and check that all of the files you wanted are there with an OK; of course change hd-media with netboot as needed.

And I fully agree that fewer steps would be nice, but this is definitely better than nothing!

02 February, 2023 12:00AM

February 01, 2023

Simon Josefsson

Apt Archive Transparency: debdistdiff & apt-canary

I’ve always found the operation of apt software package repositories to be a mystery. There appears to be a lack of transparency into which people have access to important apt package repositories out there, how the automatic non-human update mechanism is implemented, and what changes are published. I’m thinking of big distributions like Ubuntu and Debian, but also the free GNU/Linux distributions like Trisquel and PureOS that are derived from the more well-known distributions.

As far as I can tell, anyone who has the OpenPGP private key trusted by a apt-based GNU/Linux distribution can sign a modified Release/InRelease file and if my machine somehow downloads that version of the release file, my machine could be made to download and install packages that the distribution didn’t intend me to install. Further, it seems that anyone who has access to the main HTTP server, or any of its mirrors, or is anywhere on the network between them and my machine (when plaintext HTTP is used), can either stall security updates on my machine (on a per-IP basis), or use it to send my machine (again, on a per-IP basis to avoid detection) a modified Release/InRelease file if they had been able to obtain the private signing key for the archive. These are mighty powers that warrant overview.

I’ve always put off learning about the processes to protect the apt infrastructure, mentally filing it under “so many people rely on this infrastructure that enough people are likely to have invested time reviewing and improving these processes”. Simultaneously, I’ve always followed the more free-software friendly Debian-derived distributions such as gNewSense and have run it on some machines. I’ve never put them into serious production use, because the trust issues with their apt package repositories have been a big question mark for me. The “enough people” part of my rationale for deferring this is not convincing. Even the simple question of “is someone updating the apt repository” is not easy to answer on a running gNewSense system. At some point in time the gNewSense cron job to pull in security updates from Debian must have stopped working, and I wouldn’t have had any good mechanism to notice that. Most likely it happened without any public announcement. I’ve recently switched to Trisquel on production machines, and these questions have come back to haunt me.

The situation is unsatisfying and I looked into what could be done to improve it. I could try to understand who the key people involved in each project are, and may even learn what hardware component is used, or what software is used to update and sign apt repositories. Is the server running non-free software? Proprietary BIOS or NIC firmware? Are the GnuPG private keys on disk? Smartcard? TPM? YubiKey? HSM? Where is the server co-located, and who has access to it? I tried to do a bit of this, and discovered things like Trisquel having a DSA1024 key in its default apt trust store (although for fairness, it seems that apt by default does not trust such signatures). However, I’m not certain understanding this more would scale to securing my machines against attacks on this infrastructure. Even people with the best intentions, and state-of-the-art hardware and software, will have problems.

To increase my trust in Trisquel I set out to understand how it worked. To make it easier to sort out which parts of the Trisquel archive were the interesting ones to audit further, I created debdistdiff to produce human-readable text output comparing one apt archive with another. There is a GitLab CI/CD cron job that runs this every day, producing output comparing Trisquel vs Ubuntu and PureOS vs Debian. Working with these output files has taught me more about how the process works, and I even stumbled upon something that is likely a bug: Trisquel aramo was imported from Ubuntu jammy while it contained a couple of packages (e.g., gcc-8, python3.9) that were removed for the final Ubuntu jammy release.

After working on auditing the Trisquel archive manually that way, I realized that whatever I could tell from comparing Trisquel with Ubuntu would only be based on a current snapshot of the archives. Tomorrow it may look completely different. What felt necessary was to audit the differences of the Trisquel archive continuously. I was quite happy to have developed debdistdiff for one purpose (comparing two different archives like Trisquel and Ubuntu) and to discover that the tool could be used for another purpose (comparing the Trisquel archive at two different points in time). At this time I realized that I needed a log of all the apt archive metadata to be able to produce an audit log of the archive’s differences over time. I created manually curated git repositories with the Release/InRelease and the Packages files for each architecture/component of the well-known distributions Trisquel, Ubuntu, Debian and PureOS. Eventually I wrote scripts to automate this, which are now published in the debdistget project.
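
As a rough illustration of the idea (this is not the actual debdistget code; the repository path and the Debian bullseye/amd64 choice are just examples), a snapshot run could look something like this:

    #!/bin/sh
    # Record the current apt archive metadata in a git repository, so that
    # later changes show up as commits and can be diffed/audited over time.
    set -e
    cd ~/apt-metadata/debian    # an existing git repository (example path)
    wget -q -O InRelease https://ftp.debian.org/debian/dists/bullseye/InRelease
    wget -q -O - https://ftp.debian.org/debian/dists/bullseye/main/binary-amd64/Packages.gz \
        | gunzip > Packages
    git add InRelease Packages
    git commit -m "snapshot $(date -u +%Y-%m-%dT%H:%M:%SZ)" || true    # no-op if nothing changed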

At this point, one of the early questions about per-IP substitution of Release files was still lingering in my mind. However, with the tooling I now had available, coming up with a way to resolve this was simple! Merely have apt compute a SHA256 checksum of the just-downloaded InRelease file, and see if my git repository had the same file. At this point I started reading the Apt source code, and now I had more doubts about the security of my systems than I ever had before. Oh boy, the name Apt has never before felt more… Apt?! Oh well, we must leave some exercises for the students. Eventually I realized I wanted to touch as little of the apt code base as possible, and noticed that the SigVerify::CopyAndVerify function called ExecGPGV, which called apt-key verify, which called GnuPG’s gpgv. By setting Apt::Key::gpgvcommand I could get apt-key verify to call a tool other than gpgv. See where I’m going? I thought wrapping this up would now be trivial, but for some reason the hash checksum I computed locally never matched what was on my server. I gave up and started working on other things instead.

Today I came back to this idea, and started to debug exactly how the local files that I got from apt looked, and how they differed from what I had in my git repositories, which came straight from the apt archives. Eventually I traced this back to SplitClearSignedFile, which takes an InRelease file and splits it into two files, probably mimicking the (old?) way of distributing both Release and Release.gpg. So the clearsigned InRelease file is split into one cleartext file (similar to the Release file) and one OpenPGP signature file (similar to the Release.gpg file). But why didn’t the cleartext variant of the InRelease file hash to the same value as the Release file? Sadly, they differ by the final newline.
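
To see why a single trailing newline matters (a plain demonstration, not apt code), hash a Release file with and without its final byte and compare the sums:

    sha256sum Release                  # hash of the file as downloaded
    head -c -1 Release | sha256sum     # hash with the final newline stripped (GNU head)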

Having solved this technicality, wrapping the pieces up was easy, and I came up with a project, apt-canary, that provides a script apt-canary-gpgv which verifies the local apt release files against something I call an “apt canary witness” file stored at a URL somewhere.
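
Wiring this in presumably goes through the Apt::Key::gpgvcommand option mentioned above; something along these lines (the config file name and the install path of apt-canary-gpgv are my assumptions):

    # Point apt-key verify at the canary-aware gpgv wrapper
    # (file name and wrapper path are assumptions, not taken from the post):
    echo 'Apt::Key::gpgvcommand "/usr/bin/apt-canary-gpgv";' \
        | sudo tee /etc/apt/apt.conf.d/99apt-canary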

I’m now running apt-canary on my Trisquel aramo laptop, a Trisquel nabia server, and a Talos II ppc64el Debian machine. This means I have addressed the per-IP substitution worries (or at least made such attacks less likely to succeed, since an attacker would have to send the same malicious release files to both GitLab and my system), and it gives me an audit log of all release files that I actually use for installing and downloading packages.

What do you think? There is clearly a lot of work left and many improvements to be made. This is a proof-of-concept implementation of an idea, but instead of refining it to perfection and delaying feedback, I wanted to publish this to get others to think about the problems and the various ways to resolve them.

Btw, I’m going to be at FOSDEM’23 this weekend, helping to manage the Security Devroom. Catch me if you want to chat about this or other things. Happy Hacking!

01 February, 2023 08:56PM by simon

Julian Andres Klode

Ubuntu 2022v1 secure boot key rotation and friends

This is the story of the currently progressing changes to secure boot on Ubuntu and the history of how we got to where we are.

taking a step back: how does secure boot on Ubuntu work?

Booting on Ubuntu involves three components after the firmware:

  1. shim
  2. grub
  3. linux

Each of these is a PE binary signed with a key. The shim is signed by Microsoft’s 3rd party key and embeds a self-signed Canonical CA certificate, and optionally a vendor dbx (a list of revoked certificates or binaries). grub and linux (and fwupd) are then signed by a certificate issued by that CA.

In Ubuntu’s case, the CA certificate is sharded: Multiple people each have a part of the key and they need to meet to be able to combine it and sign things, such as new code signing certificates.

BootHole

When BootHole happened in 2020, travel was suspended and we hence could not rotate to a new signing certificate. So when it came to updating our shim for the CVEs, we had to revoke all previously signed kernels, grubs, shims, fwupds by their hashes.

This generated a very large vendor dbx, which caused lots of issues as shim exported it to a UEFI variable, and not everyone had enough space for such a large variable. Sigh.

We decided we want to rotate our signing key next time.

This was also when upstream added SBAT metadata to shim and grub. This gives a simple versioning scheme for security updates and easy revocation using a simple EFI variable that shim writes to and reads from.

Spring 2022 CVEs

We were still not ready for travel in 2021, but during BootHole we had developed the SBAT mechanism, so one could revoke a grub or shim by setting a single EFI variable.

We actually missed rotating the shim this cycle, as a new vulnerability was reported immediately after it, and we decided to hold the shim back.

2022 key rotation and the fall CVEs

This caused some problems when the 2nd CVE round came, as we did not have a shim with the latest SBAT level, and neither did a lot of others, so we ended up deciding upstream to not bump the shim SBAT requirements just yet. Sigh.

Anyway, in October we were meeting again for the first time at a Canonical sprint, and the shardholders got together and created three new signing keys: 2022v1, 2022v2, and 2022v3. It took us until January before they were installed into the signing service and PPAs were set up to sign with them.

We also submitted a shim 15.7 with the old keys revoked which came back at around the same time.

Now we were in a hurry. The 22.04.2 point release was scheduled for around the middle of February, and we had nothing signed with the new keys yet, but our new shim, which we needed for the point release (so that the point release media remains bootable after the next round of CVEs), required the new keys.

So how do we ensure that users have kernels, grubs, and fwupd signed with the new key before we install the new shim?

upgrade ordering

grub and fwupd are simple cases: For grub, we depend on the new version. We decided to backport grub 2.06 to all releases (which moved focal and bionic up from 2.04), and kept the versioning of the -signed packages the same across all releases, so we were able to simply bump the Depends for grub to specify the new minimum version. For fwupd-efi, we added Breaks.

(Actually, we also had a backport of the CVEs for 2.04 based grub, and we did publish that for 20.04 signed with the old keys before backporting 2.06 to it.)

Kernels are a different story: there are about 60 kernels out there. My initial idea was that we could just add Breaks for all of them. So for our meta package linux-image-generic, which depends on linux-image-$(uname -r)-generic, we’d simply add Breaks: linux-image-generic (<< 5.19.0-31) and then adjust those Breaks for each series. This would have been super annoying, but ultimately I figured it would be the safest option. This, however, caused concern, because apt might decide to remove the kernel metapackage.

I explored checking the kernels at runtime and aborting if we don’t have a trusted kernel in preinst. This ensures that if you try to upgrade shim without having a kernel, it would fail to install. But this ultimately has a couple of issues:

  1. It aborts the entire transaction at that point, so users will be unable to run apt upgrade until they have a recent kernel.
  2. We cannot even guarantee that a kernel would be unpacked first. So even if you got a new kernel, apt/dpkg might attempt to unpack it first and then the preinst would fail because no kernel is present yet.

Ultimately we believed the danger to be too large given that no kernels had yet been released to users. If we had kernels pushed out for 1-2 months already, this would have been a viable choice.

In the end, I modified the shim packaging to install both the latest shim and the previous one, with an update-alternatives alternative to select between the two:

In its post-installation maintainer script, shim-signed checks whether all kernels with a version greater than or equal to the running one are not revoked, and if so, it will set up the latest alternative with priority 100 and the previous one with a priority of 50. If one or more of those kernels was signed with a revoked key, it will swap the priorities around, so that the previous version is preferred.

Now this is fairly static, and we do want you to switch to the latest shim eventually, so I also added hooks to the kernel install to trigger the shim-signed postinst script when a new kernel is being installed. It will then update the alternatives based on the current set of kernels, and if it now points to the latest shim, reinstall shim and grub to the ESP.

Ultimately this means that once you install your 2nd non-revoked kernel, or you install a non-revoked kernel and then reconfigure shim or the kernel, you will get the latest shim. When you install your first non-revoked kernel, your currently booted kernel is still revoked, so it’s not upgraded immediately. This has a benefit in that you will most likely have two kernels you can boot without disabling secure boot.
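
Schematically, the alternatives handling amounts to something like the following (a simplified sketch, not the actual shim-signed maintainer script; all_kernels_trusted stands in for the real revocation check, and the amd64 file names match the update-alternatives output shown further down):

#!/bin/sh
# Prefer the latest shim only if every kernel >= the running one is still trusted;
# otherwise prefer the previous shim so the system stays bootable.
if all_kernels_trusted; then
    latest_prio=100 previous_prio=50
else
    latest_prio=50 previous_prio=100
fi
update-alternatives --install /usr/lib/shim/shimx64.efi.signed shimx64.efi.signed \
    /usr/lib/shim/shimx64.efi.signed.latest "$latest_prio"
update-alternatives --install /usr/lib/shim/shimx64.efi.signed shimx64.efi.signed \
    /usr/lib/shim/shimx64.efi.signed.previous "$previous_prio"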

regressions

Of course, the first version I uploaded still had some hardcoded “shimx64” in the scripts and so failed to install on arm64, where “shimaa64” is used. And if that were not enough, I also forgot to include support for gzip-compressed kernels there. Sigh, I need better testing infrastructure so I can easily run arm64 tests as well (I only tested the actual booting there, not the scripts).

shim-signed migrated to the release pocket in lunar fairly quickly, but this caused images to stop working: the new shim was installed into images, but no kernel was available yet, so we had to demote it to proposed and block migration. Despite all the work done for end users, we still need to be careful about how we roll this out for image building.

another grub update for OOM issues

We had two grubs to release: First there was the security update for the recent set of CVEs, then there also was an OOM issue for large initrds which was blocking critical OEM work.

We fixed the OOM issue by cherry-picking all 2.12 memory management patches, as well as the Red Hat patches to the loader that we take from there. This ended up being a fairly large patch set, and I was hesitant to tie the security update to it, so I pushed the security update everywhere first, and then pushed the OOM fixes this week.

With the OOM patches, you should be able to boot initrds of between 400 MB and 1 GB; it also depends on the memory layout of your machine, your screen resolution, and background images. The OEM team had success testing 400 MB on real hardware, and I tested up to (I think) 1.2 GB in qemu, at which point I ran out of FAT space and stopped going higher :D

other features in this round

  • Intel TDX support in grub and shim
  • Kernels are now allocated as CODE rather than DATA, as per the upstream memory-management changes; this might fix boot on the X13s

am I using this yet?

The new signing keys are used in:

  • shim-signed 1.54 on 22.10+, 1.51.3 on 22.04, 1.40.9 on 20.04, 1.37~18.04.13 on 18.04
  • grub2-signed 1.187.2~ or newer (binary packages grub-efi-amd64-signed or grub-efi-arm64-signed), 1.192 on 23.04.
  • fwupd-signed 1.51~ or newer
  • various linux updates. Check apt changelog linux-image-unsigned-$(uname -r) and look for “Revoke & rotate to new signing key (LP: #2002812)” to see whether your kernel is signed with the new key.

If you were able to install shim-signed, your grub and fwupd-efi will have the correct version, as that is ensured by packaging. However, your shim may still point to the old one. To check which shim will be used by grub-install, you can check the status of the shimx64.efi.signed or (on arm64) shimaa64.efi.signed alternative. The best link needs to point to the file ending in .latest:

$ update-alternatives --display shimx64.efi.signed
shimx64.efi.signed - auto mode
  link best version is /usr/lib/shim/shimx64.efi.signed.latest
  link currently points to /usr/lib/shim/shimx64.efi.signed.latest
  link shimx64.efi.signed is /usr/lib/shim/shimx64.efi.signed
/usr/lib/shim/shimx64.efi.signed.latest - priority 100
/usr/lib/shim/shimx64.efi.signed.previous - priority 50

If it does not, but you have installed a new kernel compatible with the new shim, you can switch immediately to the new shim after rebooting into the kernel by running dpkg-reconfigure shim-signed. You’ll see in the output if the shim was updated, or you can check the output of update-alternatives as you did above after the reconfiguration has finished.

For the out of memory issues in grub, you need grub2-signed 1.187.3~ (same binaries as above).

how do I test this (while it’s in proposed)?

  1. upgrade your kernel to proposed and reboot into that
  2. upgrade your grub-efi-amd64-signed, shim-signed, fwupd-signed to proposed.

If you already upgraded your shim before your kernel, don’t worry:

  1. upgrade your kernel and reboot
  2. run dpkg-reconfigure shim-signed

And you’ll be all good to go.

deep dive: uploading signed boot assets to Ubuntu

For each signed boot asset, we build one version in the latest stable release and the development release. We then binary copy the built binaries from the latest stable release to older stable releases. This process ensures two things: We know the next stable release is able to build the assets and we also minimize the number of signed assets.

OK, I lied. For shim, we actually do not build in the development release but copy the binaries upward from the latest stable, as each shim needs to go through external signing.

The entire workflow looks something like this:

  1. Upload the unsigned package to one of the “build” PPAs

  2. Upload the signed package to the same PPA

  3. For stable release uploads:

    • Copy the unsigned package back across all stable releases in the PPA
    • Upload the signed package for stable releases to the same PPA with ~<release>.1 appended to the version
  4. Submit a request to canonical-signing-jobs to sign the uploads.

    The signing job helper copies the binary -unsigned packages to the primary-2022v1 PPA, where they are signed, creating a signing tarball. It then copies the source package for the -signed package to the same PPA, which downloads the signing tarball during build and places the signed assets into the -signed deb.

    Resulting binaries will be placed into the proposed PPA: https://launchpad.net/~ubuntu-uefi-team/+archive/ubuntu/proposed

  5. Review the binaries themselves

  6. Unembargo and binary copy the binaries from the proposed PPA to the proposed-public PPA: https://launchpad.net/~ubuntu-uefi-team/+archive/ubuntu/proposed-public.

    This step is not strictly necessary, but it enables tools like sru-review to work, as they cannot access the packages from the normal private “proposed” PPA.

  7. Binary copy from proposed-public to the proposed queue(s) in the primary archive

Lots of steps!

WIP

As of writing, only the grub updates have been released; the other updates are still being verified in proposed. An update for fwupd in bionic will be issued at a later point, removing the EFI bits from the fwupd 1.2 packaging and using the separate fwupd-efi project instead, like later release series do.

01 February, 2023 01:40PM

hackergotchi for Junichi Uekawa

Junichi Uekawa

February.

February. Working through crosvm dependencies, I found that cargo-debstatus does not dump all dependencies; it seems to skip over the optional ones. I haven’t tracked down what is going on yet, but at least it seems that crosvm does not have all its dependencies available and can’t be built yet.

01 February, 2023 12:19AM by Junichi Uekawa

Paul Wise

FLOSS Activities January 2023

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration

  • Debian BTS: unarchive/reopen/triage bugs for reintroduced packages cycle/pygopherd and ask about guile-2.2 reintroduction bugs
  • Debian IRC: fix topic/info of obsolete channel
  • Debian wiki: unblock IP addresses, approve accounts, approve domains.

Communication

  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

The celery, docutils, pyemd work was sponsored. All other work was done on a volunteer basis.

01 February, 2023 12:02AM