November 08, 2022

hackergotchi for Aurelien Jarno

Aurelien Jarno

riscv64 porterbox

For quite some time, many people have asked for a riscv64 porterbox. Now we have one: debian-riscv64-porterbox-01.debian.net.

A big thanks to SiFive for providing the HiFive Unmatched board and OSUOSL for assembling the hardware and hosting it.

08 November, 2022 10:52PM by aurel32

hackergotchi for Daniel Lange

Daniel Lange

Your software stores are a bad idea

There is significant effort involved in ensuring that your apt or dnf commands always have a consistent set of servers to talk to.

That's why running separate "software stores" is a bad idea:

Snap software store down

That way, more admins need to learn how to run high-availability services for dubious business opportunities to "later" monetize. Services that nobody cares to pay for, and thus opportunities that never materialize. But every company wants to find that out for itself. Because if Apple could do it, why shouldn't Canonical be able to? $$$!1!!

So, I can't update Firefox on Ubuntu 22.04 right now.

At least there is https://status.snapcraft.io/:

Snap incident / monitoring status page

So I can check back tomorrow if I can update my web browser ...

Update

09.11.2022 12:15 CET

The Snapcraft distribution system seems quite flaky, this is the downtime log:

Log of (frequent) Snapcraft outages

Bonus points for the bad client-side implementation:


dl@laptop:~$ sudo snap refresh
All snaps up to date.

# ^this is a lie, just close Firefox and ...

dl@laptop:~$ sudo snap refresh
firefox 106.0.5-1 from Mozilla** refreshed
 

08 November, 2022 08:30PM by Daniel Lange

Antoine Beaupré

Using the bell as modern notification

Computer terminals have traditionally had an actual bell that would ring when a certain control character (the bell character, typically control-g, or \a as a C escape sequence) came in on the input stream.

That feature actually predates computers altogether, and was present in Baudot code, "an early character encoding for telegraphy invented by Émile Baudot in the 1870s", itself superseding Morse code.

Modern terminal emulators have, of course, kept that feature: if you run this command in a terminal right now:

printf '\a'

... you may hear some annoying beep. Or not. It actually depends on a lot of factors, including which terminal emulator you're using, how it's configured, whether you have headphones on, or speakers connected, or, if you're really old school, a PC speaker, even.

I have this theory that it typically does the exact opposite of what you want, regardless of how you have configured it. That is, if you want it to make noises, it won't, and if you want it to stay silent, it will make brutal, annoying noises at the moments you least expect. I suspect this is a law of computer science, but I'm too lazy to come up with a formal definition.

Yet something can be done with this.

Making the bell useful and silent

Many terminal emulators have a feature to silence the bell somehow. It can be turned into a "visual bell", which basically flashes the screen when a bell arrives. Or it can be disabled entirely, so the bell is just ignored.

What I did instead is turn the bell into an "urgency hint" (part of the ICCCM standard). In xterm, this is done with this X resource entry (typically in ~/.Xresources):

XTerm*bellIsUrgent:  true
XTerm*visualBell: false

Interestingly, this doesn't clearly say "bell is muted", but that's effectively what it does. Or maybe it works because I have muted "System Sounds" in PulseAudio. Who knows. I do have this in my startup scripts, though:

xset b off

... which, according to the xset(1) manpage, means

If the dash or 'off' are given, the bell will be turned off.

Interestingly, you have the option of setting the bell volume, "pitch, in hertz, and [...] duration in milliseconds. Note that not all hardware can vary the bell characteristics." In any case, I think that's the magic trick to turn the darn thing off.

Now this should send urgency hints to your window manager:

sleep 3 ; printf '\a'

Try it: run the command, switch to another desktop, then wait 3 seconds. You should see the previous desktop show up in red or something.

In the i3 window manager I am currently using, this is the default, although I did set the colors (client.urgent and urgent_workspace in bar.colors).
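For reference, here is a minimal sketch of those i3 settings; the color values are illustrative, not taken from my actual config:

```
# ~/.config/i3/config
# border / background / text / indicator for windows with an urgency hint
client.urgent  #2f343a #900000 #ffffff #900000

bar {
    colors {
        # workspace button colors when a window there sets an urgency hint
        urgent_workspace  #2f343a #900000 #ffffff
    }
}
```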

Other window managers or desktop environments may require different configurations.

Sending a bell...

Now that, on its own, will only be useful when something sends a bell. One place I found a trick like this, long ago, is this post (dead link, archived), which has various instructions for different tools. I'll reproduce some of them here since the original site is dead, but credit goes to the Netbuz blog.

(Note that the blog post also features an Awesome WM configuration for urgency hints.)

Mutt

set beep=yes
set beep_new=yes

Irssi

/set beep_when_window_active ON
/set beep_when_away ON
/set beep_msg_level MSGS DCC DCCMSGS HILIGHT

It was also recommending this setting, but it appears to be deprecated and gives a warning in modern irssi versions:

/set bell_beeps ON

GNU Screen

This is an important piece of the puzzle, because by default, terminal multiplexers have their own opinion of what to do with the bell as well:

# disabled: we want to propagate bell to clients, which should handle
# it in their own terminal settings. this `vbell off` is also the
# upstream and tmux's default
#
# see also: http://netbuz.org/blog/2011/11/x-bells-and-urgency-hints/
vbell off
# propagate bell from other windows up to the terminal emulator as well
bell_msg 'Bell in window %n^G'

The bell_msg bit is an extra from me: it uses the bell message that pops up when screen detects a bell in another window to resend the bell control character up to the running terminal.

This makes it so a bell in any multiplexed window will also propagate to the parent terminal, which is not the default.

Tmux

Untested, but this is apparently how you do it:

# listen to alerts from all windows
set -g bell-action any
# notice bell in windows
set -g monitor-bell on
# only propagate bell, don't warn user, as it hangs tmux for a second
set -g visual-bell off
# send bell *and* notify when activity (if monitor-activity)
set -g visual-activity both

Note that this config goes beyond what we have in GNU screen, in that inactivity or activity will trigger a bell as well. This might be useful for cases where you don't have the prompt hack (below), but it could also be very noisy. It will only generate noise when monitor-activity is enabled, though.

bash and shell prompts

Now the icing on the cake is to actually send the bell when a command completes. This is what I use it for the most.

I was previously using undistract-me for this. It was nice: it would send me a desktop notification when a command had been running for more than a certain amount of time, if the window was unfocused. But configuring it was a nightmare: because it uses a complex PROMPT_COMMAND in bash, it would conflict with my (already existing and way too complex) bash prompt and lead to odd behaviors. It would also not work for remote commands, of course, as it wouldn't have access to my local D-Bus to send notifications (thankfully!).

So instead, what I do now is systematically print a bell whenever a command terminates, in all my shells. I have this in my /root/.bashrc on all my servers, deployed via Puppet:

PROMPT_COMMAND='printf "\a"'

(I have the equivalent in my own .bashrc, although that thing is much more complex, featuring multi-command pipeline exit status, colors, terminal title setting, and more, which should probably warrant its own blog post.)
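If you already have a PROMPT_COMMAND, a sketch that appends the bell instead of clobbering the existing value (my own variation, not from the original setup):

```shell
# keep whatever PROMPT_COMMAND was already set, then ring the bell;
# the ${VAR:+...} expansion inserts the separator only if VAR is non-empty
PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND; }printf '\\a'"
```

This composes with prompt frameworks that set their own PROMPT_COMMAND, as long as this line runs after them.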

This sounds a little bonkers and really noisy, but remember that I turned off the audible bell. And urgency hints are going to show up only if the window is unfocused. So it's actually really nice and not really distracting.

Or, to reuse the undistract-me concept: it allows me to not lose focus too much when I'm waiting for a long process to complete.

That idea actually came from ahf, so kudos to him on that nice hack.

Caveats

This is not set up on all the machines I administer for work, that said. I'm afraid it would be too disruptive for people who do not have this configuration. This means that, most of the time, I don't get notifications for commands that run for a long time on remote servers. That said, I can simply run a command with a trailing printf '\a' to get a notification.
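A tiny wrapper makes that pattern reusable. This is my own sketch (the function name is invented, not from any of the tools above): it runs any command, rings the bell when it finishes, and preserves the command's exit status:

```shell
# bell_run: run a command, then print the bell character so an unfocused
# terminal window gets an urgency hint when the command finishes
bell_run() {
    "$@"
    rc=$?          # save the command's exit status
    printf '\a'    # ring the (muted, urgency-hint-generating) bell
    return $rc
}

# usage: bell_run make -j4
```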

This might not work in Wayland, your window manager, your desktop environment, your Linux console, or your telegraphy session.

08 November, 2022 04:48PM

November 06, 2022

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppCCTZ 0.2.12 on CRAN: Maintenance

A new release 0.2.12 of RcppCCTZ is now on CRAN.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries: one for dealing with civil time (human-readable dates and times), and one for converting between absolute and civil times via time zones. And while CCTZ is made by Google(rs), it is not an official Google product. The RcppCCTZ page has a few usage examples and details. This package was the first CRAN package to use CCTZ; by now several other packages (four, the last time we counted) include its sources too. Not ideal, but beyond our control.

This version adds support for NA values when parsing, and updates the GitHub Actions.

Changes in version 0.2.12 (2022-11-06)

  • Support NA values in numerical or character input

  • GitHub Actions were updated to checkout version 3.

Courtesy of my CRANberries, there is also a diffstat to the previous version. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

06 November, 2022 10:22PM

hackergotchi for Marco d'Itri

Marco d'Itri

Mastodon is not ready for me

I just did the usual web search again, and I have verified that Mastodon still does not support managing multiple domains on the same instance, and that there is still no way to migrate an account to a different instance without basically starting from scratch.

As much as I like the idea of a federated social network, open standards and so on, I do not think that it would be wise for me to spend time developing a social network identity on somebody else's instance, which could disappear at any time.

I have managed my own email server since the '90s, but I do not feel that the system administration effort required to maintain a private Mastodon instance would be justified at this point: there is not even a Debian package! Mastodon either needs to become much simpler to maintain or become much more socially important, and so far it is neither.

06 November, 2022 05:15PM

Arturo Borrero González

Home network refresh: 10G and IPv6

Post header

A few days ago, my home network got a refresh that resulted in the enablement of some next-generation technologies for me and my family. Well, next-generation or current-generation, depending on your point of view. By the ISP standards of Spain (my country), what I’ll describe next is literally the latest and best you can get.

The post title spoiled it already: I now have a 10G internet uplink and native IPv6, since I changed my ISP to https://digimobil.es.

My story began a few months ago, when a series of fiber deployments started in my neighborhood by a then mostly unknown ISP (digimobil). The workers were deploying the fiber inside the street sewers, and I noticed that they were surrounded by advertisements promoting the fastest FTTH deployment in Spain. Indeed, their website was promoting 1G and 10G fiber, so a few days later I asked the workers when that would be available for subscription. They told me to wait just a couple of months, and the wait ended this week.

I called the ISP, and a marketing person gave me a lot of unnecessary information about how good the service I was purchasing was. I asked about IPv6 availability, but that person had no idea. They called me the next day to confirm that the home router they were installing would support both IPv6 and Wi-Fi 6. I was suspicious, since nobody in the marketing department seemed to know anything about either of the two technologies, but I decided to proceed anyway. Just 24 hours after calling them, a technician came to my house, and 45 minutes later the setup was ready.

The new home router was a ZTE ZXHN F8648P unit. By the way, it has Linux inside, but I got no GPL copyright notice or anything. It has 1x10G and 4x1G Ethernet LAN ports. The optical speed tests that the technician ran gave between 8 Gbps and 9 Gbps in uplink speed, which seemed fair enough. Upon a quick search, there is apparently a community of folks online who already know how to get the most out of this router by unblocking the root account (sorry, in Spanish only) and using other tools.

When I plugged the RJ45 into my laptop, the magic happened: the interface got a native, public IPv6 address from the router. I immediately ran the now-classic IPv6 browser test at https://test-ipv6.com/. And here is the result:

IPv6 test

If you are curious, this was the IPv6 prefix whois information:

route6: 2a0c:5a80::/29
descr: Digi Spain Telecom S.L.U.
origin: AS57269

They were handing my router a prefix like 2a0c:5a80:2218:4a00::/56. I didn’t know whether the prefix was somehow static, dynamic, just for me, or anything else. I’ve been waiting for native IPv6 at home for years. In the past, I’ve had many ideas and projects for hosting network services at home leveraging IPv6. But when I finally got it, I didn’t know what to do next. I had a 7-month-old baby, and honestly I didn’t have the spare time to play a lot with the setup.

Actually, I had no need for such a fast network either. But my coworker Andrew convinced me: at €30/month, I didn’t have any reason not to buy it.

In fact, I didn’t have any 10G-enabled NIC at home. I had a few laptops with 2.5G ports, though, and that was enough to experience the new network speeds. Since this write-up was inspired by the now almost-legendary post by Michael Stapelberg, My upgrade to 25 Gbit/s Fiber To The Home, I contacted him, and he suggested running a few speed tests using the Ookla suite against his own server. Here are the results:

$ docker run --net host --rm -it docker.io/stapelberg/speedtest:latest -s 50092
[..]
     Server: Michael Stapelberg - Zurich (id = 50092)
        ISP: Digi Spain
    Latency:    34.29 ms   (0.20 ms jitter)
   Download:  2252.42 Mbps (data used: 3.4 GB )
     Upload:  2239.27 Mbps (data used: 2.8 GB )
Packet Loss:     0.0%
 Result URL: https://www.speedtest.net/result/c/cc8d6a78-c6f8-4f71-b554-a79812e10106
$ docker run --net host --rm -it docker.io/stapelberg/speedtest:latest -s 50092
[..]
     Server: Michael Stapelberg - Zurich (id = 50092)
        ISP: Digi Spain
    Latency:    34.05 ms   (0.21 ms jitter)
   Download:  2209.85 Mbps (data used: 3.2 GB )
     Upload:  2223.45 Mbps (data used: 2.9 GB )
Packet Loss:     0.0%
 Result URL: https://www.speedtest.net/result/c/32f9158e-fc1a-47e9-bd33-130e66c25417

This is over IPv6. Very satisfying.

Bonus point: when I called my former ISP to cancel the old subscription the conversation was like:

  • I want to cancel the service.
  • What’s the reason?
  • I got upgraded to 10G by another ISP
  • The speed is measured in MB, not G.
  • Ok, I got upgraded to 10.000 MB
  • That’s not possible.
  • Well …

I didn’t even bother mentioning IPv6.

Cheers!

06 November, 2022 09:40AM

Russ Allbery

Review: Matrix

Review: Matrix, by Lauren Groff

Publisher: Riverhead Books
Copyright: 2021
ISBN: 0-698-40513-7
Format: Kindle
Pages: 260

Marie is a royal bastardess, a product of rape no less, and entirely out of place in the court in Westminster, where she landed after being kicked off her mother's farm. She had run the farm since her mother's untimely death, but there was no way that her relatives would let her inherit. In court, Marie is too tall, too ugly, and too strange, raised by women who were said to carry the blood of the fairy Mélusine. Eleanor of Aquitaine's solution to her unwanted house guest is a Papal commission. Marie is to become the prioress of an abbey.

I am occasionally unpleasantly reminded of why I don't read very much literary fiction. It's immensely frustrating to read a book in which the author cares about entirely different things than the reader, and where the story beats all land wrong.

This is literary historical fiction set in the 12th century. Marie is Marie de France, author of the lais about courtly love that are famous primarily due to their position as early sources for the legends of King Arthur. The lais are written on-screen very early in this book, but they disappear without any meaningful impact on the story. Matrix is, instead, about Shaftesbury Abbey and what it becomes during Marie's time as prioress and then abbess, following the theory that Marie de France was Mary of Shaftesbury.

What I thought I was getting in this book, from numerous reviews and recommendations, was a story of unexpected competence: how a wild, unwanted child of seventeen lands at a dilapidated and starving abbey, entirely against her will, and then over the next sixty years transforms it into one of the richest abbeys in England. This does happen in this book, but Groff doesn't seem to care about the details of that transformation at all.

Instead, Matrix takes the mimetic fiction approach of detailed and precise description of a few characters, with all of their flaws and complexities, and with all of the narrative's attention turned to how they are feeling and what they are seeing. It is also deeply, fully committed to a Great Man (or in this case a Great Woman) view of history.

Marie is singular. The narrative follows her alone, she makes all the significant decisions, and the development of the abbey is determined by her apparently mystical visions. (In typical mimetic fashion, these are presented as real to her, and the novel takes no position on whether that reality is objective.) She builds spy networks, maneuvers through local and church politics, and runs the abbey like her personal kingdom. The tiny amount of this that is necessarily done by other people is attributed to Marie's ability to judge character. Other people's motives are simply steamrolled over and have no effect.

Maddeningly, essentially all of this happens off-screen, and Groff is completely uninterested in the details of how any of it is accomplished. Marie decides to do something, the narrative skips forward a year, and it has happened. She decides to build something, and then it's built. She decides to collect the rents she's due, the novel gestures vaguely at how she's intimidating, and then everyone is happily paying up. She builds spy networks; who cares how? She maneuvers through crises of local and church politics that are vaguely alluded to, through techniques that are apparently too uninteresting to bother the reader with.

Instead, the narrative focuses on two things: her deeply dysfunctional, parasocial relationship with Eleanor, and her tyrannical relationship with the other nuns. I suspect that Groff would strongly disagree with my characterization of both of those narratives, and that's the other half of my problem with this book.

Marie is obsessed with and in love with Eleanor, a completely impossible love to even talk about, and therefore turns to courtly love from afar as a model into which she can fit her feelings. While this is the setup for a tragedy, it's a good idea for a story. But what undermined it for me is that Marie's obsession seems to be largely physical (she constantly dwells on Eleanor's beauty), and Eleanor is absolutely horrible to her in every way: condescending, contemptuous, dismissive, and completely uninterested. This does change a bit over the course of the book, but not enough to justify the crush that Marie maintains for this awful person through her entire life.

And Eleanor is the only person in the book who Marie treats like an equal. Everyone else is a subordinate, a daughter, a charge, a servant, or a worker. The nuns of the abbey prosper under her rule, so Marie has ample reason to justify this to herself, but no one else's opinions or beliefs matter to her in any significant way. The closest anyone can come to friendship is to be reliably obedient, perhaps after some initial objections that Marie overrules. Despite some quite good characterization of the other nuns, none of the other characters get to do anything. There is no delight in teamwork, sense of healthy community, or collaborative problem-solving. It's just all Marie, all the time, imposing her vision on the world both living and non-living through sheer force of will.

This just isn't entertaining, at least for me. The writing might be beautiful, the descriptions detailed and effective, and the research clearly extensive, but I read books primarily for characters, I read characters primarily for their relationships, and these relationships are deeply, horribly unhealthy. They are not, to be clear, unrealistic (although I do think there's way too much chosen one in Marie and way too many problems that disappear off-camera); there are certainly people in the world with dysfunctional obsessive relationships, and there are charismatic people who overwhelm everyone around them. This is just not what I want to read about.

You might think, with all I've said above, that I'm spoiling a lot of the book, but weirdly I don't think I am. Every pattern I mention above is well-established early in the novel. About the only thing that I'm spoiling is the hope that any of it is somehow going to change, a hope that I clung to for rather too long.

This is a great setup for a book, and I wish it had been written by a fantasy author instead of a literary author. Perhaps I'm being too harsh on literary fiction here, but I feel like fantasy authors are more likely to write for readers who want to see the growth sequence. If someone is going to change the world, I want to see how they changed the world. The mode of fantasy writing tends to think that what people do (and how they do it) is as interesting as, or more interesting than, what they feel or what they perceive.

If this idea, with the exact same level of (minor) mysticism and historical realism, without added magic, were written by, say, Guy Gavriel Kay or Nicola Griffith, it would be a far different and, in my opinion, much better book. In fact, Hild is in part this book, written by Nicola Griffith, and it is a much better book.

I have seen enough people rave about this book to know that this is a personal reaction that is not going to be shared by everyone, or possibly even most people. My theory is that this is due to the different reading protocols between literary fiction readers and fantasy readers. I put myself in the latter camp; if you prefer literary fiction, you may like this much better (and also I'm not sure you'll find my book reviews useful). I may be wrong, though; maybe there are fantasy readers who would like this. I will say that the sense of place is very strong and the writing has all the expected literary strengths of descriptiveness and rhythm.

But, sadly, this was not at all my thing, and I'm irritated that I wasted time on it.

Rating: 4 out of 10

06 November, 2022 01:51AM

Michael Ablassmeier

virtnbdbackup in unstable/bookworm

Besides several bugfixes, the latest version now supports higher compression levels and logging to the syslog facility. I also finished packaging, and official packages are now available.

06 November, 2022 12:00AM

November 05, 2022

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppEigen 0.3.3.9.3 on CRAN: Maintenance

A new release 0.3.3.9.3 of RcppEigen arrived on CRAN moments ago (and just went to Debian). Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.

This update is again mostly maintenance. To accommodate one small aspect of the Fortran interface, we now require R 3.6.0 or later. Jonah Gabry spotted a really old typo and fixed it. The fastLm support code now uses the unabbreviated df.residual. We accommodated clang-15 in one signature, as is common with many recent uploads, and also updated one aspect of GitHub Actions.

And once again as we said for the last four releases:

One additional and recent change was the accommodation of a recent CRAN Policy change to not allow gcc or clang to mess with diagnostic messages. A word of caution: this may make your compilation of packages using RcppEigen very noisy, so consider adding -Wno-ignored-attributes to the compiler flags added in your ~/.R/Makevars.

We still find this requirement rather annoying. Eigen is only usable if you set, say,

-Wno-deprecated-declarations -Wno-parentheses -Wno-ignored-attributes -Wno-unused-function

as options in ~/.R/Makevars. But CRAN makes the rules. Maybe if a few of us gently and politely nudge them they may relent one day. One can only hope.

The complete NEWS file entry follows.

Changes in RcppEigen version 0.3.3.9.3 (2022-11-04)

  • The dependency on R is now versioned to 3.6.0 or later for support for USE_FC_LEN_T from Fortran.

  • An old example typo was corrected (Jonah Gabry in #114).

  • The fastLm methods now reference df.residual by its full name (Closes #115).

  • A function prototype was updated for clang-15.

  • GitHub Actions were updated to checkout version 3.

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

05 November, 2022 10:05PM

November 04, 2022

RcppSpdlog 0.0.9 on CRAN: Extended Usability and New Upstream

A brand-new version 0.0.9 of RcppSpdlog got onto CRAN overnight. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich.

This release contains two changes. First, we extend usability by offering both a set of simple R functions for logging from R, and a matching set of functions at the C++ level that are directly callable (and exported) from this package, so that client packages do not need to compile them. Both of these changes are described (as sections seven and eight) in the vignette. Second, while we were working on this, Gabi released version 1.11.0 upstream, so we included that as well.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.9 (2022-11-04)

  • Add both an R and a C++ interface to spdlog

  • Update GitHub Actions to checkout@v3

  • Add a shorter aliased namespace for C++

  • Upgraded to upstream release spdlog 1.11.0

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppSpdlog page, or the package documentation site.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

04 November, 2022 10:53PM

Thorsten Alteholz

My Debian Activities in October 2022

FTP master

This month I accepted 484 and rejected 55 packages. The overall number of packages that got accepted was 492.

Debian LTS

This was my hundredth month doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. Woohoo, there is a party. (Yes, I am old.) Unfortunately, there are already 101 completed months listed in the debian-lts-announce archive, so I seem to have counted wrong once. *sigh*, yes, I am old.

This month my overall workload was 14h.

During that time I uploaded:

  • [DLA 3168-1] openvswitch security update for one CVE
  • [DLA 3167-1] ncurses security update for one CVE
  • [#1020596] bullseye-pu: mod-wsgi/4.7.1-3+deb11u1 upload
  • [graphicsmagick] debdiff for Bullseye sent to security team (update as DLA or via PU?)
  • [graphicsmagick] prepared upload for Buster
  • [libvncserver] debdiff for Buster and Bullseye sent to maintainer (no upload yet :-()

I also started to work on virglrenderer.

Last but not least I took care of NEW packages on security-master.

Debian ELTS

This month was the fifty-first ELTS month.

During my allocated time I uploaded:

  • [ELA-719-1] graphicsmagick security update of Jessie and Stretch for one CVE
  • [ELA-720-1] bluez security update of Jessie and Stretch for three CVEs
  • marked two CVEs of curl as not-affected for Jessie and Stretch
  • checked that all patches for dpdk need to be backported; unfortunately that was beyond my capabilities

I also started to work on snapd.

Last but not least, I finally managed to become familiar with the git workflow and imported several packages into the salsa repository.

Debian Astro

This month I uploaded new upstream versions or improved packaging of:

I also uploaded a new package, pysqm. This software supports the Sky Quality Meters made by Unihedron. I was kindly given an SQM-LU for USB and an SQM-LE with a network adapter. I plan to put a working Python 3 version of the old PySQM software into Debian, package the UDM (Unihedron Device Manager), and finally check the support within Indi.

Debian IoT

This month I uploaded new upstream versions or improved packaging of:

Debian Mobcom

This month I finished the transition of the Osmocom packages, except that osmo-mgw and osmo-msc seem to have problems. I have no idea how to solve this, so help is appreciated.

Other stuff

This month I uploaded new packages:

04 November, 2022 03:24PM by alteholz

hackergotchi for Junichi Uekawa

Junichi Uekawa

Found the reason podman build was failing as normal user.

Found the reason podman build was failing as a normal user: dbus-user-session was missing, and podman wasn't able to connect to the systemd user session. This resulted in podman trying to use cgroupfs instead of systemd for managing cgroups, and other things also broke.
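For anyone hitting the same symptom, a quick check is whether the user session bus socket exists at all; this is my own sketch, assuming the usual systemd convention for the runtime directory:

```shell
# rootless podman expects a D-Bus user session; look for its socket
bus="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/bus"
if [ -S "$bus" ]; then
    echo "user bus present at $bus"
else
    echo "no user bus at $bus; on Debian, installing dbus-user-session should provide it"
fi
```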

04 November, 2022 09:33AM by Junichi Uekawa

Alastair McKinstry

€1.3 billion announced for new Forestry Support

€1.3 billion announced for new Forestry Support

Funds to be delivered through new Forestry Programme
Premiums for planting trees to be increased by between 46% and 66% and extended to 20 years for farmers

 #GreensInGovernment

The Taoiseach, Micheál Martin TD, the Minister of State with responsibility for Forestry, Senator Pippa Hackett, and the Minister for Agriculture, Food and the Marine, Charlie McConalogue TD, today announced a proposed investment by the Government of €1.3 billion in Irish forestry. The funding will be for the next national Forestry Programme and represents the largest ever investment by an Irish Government in tree-planting. The programme will now be subject to state-aid approval by the European Commission.

The Taoiseach said:

“This commitment by the Government to such a substantial financial package reflects the seriousness with which we view the climate change and biodiversity challenges, which affect all of society. Forestry is at the heart of delivering on our sustainability goals and strong support is needed to encourage engagement from all our stakeholders in reaching our objectives. “

 

Minister Hackett said:

‘I’m delighted to have secured a package of €1.318 billion for forestry. This will support the biggest and best-funded Forestry Programme ever in Ireland. It comes at an appropriate time, given the urgency of taking climate mitigation measures. Planting trees is one of the most effective methods of tackling climate change as well as contributing to improved biodiversity and water quality. One of my main aims is to re-engage farmers in afforestation. I’m delighted therefore to be proposing a new 20-year premium term exclusively for farmers, as well as introducing a new small-scale native woodland scheme which will allow farmers to plant up to 1 hectare of native woodland on farmland and along watercourses outside of the forestry licensing process.”

 

Minister McConalogue said:

“Today we commit to providing unprecedented incentives to encourage the planting of trees that can provide a valuable addition to farm income and help to meet national climate and biodiversity objectives. This funding guarantees continued payments to those forest owners who planted under the current scheme and who are still in receipt of premiums. It also offers new and improved financial supports to those who undertake planting and sustainable forest management under the new Programme. We intend to increase premiums for planting trees by between 46% and 66% and to extend the premium period from 15 to 20 years for farmers.

 

"We are approaching a new and exciting period for forestry in Ireland. The new Forestry Programme will drive a new and brighter future for forestry, for farmers and for our climate.”

 

The proposed new Forestry Programme is currently out to public consultation as part of the Strategic Environmental Assessment and Appropriate Assessment process. The Programme is the main implementation mechanism for the new Forest Strategy (2023-2030) which reflects the ambitions contained in the recently published Shared National Vision for Trees, Woods and Forests in Ireland until 2050. The public consultation closes on 29th November, 2022 and any changes which result from this process will be incorporated into the Programme and the Forest Strategy.

Minister Hackett commented: “The draft Forestry Programme and Forest Strategy are the product of extensive stakeholder consultation and feedback, and both documents are open to public consultation for the next number of weeks. I would strongly encourage all interested parties to engage with the consultation in advance of the Strategy and Programme being finalised.”

The new Programme is built around the principle of right trees in the right places for the right reasons with the right management. It aims to deliver more diverse forest which will meet multiple societal objectives, economic, social and environmental. Higher grant rates for forest establishment are also proposed with increases of approximately 20% to reflect rising living costs. The new one hectare native tree area scheme will also make it easier for landowners who wish to plant small areas of trees on their farm.

 

The Taoiseach concluded: “I welcome this milestone and I believe that this funding injection will be an important catalyst in delivering on the ambition outlined in the new Forest Strategy. Our environmental challenges are huge but so is our commitment to overcoming them and this Forestry Programme is key to delivering so many of our priorities.”

 

The new Programme will be 100% Exchequer funded and is subject to State Aid approval from the EU Commission. The Department is in contact with the Commission in relation to this approval which is a rigorous process.

04 November, 2022 08:45AM by Alastair McKinstry

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Book Review: Chokepoint Capitalism, by Rebecca Giblin and Cory Doctorow

Two weeks ago, I had the chance to go see Cory Doctorow at my local independent bookstore, in Montréal. He was there to present his latest essay, co-written with Rebecca Giblin1. Titled Chokepoint Capitalism: How Big Tech and Big Content Captured Creative Labor Markets and How We'll Win Them Back, it focuses on the impact of monopolies and monopsonies (more on this later) on creative workers.

The book is divided into two main parts:

  • Part one, Culture has been captured (chapters 1 to 11), is a series of case studies that focus on different examples of market failure. The specific sectors analysed are the book market, the news media, the music industry, Hollywood, the mobile apps industry and the online video platforms.
  • Part two, Braking anticompetitive flywheels (chapters 12 to 19), looks at different solutions to try to fix these failures.

A picture of the book cover

Although Doctorow is known for his strong political stances, I have to say I'm quite surprised by the quality of the research he and Giblin did for this book. They both show a pretty advanced understanding of the market dynamics they look at, and even though most of the solutions they propose aren't new or groundbreaking, they manage to be convincing and clear.

That is to say, you certainly don't need to be an economist to understand or enjoy this book :)

As I have mentioned before, the book heavily criticises monopolies, but also monopsonies — a market structure that has only one buyer (instead of one seller). I find this quite interesting: whereas people are often familiar with the concept of monopolies, monopsonies are frequently overlooked.

The classic example of a monopsony is a labor market with a single employer: there is a multitude of workers trying to sell their labor power, but in the end, working conditions are dictated by the sole employer, who gets to decide who has a job and who doesn't. Mining towns are good real-world examples of monopsonies.

In the book, the authors argue most of the contemporary work produced by creative workers (especially musicians and writers) is sold to monopsonies and oligopsonies, like Amazon2 or major music labels. This creates a situation where the consumers are less directly affected by the lack of competition in the market (they often get better prices), but where creators have an increasingly hard time making ends meet. Not only this, but natural monopsonies3 are relatively rare, making the case for breaking the existing ones even stronger.

Apart from the evident need to actually start applying (the quite good) antitrust laws in the USA, some of the other solutions put forward are:

  • Transparency Rights — giving creative workers a way to audit the companies that sell their work and make sure they are paid what they are due.
  • Collective Action
  • Time Limits on Copyright Contracts — making sure creators that sell their copyrights to publishers or labels can get them back after a reasonable period of time.
  • Radical Interoperability — forcing tech giants to make their walled-gardens interoperable.
  • Minimum Wages for Creative Work — enforcing minimum legal rates for workers in certain domains, like what is already done for screenplays in the US by members of the Writers Guild of America.
  • Collective Ownership

Overall, I found this book quite enjoyable and well written. Since I am not a creative worker myself and don't experience first-hand the hardships presented in the book, it was the occasion for me to delve more deeply into this topic. Chances are I'll reuse some of the exposés in my classes too.


  1. Professor at the Melbourne Law School and Director of the Intellectual Property Research Institute of Australia, amongst other things. More on her here

  2. Amazon owns more than 50% of the US physical book retail market and has an even higher market share for ebooks and audiobooks (via Audible). Not only this, but with the decline of the physical book market, audiobooks are an increasingly important source of revenue for authors. 

  3. Natural monopolies happen when it does not make economic sense for multiple enterprises to compete in a market. Critical infrastructures, like water supply or electricity, make for good examples of natural monopolies. It simply wouldn't be efficient to have 10 separate electrical cables connecting your house to 10 separate electric grids. In my opinion, such monopolies are acceptable (and even desirable), as long as they are collectively owned, either by the State or by local entities (municipalities, non-profits, etc.). 

04 November, 2022 04:00AM by Louis-Philippe Véronneau

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RProtoBuf 0.4.20 on CRAN: Maintenance

A new release 0.4.20 of RProtoBuf arrived on CRAN earlier today. RProtoBuf provides R with bindings for the Google Protocol Buffers (“ProtoBuf”) data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language and operating-system agnostic protocol.

This release is somewhat mechanical and in the spirit of many other recent releases. clang-15 is, as more recent compilers do, more stringent on definitions and wants to see some void in argument-less signatures. Happy to oblige. At the same time, GitHub Actions started to nag us about minimum versions of node code so an upgrade to a newer action is warranted, again as with many other affected packages. We also found another http:// URL hiding somewhere so that was cleaned. Lastly, it appears Protocol Buffers themselves moved on and now need / prefer C++17 so we're happy to oblige.

The following section from the NEWS.Rd file has full details.

Changes in RProtoBuf version 0.4.20 (2022-11-02)

  • Two function prototypes were updated for clang-15.

  • GitHub Actions were updated to checkout version 3.

  • One more http:// URL was updated to https://

  • The package compilation is now done with C++17 as the Protocol Buffer headers require it (and clang-15 nags otherwise).

Thanks to my CRANberries, there is a diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the ‘quick’ overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

04 November, 2022 12:36AM

Reproducible Builds (diffoscope)

diffoscope 226 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 226. This version includes the following changes:

[ Christopher Baines ]
* Add an lzip comparator with tests.

[ Chris Lamb ]
* Add support for comparing the "text" content of HTML files using html2text.
  (Closes: #1022209, reproducible-builds/diffoscope#318)
* Misc/test improvements:
  * Drop the ALLOWED_TEST_FILES test; it's mostly just annoying.
  * Drop other copyright notices from lzip.py and test_lzip.py.
  * Use assert_diff helper in test_lzip.py.
  * Pylint tests/test_source.py.

[ Mattia Rizzolo ]
* Add lzip to debian dependencies.
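As a quick, made-up illustration of the new html2text comparator (the file names and markup are invented; diffoscope exits with a non-zero status when its inputs differ):

```shell
# Two HTML files whose rendered text differs
printf '<html><body><p>hello world</p></body></html>\n' > a.html
printf '<html><body><p>hello friend</p></body></html>\n' > b.html

# diffoscope now also shows a diff of the html2text rendering
# of each file, alongside the raw markup diff.
diffoscope a.html b.html || echo "inputs differ"
```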

You find out more by visiting the project homepage.

04 November, 2022 12:00AM

November 03, 2022

Arturo Borrero González

New OpenPGP key and new email

Post logo

I’m trying to replace my old OpenPGP key with a new one. The old key wasn’t compromised or lost or anything bad. It is still valid, but I plan to get rid of it soon. It was created in 2013.

The new key id fingerprint is: AA66280D4EF0BFCC6BFC2104DA5ECB231C8F04C4

I plan to use the new key for things like encrypted emails, uploads to the Debian archive, and more. Also, the new key includes an identity with a newer personal email address I plan to use soon: [email protected]

The new key has been uploaded to some public keyservers.

If you would like to sign the new key, please follow the steps in the Debian wiki.

-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBGNjvX4BEADE4w5x0SQmxWLAI1R17RCC98ngTkD/FMyos0GF5xmv0VJeLYhw
x6oJRmiNGHY8+gjq7SyVCWmlwbLKBEPFNI1k5WcrTB+ClgGkWB5KBnbLKm6CSP4N
ccSbrUQrZW+zxk3Q5h3CJljZpmflB2dvRfnDMSSaw8zOc37EtszW3AVVKNYAu3wj
mXpfwI72/OSELhSvhkr51L+ZlEYUMCITeO+jpiWsnU+sA8oKKPjW4+X8cjrN4eFa
1PAPILDf+Omst5SKM2aV5LGZ8rBzb5wNJF6yDexDw2XmfbFWLOfYzFRY6GTXJz/p
8Fh6O1wkHM9RnwmesCXTtkaGQsVFiVsoqGFyzrkIdWPUruB3RG5EzOkapWi/cnbD
1sy7yrUgy99Ew5yzmLaZ40hmRyq/gBBw4yRkdQaddbkErx+9hT+2tJELa5wrmWkb
FtaVZ38xC6gacOZqRjp0Xqtr0jobI0vED8vzIyY0zJwWM0Hu6qqq4hkLWZHjCy8a
T5Oe/Cb78Kqwa2mzJfncDahPxcgxpnbkYdvKokRtNBDftLVEz+Do8Dczw7Me4BoK
HmU8wLyeGeDTmeoBXpxKH90T+rQokgsiiD13bWZ+nBxILun1tjOTVVONG6SHdP3f
unolq8SU3K+m67lLa+pWjyYcNRS2OTWGOz/1zsH2R39ZOyfGD09/10aAKwARAQAB
tC1BcnR1cm8gQm9ycmVybyBHb256YWxleiA8YXJ0dXJvLmJnQGFydHVyby5iZz6J
AlQEEwEKAD4WIQSqZigNTvC/zGv8IQTaXssjHI8ExAUCY2O9fgIbAwUJA8JnAAUL
CQgHAwUVCgkICwUWAgMBAAIeAQIXgAAKCRDaXssjHI8ExCZdD/9Z3vR4sV7vBED4
+mCjdNWWf/mw5YlkZo+XQiMVVss4HfQLdt7VxXgGdcOz5Hond9ax3+qeCEo4DdXq
TC0ACpSCu/TPil6vzbE/kO6i6a4oZjFyteAbbcMXP35stbtDM0U5EZH0adIKknfF
msIPTIdJ/dpkcshtBJIoPqjuuTEBa7bF3OYCajHVqwP4Wsgjy4TvDOwl3hy7bhrQ
ZZHqbh7kW40+alQYaJ8jDvbDh/jhN1/pEiZS9ETu0JfBAF3PYPRLW6XedvwZiPWd
jTXwJd0E+vN5LE1Go8OaYvZb9iitZ21UaYOUnFuhw7SEOSQGfEUBs39+41gBj6vW
05HKCEA6kda9NpfptMbUoSSU+hwRfNA5TdnlxtcRv4NqUigzqa1LoXLdxTsyus+K
BL7dRpKXc72JCrEA3vClisD2FgsxLLRCCSDVM8UM/it/YW7tv42XuhQkTW+okQX4
c5laMzTL+ZV8UOoshseTDOsQsdXhskdnWbnuSwAez2/Dd1gHczuN/+lPiiEnyaTF
XgH17K/F25+92MmwPQcFRVPQcYcbyx1VylA6aCgK6gOEqHCejlZv5XLouzbQh1j1
k6MjUR1ncz8vPV5xSuOMAISqozJ9GxUZT2O3o9Vc9pNg5UEzqTvyURgLOdie8yM4
T93S3nKuHVZ++ZVxEOlPnfEfbFP+xbQrQXJ0dXJvIEJvcnJlcm8gR29uemFsZXog
PGFydHVyb0BkZWJpYW4ub3JnPokCVAQTAQoAPhYhBKpmKA1O8L/Ma/whBNpeyyMc
jwTEBQJjY73LAhsDBQkDwmcABQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJENpe
yyMcjwTEMKQQAIe18Np+jdhwxHEFZNppBQ69BtyrnPQg4K5VngZ0NUZdVi+/FU7q
Tc9Z1qNydnXgmav3dafL2/l5zDX9wz7mQD2F0a6luOxZwl1PE6iP5f3cUD7uC9zb
148i1bZGEJbO4iNZKTlJKlbNR9m1PG47pv964CHZnNGp6lsnEspxe2G8DJD48Pje
gbhYukgOtIhQ1CaB1fc8aVwZvXZVSbNBLAqp7pAGhTFJqzHE8/U0sn1/V/wPzFAd
TZtWzKfYAkIIFJI5Rr6LVApIwIe7nWymTdgH4crCd2GZkGR+d6ihPKVSxUAUfoAx
EJQUSJY8rYi39gSDhPuEoK8BYXS1nWFGJiNV1o8xaljQo8rNT9myCaeZuQBLX41/
LRzK4XrxYPvjZpKNucc7fSK+UFriQGzdcAaWtW45Kp/8GmAoLVyCD0DPZNWNJdxp
IORhB33aWakhvDKgaLQa16MJ8fSc3ytn/1lxWzDXA1j05i81y/AOKPtCwBKzQWPF
biuZs3kJgZagLq6L6VOQDHlKqf+jqfl1fWeo04iDg98e0TYKABUfiTz8/MdQcV/X
8VkCgtuZ8BcPPyYzBjvuXWZTvdu0n2pikqAPL4u2cbWfD8JIP2AVCJp9HMGKvENo
XcJgY4h6T3rrC/9EidxECfXlsDbUJxLq0WfJLik84+LRtde3kZiReaIRtC5BcnR1
cm8gQm9ycmVybyBHb256YWxleiA8YXJ0dXJvQG5ldGZpbHRlci5vcmc+iQJUBBMB
CgA+FiEEqmYoDU7wv8xr/CEE2l7LIxyPBMQFAmNjvd8CGwMFCQPCZwAFCwkIBwMF
FQoJCAsFFgIDAQACHgECF4AACgkQ2l7LIxyPBMSP/g/+MHmxCAi/X+NMHodg9Qou
wEG4Vf1uluAE6c+c1QECCdtSsRjBs1dZoJzGsA23t4LWqluyaptuLDWJQEz+EVKR
mG0bvvropNaoOEShnY069pg7lUHuO/GLeDRhfEH3KT45sIVbLly8QkoGaINSCDLe
RBNaHC6feIC8NfQzQEt72nbi4SgdSQUg0F3lj4WxxECVhXsw/YCqh1d3QYqwRVEE
lCGQ4EbavjtRhO8U7dcL1VwHemKHNq3XvM3PJf1OoPgxWqFW5rHbAdlXdN3WAI6u
DAy7kY+qihz3w6rIDTFq6I3YBTrZ44J+5mN21ZC2iDXAsa/C3Uam0vFsjs/pizuq
WgGI9Vmsyap+bOOjuRSX4hemZoOT4a2GC723fS1dFresYWo3MmwfA3sjgV5tK3ZN
XIpxYIvi6HAHLOAarDaE8Sha1GHvrmPwfZ+cEgTL0mqW3efSF3AFmGHduMB+agzK
rM9sksrRQhbY2fHnBLo1t06SQx3rmhlz5mD1ljQEIzna9D6QKleRu4hgImRLHnCB
CN3o+mZa1MHhaIFzViaD2i3Fv2+bYgT7vnS4QAneLW8O/ZgpAc2MUxMoci5JNyfJ
mWdae7Kbs4Z8rrt/mH2gYyioSB0po4VtVwKWEUW9cLtZusA6mFnMviFpfjakb9TX
MimBAv9hAYpxd+HdfHinmqS0MEFydHVybyBCb3JyZXJvIEdvbnphbGV6IDxhYm9y
cmVyb0B3aWtpbWVkaWEub3JnPokCVAQTAQoAPhYhBKpmKA1O8L/Ma/whBNpeyyMc
jwTEBQJjY735AhsDBQkDwmcABQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJENpe
yyMcjwTEGooP/20PR5N34m7CNtyaO96H5W0ULuAuSNuoXaKWDo5LGU6zzDriXbIu
ryYtR66vWF5suf7fHZYX8Ufq4PEsG1UNYEGA9hnjPg3oVwGzBJI7f6Rl2P5Pc8wJ
Eq2kN/xKmfUKIrvgh1f5xgFqC4hzcLDkVlLsPowZWfep8dLY4mtVrsrCD1URhelw
zRDGZ3rTVHWXmfXbSHWR2bgZIIrCtVF8BHStg5b6HuAWpj4Oa0eMfBde0N2RZkLE
ye/r2y/lraHfpT7MXnRMcEmltrv8fic7yvj/Nh4ESWr7UmfbV+GiSw9dc/AlVMXM
ihaW0eXv4F5uMtLJOiqI7bv3UfWSvoqwf2a8EPnzOeBBHhQOOJN7O4UzKBK5GAO8
C3k0I1AV3cTmrXrqT/5yoYAHSekDFCIPES//6Y/pO0ITtCbXkA5e8vaulJbtyXpE
g0Z7I7M1kikL6reZ2PuzsR0psEb/x81bWXODIegyOJolPXMRAY7n9J0xpCnSW9yr
CN4j6YT3Oame04JslwX5Xg1cyheuiusotETYNSKRaGaYBCxYffOWoTLNIBa+RCGc
SVOzJq5pd8fVRM1h2ZZFnfpPJBUb62qPsbk6VwmesGoGevB70zcNQYEI+c35kRfM
IOuJWRIN3Wxx0rpxb5E3i/3TASHM86Dix1VW9vsC/atGU/cgaoTOiNVztDdBcnR1
cm8gQm9ycmVybyBHb256YWxleiA8YXJ0dXJvLmJvcnJlcm8uZ2xlekBnbWFpbC5j
b20+iQJUBBMBCgA+FiEEqmYoDU7wv8xr/CEE2l7LIxyPBMQFAmNjvg8CGwMFCQPC
ZwAFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AACgkQ2l7LIxyPBMS7NA/9F7OL/j7a
xnTDjxAHEiyrCzrBQc/DEAM/yim8E+0UBeTJSZR/bShtbvLbSukeL43tKksPhN/X
skjRF8sJ8KWUnpmSWjv1DQTh7AtkJqACnq7+VtQZq3yuKUCNRNpM8lSFxtmYDUqE
XXD4eMXKoJfdphQ+qpViba+RGXg6sd69Dq739zT/OFMuKZ33z8h7hVNXmoWGcBz6
txvN3cWVJhTLdiBvtn38/0dX7IupQLypLOtP0oZdjoUjkRxTo5biOxt3hUGnxS4x
97PPeRGc4j7lv5ADwFV8bo+g54ZMGRjOcyZmA7dlWFN51JrTx3udW2jgXkYqm7UM
xP4lNwDs9TmT3jan6wR08uwlDakOXfDm3gCQEviN+350sJs2tY+JKBN4QR7NpqeU
2aDFOo0G/0ggf0QbFsMkaTSozerVHRGXMdAi+pbYA6pPWPu8lHIkvvdoj4xUu+Ko
cHX0DCRxmL9mylTbZEanrp5gSpne79McrkbQX2/Yc8lWykCtL5/jHVTD4iNiO5Rf
IJYPAVmC2nlj2URfzwGjjoL5apTStZfng4H2Ccq+3cmhwOXI7pb+PsGeI5PND00A
qHFxe590HFhPxLHoftMIlspstoCvHYGcWQxHNbXW6ccmhHdNYT8Pn4ecKgfr6pCt
0ysilOD2ppPJ88hffKA4nTdtX2Tz2ZwOYwG5Ag0EY2O9fgEQALrapVuv1IcLDit8
9gejdA/Dtlufb2/baImVaQD+dTx2QdMxxEiNKl00a5OhMzXDj9tFrB1Lv4z0t8cY
iDJ+NuydDGgz3MlJgWW0GlpAz8yiul2iqTnkWl3cWeiI+VaX8wzL+acmmkPvlrN8
hM7I55BPr8uBWVIQ7VDmI+ts8gi73xE+Etzzrh13GSSnnYnezfGUQrNfYFcip7D0
hB3bpUIGiPdQ45vSZqXUQx/B6FlabiIGRau8Rt4vaEBGXGFZ9rIR+rMJWx6GqYX4
uY1KM2JZ3SKHk++MWGYdzHdM2oaP6xckZq+u/WiwutkYLLO2hnr03lcAu1IDT1C1
YNPrbTKfqUt+3r0oUK5BrG1Cjdc1mZqcXzYcexOLp79FJLb0t5wPdfgU8dT10kjE
uQxeSYiS4oSpikVQkKoFk++/U95d/z/y/81A6v+cfRus6mW+wRSFSwks7Q5ct7zW
UyKELLC4i4EDgnJXmavVcBD0TWzhH/rZpz9FsO4Mb18IYwbV1/144019/RjiPk5Z
MMNdsjorjV2MtrCIoeAGRgZhbFP2P7CcZOp6ZWzjj40ENlElbLp3VCfkYcTiPHJv
2iaiDz2Mhfmhb1Q/5d/a9tYTYINPmv2QVo+m5Zf+1/U29d2HZMRhD4aqDsivvgtd
GpAnKeus6ePSMqpwjO6v2bmQhjpbABEBAAGJAjwEGAEKACYWIQSqZigNTvC/zGv8
IQTaXssjHI8ExAUCY2O9fgIbDAUJA8JnAAAKCRDaXssjHI8ExA5AD/9VWS1/jHM9
aE3HKCDL4CpiXQPc4ds+3/ft6LXwuCMA/tkt8I4svKZGCCi/X5NfiQetVD+cSzVO
nmloctMt/24yjnGNNSFsDozkn/RqzZIhLJBI69gX4JWR4wpeh4kXMItNM5ZlYw3H
DmuLrf/ey8E2NzbFdzj1VQNoENuwtL2pIJrvK92AcS7acvP0FpiS8riLc5a933SW
oPgelQ1j/04WAH8cyKXB/pruq3OhtK0/b8ylIeI0f7a57dxQj5wysyBVKl+EJd/n
UhypVqMDRWL7N0FttGb9gZ6OVvQnt7iwbtS3tYqAK479+GZwi/Wh/RB2dCDyz8jk
zE0j6y7huP4XzpbBbPVntLDdVAYmpW6iIaTWYxlu79FEUw4JmZdY7hJoEDpHuDIz
ylo0YQgjnRfRfWSdnGCosFrY5UgThPVTaQAILCPtdVyWY4/6s1UaeNs3H0PRA5mz
UT4vDKxGq9gXHnE+qg3dfwMcLR3cDPPWUFVeTfNitZ3Y9eV7SdbQXt5NeOXzFadz
DBc9ZzNx3rBEyUUooU0MEmbltyUFM7R/hVcdpFxs12SgHrvgh13tuxVVVNBXTwwo
pSxmap42vHJERQ8ZJQ4lrvnxNZcuwLHSZK7xVzb0b/1wMooNnhw18vlStMWQJwKl
DiXs/L/ifab2amg9jshULAPgVSw7QeP2OQ==
=UABf
-----END PGP PUBLIC KEY BLOCK-----

If you are curious about what that long code block contains, check https://cirw.in/gpg-decoder/

For the record, the old key fingerprint is: DD9861AB23DC3333892E07A968E713981D1515F8

Cheers!

03 November, 2022 04:00PM

Alastair McKinstry

New Planning and Environment Court will reform planning appeals process

New Planning and Environment Court will reform planning appeals process

#GreensInGovernment

The Green Party has fulfilled an important Programme for Government commitment following the cabinet's decision to establish a new division of the High Court to specialise in environmental and planning issues. This is a major reform that will allow planning law to operate in a more efficient and environmentally friendly manner.

Steven Matthews TD, Green Party Spokesperson for Planning and Local Government, explained the aim of the new court;

“The goal is to ensure that our judicial system has the capacity to decide cases as quickly as possible within a growing and increasingly complex body of Irish and EU environmental law. Timely access to justice is a cornerstone of the effective rule of law and an international legal commitment Ireland has entered under the Aarhus Convention. This is extremely pressing given the urgent need to ease the pressure on the housing system and develop the infrastructure we need to transition to a zero-carbon economy."

The Green Party is committed to ensuring the new Court will have the resources to fulfil its role. The Government will pass new legislation to allow the number of judges to be increased and provide the necessary exchequer funding to pay for these judges.

The new Court will hear all the cases currently taken by the High Court’s Commercial Planning and Strategic Infrastructure Development List and other major environmental cases. This is likely to include all major infrastructure cases as well as many cases relating to EU Environmental Law such as Environmental Impact Assessment, Strategic Environmental Assessment, Birds/Habitats, the Water Framework Directive and the Industrial Emissions Directive.

03 November, 2022 12:26PM by Alastair McKinstry

Department of Education confirms 12,544 Ukrainian pupils currently ...

Department of Education confirms 12,544 Ukrainian pupils currently enrolled in Irish schools

#GreensInGovernment

The Department of Education can confirm that as of 1 November, 12,544 Ukrainian pupils have been enrolled in schools across Ireland. Out of that figure, 7,948 of these pupils have been accommodated in primary schools while 4,596 pupils have enrolled in post-primary schools.

To assist with the transition of Ukrainian refugees and their families into Irish schools Regional Education and Language Teams (REALT) continue to operate, hosted by the 16 regional education and training boards around Ireland and staffed by regionally based education support personnel.

 
These teams are ensuring that clear, accessible information flows are in place between schools, local education support services and national support structures in relation to people arriving from Ukraine.

Information for parents in Ukrainian and Russian on accessing school places is available on gov.ie/Ukraine.

The Department will continue to publish figures on the enrolment of Ukrainian children each month.

 

03 November, 2022 10:55AM by Alastair McKinstry

hackergotchi for Junichi Uekawa

Junichi Uekawa

user namespace not enabled when running podman.

user namespace not enabled when running podman. Completely forgot about this but seems like I need to enable user namespace for podman. sudo sysctl -w kernel.unprivileged_userns_clone=1. Not quite sure when this started being required.
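To make the setting persist across reboots, a sysctl drop-in does the trick; the file name here is my own choice, and `sudo sysctl --system` reloads it:

```
# /etc/sysctl.d/90-unprivileged-userns.conf
kernel.unprivileged_userns_clone = 1
```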

03 November, 2022 09:54AM by Junichi Uekawa

Russ Allbery

Review: Carpe Jugulum

Review: Carpe Jugulum, by Terry Pratchett

Series: Discworld #23
Publisher: Harper
Copyright: 1998
Printing: May 2014
ISBN: 0-06-228014-7
Format: Mass market
Pages: 409

Carpe Jugulum is the 23rd Discworld novel and the 6th witches novel. I would not recommend reading it before Maskerade, which introduces Agnes.

There are some spoilers for Wyrd Sisters, Lords and Ladies, and Maskerade in the setup here and hence in the plot description below. I don't think they matter that much, but if you're avoiding all spoilers for earlier books, you may want to skip over this one. (You're unlikely to want to read it before those books anyway.)

It is time to name the child of the king of Lancre, and in a gesture of good will and modernization, he has invited his neighbors in Uberwald to attend. Given that those neighbors are vampires, an open invitation was perhaps not the wisest choice.

Meanwhile, Granny Weatherwax's invitation has gone missing. On the plus side, that meant she was home to be summoned to the bedside of a pregnant woman who was kicked by a cow, where she makes the type of hard decision that Granny has been making throughout the series. On the minus side, the apparent snub seems to send her into a spiral of anger at the lack of appreciation.

Points off right from the start for a plot based on a misunderstanding and a subsequent refusal of people to simply talk to each other. It is partly engineered, but still, it's a cheap and irritating plot.

This is an odd book.

The vampires (or vampyres, as the Count prefers to spell it) think of themselves as modern and sophisticated, making a break from the past by attempting to overcome such traditional problems as burning up in the sunlight and fear of religious symbols and garlic. The Count has put his family through rigorous training and desensitization, deciding such traditional vulnerabilities are outdated things of the past. He has, however, kept the belief that vampires are at the top of a natural chain of being, humans are essentially cattle, and vampires naturally should rule and feed on the population. Lancre is an attractive new food source. Vampires also have mind control powers, control the weather, and can put their minds into magpies.

They are, in short, enemies designed for Granny Weatherwax, the witch expert in headology. A shame that Granny is apparently off sulking. Nanny and Agnes may have to handle the vampires on their own, with the help of Magrat.

One of the things that makes this book odd is that it seemed like Pratchett was setting up some character growth, giving Agnes a chance to shine, and giving Nanny Ogg a challenge that she didn't want. This sort of happens, but then nothing much comes of it. Most of the book is the vampires preening about how powerful they are and easily conquering Lancre, while everyone else flails ineffectively. Pratchett does pull together an ending with some nice set pieces, but that ending doesn't deliver on any of the changes or developments it felt like the story was setting up.

We do get a lot of Granny, along with an amusingly earnest priest of Om (lots of references to Small Gods here, while firmly establishing it as long-ago history). Granny is one of my favorite Discworld characters, so I don't mind that, but we've seen Granny solve a lot of problems before. I wanted to see more of Agnes, who is the interesting new character and whose dynamic with her inner voice feels like it has a great deal of unrealized potential.

There is a sharp and condensed version of comparative religion from Granny, which is probably the strongest part of the book and includes one of those Discworld quotes that has been widely repeated out of context:

"And sin, young man, is when you treat people as things. Including yourself. That's what sin is."

"It's a lot more complicated than that—"

"No. It ain't. When people say things are a lot more complicated than that, they means they're getting worried that they won’t like the truth. People as things, that's where it starts."

This loses a bit in context because this book is literally about treating people as things, and thus the observation feels more obvious when it arrives in this book than when you encounter it on its own, but it's still a great quote.

Sadly, I found a lot of this book annoying. One of those annoyances is a pet peeve that others may or may not share: I have very little patience for dialogue in phonetically-spelled dialect, and there are two substantial cases of that here. One is a servant named Igor who speaks with an affected lisp represented by replacing every ess sound with th, resulting in lots of this:

"No, my Uncle Igor thtill workth for him. Been thtruck by lightning three hundred timeth and thtill putth in a full night'th work."

I like Igor as a character (he's essentially a refugee from The Addams Family, which adds a good counterpoint to the malicious and arrogant evil of the vampires), but my brain stumbles over words like "thtill" every time. It's not that I can't decipher it; it's that the deciphering breaks the flow of reading in a way that I found not at all fun. It bugged me enough that I started skipping his lines if I couldn't work them out right away.

The other example are the Nac Mac Feegles, who are... well, in the book, they're Pictsies and a type of fairy, but they're Scottish Smurfs, right down to only having one female (at least in this book). They're entertainingly homicidal, but they all talk like this:

"Ach, hins tak yar scaggie, yer dank yowl callyake!"

I'm from the US and bad with accents and even worse with accents reproduced in weird spellings, and I'm afraid that I found 95% of everything said by Nac Mac Feegles completely incomprehensible to the point where I gave up even trying to read it. (I'm now rather worried about the Tiffany Aching books and am hoping Pratchett toned the dialect down a lot, because I'm not sure I can deal with more of this.)

But even apart from the dialect, I thought something was off about the plot structure of this book. There's a lot of focus on characters who don't seem to contribute much to the plot resolution. I wanted more of the varied strengths of Lancre coming together, rather than the focus on Granny. And the vampires are absurdly powerful, unflappable, smarmy, and contemptuous of everyone, which makes for threatening villains but also means spending a lot of narrative time with a Discworld version of Jacob Rees-Mogg. I feel like there's enough of that in the news already.

Also, while I will avoid saying too much about the plot, I get very suspicious when older forms of oppression are presented as good alternatives to modernizing, rationalist spins on exploitation. I see what Pratchett was trying to do, and there is an interesting point here about everyone having personal relationships and knowing their roles (a long-standing theme of the Lancre Discworld stories). But I think the reason why there is some nostalgia for older autocracy is that we only hear about it from stories, and the process of storytelling often creates emotional distance and a patina of adventure and happy outcomes. Maybe you can make an argument that classic British imperialism is superior to smug neoliberalism, but both of them are quite bad and I don't want either of them.

On a similar note, Nanny Ogg's tyranny over her entire extended clan continues to be played for laughs, but it's rather unappealing and seems more abusive the more one thinks about it. I realize the witches are not intended to be wholly good or uncomplicated moral figures, but I want to like Nanny, and Pratchett seems to be writing her as likable, even though she has an astonishing lack of respect for all the people she's related to. One might even say that she treats them like things.

There are some great bits in this book, and I suspect there are many people who liked it more than I did. I wouldn't be surprised if it was someone's favorite Discworld novel. But there were enough bits that didn't work for me that I thought it averaged out to a middle-of-the-road entry.

Followed by The Fifth Elephant in publication order. This is the last regular witches novel, but some of the thematic thread is picked up by The Wee Free Men, the first Tiffany Aching novel.

Rating: 7 out of 10

03 November, 2022 05:16AM

November 02, 2022

Antoine Beaupré

A typical yak shaving session

Someone recently asked what yak shaving means and, because I am a professional at this pastime, I figured I would share my most recent excursion in the field.

As a reminder, "yak shaving" describes a (anti?) pattern by which you engage in more and more (possibly useless) tasks that lead you further and further away from your original objective.

The path I took through the yak herd is this:

  1. i wondered if i can use my home network to experiment with another VPN software (e.g. Wireguard instead of IPsec)

  2. then i tried Tailscale because I heard good things about it, and they have an interesting approach to opensource

  3. I wasn't happy with that, so i tried an IPv6 tunnel

  4. that broke after a few minutes, so i went on to try deploying Wireguard with Puppet, which involved reviewing about 4 different Puppet modules

  5. while I was there, I might as well share those findings with the community, so I publish that as a blog post

  6. someone else mentions that Nebula (from Slack) is a thing, but after investigation, it's not well packaged in Debian, so I didn't test it, but added it to the blog post

  7. now that I found the right Puppet module, I tried to deploy it with Puppet's g10k, which requires me to input a checksum

  8. I got lazy and figured that if I put the checksum in wrong, it would tell me what the right checksum was, but it didn't: it silently succeeded instead of failing, which seemed really bad

  9. then I looked upstream for such a bug report and saw that the Debian package was many versions behind and, because I'm on the Golang packaging team, I figured I would just do the upgrade myself

  10. then there were problems with the Debian-specific patch trying to disable network tests, so i rewrote the patch

  11. ... but ended up realizing basically all tests require the network, so I just disabled the build-time tests

  12. ... but then tried to re-add it to Debian CI instead, which didn't work
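For context, the checksum in question lives in the Puppetfile that g10k consumes. If memory serves, the attribute is called :sha256sum, but treat the exact syntax here (and the module name, version, and digest, all of which are illustrative) as assumptions:

```
# Puppetfile -- module, version and digest are made up
mod 'puppet/wireguard', '0.1.0',
  :sha256sum => 'deadbeef...'   # the value g10k is expected to verify
```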

At that point, I had given up, after shaving a 12th yak. Thankfully, a kind soul provided a working test suite and I was able to roll back all those parentheses and:

  1. test the g10k package and confirm it works (and checks the checksums)

  2. upload the package to the Debian archive

  3. deploy the package in my Puppet manifests

  4. deploy a first tunnel

You'll also notice the work is not complete at all. I still need to:

  • make a full mesh between all nodes, probably with exported resources

  • have IP addresses in DNS so I don't need to remember them

  • hook up Prometheus into Puppet to monitor all nodes

  • deploy this at work (torproject.org), replacing the IPsec module I was originally trying to publish

Also notice the 8th yak, above, which might be a security issue. I wasn't able to confirm it, because g10k does some pretty aggressive caching, and I could only "reproduce" it in the sense that the checksum wasn't checked if the module already existed in the cache. So it might have just been that I had actually already deployed the module before adding the checksum... but I still had that distressing sentiment:

<anarcat> there's a huge yak breathing down my neck with "CVE" written in large red letters on the side
<anarcat> i'm trying to ignore it, it stinks like hell

Hopefully it's nothing to worry about. Right? Riiight.

Oh. And obviously, writing this blog post is the sugar on top, the one last yak that is self-documented here.

02 November, 2022 05:40PM

hackergotchi for Robert McQueen

Robert McQueen

Many thanks & good luck to Neil McGovern

As President of the GNOME Foundation, I wanted to post a quick note to pass on the thanks from the Board, the Foundation staff team and membership to our outgoing Executive Director, Neil McGovern. I had the pleasure of passing on GNOME’s thanks in person at the Casa Bariachi this summer at GUADEC in Guadalajara, at the most excellent mariachi celebration of GNOME’s 25th Anniversary. 🤠 Kindly they stopped the music and handed me the microphone for the whole place, although I think many of the other guests celebrating their own birthdays were less excited about Neil’s tenure as Executive Director and the Free and Open Source desktop in general. 🤣

Neil’s 6-month handover period came to an end last month and he handed over the reins to myself and Thibault Martin on the Executive Committee, and Director of Operations Rosanna Yuen has stepped up to act as Chief of Staff and interface between the Board and the staff team for the time being. Our recruitment is ongoing for a new Executive Director although the search is a little behind schedule (mostly down to me!), and we’re hugely grateful to a few volunteers who have joined our search committee to help us source, screen and interview applicants.

I have really enjoyed working closely with Neil in my time on the GNOME board, and we are hugely grateful for his contributions and achievements over the past 5 years which I posted about earlier in the year. Neil is this month starting a new role as the Executive Director of Ruby Central. Our very best wishes from the GNOME community and good luck with your new role. See you soon!

(also posted to Discourse if you wish to add any thanks or comments of your own)

02 November, 2022 12:34PM by ramcq

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, October 2022

In October I was not assigned additional time by Freexian's Debian LTS initiative, but carried over 9 hours from September and worked all those hours.

I updated the linux (4.19) package to the latest stable update, but didn't upload it. I merged the latest bullseye security update into the linux-5.10 package, uploaded that, and issued DLA-3173-1.

02 November, 2022 11:36AM

November 01, 2022

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

tint 0.1.4 on CRAN: Miscellaneous Updates

A new version 0.1.4 of the tint package arrived at CRAN today. tint provides a style ‘not unlike Tufte’ for use in html and pdf documents created from markdown. The github repo shows several examples in its README, more as usual in the package documentation.

This is the first release in over two years and it brings a few incremental changes which are described in detail below. The trigger was an email from CRAN desiring compacted pdf files inside the package sources. Which, coincidentally, was made a default about a year ago in the build.r script from littler I use to make packages. But as we had not updated tint, this had not reached CRAN. So there.

The set of changes follows, which includes a helpful contributed PR for which we are always thankful. I also improved table rendering; most other changes are packaging mechanics.

Changes in tint version 0.1.4 (2022-11-01)

  • Added continuous integration at GitHub using run.sh

  • Non-margin references in html mode now have an improved display (David Selby and Dirk in #47)

  • The LaTeX package array is now installed for improved table support (Dirk addressing #48)

  • Minor updates made to the package directory and README.md

  • GitHub Actions were updated to checkout version 3.

  • Several http:// URLs were updated to https://

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the tint page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

01 November, 2022 11:51PM

hackergotchi for Paul Tagliamonte

Paul Tagliamonte

Decoding LDPC: k-Bit Brute Forcing

Before you go on: I've been warned off implementing this in practice on a few counts; namely, the space tradeoff isn't worth it, and it's unlikely to correct meaningful errors. I'm going to leave this post up, but please do take the content with a very large grain of salt!

My initial efforts to build a PHY and Data Link layer – from scratch using my own code – have been progressing nicely since the initial BPSK based protocol I’ve documented under the PACKRAT series. As part of that, I’ve been diving deep into FEC, and in particular, LDPC.

I won’t be able to do an overview of LDPC justice in this post – with any luck that’ll come in a later post as part of the RATPACK series, so some knowledge is assumed. As such this post is less useful for those looking to learn about LDPC, and a bit more targeted to those who enjoy talking and thinking about FEC.

Hey, heads up! - This post contains extremely unvalidated and back of the napkin quality work without any effort to prove this out generally. Hopefully this work can be of help to others, but please double check anything below if you need it for your own work!

While implementing LDPC, I’ve gotten an encoder and checker working, enough to use LDPC like a checksum. The next big step is to write a Decoder, which can do error correction. The two popular approaches for the actual correction that I’ve seen while reading about LDPC are Belief Propagation, and some class of linear programming that I haven’t dug into yet. I’m not thrilled at how expensive this all is in software, so while implementing the stack I’ve been exploring every shady side alley to try and learn more about how encoders and decoders work, both in theory - and in practice.

Processing an LDPC Message

Checking if a message is correct is fairly straightforward with LDPC (as with encoding, I’ll note). As a quick refresher – given the LDPC H (check) matrix of width N, you can check your message vector (msg) of length N by multiplying H and msg, and checking if the output vector is all zero.

 // scheme contains our G (generator) and
 // H (check) matrices.
 scheme := {G: Matrix{...}, H: Matrix{...}}

 // msg contains our LDPC message (data and
 // check bits).
 msg := Vector{...}

 // N is also the length of the encoded
 // msg vector after check bits have been
 // added.
 N := scheme.G.Width

 // Now, let's generate our 'check' vector.
 ch := Multiply(scheme.H, msg)

We can now see if the message is correct or not:

 // if the ch vector is all zeros, we know
 // that the message is valid, and we don't
 // need to do anything.
 if ch.IsZero() {
     // handle the case where the message
     // is fine as-is.
     return ...
 }

 // Expensive decode here
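
The same check can be made concrete outside of pseudocode. Here is a hedged little sketch in Python (the Matrix and Vector types above are pseudocode, so this uses plain lists), with the tiny (7,4) Hamming code's parity-check matrix standing in for a real, much larger and sparser LDPC H:

```python
# Parity-check matrix H of the (7,4) Hamming code, standing in for
# an LDPC H. A vector c is a valid codeword iff H*c == 0 (mod 2).
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(H, msg):
    # Multiply H by msg over GF(2): each check bit is the parity of
    # the message bits selected by one row of H.
    return tuple(sum(h * m for h, m in zip(row, msg)) % 2 for row in H)

codeword = [1, 1, 1, 0, 0, 0, 0]           # a valid codeword
assert syndrome(H, codeword) == (0, 0, 0)  # all zeros: message is fine

corrupted = codeword[:]
corrupted[4] ^= 1                          # flip a single bit
print(syndrome(H, corrupted))              # (1, 0, 1): non-zero, decode needed
```

Note that the non-zero check vector here is exactly column 4 of H, and that correspondence between error position and check-vector pattern is what the table in the next section indexes on.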

This is great for getting a thumbs up / thumbs down on the message being correct, but correcting errors still requires pulling the values out of the H (check) matrix, building a bipartite (Tanner) graph, and iteratively reprocessing the bit values, until constraints are satisfied and the message has been corrected.

This got me thinking - what is the output vector when it’s not all zeros? Since 1 values in the output vector indicate consistency problems in the message bits as they relate to the check bits, I wondered if this could be used to speed up my LDPC decoder. It appears to work, so this post is half an attempt to document this technique before I put it in my hot path, and half a plea for those who do like to talk about FEC to tell me what name this technique actually is.

k-Bit Brute Forcing

Given that the output Vector’s non-zero bit pattern is set due to the position of errors in the message vector, let’s use that fact to build up a table of k-Bit errors that we can index into.

 // for clarity's sake, the Vector
 // type is being used as the lookup
 // key here, even though it may
 // need to be a hash or string in
 // some cases.
 idx := map[Vector]int{}

 for i := 0; i < N; i++ {
     // Create an error vector of length N
     // with bit i flipped.
     v := Vector{}
     v.FlipBit(i)

     // Now, let's use the check matrix to
     // figure out what bit pattern this error
     // results in. Because the code is linear,
     // H*(msg+v) equals H*v for any valid msg,
     // so the error pattern alone determines
     // the check vector; the generator matrix
     // isn't needed here.
     ev := Multiply(scheme.H, v)
     idx[ev] = i
 }

This can be extended to multiple bits (hence: k-Bits), but I’ve only done one here for illustration. Now that we have our idx mapping, we can go back to the hot path on checking the incoming message data:

 // if the ch vector is all zeros, we know
 // that the message is valid, and we don't
 // need to do anything.
 if ch.IsZero() {
     // handle the case where the message
     // is fine as-is.
     return ...
 }

 errIdx, ok := idx[ch]
 if ok {
     msg.FlipBit(errIdx)

     // Verify the LDPC message using
     // H again here.
     return ...
 }

 // Expensive decode here

Since map lookups wind up a heck of a lot faster than message-passing bit state, the hope here is this will short-circuit easy to solve errors for k-Bits, for some value of k that the system memory can tolerate.

Does this work?

Frankly – I have no idea. I’ve written a small program and brute forced single-bit errors in all bit positions using random data to start with, and I’ve not been able to find any collisions in the 1-bit error set, using the LDPC matrix from 802.3an-2006. Even if I were to find a collision for a higher-order k-Bit value, I’m tempted to continue with this approach, and treat each set of bits in the Vector’s bin (like a hash-table), checking the LDPC validity after each bit set in the bin. As long as the collision rate is small enough, it should be possible to correct k-Bits of error faster than the more expensive Belief Propagation approach. That being said, I’m not entirely convinced collisions will be very common, but it’ll take a bit more time working through the math to say that with any confidence.
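
As a small sanity check of the mechanics (not of the LDPC-scale collision question), here is a hedged, self-contained Python sketch of the k=1 case on the toy (7,4) Hamming code: build the syndrome table, corrupt a codeword, and correct it with one map lookup.

```python
# (7,4) Hamming parity-check matrix as a small stand-in for an LDPC H.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
N = 7  # codeword length

def syndrome(H, msg):
    # H * msg over GF(2)
    return tuple(sum(h * m for h, m in zip(row, msg)) % 2 for row in H)

# Build the 1-bit error table. The syndrome of (codeword + error)
# equals the syndrome of the error alone, which for a single flipped
# bit i is just column i of H.
idx = {}
for i in range(N):
    v = [0] * N
    v[i] = 1
    idx[syndrome(H, v)] = i

assert len(idx) == N  # no collisions among 1-bit errors for this H

# Corrupt a valid codeword and correct it with one lookup.
msg = [1, 1, 1, 0, 0, 0, 0]  # valid: syndrome is all zeros
msg[5] ^= 1                  # inject a single-bit error
s = syndrome(H, msg)
if s in idx:
    msg[idx[s]] ^= 1         # flip the offending bit back
assert syndrome(H, msg) == (0, 0, 0)
print(msg)                   # [1, 1, 1, 0, 0, 0, 0]: original recovered
```

For a Hamming code every non-zero 1-bit syndrome is unique by construction; for an arbitrary LDPC H, that uniqueness is exactly what the brute-force search described above is probing.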

Have you seen this approach called something official in publications? See an obvious flaw in the system? Send me a tip, please!

01 November, 2022 11:00PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Halloween playlist 2022

I hope you had a nice Halloween!

I've collected together some songs that I've enjoyed over the last couple of years that loosely fit a theme: ambient, instrumental, experimental, industrial, dark, disconcerting, etc. I've prepared a Spotify playlist of most of them, but not all. The list is inline below as well, with many (but not all) tracks linking to Bandcamp, if I could find them there.

This is a bit late, sorry. If anyone listens to something here and has any feedback I'd love to hear it.

(If you are reading this on an aggregation site, it's possible the embeds won't work. If so, click through to my main site)

Spotify playlist: https://open.spotify.com/playlist/3bEvEguRnf9U1RFrNbv5fk?si=9084cbf78c364ac8

The list, with Bandcamp embeds where possible:

Some sources

  1. Via Stuart Maconie's Freak Zone
  2. Via Mary Anne Hobbs
  3. Via Lose yourself with…
  4. Soma FM - Doomed (Halloween Special)

01 November, 2022 01:51PM

Russ Allbery

Review: What Makes This Book So Great

Review: What Makes This Book So Great, by Jo Walton

Publisher: Tor
Copyright: January 2014
ISBN: 0-7653-3193-4
Format: Hardcover
Pages: 447

Jo Walton, in addition to being an excellent science fiction and fantasy writer, is a prodigious reader and frequent participant in on-line SFF book discussion going back to the Usenet days. This book is a collection of short essays previously published on Tor.com between July 2008 and February 2011. The unifying theme is that Walton regularly re-reads her favorite books, and each essay (apart from some general essays on related topics) is about why this specific book is one that she re-reads, and (as the title says) what makes it so great.

Searching for the title of one of the essays turns it up on Tor.com still, so this is one of those collections that you don't have to buy since you can read its contents on-line for free. That said, it looks like these essays were from before Tor.com started classifying posts into series, so it's going to be challenging to track them down in the huge number of other articles Walton has written for the site. (That said, you can't go far wrong by reading any of her essays at random.)

I read these essays as they were originally published, so this was also a re-read for me, but it had been a while. I'm happy to report that they were just as much fun the second time.

In the introduction and in the final essay of this collection, Walton draws a distinction between what she's doing, criticism, and reviewing. As someone else who writes about books (in a far more amateur fashion), I liked this distinction.

The way I'd characterize it is that criticism is primarily about the work: taking it apart to see what makes it tick, looking for symbolism and hidden meanings, and comparing and contrasting other works that are tackling similar themes. I've often finished a work of criticism and still had no idea if the author enjoyed reading the work being criticized or not, since that isn't the point.

Reviewing is assistance to consumers and focuses more on the reader: would you enjoy this book? Is it enjoyable to read? Does it say something new? What genre and style is it in, so that you can match that to your tastes?

Talking about books is neither of those things, although it's a bit closer to reviewing. But the emphasis is on one's personal enjoyment instead of attempting to review a product for others. When I talk about books with friends, I talk primarily about what bits I liked, what bits I didn't like, where the emotional beats were for me, and what interesting things the book did that surprised me or caught my attention. One can find a review in there, and sometimes even criticism, but the focus and the formality is different. (And, to be honest, my reviews are more on the "talking about the book" side than fully proper reviews.)

These essays are indeed talking about books. They're all re-reads; in some cases the first re-read, but more frequently the latest of many re-reads. There are lots of spoilers, which makes for bad reviews (the target audience of a review hasn't read the book yet) but good fodder for conversations about books. (The spoilers are mostly marked, but if you're particularly averse to spoilers, you'll need to read carefully.) Most of the essays are about a single book, but there are a few on more general topics, such as Walton's bafflement that anyone would skim a novel.

Since these are re-reads, and the essays collected here are more than a decade old, the focus is on older books. Some of them are famous: Vinge's A Fire Upon the Deep and A Deepness in the Sky, early Le Guin, Samuel Delany's SF novels, Salman Rushdie's Midnight's Children. Some of them are more obscure. C.J. Cherryh, for example, is a writer who never seems to get much on-line attention, but who is one of Walton's favorites.

Most of the essays stand alone or come in small clusters about a writer, often sprinkled through the book instead of clumped together. (The book publishes the essays in the same order they originally appeared on Tor.com.) The two largest groups of essays are re-readings of every book in Steven Brust's Vlad Taltos universe (including Brokedown Palace and the Paarfi books) up to Jhegaala, and every book in Lois McMaster Bujold's Miles Vorkosigan series up to Diplomatic Immunity. This is fitting: those are two of the great series of science fiction, but don't seem to be written about nearly as much as I would expect.

There are over 130 essays in a 447 page book, so there's a lot of material here and none of them outstay their welcome. Walton has a comfortable, approachable style that bubbles with delight and appreciation for books. I think it's impossible to read this collection without wanting to read more, and without adding several more books to the ever-teetering to-read pile.

This is perhaps not the best source of reading recommendations if you dislike spoilers, although it can be used for that if you read carefully. But if you love listening to conversations about the genre and talking about how books bounce off each other, and particularly if you have read most of these books already or don't mind spoilers, this collection is a delight. If you're the type of SFF reader who likes reading the reviews in Locus or is already reading Tor.com, highly recommended.

Rating: 8 out of 10

01 November, 2022 04:59AM

Paul Wise

FLOSS Activities October 2022

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration

  • Debian BTS: unarchive/reopen/triage bugs for reintroduced packages nautilus-image-converter, swift-im, runit-services
  • Debian IRC: removed 2 spammers from OFTC, disable anti-spam channel modes for some channels
  • Debian servers: restart processes due to OOM
  • Debian wiki: approve accounts

Communication

  • Initiate discussion about the apt hook protocol
  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

All work was done on a volunteer basis.

01 November, 2022 04:20AM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Montreal's Debian & Stuff - October 2022

Our local Debian user group gathered on Sunday October 30th to chat, work on Debian and do other, non-Debian related hacking :) This time around, we met at EfficiOS's offices.1 As you can see from the following picture, it's a great place and the view they have is pretty awesome. Many thanks for hosting us!

The view from EfficiOS' offices, overlooking the Mont-Royal

This was our 4th meeting this year and once again, attendance was great: 10 people showed up to work on various things.

Following our bi-monthly schedule, our next meeting should be in December, but I'm not sure it'll happen. December can be a busy month here and I will have to poke our mailing list to see if people have the spoons for an event.

This time around, I was able to get a rough log of the Debian work people did:

pollo:

  • followed up with the DPL for upcoming Debian Python Team Sprint
  • opened a wishlist bug (#1023140) on the devscripts package for uscan to verify git tags signed with SSH keys
  • updated python-mediafile and mutagen to the latest upstream version

mjeanson:

  • fixed RC bug #1022419 on babeltrace
  • kindly hosted us, with the help of Mathieu Desnoyers

viashimo:

  • started updating puppet-strings to 3.0.1
  • was blocked in the process by tests requiring ruby-mdl, which is not packaged yet
  • was blocked in packaging ruby-mdl by rubocop not being >= ~1.2x

lavamind:

  • updated jnffi to 1.3.9, as part of the work on jruby he's been doing

anarcat:

  • updated g10k to latest upstream

babelouest:

  • updated node-jose to 4.10.4
  • worked on bug #1021779 in libical3

tvaz:

  • worked on updating apticron

As always, thanks to the Debian project for granting us a budget to buy some food!


  1. Makers of the awesome LTTng project, amongst other things. 

01 November, 2022 04:00AM by Louis-Philippe Véronneau

October 31, 2022

Scarlett Gately Moore

KDE Gear Snaps fixes and new releases

Another continuation of https://www.scarlettgatelymoore.dev/kde-gear-snaps-round-3! This week I solved a few more problems.

  • Lokalize: Dictionaries are now available via the hunspell content snap! To enable:

sudo snap install hunspell-dictionaries

sudo snap connect lokalize:hunspell-dictionaries-plug hunspell-dictionaries:hunspell-dictionaries

  • Ktorrent: First time release on both amd64 and arm64! Closes bug 412734
  • Kdenlive: Glaxnimate animations are now available! Closes bug 459338
Kdenlive with Glaxnimate animations!
  • Okular
  • Kgeography
  • Kpatience
  • Palapeli: Now launches! Closes bug 426177
  • Krdc: Now on arm64!
  • Kdiamond: Now with sound!

More coming tomorrow!

Please consider donating! Helps keep the lights on so I can get more Kool software out to you.

https://www.patreon.com/sgmoore

Cash App $ScarlettMoore0903

Stripe: https://buy.stripe.com/28o16y3PHcISfaE8ww

31 October, 2022 09:30PM by sgmoore

hackergotchi for Jonathan Dowland

Jonathan Dowland

Cosey Fanni Tutti - Guitar/Amplitude

I haven't posted a crate digging picture for a while: amongst other distractions, I needed to replace my turntable needle. The dead needle lasted me about 20 years, although a fair chunk of that was spent not doing anything.

yellow 7inch record playing

I love Cosey Fanni Tutti's solo work, as well as her stuff with Chris Carter and some other collaborations: although I was familiar with Throbbing Gristle, it was the live experiment Carter Tutti Void and the recording of it pressed as Transverse that got me interested. "Tutti" was one of my top albums of 2019. Her autobiography Art, Sex, Music was fascinating. I haven't read the follow-up, Re-Sisters yet, but it's on my list. The Arena documentary she was involved with DELIA DERBYSHIRE: THE MYTHS AND THE LEGENDARY TAPES was fabulous. I've yet to hear Tutti's soundtrack album taken from it.

This morning's record is a 7" from Electronic Sound magazine with two dark ambient instrumental pieces.

31 October, 2022 09:22AM

Alastair McKinstry

An Bord Pleanála conceded that the N6 Galway City Ring Road was in ...

An Bord Pleanála conceded that the N6 Galway City Ring Road was in contravention of the National Climate Action plan.

In planning for how we adapt to and mitigate climate change, the UN IPCC AR6 report summarised the state of research globally and modelled nearly 3000 scenarios. The scenarios that kept us below 2 degrees of warming (our Paris agreement commitments) overwhelmingly depended on a move away from car-dependent societies and relied on car-free cities. This report was signed off by every world government.

All EU and national modelling agrees. Accordingly our national Climate Action plan prepares for this global move from cars. It involves a reduction in car traffic, heavy investment in public transport and active travel (walking and cycling). This is essential as the research shows this works, and takes up to 84% less energy for transport in cities.

The N6GCRR plan ignored this, and assumed an increase in car traffic of up to 49%. In practice it would have locked in car dependency when we don't expect there to be enough materials (lithium etc. for batteries) or energy to support that number of cars.

Across Europe and the world we have seen how to move from cars in cities: rail; reliable bus services with dedicated bus corridors; safe, segregated cycle lanes; public space design that enables walking and cycling by keeping distances short – the "15 minute city" concept of having schools, shops and transport hubs less than 15 minutes from your door. From Paris, removing 40% of cars from the city and now removing a lane from the Périphérique (its ring road), to Korea, where expressways in cities have been removed to improve traffic as part of better design. In Ireland, Dublin has seen a 49% drop in peak-time traffic over recent decades with improved public transport and cycling.

But in Galway we’ve ignored all this: not a kilometre of segregated cycle lane added, as there has been an over 80% drop in pupils cycling to school since the 1980s, amplifying traffic as parents commute to drop them to school, making roads more dangerous. No bus route passes over the Quincentennial bridge.

Rather than fix the traffic in Galway, a ring road would induce more traffic, as the existing Bothar na dTreabh / Quincentennial bridge did; the current road opened up Knocknacarra for development; a new motorway to Barna would open up Barna for more development.

This is zombie planning: a roads plan from the 1990s lives on, while our County Development Plan has moved the focus to creating new satellite towns in Ardaun and Garraun instead. When all new business wants to be on the East, the new developments need to be near them; new schools are being built on the East, not the West. Rail is being double-tracked to Oranmore (and on to Athenry) to provide commuter rail to Renmore and the growing towns. This reduces the demand for cross-city commuting.

Active travel and public transport development in the city has been stalled awaiting the ring road. This needs to be implemented as quickly as possible. The bus corridor from Salthill (via the schools) to Oranmore (via the businesses at Ballybrit, Parkmore) needs to be completed; this will involve a bus lane on the bridge. The N6GCRR said this was impossible without a new bridge for cars; Dublin’s experience with the quays shows otherwise. If we get a fraction of the drop Dublin has seen, the traffic problem will be solved, as shown when the schools are out for holidays.

The NTA and Galway councils have been stuck in thinking from 30 years ago. They assume that traffic cannot be reduced, only added; the ABP inspector agreed, despite overwhelming evidence elsewhere. We need to implement what has been shown to work elsewhere as quickly as possible; the funding is in place but the will and vision to do so has been lacking.

31 October, 2022 08:19AM by Alastair McKinstry

October 29, 2022

Mike Hommey

Announcing git-cinnabar 0.5.11 and 0.6.0rc2

Git-cinnabar is a git remote helper to interact with Mercurial repositories. It allows you to clone, pull and push from/to Mercurial remote repositories, using git.
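
For anyone new to the tool, basic usage looks like this; the repository URL is purely illustrative, so substitute your own Mercurial remote:

```shell
# Clone a Mercurial repository through git-cinnabar: the hg::
# prefix tells git to route the URL through the cinnabar helper.
git clone hg::https://example.org/some-hg-repo
cd some-hg-repo

# From then on, ordinary git commands fetch and publish
# Mercurial changesets as git commits.
git pull
git push
```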

Get version 0.5.11 on github. Or get version 0.6.0rc2 on github.

What’s new in 0.5.11?

  • Fixed compatibility with python 3.11.
  • Disabled inexact copy/rename detection, that was enabled by accident.
  • Updated git to 2.38.1 for the helper.

What’s new in 0.6.0rc2?

  • Improvements and bug fixes to git cinnabar self-update. Note: to upgrade from 0.6.0rc1, don’t use the self-update command except on Windows. Please use the download.py script instead, or install from the release artifacts on https://github.com/glandium/git-cinnabar/releases/tag/0.6.0rc2.
  • Disabled inexact copy/rename detection, that was enabled by accident.
  • Removed dependencies on msys DLLs on Windows.
  • Based on git 2.38.1.
  • Other minor fixes.

29 October, 2022 09:48PM by glandium

Russ Allbery

California general election

As usual with these every-two-year posts, probably of direct interest only to California residents. Maybe the more obscure things we're voting on will be a minor curiosity to people elsewhere.

Apologies to Planet Debian readers for the explicitly political post because I'm too lazy to change my blog software to do more fine-grained post classification. For what it's worth, most of the discussion here will be about the more fiddly and nuanced things we vote on, not on the major hot-button proposition.

As in 2020, I'm only going to cover the ballot propositions, as all of the state-wide and most of the district races are both obvious to me and boring to talk about. The hyperlocal races are more interesting this year, but the number of people who would care and who are also reading this blog is essentially nonexistent, so I won't bother writing them up.

This year, everything except Proposition 1 is an initiative (not put on the ballot by the legislature), which means I default to voting against them because they're usually poorly-written.

Proposition 1: YES. Adds reproductive rights to the California state constitution. I'm fairly sure everyone reading this has already made up their mind on this topic and certainly nothing will ever change my mind, so I'll leave it at that.

Proposition 26: YES. This mushes two different things together in an unhelpful way: allowing sports betting at some racetracks, and allowing a wider variety of gambling on tribal lands.

I have no strong opinion about the former (I'll get into that more with the next proposition). For the latter, my starting point is that Native American tribes are and should be treated like independent governments with their own laws (which is what we promised them by treaty and then have systematically and maliciously betrayed ever since). I am not a citizen of any of the tribes and therefore fundamentally I should not get a say on this. I'm not a big fan of gambling or of the companies they're likely to hire to run casinos, but it should be their land and their decision.

Proposition 27: NO. This, on the other hand, is about on-line sports betting outside of tribal lands, and looks to be a lot more about corruption and corporate greed.

I am fairly dubious that outlawing gambling in general is that good of an idea. I think the harms are overstated given the existence of even wilder forms of gambling (crypto and financial derivatives) that are perfectly legal, and I'm always suspicious of attempting to solve social problems with police and prohibition systems. If there were a ballot proposition to simply legalize gambling in California, I'd have to think hard about that.

But this is not that. This requires companies that want to offer on-line gambling to pay substantial up-front costs (which will restrict this to only huge gambling companies). In return, they are allowed entry into what is essentially a state-constructed partial monopoly. As usual, there's a typical vice tax deal attached where those companies are taxed to fund some program (in this case, homeless services and mental health treatment), but these sorts of taxes tend to be regressive in effect. We could just tax richer people like me to pay for those services instead. I'm also dubious that the money for homelessness will be used to build housing, which is what we need to do to address the problem.

Proposition 28: YES. Sets aside money for art and music funding in public and charter schools. This is a reluctant yes because this sort of law should not be done via proposition; it should be done through a normal legislative process that balances all of the priorities for school funding. But despite the broken process by which this was put forward, it seems like a reasonable law and no one is opposing it, so okay, fine.

Proposition 29: NO. The attempt to force all dialysis clinics to have licensed doctors on site is back again.

Everything about the way dialysis health care is provided in California makes me angry. We should have a state health care system similar to the NHS. We should open dialysis clinics based on the number of people requiring dialysis in that area. Every one of them should be unionized. We absolutely should not allow for-profit companies to have primary responsibility for basic life-saving medical care like dialysis.

But this proposition does not solve any of those problems, and what it claims to do is false. It claims that by setting credential requirements on who has to be on-site at a dialysis clinic, the clinics will become safer. This is simply not true, for all of the reasons discussed in Still Not Safe. This is not how safety works.

The safest person to do dialysis is someone with extensive experience in performing dialysis, who has seen all the problems and has an intuition for what to watch out for. That has less to do with credentials than with good training specifically in dialysis, apprenticeship, and practice, not to mention reasonable hours and good pay so that the workers are not stressed. Do I think the private dialysis clinics are likely doing a good job with this? Hah. (Do I think dialysis clinics run by large medical non-profits would do a good job with this? Also hah.) But this would enshrine into law a fundamentally incorrect solution to the problem that makes dialysis more expensive without addressing any of the other problems with the system.

It's the same tactic that was used on abortion clinics, with the same bogus argument that having people with specific credentials on-site would make them safer. It was false then and it's still false now. I would agree with better regulation of dialysis clinics, but this specific regulation is entirely wrong-headed.

Also, while this isn't an overriding factor, I get annoyed when the same proposition shows up again without substantial changes. For matters of fundamental rights, okay, sure. But for technical regulation fixes like this one, the proponents should consider taking no for an answer and trying a different approach. Like going to the legislature, which is where this kind of regulation should be designed anyway.

Proposition 30: YES. Raises taxes on the personal income of extremely rich Californians (over $2 million in income in one year) to fund various climate change mitigation programs. This is another reluctant yes vote, because once again this shouldn't be done by initiative and should be written properly by the legislature. I also don't like restricting tax revenue to particular programs, which reduces budget flexibility to no real purpose. It's not important to me that these revenues go to these specific programs, although the programs seem like good ones to fund.

But the reality remains that wealthy Silicon Valley executives are undertaxed and the only way we can ever manage to raise taxes is through voting for things like this, so fine.

Proposition 31: NO. The California legislature banned the sale of flavored vape products. If NO beats YES on this proposition, that ban will be overturned.

Drug prohibition has never, ever worked, and yet we keep trying it over and over again in the hope that this time we'll get a different outcome. As usual, the pitch in favor of this is all about the children, specifically the claim that flavored tobacco products are only about increasing their appeal to kids because... kids like candy? Or something? I am extremely dubious of this argument; it's obvious to me from walking around city streets that adults prefer the flavored products as well and sale to kids is already prohibited and unchanged by this proposition.

I don't like vaping. I wish people would stop, at least around me, because the scent is obnoxious and the flavored stuff is even more obnoxious, even apart from whatever health problems it causes. But I'm never going to vote for drug prohibition because drug prohibition doesn't work. It just creates a black market and organized crime and makes society overall worse. Yes, the tobacco companies are some of the worst corporations on the planet, and I hope they get sued into oblivion (and ideally prosecuted) for all the lying they do, but I'm still not going to vote for prohibition. Even the best kind of prohibition that only outlaws sale and not possession.

Also, secondarily but still significant, bans like this just frustrate a bunch of people and burn good will and political capital, which we should be trying to preserve to tackle far more important problems. The politics of outlawing people's pleasures for their own good are not great. We have a lot of serious problems to deal with; maybe let's not pick fights we don't have to.

29 October, 2022 07:35PM

François Marier

Making the mounting of an encrypted /home optional on a home server

I have a computer that serves as a home server as well as a desktop machine. It has an encrypted home directory to protect user files and, in the default configuration, that unfortunately interferes with unattended reboots since someone needs to be present to enter the encryption password.

Here's how I added a timeout and made /home optional on that machine.

I started by adding a one-minute timeout on the password prompt by adding timeout=60 in my /etc/crypttab:

crypt  UUID=7e12c123-abcd-5555-8c40-900d1f8cc281  none  luks,timeout=60

then I made /home optional by adding nofail to the appropriate mount point in /etc/fstab:

/dev/mapper/crypt  /home  ext4  nodev,noatime,nosuid,nofail  0  2

Before that, the password prompt would timeout but the system would be unable to boot since one of the required partitions had failed to mount.

Now, to ensure that I don't accidentally re-create home directories for users when the system boots without /home mounted, I made the /home directory on the non-encrypted drive read-only:

umount /home
cd /home
chmod a-w .

Finally, with all of this in place, I was now happy to configure the machine to automatically reboot after a kernel panic by putting the following in /etc/sysctl.d/local.conf:

# Automatic reboot 10 seconds after a kernel panic
kernel.panic = 10

since I know that the machine will come back up just fine and that all services will be running. I simply won't be able to log into that machine as any other user than root until I manually unlock and mount /home.
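With all of this in place, a quick way to check from a shell (or a cron job) whether /home actually got unlocked after a reboot is to ask whether it is a mount point. This is just a sketch: mountpoint comes from util-linux, and cryptdisks_start with the mapping name crypt matches the crypttab entry above.

```shell
#!/bin/sh
# Report whether the encrypted /home was unlocked and mounted at boot.
if mountpoint -q /home; then
    echo "/home is mounted (encrypted volume was unlocked)"
else
    # Unlock and mount manually as root, using the crypttab mapping name:
    echo "/home is NOT mounted; run: cryptdisks_start crypt && mount /home"
fi
```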

29 October, 2022 06:45AM

Russ Allbery

Review: The Last Continent

Review: The Last Continent, by Terry Pratchett

Series: Discworld #22
Publisher: Harper
Copyright: 1998
Printing: May 2014
ISBN: 0-06-228019-8
Format: Mass market
Pages: 392

This is the 22nd Discworld novel and follows Interesting Times in internal continuity. Like some of the other Rincewind novels, it stands alone well enough that you could arguably start reading here, but I have no idea why you'd want to.

When we last saw Rincewind, he was being magically yanked out of the Agatean Empire. The intent was to swap him with a cannon and land him back in Ankh-Morpork, but an unfortunate expansion of the spell to three targets instead of two meant that a kangaroo had a very bad day. Ever since then, Rincewind has been trying to survive the highly inhospitable land of FourEcks (XXXX), so called because no one in Ankh-Morpork knows where it is.

The faculty at the Unseen University didn't care enough about Rincewind to bother finding him until the Librarian fell sick. He's feverish and miserable, but worse, he's lost control of his morphic function, which means that he's randomly turning into other things and is unable to take care of the books. When those books are magical, this is dangerous. One possible solution is to stabilize the Librarian's form with a spell, but to do that they need his real name. The only person who might know it is the former assistant librarian: Rincewind.

I am increasingly convinced that one of the difficulties in getting people hooked on Discworld is that the series starts with two Rincewind books, and the Rincewind books just aren't very good.

The fundamental problem is that Rincewind isn't a character, he's a gag. Discworld starts out as mostly gags, but then the characterization elsewhere gets deeper, the character interactions become more complex, and Pratchett adds more and more philosophy. That, not the humor, is what I think makes these books worth reading. But none of this applies to Rincewind. By this point, he's been the protagonist of six novels, and still the only thing I know about him is that he runs away from everything. Other than that, he's just sort of... there.

In the better Rincewind novels, some of the gap is filled by Twoflower, the Luggage, Cohen the barbarian, the Librarian (who sadly is out of commission for most of this book), or the Unseen University faculty. But they're all supporting characters. Most of them are also built around a single (if better) gag. As a result, the Rincewind books tend more towards joke collections than the rest of Discworld. There isn't a philosophical or characterization through line to hold them together.

The Last Continent is, as you might have guessed, a parody of Australia. And by that I mean it's a mash-up of Crocodile Dundee, Mad Max, The Adventures of Priscilla, Queen of the Desert, and every dad joke about Australia that you've heard. Pratchett loves movie references and I do not love movie references, so there's always part of his books that doesn't click for me, but this one was just Too Much. Yes, everything in Australia is poisonous. Yes, Australians talk funny. Oh look, there's another twist on a Crocodile Dundee quote. Yes, yes, that's a knife. Gah. The Rincewind sections were either confusing (there's some sort of drug-trip kangaroo god because reasons) or cliched and boring. Sometimes both.

The second plot, following the Unseen University faculty in their inept attempts to locate Rincewind, is better. Their bickering is still a bit one-trick and works better in the background of stronger characters (such as Death and Susan), but Pratchett does make their oblivious overconfidence entertaining. It's enough to sustain half of the book, but not enough to make up for the annoyances of the Rincewind plot.

To his credit, I think Pratchett was really trying to say something interesting in this novel about Discworld metaphysics. There are bits in the Australian plot that clearly are references to Aboriginal beliefs, which I didn't entirely follow but which I'm glad were in there. The Unseen University faculty showing up in the middle of a creation myth and completely misunderstanding it was a good scene. But the overall story annoyed me and failed to hold my interest.

I don't feel qualified to comment on the Priscilla scenes, since I've never seen the movie and have only a vague understanding of its role in trans history. I'm not sure his twists on the story quite worked, but I'm glad that Pratchett is exploring gender; that wasn't as common when these books were written.

Overall, though, this was forgettable and often annoying. There are a few great lines and a few memorable bits in any Pratchett book, including this one, but the Rincewind books just aren't... good. Not like the rest of the series, at least. I will be very happy to get back to the witches in the next book.

Followed in publication order by Carpe Jugulum, and later thematically by The Last Hero.

Rating: 5 out of 10

29 October, 2022 04:47AM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Extruded Schiit Stack

I've been a fan of the products manufactured by Schiit Audio for a while now. They are affordable (for high-end audio gear), sound great, are made in the USA1 and I think their industrial design looks great.

I first started with one of their classic "Schiit Stack"2, but eventually upgraded to the Modi Multibit (I wanted the TOSLINK input), added a physical EQ (the Loki) and eventually got a Sys when I bought a Vidar speaker amp.

The original Schiit Stack — being 2 devices high — was pretty manageable as-is. With my current 4-high stack though, things became unstable and I had to resort to finding a way to bolt them together.

Mooching from a friend with a 3D printer, I printed this clever mount from Thingiverse. It worked well enough, but was somewhat imperfect for multiple reasons:

  1. The plastic tabs had a tendency of breaking in two when the screws were tight enough for the stack to feel solid.

  2. The plastic wasn't really rigid enough to support the 4 devices properly and the stack, being back-heavy from the cables, was unstable and tipped over easily.

  3. Due to the plastic tabs being fragile, it was pretty much impossible to disassemble the stack.

This last issue was what killed this solution for me. When I tried to replace my Modi 2 by the Modi Multibit, the mount pretty much crumbled away. Sadly, my friend warped a bunch of pieces on his 3D printer while trying to print ABS and I couldn't have him print me replacement parts either.

After a while, I grew tired of having these four devices lying around my desk and wasting valuable space. I had tasted the 4-stack and knew how much better things could be!

That's when I realised the solution was to ditch 3D printing altogether, use aluminium framing extrusions and build my own stack out of metal.

The 4 different Schiit devices with the hardware needed to build the extruded frame

This was my first time working with aluminium frame extrusions and I had tons of fun! I specced the first version using 10mm x 10mm rails from McMaster-Carr, but discovered they do not ship to residential addresses in Canada...

After looking at local options, I then decided to use 15mm x 15mm rails from Misumi. I went with this option since the rails are still small enough not to be an eyesore, but also because this system uses M3 screws, which the Schiit mini series also uses, making assembly much easier.

I chose to make the assembled stack quite a bit taller than the previous one made with 3D printed plastic, as I found the headphone amp got pretty hot during the summer and I wanted to provide better airflow.

If you are interested in replicating this stack, here are the parts I used, all from Misumi:

I didn't order any since I had some already, but you'll also need M3 screws, namely:

  • 16 x M3-10 screws, to replace the ones in the Schiit devices
  • 16 x M3-6 screws, for the aluminium framing brackets

You can also cheap out and use only M3-10 screws (as I did), but you'll have to use the extra nuts you ordered as spacers.

The assembled stack, complete with my lucky cat

For the curious ones, the cabling is done this way:

        ┌────────────────┐                  ┌────────────────┐
        │                │                  │                │
        │ Magni (hp amp) │◄──────┐   ┌─────►│ Vidar (sp amp) │
        │                │       │   │      │                │
        └────────────────┘       │   │      └────────────────┘
                                 │   │
                           ┌─────┴───┴──────┐
                           │                │
                           │  Sys (switch)  │◄───────┐
                           │                │        │
                           └────────────────┘        │


             ┌────────────────┐             ┌────────┴───────┐
             │                │             │                │
             │   Modi (DAC)   ├────────────►│    Loki (EQ)   │
             │                │             │                │
             └────────────────┘             └────────────────┘

The Vidar is not part of the actual stack, as it's a 600W amp that weighs 10kg :D.

The last thing I think I want to change in this setup is the cables. The ones I have are too long for the stack. Shorter ones would reduce the wasted space in the back and make the whole thing more elegant.


  1. As in, designed, manufactured and assembled in the USA, from parts, transformers and boards made in the USA. I find this pretty impressive. 

  2. A USB DAC and a headphone amp you can stack one of top of the other. 

29 October, 2022 04:00AM by Louis-Philippe Véronneau

October 28, 2022

Antoine Beaupré

Debating VPN options

In my home lab(s), I have a handful of machines spread around a few points of presence, with mostly residential/commercial cable/DSL uplinks, which means, generally, NAT. This makes monitoring those devices kind of impossible. While I do punch holes for SSH, using jump hosts gets old quick, so I'm considering adding a virtual private network (a "VPN", not a VPN service) so that all machines can be reachable from everywhere.

I see three ways this can work:

  1. a home-made Wireguard VPN, deployed with Puppet
  2. a Wireguard VPN overlay, with Tailscale or equivalent
  3. IPv6, native or with tunnels

So which one will it be?

Wireguard Puppet modules

As is (unfortunately) typical with Puppet, I found multiple different modules to talk with Wireguard.

module score downloads release stars watch forks license docs contrib issue PR notes
halyard 3.1 1,807 2022-10-14 0 0 0 MIT no requires firewall and Configvault_Write modules?
voxpupuli 5.0 4,201 2022-10-01 2 23 7 AGPLv3 good 1/9 1/4 1/61 optionally configures ferm, uses systemd-networkd, recommends systemd module with manage_systemd to true, purges unknown keys
abaranov 4.7 17,017 2021-08-20 9 3 38 MIT okay 1/17 4/7 4/28 requires pre-generated private keys
arrnorets 3.1 16,646 2020-12-28 1 2 1 Apache-2 okay 1 0 0 requires pre-generated private keys?

The voxpupuli module seems to be the most promising. The abaranov module is more popular and has more contributors, but it has more open issues and PRs.

More critically, the voxpupuli module was written after the abaranov author didn't respond to a PR from the voxpupuli author trying to add more automation (namely private key management).
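For context, the private key management these modules do (or don't) automate boils down to two commands from wireguard-tools; everything else is distributing the resulting public keys between nodes:

```shell
# Generate a Wireguard keypair; the private key must not be world-readable.
umask 077
wg genkey | tee privatekey | wg pubkey > publickey
```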

It looks like setting up a wireguard network would be as simple as this on node A:

wireguard::interface { 'wg0':
  source_addresses => ['2003:4f8:c17:4cf::1', '149.9.255.4'],
  public_key       => $facts['wireguard_pubkeys']['nodeB'],
  endpoint         => 'nodeB.example.com:53668',
  addresses        => [{'Address' => '192.168.123.6/30',},{'Address' => 'fe80::beef:1/64'},],
}

This configuration comes from this pull request I sent to the module to document how to use that fact.

Note that the addresses used here are examples that shouldn't be reused and do not conform to RFC5737 ("IPv4 Address Blocks Reserved for Documentation", 192.0.2.0/24 (TEST-NET-1), 198.51.100.0/24 (TEST-NET-2), and 203.0.113.0/24 (TEST-NET-3)) or RFC3849 ("IPv6 Address Prefix Reserved for Documentation", 2001:DB8::/32), but that's another story.

(To avoid bootstrapping problems, the resubmit-facts configuration could be used so that other nodes' facts are more immediately available.)
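For comparison, here is roughly what the Puppet resource above boils down to in plain wg-quick syntax on node A. This is a sketch: the private key is a placeholder, and the AllowedIPs lines are my guess at node B's addresses (the other end of the /30 and its link-local address):

```
[Interface]
Address = 192.168.123.6/30, fe80::beef:1/64
PrivateKey = <node A private key>

[Peer]
PublicKey = <node B public key>
Endpoint = nodeB.example.com:53668
AllowedIPs = 192.168.123.5/32, fe80::beef:2/128
```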

One problem with the above approach is that you explicitly need to take care of routing, network topology, and addressing. This can get complicated quickly, especially if you have lots of devices, behind NAT, in multiple locations (which is basically my life at home, unfortunately).

Concretely, basic Wireguard only supports one peer behind NAT. There are some workarounds for this, but they generally imply a relay server of some sort, or some custom registry; it's kind of a mess. And this is where overlay networks like Tailscale come in.

Tailscale

Tailscale is basically designed to deal with this problem. It's not fully opensource, but pretty close, and they have an interesting philosophy behind that. The client is opensource, and there is an opensource version of the server side, called headscale. They have recently (late 2022) hired the main headscale developer while promising to keep supporting it, which is pretty amazing.

Tailscale provides an overlay network based on Wireguard, where each peer basically has a peer-to-peer encrypted connection, with automatic key rotation. They also ship a multitude of applications and features on top of that like file sharing, keyless SSH access, and so on. The authentication layer is based on an existing SSO provider: you don't just register a new account with Tailscale, you log in with Google, Microsoft, or GitHub (which, really, is still Microsoft).

The Headscale server ships with many features out of the box:

  • Full "base" support of Tailscale's features
  • Configurable DNS
    • Split DNS
    • MagicDNS (each user gets a name)
  • Node registration
    • Single-Sign-On (via Open ID Connect)
    • Pre authenticated key
  • Taildrop (File Sharing)
  • Access control lists
  • Support for multiple IP ranges in the tailnet
  • Dual stack (IPv4 and IPv6)
  • Routing advertising (including exit nodes)
  • Ephemeral nodes
  • Embedded DERP server (AKA NAT-to-NAT traversal)

Neither project (client or server) is in Debian (RFP 972439 for the client, none filed yet for the server), which makes deploying this for my use case rather problematic. Their install instructions are basically a curl | bash but they also provide packages for various platforms. Their Debian install instructions are surprisingly good, and check most of the third party checklist we're trying to establish. (It's missing a pin.)
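That missing pin is a small apt preferences fragment. Something like this (a sketch; the origin host is the one from their repository instructions) keeps the third-party repository from pulling in anything that isn't explicitly requested:

```
# /etc/apt/preferences.d/tailscale
Package: *
Pin: origin pkgs.tailscale.com
Pin-Priority: 100
```

With priority 100, packages from that origin are only installed when explicitly requested, but already-installed ones keep receiving upgrades.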

There's also a Puppet module for tailscale, naturally.

What I find a little disturbing with Tailscale is that you not only need to trust Tailscale with authorizing your devices, you also basically delegate that trust to the SSO provider. So, in my case, GitHub (or anyone who compromises my account there) can penetrate the VPN. A little scary.

Tailscale is also kind of an "all or nothing" thing. They have MagicDNS, file transfers, all sorts of things, but those things require you to hook up your resolver with Tailscale. In fact, Tailscale kind of assumes you will use their nameservers, and they have gone to great lengths to figure out how to do that. And naturally, here, it doesn't seem to work reliably; my resolv.conf somehow gets replaced and the magic resolution of the ts.net domain fails.

(I wonder why we can't opt in to just publicly resolve the ts.net domain. I don't care if someone can enumerate the private IP addresses or machines in use in my VPN, at least I don't care as much as fighting with resolv.conf everywhere.)

Because I mostly have access to the routers on the networks I'm on, I don't think I'll be using tailscale in the long term. But it's pretty impressive stuff: in the time it took me to even review the Puppet modules to configure Wireguard (which is what I'll probably end up doing), I was up and running with Tailscale (but with a broken DNS, naturally).

(And yes, basic Wireguard won't bring me DNS either, but at least I won't have to trust Tailscale's Debian packages, and Tailscale, and Microsoft, and GitHub with this thing.)

IPv6

IPv6 is actually what is supposed to solve this. Not NAT port forwarding crap, just real IPs everywhere.

The problem is: even though IPv6 adoption is still growing, it's kind of reaching a plateau at around 40% world-wide, with Canada lagging behind at 34%. It doesn't help that major ISPs in Canada (e.g. Bell Canada, Videotron) don't care at all about IPv6 (e.g. Videotron in beta since 2011). So we can't rely on those companies to do the right thing here.

The typical solution here is often to use a tunnel like HE's tunnelbroker.net. It's kind of tricky to configure, but once it's done, it works. You get end-to-end connectivity as long as everyone on the network is on IPv6.

And that's really where the problem lies here: as soon as one of your nodes can't set up such a tunnel, you're kind of stuck and the whole approach breaks down. IPv6 tunnels also don't give you the kind of security a VPN provides, naturally.

The other downside of a tunnel is you don't really get peer-to-peer connectivity: you go through the tunnel. So you can expect higher latencies and possibly lower bandwidth as well. Also, HE.net doesn't currently charge for this service (and they've been doing this for a long time), but this could change in the future (just like Tailscale, that said).

Concretely, the latency difference is rather minimal, Google:

--- ipv6.l.google.com ping statistics ---
10 packets transmitted, 10 received, 0,00% packet loss, time 136,8ms
RTT[ms]: min = 13, median = 14, p(90) = 14, max = 15

--- google.com ping statistics ---
10 packets transmitted, 10 received, 0,00% packet loss, time 136,0ms
RTT[ms]: min = 13, median = 13, p(90) = 14, max = 14

In the case of GitHub, latency is actually lower, interestingly:

--- ipv6.github.com ping statistics ---
10 packets transmitted, 10 received, 0,00% packet loss, time 134,6ms
RTT[ms]: min = 13, median = 13, p(90) = 14, max = 14

--- github.com ping statistics ---
10 packets transmitted, 10 received, 0,00% packet loss, time 293,1ms
RTT[ms]: min = 29, median = 29, p(90) = 29, max = 30

That is because HE.net peers directly with my ISP and Fastly (which is behind GitHub.com's IPv6, apparently?), so it's only 6 hops away. Over IPv4, the ping goes through New York before landing in AWS's Ashburn, Virginia datacenters, for a whopping 13 hops...

I managed to set up a HE.net tunnel at home, because I also need IPv6 for other reasons (namely debugging at work). My first attempt at setting this up in the office failed, but now that I found the openwrt.org guide, it worked... for a while, and I was able to produce the above, encouraging, mini benchmarks.

Unfortunately, a few minutes later, IPv6 just went down again. And the problem with that is that many programs (and especially OpenSSH) do not respect the Happy Eyeballs protocol (RFC 8305), which means various mysterious "hangs" at random times on random applications. It's kind of a terrible user experience, on top of breaking the one thing it's supposed to do, of course, which is to give me transparent access to all the nodes I maintain.
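Until the tunnel behaves, one stopgap for the OpenSSH hangs is forcing the address family per host, since ssh_config's AddressFamily option bypasses the broken IPv6 path entirely (the host name here is just an example):

```
# ~/.ssh/config: force IPv4 for hosts that hang over flaky IPv6
# (inet = IPv4 only, inet6 = IPv6 only, any = default)
Host flaky-host.example.com
    AddressFamily inet
```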

Even worse, it would still be a problem for other remote nodes I might set up where I might not have access to the router to set up the tunnel. It's also not absolutely clear what happens if you set up the same tunnel in two places... Presumably, something is smart enough to distribute only a part of the /48 block selectively, but I don't really feel like going that far, considering how flaky the setup is already.

Other options

If this post sounds a little biased towards IPv6 and Wireguard, it's because it is. I would like everyone to migrate to IPv6 already, and Wireguard seems like a simple and sound system.

I'm aware of many other options to make VPNs. So before anyone jumps in and says "but what about...", do know that I have personally experimented with:

  • tinc: nice, automatic meshing, used for the Montreal mesh, serious design flaws in the crypto that make it generally unsafe to use; supposedly, v1.1 (or 2.0?) will fix this, but that's been promised for over a decade by now

  • ipsec, specifically strongswan: hard to configure (especially configure correctly!), harder even to debug, otherwise really nice because transparent (e.g. no need for special subnets), used at work, but also considering a replacement there because it's a major barrier to entry to train new staff

  • OpenVPN: mostly used as a client for VPN services like Riseup VPN or Mullvad, mostly relevant for client-server configurations, not really peer-to-peer, shared secrets or TLS, kind of a hassle to maintain, see also SoftEther for an alternative implementation

All of those solutions have significant problems and I do not wish to use any of those for this project.

Also note that Tailscale is only one of many projects laid over Wireguard to do that kind of thing; see this LWN review for others (basically NetBird, Firezone, and Netmaker).

Future work

Those are options that came up after writing this post, and might warrant further examination in the future.

  • Meshbird, a "distributed private networking" with little information about how it actually works other than "encrypted with strong AES-256"

  • Nebula, "A scalable overlay networking tool with a focus on performance, simplicity and security", written by Slack people to replace IPsec, docs, runs as an overlay for Slack's 50k node network, only packaged in Debian experimental, lagging behind upstream (1.4.0, from May 2021 vs upstream's 1.6.1 from September 2022), requires a central CA, Golang, I'm in "wait and see" mode for now

  • n2n: "layer two VPN", seems packaged in Debian but inactive

  • ouroboros: "peer-to-peer packet network prototype", sounds and seems complicated

  • QuickTUN is interesting because it's just a small wrapper around NaCL, and it's in Debian... but maybe too obscure for my own good

  • unetd: Wireguard-based full mesh networking from OpenWRT, not in Debian

  • vpncloud: "high performance peer-to-peer mesh VPN over UDP supporting strong encryption, NAT traversal and a simple configuration", sounds interesting, not in Debian

  • Yggdrasil: actually a pretty good match for my use case, but I didn't think of it when starting the experiments here; packaged in Debian, with the Golang version planned, Puppet module; major caveat: nodes exposed publicly inside the global mesh unless configured otherwise (firewall suggested), requires port forwards, alpha status

Conclusion

Right now, I'm going to deploy Wireguard tunnels with Puppet. It seems like kind of a pain in the back, but it's something I will be able to reuse for work, possibly completely replacing strongswan.

I have another Puppet module for IPsec which I was planning to publish, but now I'm thinking I should just abort that and replace everything with Wireguard, assuming we still need VPNs at work in the future. (I have a number of reasons to believe we might not need any in the near future anyways...)

28 October, 2022 05:01PM

hackergotchi for Shirish Agarwal

Shirish Agarwal

Shantaram, The Pyramid, Japan’s Hikikomori & Backpack

Shantaram

I know I have been quite behind in reviews of books but then that's life. First up is actually not so much a shocker as a pleasant surprise. So, a bit of background before I share the news. If you have been living under a rock, then about 10-12 years ago a book called Shantaram was released. While the book is said to have been released in 2003/4, I got my hands on it around 2008/09 or somewhere around that. The book is like a good meal, a buffet. To share the synopsis: Lin, a 20-something Australian guy, gets involved with a girl, she encourages him to get into heroin, and he becomes a heroin user. And drugs, especially hard drugs, need constant replenishment; it is a chemical thing. So, to fund those cravings, he starts to steal, escalating to robbing a bank, and while getting away he shoots a cop, who dies. Whether he surrenders or is caught is unclear, but he is tortured in jail. So one day, he escapes from prison, lands up at the home of somebody who owes him a favor, gets some money, gets a fake passport and lands up in Mumbai/Bombay as it was then known. This is from where the actual story starts: how a six-foot-something Australian guy, relying on his street smarts and know-how, transforms from Lin into Shantaram. Now what I have shared is perhaps just 5% of the synopsis; as shared, the real story starts here.

Now the good news: last week 4 episodes of Shantaram were screened by Apple TV. Interestingly, I have seen quite a number of people turning up to buy or get this book and also sharing it on Goodreads. Now there seem to be some differences from the book to TV. I'm relying on 10-12 year old memories, but IIRC Khaderbhai, one of the main characters who sort of takes Lin/Shantaram under his wing, is an Indian. In the series, he is western, or at least looks western/Middle Eastern to me. Also, they have tried to reproduce 1980s Mumbai/Bombay but dunno how accurate they were 😦 My impression of that city, from a couple of visits at that point in time, was that there were still more tongas (horse-ridden carriages), the occasional two-wheeler and not many three-wheelers. Although, it was one of the more turbulent times, as a lot of agitation for worker rights was happening around that time, along with a lot of industrial action. Later that led to a lot of closure of manufacturing in Bombay and it became more commercial. It would be interesting to know whether they shot it in actual India or just made a set somewhere in Australia, where it possibly might have been shot. The chawl of the book needs a bit of arid land and Australia has lots of it.

It is also interesting because this was a project that had the who's who interested in it for a long time, but somehow none of them was able to bring it to fruition. The project has a largely Australian cast, as well as second-generation Indians who grew up in Australia. To name names: Amitabh Bachchan, Johnny Depp and Russell Crowe each wanted to make it into a feature film. In retrospect, it is good it was not made into a movie; otherwise they would have had to cut a lot of material, and that perhaps wouldn't have been sufficient. Making it into a web series made sure they could have multiple seasons if people like it. There is a lot of story between now and 12 episodes, too much to even guess where it will leave you. So, if you have not read the book and have some holidays coming up, I can recommend it. The writing, IIRC, is easy and just flows. There is a bit of action, but there is much more nuance in the book, while the web series is naturally more about action. There is also quite a bit of philosophy between him and Khaderbhai, and while the series touches upon it, it doesn't do it justice; but then again, it is being made commercially.

Read the book, see the series, and share your thoughts on what you think. The series might go up or down from here, but I am sharing from where I see it. I may do another post at the end of the season, depending on where they leave it and my impressions.

Update – A slight update from the last blog post: it seems Rishi Sunak will be made PM of the UK. With Hunt as chancellor and Rishi Sunak as PM, Austerity 2.0 seems complete. There have been numerous articles which share how austerity gives rise to fascism and vice-versa. History gives a lot of lessons about this. In Germany, when the economy was not good, it was all blamed on the Jews for a number of years. This fed the rise of Hitler, and propaganda by him and his loyalists did the rest. And we know and have read about the Holocaust. Today quite a few Germans deny it, or deny parts of it, but that's how misinformation spreads. Also, Hitler is now looked at more as an aberration rather than something to do with the German soul. I am not going to say more, as there is still lots to share, and that perhaps requires its own blog post to do it justice.

The Pyramid by Henning Mankell

I had actually wanted to review this book, but then the bomb called Shantaram appeared and I had to post about it above. I had read two or three books before it, but most of them were about multiple beheadings and serial killers, enough to put anybody into depression. I do not know if modern crime fiction needs to show crime and desperation to such a level. This is why I, like most people, loved and continue to love Sherlock Holmes: most of the stories were not about gross violence but rather a homage to the art of deduction, which seems largely missing in modern crime thrillers in favour of grotesque stuff.

Like a breath of fresh air after those, I read/am reading The Pyramid by Henning Mankell. The book is about a character created by Henning Mankell named Kurt Wallander. I am aware of the TV series called Wallander but haven't yet seen it. The book starts with Wallander as a beat cop around age 20 and on his first case. He is ambitious, wants to become a detective, and has a narrow escape with death. I won't go much further into it, as it basically gives you an idea of the character, how he thinks, and what he does. He is intuitive by nature and somewhat of a loner. Probably most detectives IRL are; I have no clue. At least in the literary world it makes sense; in the real world I suspect there would be much irony, for sure. This is speculation on my part, who knows.

Back to the book though. The book has 5 stories and is a sort of prequel, but also a kind of anthology: it covers Wallander from his first case as a beat cop in 1969 through to the 1990s, where his career is winding down.

Before I start sharing about the stories, I found the foreword quite interesting too. It asks questions about the interplay between the role of the welfare state and Swedish democracy. Incidentally, I did watch a couple of videos about the mixed sort of political representation that happens in Sweden; it uses what is known as proportional representation. Ironically, Sweden made a turn to the far right this election season. The book was originally in Swedish and was translated into English by Ebba Segerberg and Laurie Thompson.

While all the stories are interesting, I would share the last three, which I found the most intriguing, or at least pose their questions of intrigue. Of course, to answer them you would need to read the book 😉

The first one is titled 'Man on the Beach'. A gentleman goes to one of the beaches, a sort of lonely beach, hails a taxi, and while returning suddenly dies. The taxi driver, showing good presence of mind, takes him to hospital, where the gentleman is declared dead on arrival. Unlike in India, the driver doesn't run away but goes to the cafeteria and waits there for the cops to arrive and take his statement. Now, the man is in his early 40s and looks to be fit. Upon searching his pockets, he is found to be relatively well-off, and it later turns out he owns a couple of shops. So here are the questions:

What was the man doing on the beach? In summer that beach is somewhat popular, but at other times not so much, so what was he doing there?

How did he die? Was it a simple heart attack or something more? If he had been drugged or something, then when and how?

These and more questions can be answered by reading the story ‘Man on the Beach’.

2. The Death of a Photographer – Kurt lives in a small town where almost all the residents have been served, one way or another, by the town photographer. The man was polite and had worked for something like 40 odd years before he is killed/murdered, apparently late at night. So here come the questions –

a. The shop doesn’t even stock any cameras, and his cash box still has cash in it. Further investigation reveals it is approximately his average takings for the day. So if it’s not for cash, then what is the motive?

b. The body was discovered by his cleaning lady, who has worked for him for almost 20 years, 3 days a week, and has her own set of keys to come and clean the office. Did she give the keys to someone, and if so, why?

c. Even after investigation, there is no scandal about the man: no other woman, nor any vices like gambling that could rack up loans. Also, nobody seems to have really known him, yet everyone took him for granted until he died. The whole thing appears to be quite strange. Again, the answers lie in the book.

3. The Pyramid – Kurt is sleeping one night when the telephone rings. The ‘scene’ starts with a Piper Cherokee, a single-piston aircraft, flying low and dropping something somewhere, or picking somebody up, on the coast of Sweden. It turns and, after a while, crashes. Kurt is called to investigate. It turns out the plane was supposed to have been destroyed. In the crash, both the pilot and the passenger are torn to pieces, so only dental records can prove who they are. The same day, or a day or two later, the charred bodies of two seemingly ordinary, somewhat elderly women are found :( Spinsters by all accounts, they lived above the shop where they sold buttons and all kinds of sewing supplies to the town, and they seemed middle-class. So here come the questions –

a. Did the plane drop something, or pick something or somebody up? The Cherokee is a small plane, so it could have landed in any open field, or, if a spot was somehow marked, something could have been dropped or picked up without actually landing.

b. The firefighter suspects arson, started in multiple places with the use of petrol. The question is, why would somebody want to do that? The sisters don’t seem to be wealthy, and practically everybody had bought stuff from them. They weren’t popular, but they weren’t unpopular either.

c. Are the two crimes connected or unconnected? If connected, then how?

d. Most important question: why is the story titled 'The Pyramid'? Why does the author choose that name? Does he mean an actual pyramid, or something metaphorical? He could have named it 'Triangle'. Again, answers to all the above can be found in the book.

One thing I also became very aware of while reading the book is how difficult it is to understand people's behavior and what they do, and this is without any criminality even involved. Let's say, for example, I die in some mysterious circumstances: the chance of the police reconstructing my actions in my last days would be limited, and this is even though I have hearing loss. This probably has more to do with how our minds are wired. And most people I know are much more privacy conscious/aware than I am.

Japan’s Hikikomori

Japan has been a curious country. It was more or less a colonizer and somewhat of a feared power till it unnecessarily dragged the U.S. into World War 2. The result of the two atom bombs and the reconstruction that followed meant that Japan had to build itself up again from the ground. It is also in a seismically unstable place, with frequent earthquakes, although the buildings are hardened/balanced to make sure that vibrations don't tear them apart; years ago I saw a Natgeo documentary that explains all that. Apart from that, Japan was helped by the Americans, and there was good kinship between them until the 1980s, when Japan signed the Plaza Accord, which inflated asset price bubbles that eventually burst, something from which they are smarting even today. Japan has a constitutional monarchy; a bit of the history, and why it exists even today, can be found here. Between the asset price bubbles of the 1980s, more than 50 percent of the population on zero-hour contracts, and the rest tending to suffer from overwork (there is a term, karoshi, that explains it all), life is hard. An Indian pig-pen would be two, two and a half times larger than a typical Japanese home; most Japanese live in micro-apartments called 'konbachiku'. All of the above stresses mean that lately many young Japanese people have become hikikomori. Bloomberg featured a piece about this a couple of years back. I came to know about it because many Indians are given the idea of Japan being a successful country without knowing the ills and issues it faces. Even there, most women get the short end of the stick, i.e. even if they manage to find jobs, it tends to be back-breaking menial work. The employment statistics of Japan's internal ministry tell their own story.

If you look at the data above, it seems that between 2002 and 2019 the share of zero-hour contracts increased while regular work decreased. This also means that those at the bottom of the ladder can no longer afford a home. There is a video called 'Lost in Manboo' that went viral a few years ago. It is a perfect storm. Add to that the Fukushima nuclear incident, about which I shared a few years ago. While the workers were blamed, all the design decisions had been taken by management. And as was shown in numerous movies, documentaries etc., interestingly and somewhat ironically, the line workers knew the correct things to do and the correct decisions to take, unlike the management. The shut-ins story is almost a decade or two old. It is a similar story in South Korea, though not as depressive as in Japan. It is a somewhat depressing story, but it needed to be shared. The stories in the Bloomberg article make your heart ache 😦

Backpacks

In and around 2015, I bought a Targus backpack, very similar to the Targus TSB194US-70 Motor 16-inch Backpack. That bag has given me a lot of comfort over the years, but it has now become frayed, and the zip sometimes works and sometimes doesn't. Unlike in those days, there are now a bunch of companies operating in India. I came to know of eight different options: Aircase, Harrisons Sirius, HP Odyssey, Mokobara, Arctic Hunter, Dell Pro Hybrid, Dell Roller Backpack, and lastly the Decathlon Quechua Hiking backpack 32L – NH Escape 500. Of all the above, two backpacks seem the best. The first is the Harrisons Sirius; with 45L capacity, I don't think I would need another bag at all. The runner-up is the Decathlon Quechua Hiking Backpack 32L. One of the better things about all these bags is that they have hidden pockets for easily taking a passport in and out while being anti-theft. I do not have to stress how stressful it is to take out a passport and put it back in; almost all the vendors have made sure that it is not a stress point anymore. The good thing about the Quechua is that they give a 10-year warranty; the question to ask is whether the warranty covers the zip, as zips are the first thing to go in bags. That is what actually happened to my current bag. Decathlon has a store in Wakad, Pune, while I have reached out to the gentleman in charge of Harrisons India to see if they have a reseller in Pune. So hopefully, within the next week, I should have a backpack that isn't spilling things all over the place, whichever one I'm able to figure out.

28 October, 2022 04:09AM by shirishag75

Reproducible Builds (diffoscope)

diffoscope 225 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 225. This version includes the following changes:

[ Chris Lamb ]
* Add support for detecting ordering-only differences in XML files.
  (Closes: #1022146)
* Fix an issue with detecting ordering differences. (Closes: #1022145)
* Add support for ttx(1) from fonttools.
  (Re: reproducible-builds/diffoscope#315)
* Test improvements:
  - Tidy up the JSON tests and use assert_diff over get_data and manual
    assert in XML tests.
  - Rename order1.diff to json_expected_ordering_diff for consistency.
  - Temporarily allow the stable-po pipeline to fail in the CI.
* Use consistently capitalised "Ordering" everywhere we use the word in
  diffoscope's output.

You can find out more by visiting the project homepage.

28 October, 2022 12:00AM

October 27, 2022

hackergotchi for Emmanuel Kasper

Emmanuel Kasper

Convert a root filesystem to a bootable disk image

The year is 2022, and it is still that complicated to install GRUB2 externally onto a disk image.

But using the wonders of libguestfs, you can create a bootable disk image using a qemu VM abstraction very easily. The steps here assume we want to create a disk with a single partition containing the root filesystem.

Create an empty disk image, partition it

$ truncate --size 40G target.img
$ virt-format --add target.img --partition=mbr --filesystem=ext4

copy the root file system into a partition

$ cd path/to/root/fs
$ sudo tar --numeric-owner -cvf - . | guestfish --rw --add ../target.img --mount /dev/sda1:/ -- tar-in - /

install grub using guestfish

$ guestfish --add target.img --inspector

and in the guestfish prompt:

>> command 'grub-install /dev/sda'
>> command 'update-grub'
# also make sure init can mount our root partition
>> write /etc/fstab '/dev/sda1 / ext4 defaults 0 1'
>> exit

test boot the disk image

$ kvm -m 1024 -drive file=target.img,format=raw
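For reference, the interactive guestfish steps above can also be collected into a single non-interactive run. This is an untested sketch; `/dev/sda1` assumes the single-partition layout created earlier:

```
$ guestfish --rw --add target.img --inspector <<'EOF'
command 'grub-install /dev/sda'
command 'update-grub'
write /etc/fstab '/dev/sda1 / ext4 defaults 0 1'
EOF
```

guestfish reads commands from stdin when it is not attached to a terminal, so this heredoc form is handy in scripts.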

27 October, 2022 04:21PM by Manu

Michael Ablassmeier

fun with pygame

Next year my son will turn 4. I quit playing computer games a pretty long time ago, but recently I asked myself: what will be the first computer game he's going to play? Why not create a simple game myself?

Living in the countryside, his attention has been drawn to farming machines for quite some time now, and that topic never grows old for him, which makes for a perfect game setting.

The game logic should be pretty simple: a tiling 2D jump game where you have to make a tractor jump over appearing obstacles. There are different vehicles and backgrounds to choose from, and a set of “lives”, with randomly generated “coins” which you have to catch to undo earlier failures.
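The core of that jump-and-lives logic can be sketched in plain Python (names and constants are mine, not the actual game's code; pygame's coordinate system, where y grows downward, is assumed):

```python
GROUND_Y = 0          # ground level; y < GROUND_Y means airborne
GRAVITY = 1           # downward acceleration applied each frame
JUMP_VELOCITY = -10   # negative = upward in pygame-style coordinates

class Tractor:
    def __init__(self, lives=3):
        self.y = GROUND_Y
        self.vy = 0
        self.lives = lives

    def jump(self):
        # only allow jumping while standing on the ground
        if self.y == GROUND_Y:
            self.vy = JUMP_VELOCITY

    def update(self):
        # advance one frame: apply gravity, move, clamp to the ground
        self.vy += GRAVITY
        self.y = min(self.y + self.vy, GROUND_Y)
        if self.y == GROUND_Y:
            self.vy = 0

    def hit_obstacle(self):
        # an obstacle only costs a life if the tractor failed to jump it
        if self.y == GROUND_Y:
            self.lives -= 1

    def catch_coin(self):
        # coins undo one earlier failure
        self.lives += 1
```

In a real pygame loop each frame would call `update()` and redraw the sprite at the new y, with `pygame.Rect` collision checks deciding when `hit_obstacle()` or `catch_coin()` fires.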

Never having done anything related to pygame the learning curve has been quite good so far :-)

The part I spent most time on was searching for free assets and pixel art that I'm able to use. Gimp also made me lose quite some hair while I failed to crop/resize images to the right size so that the placements within the different maps matched.

I used pyinstaller to make it somewhat portable (it needs to run on Windows too), and building the artifacts using GitHub Actions was quite a nice experience. Let's see where this goes next; lots of ideas come to mind :)


https://github.com/abbbi/trktor

27 October, 2022 12:00AM

October 26, 2022

Scarlett Gately Moore

KDE Gear snaps round 3!

While trying to stay warm during our first snow of the year, I got several apps tested and released in round 3, following https://www.scarlettgatelymoore.dev/kde-gear-snaps-round-2/ ! All of these are being (re)tested on both arm64 and amd64; hence, the release is going slower than it will in the future. Thank you for your patience.

  • Kalzium ( again ) Molecule editor works. Known bug downloading molecules. Just go to Home/Documents and rename the file hotsuff-access.xml to themolecule.cml and load molecule as a temporary work around until a permanent fix comes about. https://bugs.kde.org/show_bug.cgi?id=461027
  • Ktuberling
  • Kbounce
  • Kmplot
  • Kbreakout
  • Knavalbattle
  • Kubrick ( Not on chromebook unfortunately, opengl errors need looking into. )
  • Kbruch
  • Knetwalk
  • Kwordquiz
  • Knights ( Sound is fixed! )
  • Kollision
  • Lokalize
  • Kfourinline

More rounds of bug gardening have been done. If you find bugs, or know that one of your bugs is related to the app being a snap, please assign it to me. https://bugs.kde.org

Please consider a donation so I may bring many more apps your way.

https://www.patreon.com/sgmoore

I am working on one time donations.

26 October, 2022 08:52PM by sgmoore

October 25, 2022

Arturo Borrero González

Netfilter Workshop 2022 summary

Netfilter logo

This is my report from the Netfilter Workshop 2022. The event was held on 2022-10-20/2022-10-21 in Seville, and the venue was the offices of Zevenet. We started on Thursday with Pablo Neira (head of the project) giving a short welcome / opening speech. The previous iteration of this event was held virtually in 2020, two years ago. In 2021 we were unable to meet either in person or online.

This year, the number of participants was just eight people, and this allowed the setup to be a bit more informal. We had kind of an un-conference style meeting, in which whoever had something prepared just went ahead and opened a topic for debate.

In the opening speech, Pablo did a quick recap of the legal problems the Netfilter project had a few years ago, a topic that was settled for good some months ago, in January 2022. There was no news on this front, which was definitely a good thing.

Moving into the technical topics, the workshop proper, Pablo started by commenting on the recent developments to instrument a way to perform inner matching for tunnel protocols. The current implementation supports VXLAN, IPIP, GRE and GENEVE. Using nftables you can match packet headers that are encapsulated inside these protocols. He mentioned the design and the goals: to have a kernel-space setup that allows adding more protocols by just patching userspace. In that sense, more tunnel protocols will be supported soon, such as IP6IP, UDP, and ESP. Pablo requested our opinion on whether nftables should generate the matching dependencies. For example, if a given tunnel is UDP-based, a dependency match should be there, otherwise the rule won’t work as expected. The agreement was to assist the user in the setup when possible, and if not, print clear error messages. By the way, this inner thing is pure stateless packet filtering. Doing inner-conntracking is an open topic that will be worked on in the future.

Pablo continued with the next topic: nftables automatic ruleset optimizations. The times of linear ruleset evaluation are over, but some people have a hard time understanding / creating rulesets that leverage maps, sets, and concatenations. This is where the ruleset optimizations kick in: they can transform a given ruleset to be more optimal by using such advanced data structures. This is purely about optimizing the ruleset, not about validating its usefulness, which could be another interesting project. There were a couple of problems mentioned, however. The ruleset optimizer can be slow, O(n!) in the worst case. And the user needs to use nested syntax. More improvements to come in the future.
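As a rough illustration of what the optimizer does (addresses are hypothetical; the -o/--optimize flag is present in recent nft versions, though exact output may vary):

```
# a naive linear ruleset, e.g. in ruleset.nft:
#   ip saddr 192.168.1.1 accept
#   ip saddr 192.168.1.2 accept
#
# dry-run the optimizer without loading anything into the kernel:
$ nft -c -o -f ruleset.nft
#
# the suggested rewrite merges the rules into one set-based rule, roughly:
#   ip saddr { 192.168.1.1, 192.168.1.2 } accept
```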

Next was Stefano Brivio’s turn (Red Hat engineer). He had been involved lately in a couple of migrations to nftables, in particular libvirt and KubeVirt. We were pointed to https://libvirt.org/firewall.html, and Stefano walked us through the 3 or 4 different virtual networks that libvirt can create. He evaluated some options to generate efficient rulesets in nftables to instrument such networks, and commented on a couple of ideas: having a “null” matcher in nftables set expression. Or perhaps having kind of subsets, something similar to a ‘view’ in a SQL database. The room spent quite a bit of time debating how the nft_lookup API could be extended to support such new search operations. We also discussed if having intermediate facilities such as firewalld could provide the abstraction levels that could make developers more comfortable. Using firewalld also may have the advantage that coordination between different system components writing ruleset to nftables is handled by firewalld itself and developers are freed of the responsibility of doing it right.

Next was Fernando F. Mancera (Red Hat engineer). He wanted to improve error reporting when deleting tables/chains/rules with nftables. In general, there are some inconsistencies in how tables can be deleted (or flushed), and there seems to be no correct way to make a single table go away, with all its content, in a single command. The room agreed that the commands destroy table and delete table should be defined consistently, with the following meanings:

  • destroy: nuke the table, don’t fail if it doesn’t exist
  • delete: delete the table, but the command will fail if it doesn’t exist
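In other words, under the semantics agreed in the room (illustrative commands; the destroy verb was a proposal at this point, not yet implemented):

```
$ nft delete table inet filter    # errors out if "filter" does not exist
$ nft destroy table inet filter   # removes it if present, succeeds either way
```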

This topic diverted into another: how to reload/replace a ruleset but keep stateful information (such as counters).

Next was Phil Sutter (Netfilter coreteam member and Red Hat engineer). He was interested in discussing options to make iptables-nft backward compatible. The use case he brought was simple: what happens if a container running iptables 1.8.7 creates a ruleset with features not supported by 1.8.6? A later container running 1.8.6 may fail to operate. Phil’s first approach was to attach additional metadata to rules to assist older iptables-nft in decoding and printing the ruleset. But in general, there are no obvious or easy solutions to this problem. Some people are mixing different tooling versions, and there is no way all cases can be predicted/covered. iptables-nft already refuses to work in some of the most basic failure scenarios.

Another way to approach the issue could be to introduce some kind of support for printing raw expressions in iptables-nft, like -m nft xyz, which feels ugly but may work. We also explored playing with the semantics of release version numbers. And another idea: storing strings in the nft rule userdata area with the equivalent matching information for older iptables-nft.

In fact, what Phil may have been looking for is not backwards but forward compatibility. Phil was undecided which path to follow, but perhaps the most common-sense approach is to fall back to a major release version bump (2.x.y) and declaring compatibility breakage with older iptables 1.x.y.

That was pretty much it for the first day. We had dinner together and went to sleep for the next day.

The room

The second day was opened by Florian Westphal (Netfilter coreteam member and Red Hat engineer). Florian has been trying to improve nftables performance in kernels with RETPOLINE mitigations enabled. He commented that several workarounds have been collected over the years to avoid the performance penalty of such mitigations. The basic strategy is to avoid function indirect calls in the kernel.

Florian also described how BPF programs work around this more effectively. And actually, Florian tried translating nf_hook_slow() to BPF. Some preliminary benchmark results were shown, with about a 2% performance improvement in MB/s and PPS. The flowtable infrastructure especially benefits from this approach. The software flowtable infrastructure already offers a 5x performance improvement with regard to the classic forwarding path, and the change being researched by Florian would be an addition on top of that.

We then moved into discussing the meeting Florian had with Alexei in Zurich. My personal opinion was that Netfilter offers interesting user-facing interfaces and semantics that BPF does not. Whereas BPF may be more performant in certain scenarios. The idea of both things going hand in hand may feel natural for some people. Others also shared my view, but no particular agreement was reached in this topic. Florian will probably continue exploring options on that front.

The next topic was opened by Fernando. He wanted to discuss Netfilter involvement in Google Summer of Code and Outreachy. Pablo had some personal stuff going on last year that prevented him from engaging in such projects. After all, GSoC is not fundamental or a priority for Netfilter. Also, Pablo mentioned the lack of support from others in the project for mentoring activities. There was no particular decision made here. Netfilter may be present again in such initiatives in the future, perhaps under the umbrella of other organizations.

Again, Fernando proposed the next topic: nftables JSON support. Fernando shared his plan of going over all features and introducing programmatic tests for them. He also mentioned that the nftables wiki was incomplete and couldn’t be used as a reference for missing tests. Phil suggested running the nftables python test-suite in JSON mode, which should complain about missing features. The py test suite should cover pretty much all statements and variations in how the nftables expressions are invoked.

Next, Phil commented on nftables xtables support. This is, supporting legacy xtables extensions in nftables. The most prominent problem was that some translations had some corner cases that resulted in a listed ruleset that couldn’t be fed back into the kernel. Also, iptables-to-nftables translations can be sloppy, and the resulting rule won’t work in some cases. In general, nft list ruleset | nft -f may fail in rulesets created by iptables-nft and there is no trivial way to solve it.

Phil also commented on potential iptables-tests.py speed-ups. Running the test suite may take a very long time depending on the hardware. Phil will try to re-architect it so it runs faster. Some alternatives had been explored, including collecting all rules into a single iptables-restore run, instead of hundreds of individual iptables calls.

Next topic was about documentation on the nftables wiki. Phil is interested in having all nftables code-flows documented, and presented some improvements in that front. We are trying to organize all developer-oriented docs on a mediawiki portal, but the extension was not active yet. Since I worked at the Wikimedia Foundation, all the room stared at me, so at the end I kind of committed to exploring and enabling the mediawiki portal extension. Note to self: is this perhaps https://www.mediawiki.org/wiki/Portals ?

Next presentation was by Pablo. He had a list of assorted topics for quick review and comment.

  • We discussed nftables accept/drop semantics. People that get two or more rulesets from different software are requesting additional semantics here. A typical case is fail2ban integration. One option is quick accept (no further evaluation if accepted) and the other is lazy drop (don’t actually drop the packet, but delay the decision until the whole ruleset has been evaluated). There was no clear way to move forward with this.
  • A debate on nft userspace memory usage followed. Some people are running nftables on low-end devices with very little memory (such as 128 MB). Pablo was exploring a potential solution: introducing struct constant_expr, which can reduce memory usage by 12.5%.
  • Next we talked about repository licensing (or better, relicensing to GPLv2+). Pablo went over a list of files in the nftables tree which had diverging licenses. All people in the room agreed on this relicensing effort. A mention to the libreadline situation was made.
  • Another quick topic: a bogus EEXIST in nft_rbtree. Pablo & Stefano to work in a patch.
  • Next one was conntrack early drop in flowtable. Pablo is studying use cases for some legitimate UDP unidirectional flows (like RTP traffic).
  • Pablo and Stefano discussed pipapo not being atomic on updates. Stefano already looked into it, and one of the ideas was to introduce a new commit API for sets.
  • The last of the quick topics was an idea to have a global table in nftables. Or some global items, like sets. Folk in the community keep asking for this. Some ideas were discussed, like perhaps adding a family agnostic family. But then there would be a challenge: nftables would need to generate byte code that works in any of the hooks. There was no immediate way of addressing this. The idea of having templated tables/sets circulated again as a way of reusing data across namespaces/families.

Following this, a new topic was introduced by Stefano. He wanted to talk about nft_set_pipapo: documentation, what to do next, etc. He gave a nice explanation of how the pipapo algorithm works for element inserts, lookups, and deletions. The source code is pretty well documented, by the way. He showed performance measurements for different data types being stored in the structure. After some lengthy debate on how to introduce changes without breaking usage for users, he declared some action items: writing more docs, addressing problems with non-atomic set reloads, and a potential rework of nft_rbtree.

After that, the next topic was ‘kubernetes & netfilter’, also by Stefano. Actually, this topic was very similar to what we already discussed regarding libvirt. Developers want to reduce packet matching effort, but also often don’t leverage nftables most performant features, like sets, maps or concatenations.

Some Red Hat developers are already working on replacing everything with native nftables & firewalld integrations. But some rules generators are very bad. Kubernetes (kube-proxy) is a known case. Developers simply won’t learn how to code better ruleset generators. There was a good question floating around: What are people missing on first encounter with nftables?

The Netfilter project doesn’t have a training or marketing department or something like that. We cannot force-educate developers on how to use nftables in the right way. Perhaps we need to create a set of dedicated guidelines, or best practices, in the wiki for app developers that rely on nftables. Jozsef Kadlecsik (Netfilter coreteam) supported this idea, and suggested going beyond: such documents should be written exclusively from the nftables point of view: stop approaching the docs as a comparison to the old iptables semantics.

Related to that last topic, next was Laura García (Zevenet engineer, and venue host). She shared the same information as she presented in the Kubernetes network SIG in August 2020. She walked us through nftlb and kube-nftlb, a proof-of-concept replacement for kube-proxy based on nftlb that can outperform it. For whatever reason, kube-nftlb wasn’t adopted by the upstream kubernetes community.

She also covered the latest changes to nftlb and some missing features, such as integration with nftables egress. nftlb is being extended to be a full proxy service and a more robust overall solution for service abstractions. In a nutshell, nftlb uses a templated ruleset and only adds elements to sets, which is exactly the right usage of the nftables framework. Some other projects should follow its example. The performance numbers are impressive, and from the early days it was clear that it was outperforming classical LVS-DSR by 10x.
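A minimal sketch of that "templated ruleset, only touch the elements" style (hypothetical addresses; the numgen-based round-robin is standard nftables, but this is not nftlb's actual generated ruleset):

```
table ip lb {
    chain prerouting {
        type nat hook prerouting priority dstnat; policy accept;
        # the rule itself never changes; only the balancing targets do
        ip daddr 10.0.0.1 tcp dport 80 dnat to numgen inc mod 2 map { 0 : 192.168.0.10, 1 : 192.168.0.11 }
    }
}
```

Because the rule is static, reconfiguring backends becomes an element update rather than a ruleset reload, which is what makes this pattern fast.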

I used this opportunity to bring up a topic that I wanted to discuss. I’ve seen some SRE coworkers talking about katran as a replacement for traditional LVS setups. This software is an XDP/BPF based solution for load balancing. I was puzzled about what this software had to offer versus, for example, nftlb or any other nftables-based solution. I commented on the highlights of katran, and we discussed the nftables equivalents. nftlb is a simple daemon which does everything using a JSON-enabled REST API. It is already packaged into Debian, ready to use, whereas katran feels more like a collection of steps that you need to run in a certain order to get it working. All the hashing, caching, HA without state sharing, and backend weight selection features of katran are already present in nftlb.

To work on a pure L3/ToR datacenter network setting, katran uses IPIP encapsulation. It can’t just mangle the MAC address as in traditional DSR because the backend server is on a different L3 domain. It turns out nftables has a nft_tunnel expression that can do this encapsulation for complete feature parity. It is currently only available in the kernel, but it can easily be made available in the userspace utility too.

Also, we discussed some limitations of katran, for example, inability to handle IP fragmentation, IP options, and potentially others not documented anywhere. This seems to be common with XDP/BPF programs, because handling all possible network scenarios would over-complicate the BPF programs, and at that point you are probably better off by using the normal Linux network stack and nftables.

In summary, we agreed that nftlb can pretty much offer the same as katran, in a more flexible way.

Group photo

Finally, after many interesting debates over two days, the workshop ended. We all agreed on the need for extending it to 3 days next time, since 2 days feel too intense and too short for all the topics worth discussing.

That’s all on my side! I really enjoyed this Netfilter workshop round.

25 October, 2022 09:00AM

October 20, 2022


Mike Gabriel

Ubuntu Touch development - Wanna sponsor ARM64 CPU power for CI build infrastructure?

What is Ubuntu Touch? (And what does sunweaver have to do with it?)

With Ubuntu Touch, the UBports Foundation offers a truly unique mobile experience - a viable alternative to Android and iOS. The UBports community provides a free and open-source GNU/Linux-based mobile operating system. One that can be installed and used today.

Currently, there is an intensive effort going on lifting Ubuntu Touch from its current Ubuntu 16.04 base up to an Ubuntu 20.04 base. (And very soon after that to an Ubuntu 22.04 base...).

With the Ubuntu Touch 20.04 base the progress bar is already at (I'd say) 89%, but we recently hit a setback.

I am currently involved in the Ubuntu Touch core development team at UBports, and on rather short notice our current ARM64 server sponsor has announced that it will decommission the ARM64 build server that currently powers all the ARM64 and armhf CI builds.

Call for Hardware Sponsoring

So, the UBports core development team is currently desperately looking for a sponsor (or a few sponsors) who can provide us with (datacenter-hosted) ARM-based CPU power. It is important that the provided hardware also supports 32-bit ARM (armhf) builds.

For testing, I recently ordered a HoneyComb LX2 (by SolidRun) as a possible solution (multi-node in the end), but the board arrived in a non-usable state, it seems. So this also didn't work out as easily as expected.

As the former provider/sponsor is about to pull the plug, this call for help is kind of urgent. Please get in touch if you can help us out or know people who can.

Thanks!!!
Mike (sunweaver at debian.org, libera.chat, OFTC, Telegram, matrix.org, etc.)

20 October, 2022 07:59PM by sunweaver

Scarlett Gately Moore

KDE Gear Snaps round 2

As a continuation of https://www.scarlettgatelymoore.dev/new-kde-gear-snaps-in-the-works/

Today's releases, tested on both amd64 and arm64, are:

  • Falkon
  • Umbrello
  • Step
  • Kompare
  • Rocs ( First time release on arm! )
  • Kgoldrunner
  • Gwenview

This week has also been a busy week gardening snap bugs in bugs.kde.org. They are all over the place 🙁 I am trying to sort out getting them their own section. I have assigned all snap bugs I have found to myself and requested that this become the default. If you have bugs, please report them at bugs.kde.org, for now under neon / Snaps.

More coming next week!

20 October, 2022 04:52PM by sgmoore

October 19, 2022


Shirish Agarwal

Pune Rains, Uncosted Budgets, Hearing Loss Covid, Fracking

Pune Rains

Lemme start with a slightly funny picture that tells as much about Pune, my city as anything else does.

Pune- Leave your attitude behind, we have our own

This and similar tags, puns and whatnot you will find if you are entering Pune by the road highway. You can also find similar symbols and puns all over the city; they are partly sarcasm and irony, and partly the truth. Puneites work from the attitude that they know everything rather than nothing, including yours truly 😉 . What the basis of that confidence is, I have no clue or idea; it is what it is. Approximately 24 hrs. ago, apparently we had a cloudburst. What I came to know later is that we got 100 mm of rain. Sharing from a local news site. Much more interesting was a thread made on Reddit where many people half-seriously asked where they can buy a boat. One of the reasons being that even though it’s October, in fact past the middle of October, it’s still raining. Even today in the evening, it rained for quite a while. As I had shared in a few blog posts before, June, when the rains should have started, they didn’t; they actually started late July or even August, so something has shifted. The current leadership does not believe in anthropogenic climate change, or human-activity-induced climate change, even though it is a reality. I could share many links, and even searching for the term above should give links to various studies. Most of the people who are opposed to it are either misinformed or influenced by the fossil fuel industry. Again, I could share many links, but will share just one atm. I have talked to quite a few people about it, but nobody has ever been able to give a convincing answer as to why GM had to crush the cars. Let’s even take the argument that it was the worst-manufactured car in history, and there have been quite a few; have the others been crushed? If not, then the reason shared or given by most people sounds hollow.
And if you look into it, they had an opportunity that they let go, and now most of them are scrambling; yet most of the legacy auto manufacturers will be out of existence if they don’t get back into the game in the next 2-3 years. There have been a bunch of announcements, but we are yet to see results. The Chinese though have moved far ahead, although one has to remark that they have been doing this for the last decade, so they have a 10-year head start; hardly surprising, then.

But I need to get back to the subject. Another gentleman on Reddit remarked that if you start to use a boat, and others start to use boats, then the Govt. will tax them. In fact, somebody had shared the below the other day –

Different types of taxes collected by GOI

Many of the taxes that I have shared above are by the Modi Govt., who came to power on a platform and manifesto promising that they would reduce taxes for the common man. They have reduced taxes, but only for the corporates. For the common man, the taxes have only gone up, both direct tax and indirect tax. Any resemblance to the Tory party, who have done similar things and have also claimed that it was Labour who ran up large expenditures even though they themselves have been in power for 8 years, is, I am sure for most, purely coincidental. Incidentally, that is the same tack that was taken even by the Republican party. They all like to give tax benefits to the 1% while for the rest it is austerity, claiming some reason or other, even if it has been proven to be false.

Corporate Tax Rate, Revenue Loss to Govt.

The figures mentioned above are findings of a parliamentary panel, so nobody can accuse anybody of having a bias. Also, I probably had shared this before, but still feel the need to re-share it, as people still believe that the 2G scam happened even though there are plenty of arguments I can share to prove how it was all a fabricated lie.

Vinod Rai Mafinama in Uttarakhand High Court.
Part 2 of the same Mafinama.

How pathetic Mr. Rai’s understanding of economics is can be gauged from the fact that he was made Chairman of IDFC and subsequently had to be thrown out. That whole lie was engineered to throw the UPA out, and it worked. There are and have been many such ‘coincidences’ happening over the last 8 years, parallel stories happening in India and the UK.

This was just yesterday. About a year back, Air India was given back to the Tatas. There was controversy about the ‘supposed auction’, as Indigo, the only other party allowed at the auction, was not allowed to buy and was more of a spectator, since they already have 60% of the Indian civil aviation market. And there was a lot of cheering from the Govt. side that finally Air India had been brought home to its true owners, the Tatas. The Tatas too started cheering and sharing how they would take down all the workers and worker unions, and everything would be happy glory within a year. In fact, just a couple of days back they shared new plans. Btw, for the takeover of Air India they had taken loans from the banks, and they are in the category of too big to fail. As I have shared a couple of times before, the RBI has not shared any inspection reports of nationalized or private banks after 2013/14, while by law, the RBI is supposed to do inspection reports every 3 months and share them in the public domain. And if you ask any of their supporters, for everything they will say UPA did x or y, which only goes to show how morally bankrupt the present Govt. is. Coming back to the topic, before I forget, the idea of sharing their plans is so that they can again borrow money from the banks.

But that is not the only story. Just one day back, Smita Prakash, one of the biggest cheerleaders of the present Govt. (she is the boss at Asian News International (ANI)), posted how Air India had treated her sister and 21 other passengers. Basically, they had bought business-class tickets, but the whole cabin was dirty; they complained and were forced to sit in economy class, not just her cousin sister but the other 21-odd passengers too. Of course Ms. Smita calmed down once her sister was given free air tickets on Vistara and other goodies. Of course, after that she didn’t post anything about the other 21-odd passengers. And yes, I understand she is supposed to be a reporter, but as can be seen from the Twitter thread, there is or was no follow-up. Incidentally, she is one of many who have been calling out others about ‘revdi culture’ (freebies to the masses), but I guess that only applies to other people, not her or her sister. Again, if there are any coincidences of a similar nature in the UK, or from when Trump was President of the U.S., they are just ‘coincidental’.

The Uncosted budget

India and the UK have many parallels; it’s mind-boggling. Before we get into the nitty-gritties, I saw something that would be of some interest to the people here.

For those who might not be able to see the above: apparently there is a place in the UK called Tufton Street, which houses quite a few shadowy organizations whose finances, and how they are funded, are not known. Ms. Truss and quite a few of the people in her cabinet are from these same shadowy organizations. Mr. Kwasi Kwarteng, the just-departed chancellor, is and was part of the same group.

Now, even for me it was a new term, to learn and understand what an uncosted budget is. To make it easier, I will share an example of a common person who goes to the bank for a loan –

Now M/s X wants a loan of, say, Rs. 1000/- for whatever reason. They go to the bank and ask for a loan of INR 1000/-. The bank asks them to produce a statement of accounts to show their financial position. They produce a somewhat half-filled statement of accounts, in which all liabilities are shown but incomes are not. The bank says: you already have so many liabilities, how are you going to pay for those? The accounts have to balance, otherwise you are not solvent. M/s X adamantly refuses to make any changes, citing that they don’t need to.

At this point, M/s X’s credit rating goes down and nobody in the market will give them a loan. At the same time, the value of the assets they hold also depreciates, because it has become known that they can’t act responsibly. So who’s to say whether or not M/s X really has those assets and has priced them accurately? But the drama doesn’t end there. M/s X says this is actually the responsibility of M/s Z (cue: the Bank of England), as they are my accountant/lawyer etc. M/s Z responds, as any lawyer/accountant should, that this is not under their remit: if the client gives incomplete information, or false information, or whatever, then it is the client’s responsibility, not theirs. And in fact, it is the Chancellor who is given the responsibility of making the budget.

The Chancellor is very similar to our Finance Minister. Because the UK is a constitutional monarchy, I am guessing the terms are slightly different, but otherwise the functionality seems to be the same. For two weeks there was a lot of chaos, many pension funds lost quite a bit in the market, and in the end Mr. Kwasi Kwarteng was ousted from the job. Incredibly, the same media and newspapers who had praised Mr. Kwarteng’s budget just a few weeks back as the best Tory budget couldn’t wait to bury him. And while I have attempted to simplify what happened, the best explanation of it can be found in an article from the Guardian. Speculation is rife in the UK as to who’s ruling atm, as the new Chancellor has reversed almost all the policies that Ms. Truss had brought in, and she is now more or less a figurehead. Mr. Hunt, the new Chancellor, doesn’t have anybody behind him. Apparently, the gentleman wanted to throw his hat in the ring in the Tory leadership contest that was held about a month back, and he couldn’t get 20 MPs to support him. Another thing that is different between the UK and India is that in the UK the PM has to answer questions put to him or her by the opposition leaders. That is one way accountability is measured there. This is known as PMQs, or Prime Minister’s Questions. One can just go to YouTube or any streaming service, search for Liz Truss and PMQs (adding a date if interested in a particular session), and see how she answered the questions thrown at her. Unfortunately, all she could do both times was give non-answers. In fact, the Tories seem to be using some of Labour’s policies after having bad-mouthed those same policies. The politics of the right wing, both in the UK and the US, seems so out of touch with the people whom they are supposed to protect and administer. An article about cyclists, which is sort of half-truth, half irony, shows how screwed up the policies of the RW (right-wing) are.
Now there are questions about the pensions triple lock. Sadly, it is the working class who will suffer the most; most of the rich moved their money abroad several years ago. The Financial Times did share a video about how things have been unfolding –

Seems Ms. Truss forgot to add the Financial Times to the list of the anti-growth coalition she is so fond of. Also, the Tory party seems to want to create more tax havens in the UK, calling them investment zones. Of course, most of the top tax havens are already situated around the UK itself. I won’t go more into it, as that would probably require its own article, although most of that information is in the public domain.

Fracking

I don’t really want to take much more time as the blog post has become long. There have been many articles written on why fracking is bad, and that is why even the Tories had put in their manifesto that they won’t allow fracking; but apparently, today they are trying to restart it. And again, how bad it is and can be is covered in the article in the Guardian.



19 October, 2022 08:11PM by shirishag75

Petter Reinholdtsen

Managing and using ONVIF IP cameras with Linux

Recently I have been looking at how to control and collect data from a handful of IP cameras using Linux. I both wanted to change their settings and to make their imagery available via a free software service under my control. Here is a summary of the tools I found.

First I had to identify the cameras and their protocols. As far as I could tell, they were using some SOAP-looking protocol, and their internal web server seemed to only work with Microsoft Internet Explorer with some proprietary binary plugin, which in these days of course is a security disaster and also made it impossible for me to use the camera web interface. Luckily I discovered that the SOAP-looking protocol actually follows the ONVIF specification, which seems to be supported by a lot of IP cameras these days.
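For the curious, ONVIF operations are plain SOAP 1.2 calls over HTTP, so even without any dedicated tool one can poke at a camera with a few lines of Python. This is a rough sketch of my own (not from the tools discussed here): the endpoint path is the common default, and most cameras will additionally demand WS-UsernameToken authentication, which is omitted.

```python
import urllib.request

# Minimal SOAP 1.2 envelope for the ONVIF GetDeviceInformation request.
def build_get_device_information() -> bytes:
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope">'
        '<s:Body>'
        '<GetDeviceInformation'
        ' xmlns="http://www.onvif.org/ver10/device/wsdl"/>'
        '</s:Body>'
        '</s:Envelope>'
    ).encode()

def query_camera(host: str) -> bytes:
    # /onvif/device_service is the usual device management endpoint path.
    req = urllib.request.Request(
        f"http://{host}/onvif/device_service",
        data=build_get_device_information(),
        headers={"Content-Type": "application/soap+xml; charset=utf-8"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read()  # XML reply with manufacturer, model, firmware
```

When the call succeeds, the reply contains the manufacturer, model and firmware version; anything more serious is better left to the tools described below.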

Once the protocol was identified, I was able to find what appears to be the most popular way to configure ONVIF cameras, the free software Windows tool named ONVIF Device Manager. Lacking any other options at the time, I tried unsuccessfully to get it running using Wine, but it was missing a dotnet40 library and I found no way around that to run it on Linux.

The next tool I found to configure the cameras was a non-free Linux Qt client, ONVIF Device Tool. I did not like its terms of use, so I did not spend much time on it.

To collect the video and make it available in a web interface, I found the Zoneminder tool in Debian. A recent version was able to automatically detect and configure ONVIF devices, so I could use it to set up motion detection in, and collection of, the camera output. I had initial problems getting the ONVIF autodetection to work, as both Firefox and Chromium refused the inter-tab communication used by the Zoneminder web pages, but I managed to get Konqueror to work. Apparently the "Enhanced Tracking Protection" in Firefox caused the problem. I ended up upgrading to the Bookworm edition of Zoneminder in the process to try to fix the issue, and believe the problem might be solved now.

In the process I came across the nice Linux GUI tool ONVIF Viewer, allowing me to preview the camera output and validate the login passwords required. Sadly its author has grown tired of maintaining the software, so it might not see any future updates. Which is sad, as the viewer is slightly unstable and the picture tends to lock up. Note, this lockup might be due to limitations in the cameras and not the viewer implementation. I suspect the camera is only able to provide pictures to one client at a time, and the Zoneminder feed might interfere with the GUI viewer. I have asked for the tool to be included in Debian.

Finally, I found what appears to be a very nice free software Linux replacement for the Windows tool, named libonvif. It provides a C library to talk to ONVIF devices as well as a command line and GUI tool using the library. Using the GUI tool I was able to change the admin passwords and update other settings of the cameras. I have asked for the package to be included in Debian.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Update 2022-10-20: Since my initial publication of this text, I got several suggestions for more free software Linux tools. There is an ONVIF Python library (already requested in Debian) and a Python 3 fork using a different SOAP dependency. There is also support for ONVIF in Home Assistant, and there is an alternative to Zoneminder called Shinobi. The latter two are not included in Debian either. I have not tested any of these so far.

19 October, 2022 10:30AM

October 17, 2022

Scarlett Gately Moore

New KDE Gear snaps in the works

KDE Extras 22.08.2 was released! https://kde.org/announcements/gear/22.08.2/

So… I am working on new snaps! This release also includes a new content snap I made with Frameworks 5.98 and Qt 5.15.6. With all the new goodness, I am re-testing all snaps to make sure they are working as expected.

You can find a link to all of my snap releases from the KDE Snap Store Releases on the menu above.

Some notable releases that have new fixes and improvements are:

  • Kalzium: Molecular editor now works!
  • Artikulate: Now works on arm64 (e.g. Raspberry Pi)
  • Dragon: Now works on arm64
  • Minuet: Now works on arm64

New 22.08.2 releases re-tested on arm64 and amd64:

  • Picmi
  • Kturtle
  • Ksudoku
  • Konquest

More coming soon!

Please consider donating! I am seeking employment, but until then, I need assistance with gas to power my generator to power my laptop. Solar doesn’t work great in these coming winter months 🙁 Thank you for your consideration! https://www.patreon.com/sgmoore

17 October, 2022 07:53PM by sgmoore

Dima Kogan

gnuplot output in an FLTK widget

Overview

I make a lot of plots, and the fragmentation of tools in this space really bugs me. People writing Python code mostly use matplotlib, R people use ggplot2. MS people use the internal Excel thing. I've seen people use gtkdatabox for GTK widgets, rrdtool for logging, qcustomplot for qt. And so on. This is really unhelpful, and it would benefit everybody if there was a single solid plotting backend with lots of bindings to different languages and tools.

For my own usage, I've been fighting this quixotic battle, using gnuplot as the plotting backend for all my use cases. gnuplot is

  • very mature
  • stable
  • fast
  • powerful
  • supported on every (within reason) platform
  • supports lots and lots of output backends

There are some things it can't do, but those can be added, and I haven't felt it to be limiting in over 20 years of using it.

I rarely use it directly, and usually interact with it through one of the wrapper libraries and tools I wrote for it, although the Perl library was taken over by others long ago.

Recently I needed a plotting widget for an FLTK program written in Python. It would be great if there was a C++ class deriving from Fl_Widget that would be wrapped by pyfltk, but there isn't.

But it turns out that I already had all the tools to quickly hack together something that mostly works. This is a not-ready-for-primetime hack, but it works so well, I'd like to write it up. Hopefully this will be done "properly" someday.

Approach

Alright. So here I'm trying to tie together a Python program, gnuplot output and an FLTK widget. This is a Python program, I can use gnuplotlib to talk to the gnuplot backend. In a perfect world, gnuplot would ship a backend interfacing to FLTK. But it doesn't. What it does do is to ship an x11 backend that makes plots with X11 commands, and it allows these commands to be directed to an arbitrary X11 window. So we

  1. Make an FLTK widget that simply creates an X11 window, and never actually draws into it
  2. Tell gnuplot to plot into this window

Demo

This is really simple, and works shockingly well. Here's my Fl_gnuplotlib widget:

#!/usr/bin/python3

import sys
import gnuplotlib as gp
import fltk

class Fl_Gnuplotlib_Window(fltk.Fl_Window):

    def __init__(self, x,y,w,h, **plot_options):
        super().__init__(x,y,w,h)
        self.end()

        self._plot                 = None
        self._delayed_plot_options = None

        self.init_plot(**plot_options)

    def init_plot(self, **plot_options):
        if 'terminal' in plot_options:
            raise Exception("Fl_Gnuplotlib_Window needs control of the terminal, but the user asked for a specific 'terminal'")

        if self._plot is not None:
            self._plot = None

        self._delayed_plot_options = None

        xid = fltk.fl_xid(self)
        if xid == 0:
            # I don't have an xid (yet?), so I delay the init
            self._delayed_plot_options = plot_options
            return

        # will barf if we already have a terminal
        gp.add_plot_option(plot_options,
                           terminal = f'x11 window "0x{xid:x}"')

        self._plot = gp.gnuplotlib(**plot_options)

    def plot(self, *args, **kwargs):

        if self._plot is None:
            if self._delayed_plot_options is None:
                raise Exception("plot has not been initialized")

            self.init_plot(**self._delayed_plot_options)
            if self._plot is None:
                raise Exception("plot has not been initialized. Delayed initialization failed")

        self._plot.plot(*args, **kwargs)

Clearly it's simply making an Fl_Window, and pointing gnuplotlib at it. And a sample application that uses this widget:

#!/usr/bin/python3

import sys
import numpy as np
import numpysane as nps
from fltk import *
from Fl_gnuplotlib import *


window = Fl_Window(800, 600, "plot")
plot   = Fl_Gnuplotlib_Window(0, 0, 800,600)


iplot = 0
plotx = np.arange(1000)
ploty = nps.cat(plotx*plotx,
                np.sin(plotx/100),
                plotx)

def timer_callback(*args):

    global iplot, plotx, ploty, plot
    plot.plot(plotx,
              ploty[iplot],
              _with = 'lines')

    iplot += 1
    if iplot == len(ploty):
        iplot = 0

    Fl.repeat_timeout(1.0, timer_callback)


window.resizable(window)
window.end()
window.show()

Fl.add_timeout(1.0, timer_callback)

Fl.run()

This is nice and simple. Exactly what a program using a widget to make a plot (while being oblivious to the details) should look like. It creates a window, places the one plotting widget into it, and cycles the plot inside it at 1Hz (cycling between a parabola, a sinusoid and a line). Clearly we could place other UI elements around it, or add more plots, or whatever.

The output looks like this:

Fl_gnuplotlib_demo.gif

To run this you need to apt install python3-numpysane python3-gnuplotlib python3-fltk. If running an older distro or a non-Debian-based one, you should grab those from source.

Discussion

This works. But it's a hack. Some issues:

  • This plotting widget is currently output-only. It can make whatever plot we like, but it cannot accept UI input from the container program in any way
  • More than that, when focused it completely replaces the FLTK event logic for that window. So all keyboard input is swallowed, including the keys to access FLTK menus, to exit the application, etc, etc.
  • This approach requires us to use the x11 gnuplot terminal. This works, but it's no longer the terminal preferred by the gnuplot devs, and it isn't maintained as vigilantly as the others.
  • And it has bugs. For instance, asking to plot into a window that doesn't yet exist causes it to create a new window. This breaks FLTK applications that start up and create a plot immediately. Here's a mailing list thread discussing these issues.

So this is a very functional hack, but it's still a hack. And it feels like making this solid will take a lot of work. Maybe. I'll push more on this as I need it. Stay tuned!

17 October, 2022 07:28PM by Dima Kogan

Jeremy Bicha

Ubuntu bug fix anniversary

I first installed Ubuntu when Ubuntu 6.06 LTS “Dapper Drake” was released. I was brand new to Linux. This was Ubuntu’s first LTS release; the very first release of Ubuntu was only a year and a half before. I was impressed by how usable and useful the system was. It soon became my primary home operating system and I wanted to help make it better.

On October 15, 2009, I was helping test the release candidates ISOs for the Ubuntu 9.10 release. Specifically, I tested Edubuntu. Edubuntu has since been discontinued but at the time it was an official Ubuntu flavor preloaded with lots of education apps. One of those education apps was Moodle, an e-learning platform.

When testing Moodle, I found that a default installation would make Moodle impossible to use locally. I figured out how to fix this issue. This was really exciting: I finally found an Ubuntu bug I knew how to fix. I filed the bug report.

This was very late in the Ubuntu 9.10 release process and Ubuntu was in the Final Freeze state. In Final Freeze, every upload to packages included in the default install needs to be individually approved by a member of the Ubuntu Release Team. Also, I didn’t have upload rights to Ubuntu. Jordan Mantha (LaserJock), an Edubuntu maintainer, sponsored my bug fix upload.

I also forwarded my patch to Debian.

While trying to figure out what wasn’t working with Moodle, I stumbled across a packaging bug. Edubuntu provided a choice of MySQL or PostgreSQL for the system default database. MySQL was the default, but if PostgreSQL were chosen instead, Moodle wouldn’t work. I figured out how to fix this bug too a week later. Jordan sponsored this upload and Steve Langasek from the Release Team approved it so it also was able to be fixed before 9.10 was released.

Although the first bug was new to 9.10 because of a behavior change in a low-level dependency, this PostgreSQL bug existed in stable Ubuntu releases. Therefore, I prepared Stable Release Updates for Ubuntu 9.04 and Ubuntu 8.04 LTS.

Afterwards

Six months later, I was able to attend my first Ubuntu Developer Summit. I was living in Bahrain (in the Middle East) at the time and a trip to Belgium seemed easier to me than if I were living in the United States where I usually live. This was the Ubuntu Developer Summit where planning for Ubuntu 10.10 took place. I like to believe that I helped with the naming since I added Maverick to the wiki page where people contribute suggestions.

I did not apply for financial sponsorship to attend and I stayed in a budget hotel on the other side of Brussels. The event venue was on the outskirts of Brussels so there wasn’t a direct bus or metro line to get there. I rented a car. I didn’t yet have a smartphone and I had a LOT of trouble navigating to and from the site every day. I learned then that it’s best to stay close to the conference site since a lot of the event is actually in the unstructured time in the evenings. Fortunately, I managed to arrive in time for Mark Shuttleworth’s keynote where the Unity desktop was first announced. This was released in Ubuntu 10.10 in the Ubuntu Netbook Remix and became the default for Ubuntu Desktop in Ubuntu 11.04.

Ubuntu’s switch to Unity provided me with a huge opportunity. In April 2011, GNOME 3.0 was released. I wanted to try it but it wasn’t yet packaged in Ubuntu or Debian. It was suggested that I could help work on packaging the major new version in a PPA. The PPA was convenient because I was able to get permission to upload there easier than being able to upload directly to Ubuntu. My contributions there then enabled me to get upload rights to the Ubuntu Desktop packages later that year.

At a later Ubuntu Developer Summit, it was suggested that I start an official Ubuntu flavor for GNOME. So along with Tim Lunn (darkxst), I co-founded Ubuntu GNOME. Years later, Canonical stopped actively developing Unity; instead, Ubuntu GNOME was merged into Ubuntu Desktop.

Along the way, I became an Ubuntu Core Developer and a Debian Developer. And in January 2022, I joined Canonical on the Desktop Team. This all still feels amazing to me. It took me a long time to be comfortable calling myself a developer!

Conclusion

My first Ubuntu bugfix was 13 years ago this week. Because Ubuntu historically uses alphabetical adjective animal release names, 13 years means that we have rolled around to the letter K again! Later today, we begin release candidate ISO testing for Ubuntu 22.10 “Kinetic Kudu”.

I encourage you to help us test the release candidates and report bugs that you find. If you figure out how to fix a bug, we still sponsor bug fixes. If you are an Ubuntu contributor, I highly encourage you to attend an Ubuntu Summit if you can. The first Ubuntu Summit in years will be in 3 weeks in Prague, but the intent is for the Ubuntu Summits to be recurring events again.

17 October, 2022 01:54PM by Jeremy Bicha

October 16, 2022

Sven Hoexter

CentOS 9, stunnel, an openssl memory leak and a VirtualBox crash

tl;dr; OpenSSL 3.0.1 leaks memory in ssl3_setup_write_buffer(), seems to be fixed in 3.0.2. The issue manifests at least in stunnel and keepalived on CentOS 9. In addition I learned the hard way that running a not so recent VirtualBox version on Debian bullseye led to dh parameter generation crashing in libcrypto in bn_sqr8x_internal().

A recent rabbit hole I went down. The actual bug in openssl was nailed down and documented by Quentin Armitage on GitHub in keepalived. My bug report, with all the back and forth, is in the Red Hat Bugzilla as #2128412.

Act I - Hello stunnel, this is the OOMkiller Calling

We started to use stunnel on Google Cloud compute engine instances running CentOS 9. The loadbalancer in front of those instances used a TCP health check to validate the backend availability. A day or so later the stunnel instances got killed by the OOMkiller. Restarting stunnel and looking into /proc/<pid>/smaps showed a heap segment growing quite quickly.
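The heap growth described above can be tracked by pulling the [heap] mapping size out of /proc/&lt;pid&gt;/smaps. A minimal Python sketch, assuming the usual smaps layout where a Size: line follows the [heap] mapping header:

```python
def heap_kb(smaps_text):
    """Return the Size (in kB) of the [heap] mapping from smaps-format text."""
    in_heap = False
    for line in smaps_text.splitlines():
        if line.endswith("[heap]"):
            in_heap = True
        elif in_heap and line.startswith("Size:"):
            return int(line.split()[1])
    return 0

# To poll a live process (PID from e.g. `pidof stunnel`):
# with open(f"/proc/{pid}/smaps") as f:
#     print(heap_kb(f.read()))
```

Polling this once a second on a leaking stunnel instance makes the growing heap segment obvious long before the OOMkiller steps in.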

Act II - Reproducing the Issue

While I'm not the biggest fan of VirtualBox and Vagrant I have to admit it's quite nice to just fire up a VM image, and give other people a chance to recreate that setup as well. Since VirtualBox is no longer released with Debian/stable I just recompiled what was available in unstable at the time of the bullseye release, and used that. That enabled me to just start a CentOS 9 VM, set up stunnel with a minimal config, grab netcat and a for loop and watch the memory grow, e.g. while true; do nc -z localhost 2600; sleep 1; done. To my surprise, in addition to the memory leak, I also observed some crashes but did not yet care too much about those.

Act III - Wrong Suspect, a Workaround and Bugreporting

Of course the first idea was that something must be wrong in stunnel itself. But I could not find any recent bugreports. My assumption is that there are still a few people around using CentOS and stunnel, so someone else should probably have seen it before. Just to be sure I recompiled the latest stunnel package from Fedora. Didn't change anything. Next I recompiled it without almost all the patches Fedora/RedHat carries. Nope, no progress. Next idea: Maybe this is related to the fact that we do not initiate a TLS context after connecting? So we changed the test case from nc to openssl s_client, and the loadbalancer healthcheck from TCP to a TLS based one. Tada, a workaround, no more memory leaking. In addition I gave Fedora a try (they have Vagrant Virtualbox images in the "Cloud" Spin, e.g. here for Fedora 36) and my local Debian installation a try. No leaks experienced on both. Next I reported #2128412.

Act IV - Crash in libcrypto and a VirtualBox Bug

When I moved with the test case from the Google Cloud compute instance to my local VM I encountered some crashes. That morphed into a real problem when I started to run stunnel with gdb and valgrind. All crashes happened in libcrypto bn_sqr8x_internal() when generating new dh parameters (stunnel does that for you if you do not use static dh parameters). I quickly worked around that by generating static dh parameters for stunnel. After some back and forth I suspected VirtualBox as the culprit. Recompiling the current VirtualBox version (6.1.38-dfsg-3) from unstable on bullseye works without any changes, and upgrading to it actually fixed that issue.

Epilog

I highly appreciate that RedHat, with all the bashing around the future of CentOS, still works on community contributed bugreports. My kudos go to Clemens Lang. :) Now that the root cause is clear, I guess RedHat will push out a fix for the openssl 3.0.1 based release they have in RHEL/CentOS 9. Until that is available at least stunnel and keepalived are known to be affected. If you run stunnel on something public it's not that pretty, because even a low rate of TCP connections will result in a DoS condition.

16 October, 2022 06:24PM

hackergotchi for Colin Watson

Colin Watson

Reproducible man-db databases

I’ve released man-db 2.11.0 (announcement, NEWS), and uploaded it to Debian unstable.

The biggest chunk of work here was fixing some extremely long-standing issues with how the database is built. Despite being in the package name, man-db’s database is much less important than it used to be: most uses of man(1) haven’t required it in a long time, and both hardware and software improvements mean that even some searches can be done by brute force without needing prior indexing. However, the database is still needed for the whatis(1) and apropos(1) commands.

The database has a simple format - no relational structure here, it’s just a simple key-value database using old-fashioned DBM-like interfaces and composing a few fields to form values - but there are a number of subtleties involved. The issues tend to amount to this: what does a manual page name mean? At first glance it might seem simple, because you have file names that look something like /usr/share/man/man1/ls.1.gz and that’s obviously ls(1). Some pages are symlinks to other pages (which we track separately because it makes it easier to figure out which entries to update when the contents of the file system change), and sometimes multiple pages are even hard links to the same file.
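To make the "compose a few fields to form values" idea concrete, here is an illustrative Python sketch using a stdlib DBM backend. The field layout and tab separator here are invented for illustration; man-db's actual on-disk format is internal to it:

```python
import dbm.dumb
import os
import tempfile

# Hypothetical layout: value packs section, extension, and a whatis
# description, joined with tabs (NOT man-db's real format).
path = os.path.join(tempfile.mkdtemp(), "index")
with dbm.dumb.open(path, "c") as db:
    db[b"ls"] = b"1\t1\tls - list directory contents"
    db[b"mandb"] = b"8\t8\tmandb - create or update the manual page index caches"

with dbm.dumb.open(path, "r") as db:
    # Looking a page up is a single key fetch plus a split into fields.
    section, ext, whatis = db[b"ls"].decode().split("\t")
```

The simplicity of the store is exactly why the subtleties land elsewhere: with one flat key space, name conflicts and priorities have to be resolved before deciding what to write under each key.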

The real complications come with “whatis references”. Pages can list a bunch of names in their NAME section, and the historical expectation is that it should be possible to use those names as arguments to man(1) even if they don’t also appear in the file system (although Debian policy has deprecated relying on this for some time). Not only does that mean that man(1) sometimes needs to consult the database, but it also means that the database is inherently more complicated, since a page might list something in its NAME section that conflicts with an actual file name in the file system, and now you need a priority system to resolve ambiguities. There are some other possible causes of ambiguity as well.

The people working on reproducible builds in Debian branched out to the related challenge of reproducible installations some time ago: can you take a collection of packages, bootstrap a file system image from them, and reproduce that exact same image somewhere else? This is useful for the same sorts of reasons that reproducible builds are useful: it lets you verify that an image is built from the components it’s supposed to be built from, and doesn’t contain any other skulduggery by accident or design. One of the people working on this noticed that man-db’s database files were an obstacle to that: in particular, the exact contents of the database seemed to depend on the order in which files were scanned when building it. The reporter proposed solving this by processing files in sorted order, but I wasn’t keen on that approach: firstly because it would mean we could no longer process files in an order that makes it more efficient to read them all from disk (still valuable on rotational disks), but mostly because the differences seemed to point to other bugs.

Having understood this, there then followed several late nights of very fiddly work on the details of how the database is maintained. None of this was conceptually difficult: it mainly amounted to ensuring that we maintain a consistent well-order for different entries that we might want to insert for a given database key, and that we consider the same names for insertion regardless of the order in which we encounter files. As usual, the tricky bit is making sure that we have the right data structures to support this. man-db is written in C which is not very well-supplied with built-in data structures, and originally much of the code was written in a style that tried to minimize memory allocations; this came at the cost of ownership and lifetime often being rather unclear, and it was often difficult to make changes without causing leaks or double-frees. Over the years I’ve been gradually introducing better encapsulation to make things easier to follow, and I had to do another round of that here. There were also some problems with caching being done at slightly the wrong layer: we need to make use of a “trace” of the chain of links followed to resolve a page to its ultimate source file, but we were incorrectly caching that trace and reusing it for any link to the same file, with incorrect results in many cases.

Oh, and after doing all that I found that the on-disk representation of a GDBM database is insertion-order-dependent, so I ended up having to manually reorganize the database at the end by reading it all in and writing it all back out in sorted order, which feels really weird to me coming from spending most of my time with PostgreSQL these days. Fortunately the database is small so this takes negligible time.

None of this is particularly glamorous work, but it paid off:

# export SOURCE_DATE_EPOCH="$(date +%s)"
# mkdir emptydir disorder
# disorderfs --multi-user=yes --shuffle-dirents=yes --reverse-dirents=no emptydir disorder
# export TMPDIR="$(pwd)/disorder"
# mmdebstrap --variant=standard --hook-dir=/usr/share/mmdebstrap/hooks/merged-usr \
      unstable out1.tar
# mmdebstrap --variant=standard --hook-dir=/usr/share/mmdebstrap/hooks/merged-usr \
      unstable out2.tar
# cmp out1.tar out2.tar
# echo $?
0

16 October, 2022 03:54PM by Colin Watson

hackergotchi for Aigars Mahinovs

Aigars Mahinovs

Ryzen 7000 amdgpu boot hang

So you decided to build a brand new system using all the latest and coolest tech, so you buy a Ryzen 7000 series Zen 4 CPU, like the Ryzen 7700X that I picked, with a new mother board and DDR5 memory and all that jazz. But for now, you don't yet have a fitting GPU for that system (as the new ones will only come out in November), so you are booting a Debian system using the new built-in video card of the new CPUs (the Zen 4 generation has a simple AMD GPU built into every CPU now - great stuff for debugging and mostly-headless systems) and you get ... nothing on the screen. Hmm. You boot into the rescue mode and the kernel messages stop after:

Oct 16 13:31:25 home kernel: [    4.128328] amdgpu: Ignoring ACPI CRAT on non-APU system
Oct 16 13:31:25 home kernel: [    4.128329] amdgpu: Virtual CRAT table created for CPU
Oct 16 13:31:25 home kernel: [    4.128332] amdgpu: Topology: Add CPU node

That looks bad, right?

Well, if you either ssh into the machine or reboot with module_blacklist=amdgpu in the kernel command line you will find in /var/log/kern.log.1 those messages and also the following messages that will clarify the situation a bit:

Oct 16 13:31:25 home kernel: [    4.129352] amdgpu 0000:10:00.0: firmware: failed to load amdgpu/psp_13_0_5_toc.bin (-2)
Oct 16 13:31:25 home kernel: [    4.129354] firmware_class: See https://wiki.debian.org/Firmware for information about missing firmware
Oct 16 13:31:25 home kernel: [    4.129358] amdgpu 0000:10:00.0: firmware: failed to load amdgpu/psp_13_0_5_toc.bin (-2)
Oct 16 13:31:25 home kernel: [    4.129359] amdgpu 0000:10:00.0: Direct firmware load for amdgpu/psp_13_0_5_toc.bin failed with error -2
Oct 16 13:31:25 home kernel: [    4.129360] amdgpu 0000:10:00.0: amdgpu: fail to request/validate toc microcode
Oct 16 13:31:25 home kernel: [    4.129361] [drm:psp_sw_init [amdgpu]] *ERROR* Failed to load psp firmware!
Oct 16 13:31:25 home kernel: [    4.129432] [drm:amdgpu_device_init.cold [amdgpu]] *ERROR* sw_init of IP block <psp> failed -2
Oct 16 13:31:25 home kernel: [    4.129525] amdgpu 0000:10:00.0: amdgpu: amdgpu_device_ip_init failed
Oct 16 13:31:25 home kernel: [    4.129526] amdgpu 0000:10:00.0: amdgpu: Fatal error during GPU init
Oct 16 13:31:25 home kernel: [    4.129527] amdgpu 0000:10:00.0: amdgpu: amdgpu: finishing device.
Oct 16 13:31:25 home kernel: [    4.129633] amdgpu: probe of 0000:10:00.0 failed with error -2

So what you need is to get a new set of Linux kernel firmware blobs and unpack them into /lib/firmware. The tarball from 2022-10-12 worked well for me.

After that you also need to re-create the initramfs with update-initramfs -k all -c to include the new firmware. Having kernel version 5.18 or newer is also required for stable Zen 4 support. It might be that a fresh Mesa version is also of importance, but as I am running sid on this machine I can only say that Mesa 22.2.1 that is in sid works fine.

16 October, 2022 02:15PM by Aigars Mahinovs

Vincent Fourmond

Tutorial: analysis of multiwavelength fast kinetics data

The purpose of this post is to demonstrate a first approach to the analysis of multiwavelength kinetic data, like those obtained using stopped-flow data. To practice, we will use data that were acquired during the stopped flow practicals of the MetBio summer school from the FrenchBIC. During the practicals, the student monitored the reaction of myoglobin (in its Fe(III) state) with azide, which yields a fast and strong change in the absorbance spectrum of the protein, which was monitored using a diode array. The data is publicly available on zenodo.

Aims of this tutorial

The purpose of this tutorial is to teach you to use the free software QSoas to run a simple, multiwavelength exponential fit on the data, and to look at the results. This is not a kinetics lecture, so it will not go into depth about the use of the exponential fit and its meaning.

Getting started: loading the file

First, make sure you have a working version of QSoas, you can download them (for free) there. Then download the data files from zenodo. We will work only on the data file Azide-1.25mm_001.dat, but of course, the purpose of this tutorial is to enable you to work on all of them. The data files contain the time evolution of the absorbance for all wavelengths, in a matrix format, in which each row corresponds to a time point and each column to a wavelength.

Start QSoas, and launch the command:
QSoas> load /comments='"'
Then, choose the Azide-1.25mm_001.dat data file. This should bring up a horizontal red line at the bottom of the data display, with X values between about 0 and 2.5. If you zoom on the red line with the mouse wheel, you'll realize it is data. The /comments='"' part is very important since it allows the extraction of the wavelength from the data. We will look at what it means another day. At this stage, you can look at the loaded data using the command:
QSoas> edit
You should have a window looking like this:
The rows each correspond to a data point displayed on the window below. The first column corresponds to the X values, the second to the Y values, and all the other ones to extra Y columns (they are not displayed by default). What is especially interesting is the first row, which contains a nan as the X value and what is obviously the wavelength for all the Y values. To tell QSoas that it should take this line as the wavelength (which will be the perpendicular coordinate, the coordinate of the other direction of the matrix), first close the edit window and run:
QSoas> set-perp /from-row=0

Splitting and fitting

Now, we have a single dataset containing a lot of Y columns. We want to fit all of them simultaneously with a (mono) exponential fit. For that, we first need to split the big matrix into a series of X,Y datasets (because fitting only works on the first Y). This is possible by running:
QSoas> expand /style=red-to-blue /flags=kinetics
Your screen should now look like this:
You're looking at the kinetics at all wavelengths at the same time (this may take some time to display on your computer, it is after all a rather large number of data points). The /style=red-to-blue is not strictly necessary, but it gives the red to blue color gradient which makes things easier to look at (and cooler!). The /flags=kinetics is there to attach a label (a flag) to the newly created datasets so we can easily manipulate all of them at the same time. Then it's time to fit, with the following command:
QSoas> mfit-exponential-decay flagged:kinetics
This should bring up a new window. After resizing it, you should have something that looks like this:
The bottom of the fit window is taken by the parameters, each with two checkboxes on the right to set them fixed (i.e. not determined by the fitting mechanism) and/or global (i.e. with a single value for all the datasets, here all the wavelengths). The top shows the current dataset along with the corresponding fit (in green), and, below, the residuals. You can change the dataset by clicking on the horizontal arrows or using Ctrl+PgUp or Ctrl+PgDown (keep holding it to scan fast). See the Z = 728.15 showing that QSoas has recognized that the currently displayed dataset corresponds to the wavelength 728.15. The equation fitted to the data is: $$y(x) = A_\infty + A_1 \exp\left(-(x - x_0)/\tau_1\right)$$ In this case, while the \(A_1\) and \(A_\infty\) parameters clearly depend on the wavelength, the time constant of evolution should be independent of wavelength (the process happens at a certain rate regardless of the wavelength we're analyzing), so that the \(\tau_1\) parameter should be common for all the datasets/wavelengths. Just click on the global checkbox at the right of the tau_1 parameter, make sure it is checked, and hit the Fit button...

The fit should not take long (less than a minute), and then you end up with the results of the fits: all the parameters. The best way to look at the non global parameters like \(A_1\) and \(A_\infty\) is to use the Show Parameters item from the Parameters menu. Using it and clicking on A_inf too should give you a display like this one:
The A_inf parameter corresponds to the spectrum at infinite time (of azide-bound heme), while the A_1 parameter corresponds to the difference spectrum between the initial (azide-free) and final (azide-bound) states.

Now, the fit is finished, you can save the parameters if you want to reload them in a later fit by using the Parameters/Save menu item, or export them in a form more suitable for plotting using Parameters/Export (although QSoas can also display the parameters saved using Save). This concludes this first approach to fitting the data. What you can do is
  • look at the dependence of the tau_1 parameter as a function of the azide concentration;
  • try fitting more than one exponential, using for instance:
    QSoas> mfit-exponential-decay /exponentials=2 flagged:kinetics
    
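For readers curious what such a multiwavelength fit with a shared time constant amounts to numerically, here is a rough numpy sketch (not QSoas's actual algorithm: QSoas performs a proper nonlinear fit, and x0 is taken as 0 here). For a fixed tau_1, the A_inf and A_1 of every wavelength follow from linear least squares, so a simple grid search over tau_1 is enough to illustrate the idea:

```python
import numpy as np

def fit_shared_tau(t, Y, taus):
    """Fit y_i(t) = A_inf_i + A1_i * exp(-t/tau) to every column of Y,
    with one tau shared by all columns, by grid search over `taus`."""
    best = None
    for tau in taus:
        # For a fixed tau the model is linear in (A_inf, A1).
        X = np.column_stack([np.ones_like(t), np.exp(-t / tau)])
        coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
        rss = ((Y - X @ coef) ** 2).sum()
        if best is None or rss < best[0]:
            best = (rss, tau, coef)
    return best[1], best[2]  # shared tau; coef rows = (A_inf_i, A1_i)

# Synthetic "three wavelength" data with a single shared time constant
t = np.linspace(0, 2.5, 200)
true_tau = 0.4
params = [(0.1, 0.5), (0.3, -0.2), (0.2, 0.8)]  # (A_inf, A1) per wavelength
Y = np.column_stack([a + b * np.exp(-t / true_tau) for a, b in params])
tau, coef = fit_shared_tau(t, Y, np.linspace(0.1, 1.0, 91))
```

With noiseless synthetic data the grid search recovers the shared time constant, while each column keeps its own amplitudes, which is exactly what ticking the "global" checkbox on tau_1 achieves in QSoas.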

How to read the code above

All the lines starting by QSoas> in the code areas above are meant to be typed into the QSoas command line (at the bottom of the window), and started by pressing enter at the end. You must remove the QSoas> bit. The other lines (when applicable) show you the response of QSoas, in the terminal just above the command-line. You may want to play with the QSoas tutorial to learn more about how to interact with QSoas.

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 3.1. You can freely (and at no cost) download its source code or precompiled versions for MacOS and Windows there. Alternatively, you can clone from the GitHub repository.
Contact: find my email address there, or contact me on LinkedIn.

16 October, 2022 12:54PM by Vincent Fourmond ([email protected])

kpcyrd

updlockfiles: Manage lockfiles in PKGBUILDs for upstreams that don't ship them

I’ve released a new tool to manage lockfiles for Arch Linux packages that can’t use a lockfile from the official upstream release. It integrates closely with other Arch Linux tooling like updpkgsums that’s already used to pin the content of build inputs in PKGBUILD.

To use this, the downstream lockfile becomes an additional source input in the source= array of our PKGBUILD (this is already the case for some packages).

source=("git+https://github.com/vimeo/psalm.git#commit=${_commit}"
        "composer.lock")

You would then add a new function named updlockfiles that generates new lockfiles and copies them into $outdir, and a prepare function to copy the lockfile into the right place:

prepare() {
  cd ${pkgname}
  cp ../composer.lock .
}

updlockfiles() {
  cd ${pkgname}
  rm -f composer.lock
  composer update
  cp composer.lock "${outdir}/"
}

To update the package to the latest (compatible) patch level simply run:

updlockfiles

This can also be used in case the upstream lockfile has vulnerable dependencies that you want to patch downstream. For more detailed instructions see the readme.

Thanks

This work is currently crowd-funded on github sponsors. I’d like to thank @SantiagoTorres, @repi and @rgacogne for their support in particular. ♥️

16 October, 2022 12:00AM

October 15, 2022

hackergotchi for Jonathan Dowland

Jonathan Dowland

podman generate

I've been working with and on container technology for seven years, but I still learn new things every day. Recently I read the excellent LWN article Docker and the OCI container ecosystem and this was news to me:

Running the docker CLI under a process supervisor only results in supervising the CLI process. This has a variety of consequences for users of these tools. For example, any attempt to limit a container's memory usage by running the CLI as a systemd service will fail; the limits will only apply to the CLI and its non-existent children. In addition, attempts to terminate a client process may not result in terminating all of the processes in the container.

Huh — of course! I hadn't really thought about that. I run a small number of containers on my home system via docker (since I was using it at work) and systemd, and sometimes I have weird consistency issues. This explains them.

Later:

As a result, Podman plays nicely with tools like systemd; using podman run with a process supervisor works as expected, because the processes inside the container are children of podman run. The developers of Podman encourage people to use it in this way by providing a command to generate systemd units for Podman containers.

Given the above, it seemed like a good idea to migrate my few local containers over to Podman. This was easy. The first part is copying the images from Docker's storage to Podman's. To do this, I used the skopeo tool:

sudo skopeo copy {docker-daemon,containers-storage}:octoprint/octoprint:latest

(I want to launch these as a superuser, rather than use root-less containers, to isolate the containers from each other: rootless ones are forced to share a common namespace.)

Once the images were copied over, I needed to start up a container instance via Podman. For most of them, running under Docker, I had volume-mounted the important bits for persistent data. I can do the same with Podman:

# podman run -d --rm \
    --name octoprint \
    -p 8080:80 \
    -v /archive/octoprint:/octoprint \
    --device /dev/ttyACM0:/dev/ttyACM0 \
    octoprint/octoprint

Once an instance was running, I can use podman generate to create a Systemd unit file which describes creating an equivalent container:

# cd /etc/systemd/system
# podman generate systemd octoprint \
    --new \
    --name \
    --files

For some of the containers I was running, there are a few more steps required: migrating docker volumes and configuring equivalents for private docker networks. But Podman's versions of those concepts are largely identical. Running a mixture of Podman and Docker containers side-by-side also meant renumbering some private addresses for migrated hosts whilst the original network was still up.

15 October, 2022 10:04AM

October 14, 2022

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Firsts

Two nice “firsts” for me in tech this month:

  • First Chromium feature I'm leading (previous work has mostly been about optimizations): CSS Nesting. (Note that the CSSWG is still ironing out the exact spec.)
  • First patch to a Rust project: Function context for Difftastic. (Not yet processed by upstream.)

14 October, 2022 05:15PM

hackergotchi for Shirish Agarwal

Shirish Agarwal

Dowry, Racism, Railways

Dowry

A few days back, I had posted about the movie Raksha Bandhan and whatever I felt about it. Sadly, just a couple of days back, somebody shared this link. Part of me was shocked and part of me was not. A couple of acquaintances of mine had said the same thing about their daughters in the past. And in such situations you are generally left speechless because you don’t know what the right thing to do is. If he has shared it with you, an outsider, how many times must he have told the same to his wife and daughters? And from what little I have gathered in life, many people have justified it on similar lines. And while there were protests, sadly the book was not removed. Now if nurses are reading such literature, you can tell how their thought processes might be forming :(. And these are the ones whom we call for when we are sick and tired :(. And I have not taken into account how the girls/women themselves might be feeling. There are similar things in another country, though probably not the same, nor with the same motivations, although feeling helpless in both would be a common thing.

But such statements are not alone. Another gentleman in slightly different context shared this as well –

The above is a statement shared in a book recommended for CTET (Central Teacher’s Eligibility Test that became mandatory to be taken as the RTE (Right To Education) Act came in.). The statement says “People from cold places are white, beautiful, well-built, healthy and wise. And people from hot places are black, irritable and of violent nature.”

Now while I can agree with one part of the statement, that people residing in colder regions are fairer than others, there are loads of other factors that determine fairness or skin color/skin pigmentation. After a bit of search I came to know that this and similar articulations have been made in an idea/work called ‘Environmental Determinism’. Now if you look at that page, you would realize this is what colonialism was all about: the idea that the white man had a god-given right to rule over others. Similarly, if you are fair, you can lord over others. It seems simplistic, but yet it has a powerful hold on many people in India. Forget the common man, this thinking is and was applicable even to some of our better-known freedom fighters. Pune’s own Bal Gangadhar Tilak wrote The Arctic Home in the Vedas. It talks about Aryans and how they invaded India and became settled here. I haven’t read or have access to the book so have to rely on third-party sources. The reason I’m sharing all this is that the right-wing has been doing this myth-making for some time now, and unless and until you put a light on it, it will continue to perpetuate 😦 . For those who have read this blog, you know that India is and has been mired in casteism forever. They even took the fairness claim and applied it to all Brahmins. According to them, all Brahmins are fair and hence have a god-given right to lord over others. What is called the Eton boys’ network serves the same purpose in this casteism. The only solution is to put those ideas under the limelight and investigate them. To take the above, how does one prove that all fair people are wise and peaceful while all black and brown people are violent? If that is so, how does one account for Mahatma Gandhi, Martin Luther King Jr., Nelson Mandela, Michael Jackson; the list is probably endless. And not to forget that when Mahatma Gandhi did his nonviolent movements, either in India or in South Africa, both black and brown people took part in their millions.
There are similar examples from Martin Luther King Jr. I know and have read of so many non-violent civil movements that took place in the U.S., e.g. Rosa Parks and the Montgomery Bus Boycott. So just based on these examples, one can conclude that at least the part about the fair having an exclusive claim to being wise and noble is not correct.

Now as far as violence goes, every race and every community has done violence in the past or been a victim of it. So no one is or can be blameless, although in light of the above statement, the question can be asked: who were the Vikings? Both popular imagination and serious history share stories about the Vikings. The Vikings were somewhat nomadic in nature; even though they had permanent settlements, they went on raids, raped women, and captured both men and women and sold them as slaves. So they are what pirates came to be, but not the kind Hollywood romanticizes about. Europe in itself has been a tale of conflict since time immemorial. It is only after the formation of the EU that most of these countries stopped fighting each other. From a historical perspective, that is far too recent. So even the part about the fair being non-violent dies in the face of this evidence. I could go on but this is enough on that topic.

Railways and Industrial Action around the World.

While I have shared about Railways so many times on this blog, it continues to fascinate me how people don’t understand the first things about Railways. For e.g., Railways is a natural monopoly. What that means is that, whatever type of privatization you look at around the world, you will see it remains a monopoly. Unlike the roads or the skies, Railways is and always will be limited by infrastructure and the ability to build new infrastructure. Unlike on roads or in the skies (even those have their limits), you cannot run train services on a whim. At any particular point in time, only a single train can and should occupy a stretch of the railway network. You could have more trains on one line, but then front or rear-end collisions become a real possibility. You also need all sorts of good and reliable communications and redundant infrastructure, so that if one thing fails you have something else in place. The reason being a single train can carry anywhere from 2000 to 5000 passengers or more. While this is true of Indian Railways, railways around the world would probably have similar numbers. It is in this light that I share the videos below.

To be more precise, see the fuller video –

Now to give context to the recording above, Mike Lynch is the general secretary of the RMT. For those who came in late, both the UK and the U.S. have been threatened by railway strikes. And the reasons for the strikes, or threats of strikes, are similar. From the company perspective, all they care about is investing less and making the most profit that can be given to equity shareholders. At the same time, they have frozen the salaries of railway workers for the last 3 years, while the politicians who were asking the questions apparently gave themselves a raise twice this year. They are asking the workers to negotiate at 8% while inflation in the UK has been 12.3% and is projected to go higher. And it is not only about the money. Since the 1980s, when the UK privatized the Railways, they stopped investing in the infrastructure. That meant that the UK railway infrastructure fell behind over time and is even behind, say, Indian Railways, which used to provide the most bang for the buck. And Indian Railways is far from ideal. Ironically, most of the operators in the UK are the nationalized railways of France, Germany etc., but after the hard Brexit, they too are mulling cutting their operations short 😦 There is also the EU Entry/Exit system that will come next year.

Why am I sharing what is happening with UK rail? Because the Indian Government wants to do the same thing, while fooling the public into thinking we would do it better. What will inevitably happen is that ticket prices go up, people no longer use the service, the number of services goes down, and eventually they are cancelled. This has happened with both Indian Railways and the airlines. In fact, the GOI just announced a credit scheme a few days back to help airlines stay afloat. I was chatting with a friend who had come down to Pune from Chennai, and the round trip cost him INR 15k/- on that single trip alone. We reminisced how a few years ago, 8 years to be precise, we could buy an air ticket for 2.5k/- just a few days before the trip, and did. I remember experiencing at least a dozen odd trips by air in the years before 2014. My friend used to come to Pune almost every weekend because he could afford it; now he can’t. And these are people who are in the top 5-10% of the population. And this is not just in the UK, but also in the United States. There is one big difference though: the U.S. is mainly a freight carrier while UK railway operations are mostly passenger-based. What was and is interesting is that Scotland had to nationalize its services as it realized the operators cannot or will not function when they are most needed. Most of the public, even in the UK, seems to want a nationalized rail service; at least their polls say so. So it will definitely be interesting to see what happens in the UK next year.

In the end, I know I promised to share about books, but the above incidents have just been too fascinating not to share, along with what I think about them. Free markets function well where there is competition, for example what has been happening in China with EVs, but not where you have natural monopolies. In any railway privatization, you have to hand over the area to one operator, and then they have no motivation. If you have multiple operators, there will always be haggling as to who will run the train and at what time. In either scenario, it doesn’t work, raising prices while not delivering anything better 😦

I do take examples from the UK because a lot of things in India are still the legacy of the British. The whole civil department that was created in 1953 was a copy of the British civil department at that time, and it is to this day.

P.S. – Just came to know that UK Chancellor Kwasi Kwarteng was sacked. I do commend Truss for facing the press even though she might be dumped a week later, unlike our PM, who hasn’t faced a single press conference in the last 8-odd years.

https://www.youtube.com/watch?v=oTP6ogBqU7of

The difference in Indian and UK politics seems to be that the English are now asking questions while here in India, most people are still sleeping without a care in the world.

Another thing to note: MiniDebConf Palakkad is gonna happen 12-13th November 2022. I am probably not gonna go, but I would request everyone who wants to do something in free software to attend it. I am not sure whether I would be of any use like this, and also when I get back, it would be an empty house. But for people young and old who want to do anything with free/open source software, it is a chance not to be missed. Registration for the same closes on 1st November 2022. All the best, break a leg 😉

Just read this, beautifully done.

14 October, 2022 05:03PM by shirishag75