April 01, 2022

Junichi Uekawa

I am more used to Rust.

I am more used to Rust. I can do simple string processing faster in Rust than in, say, Python. It has enough things to make it more pleasant to work with than C++.

01 April, 2022 10:27AM by Junichi Uekawa

Russ Allbery

Review: Princess Floralinda and the Forty-Flight Tower

Review: Princess Floralinda and the Forty-Flight Tower, by Tamsyn Muir

Publisher: Subterranean Press
Copyright: 2020
ISBN: 1-59606-992-9
Format: Kindle
Pages: 111

A witch put Princess Floralinda at the top of a forty-flight tower, but it wasn't personal. This is just what witches do, particularly with princesses with butter-coloured curls and sapphire-blue eyes. Princes would come from miles around to battle up the floors of the tower and rescue the princess. The witch even helpfully provided a golden sword, in case a prince didn't care that much about princesses. Floralinda was provided with water and milk, two loaves of bread, and an orange, all of them magically renewing, to sustain her while she waited.

In retrospect, the dragon with diamond-encrusted scales on the first floor may have been a mistake. None of the princely endeavors ever saw the second floor. The diary that Floralinda found in her room indicated that she may not be the first princess to have failed to be rescued from this tower.

Floralinda finally reaches the rather astonishing conclusion that she might have to venture down the tower herself, despite the goblins she was warned were on the 39th floor (not to mention all the other monsters). The result of that short adventure, after some fast thinking, a great deal of luck, and an unforeseen assist from her magical food, is a surprising number of dead goblins. Also seriously infected hand wounds, because it wouldn't be a Tamsyn Muir story without wasting illness and body horror. That probably would have been the end of Floralinda, except a storm blew a bottom-of-the-garden fairy in through the window, sufficiently injured that she and Floralinda were stuck with each other, at least temporarily.

Cobweb, the fairy, is neither kind nor inclined to help Floralinda (particularly given that Floralinda is not a child whose mother is currently in hospital), but it is an amateur chemist and finds both Floralinda's tears and magical food intriguing. Cobweb's magic is also based on wishes, and after a few failed attempts, Floralinda manages to make a wish that takes hold. Whether she'll regret the results is another question.

This is a fairly short novella by the same author as Gideon the Ninth, but it's in a different universe and quite different in tone. This summary doesn't capture the writing style, which is a hard-to-describe mix of fairy tale, children's story, and slightly archaic and long-winded sentence construction. This is probably easier to show with a quote:

"You are displaying a very small-minded attitude," said the fairy, who seemed genuinely grieved by this. "Consider the orange-peel, which by itself has many very nice properties. Now, if you had a more educated brain (I cannot consider myself educated; I have only attempted to better my situation) you would have immediately said, 'Why, if I had some liquor, or even very hot water, I could extract some oil from this orange-peel, which as everyone knows is antibacterial; that may well do my hands some good,' and you wouldn't be in such a stupid predicament."

On balance, I think this style worked. It occasionally annoyed me, but it has some charm. About halfway through, I was finding the story lightly entertaining, although I would have preferred a bit less grime, illness, and physical injury.

Unfortunately, the rest of the story didn't work for me. The dynamic between Floralinda and Cobweb turns into a sort of D&D progression through monster fights, and while there are some creative twists to those fights, they become all of a sameness. And while I won't spoil the ending, it didn't work for me. I think I see what Muir was trying to do, and I have some intellectual appreciation for the idea, but it wasn't emotionally satisfying.

I think my root problem with this story is that Muir sets up a rather interesting world, one in which witches artistically imprison princesses, and particularly bright princesses (with the help of amateur chemist fairies) can use the trappings of a magical tower in ways the witch never intended. I liked that; it has a lot of potential. But I didn't feel like that potential went anywhere satisfying. There is some relationship and characterization work, and it reached some resolution, but it didn't go as far as I wanted. And, most significantly, I found the end point the characters reached in relation to the world to be deeply unsatisfying and vaguely irritating.

I wanted to like this more than I did. I think there's a story idea in here that I would have enjoyed more. Unfortunately, it's not the one that Muir wrote, and since so much of my problem is with the ending, I can't provide much guidance on whether someone else would like this story better (and why). But if the idea of taking apart a fairy-tale tower and repurposing the pieces sounds appealing, and if you get along better with Muir's illness motif than I do, you may enjoy this more than I did.

Rating: 5 out of 10

01 April, 2022 04:28AM

Updated eyrie Debian archive keyring

For anyone who uses my personal Debian repository (there are fewer and fewer reasons to do that, but there are still some Debian packages there that aren't available anywhere else), I've (finally) refreshed the archive signing key.

The new key is available through the eyrie-archive-keyring package as normal. Both the new and the old keys were provided in that package for a while. As of today, the old key has been removed. The key can also be downloaded from my web site.

01 April, 2022 03:15AM

Russell Coker

Converting to UEFI

When I got my HP ML110 Gen9 working as a workstation I was initially under the impression that booting from NVMe wasn’t supported, so I booted it from USB. I found USB booting with legacy boot to be unreliable so I decided to try EFI booting, and noticed that the NVMe devices were boot candidates with UEFI. Making one of them bootable was more complex than expected because no-one seems to have documented such things. So here’s my documentation; it’s not great, but this method has worked once for me.

Before starting major partitioning work it’s best to run “parted -l” and save the output to a file, so that you can recreate the partitions if you corrupt them. One thing I’m doing on systems I manage is putting “@reboot /usr/sbin/parted -l > /root/parted.log” in the root crontab; then when the system is backed up the backup server gets any recent changes to partitioning (I don’t back up /var/log on all my systems).
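
For example, this is one way to add that crontab entry non-interactively (a rough sketch; run it as root, and note that it simply appends to whatever root crontab already exists):

( crontab -l 2>/dev/null ; echo '@reboot /usr/sbin/parted -l > /root/parted.log' ) | crontab -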

Firstly run parted on the device to create the EFI and /boot partitions. Note that if you want to copy and paste from this you must do so one line at a time; a block paste seemed to confuse parted.

mklabel gpt
mkpart EFI fat32 1 99
mkpart boot ext3 99 300
toggle 1 boot
toggle 1 esp
p
# Model: CT1000P1SSD8 (nvme)
# Disk /dev/nvme1n1: 1000GB
# Sector size (logical/physical): 512B/512B
# Partition Table: gpt
# Disk Flags: 
#
# Number  Start   End     Size    File system  Name  Flags
#  1      1049kB  98.6MB  97.5MB  fat32        EFI   boot, esp
#  2      98.6MB  300MB   201MB   ext3         boot
q

Here are the commands needed to create the filesystems and install the necessary files. This is almost to the stage of being scriptable. Some minor changes need to be made to convert from NVMe device names to SATA/SAS but nothing serious.

mkfs.vfat /dev/nvme1n1p1
mkfs.ext3 -N 1000 /dev/nvme1n1p2
file -s /dev/nvme1n1p2 | sed -e s/^.*UUID/UUID/ -e "s/ .*$/ \/boot ext3 noatime 0 1/" >> /etc/fstab
file -s /dev/nvme1n1p1 | tr "[a-f]" "[A-F]" |sed -e s/^.*numBEr.0x/UUID=/ -e "s/, .*$/ \/boot\/efi vfat umask=0077 0 1/" >> /etc/fstab
# edit /etc/fstab to put a hyphen between the 2 groups of 4 chars for the VFAT filesystem UUID
mount /boot
mkdir -p /boot/efi /boot/grub
mount /boot/efi
mkdir -p /boot/efi/EFI/debian
apt install efibootmgr shim-unsigned grub-efi-amd64
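# copy the shim binaries and the monolithic GRUB EFI image into the ESP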
cp /usr/lib/shim/* /usr/lib/grub/x86_64-efi/monolithic/grubx64.efi /boot/efi/EFI/debian
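# generate a minimal grub.cfg in the ESP that finds /boot by UUID and chains to the real /boot/grub/grub.cfg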
file -s /dev/nvme1n1p2 | sed -e "s/^.*UUID=/search.fs_uuid /" -e "s/ .needs.*$/ root hd0,gpt2/" > /boot/efi/EFI/debian/grub.cfg
echo "set prefix=(\$root)'/boot/grub'" >> /boot/efi/EFI/debian/grub.cfg
echo "configfile \$prefix/grub.cfg" >> /boot/efi/EFI/debian/grub.cfg
grub-install
update-grub

If someone would like to make a script that can handle the different partition names of regular SCSI/SATA disks, NVMe, CCISS, etc then that would be great. It would be good to have a script in Debian that creates the partitions and sets up the EFI files.
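
As a starting point, here is a rough, untested sketch of the device-name handling such a script would need ($DISK being a placeholder for whichever disk is being set up):

case "$DISK" in
  /dev/nvme*|/dev/mmcblk*|/dev/cciss/*) PART="${DISK}p" ;; # partitions are named like nvme0n1p1
  *)                                    PART="$DISK"    ;; # partitions are named like sda1
esac
mkfs.vfat "${PART}1"
mkfs.ext3 -N 1000 "${PART}2"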

If you want to have a second bootable device then the following commands will copy a GPT partition table and give it new UUIDs. Make very certain that $DISKB is the one you want to be wiped, and refer to my previous mention of “parted -l”. Also note that parted has a rescue command which works very well.

sgdisk /dev/$DISKA -R /dev/$DISKB 
sgdisk -G /dev/$DISKB

To back up a GPT partition table run a command like this. Note that if sgdisk is told to back up an MBR partitioned disk it will say “Found invalid GPT and valid MBR; converting MBR to GPT format” which is probably a viable way of converting MBR format to GPT.

sgdisk -b sda.bak /dev/sda
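
To restore from that backup later the counterpart option is --load-backup (-l); as before, be very careful about which device you point it at.

sgdisk -l sda.bak /dev/sda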

01 April, 2022 02:17AM by etbe

Antoine Beaupré

Salvaged my first Debian package

I finally salvaged my first Debian package, python-invoke. As part of ITS 964718, I moved the package from the Openstack Team to the Python team. The Python team might not be super happy with it, because it's breaking some of its rules, but at least someone (ie. me) is actively working (and using) the package.

Wait what

People not familiar with Debian will not understand anything in that first paragraph, so let me expand. Know-it-all Debian developers (you know who you are) can skip to the next section.

Traditionally, the Debian project (my Linux-based operating system of choice) has prided itself on the self-managed, anarchistic organisation of its packaging. Each package maintainer is the lord of his little kingdom. Some maintainers like to accumulate lots of kingdoms to rule over.

(Yes, it really doesn't sound like anarchism when you put it like that. Yes, it's complicated: there's a constitution and voting involved. And yes, we're old.)

Therefore, it's really hard to make package maintainers do something they don't want. Typically, when things go south, someone makes a complaint to the Debian Technical Committee (CTTE) which is established by the Debian constitution to resolve such conflicts. The committee is appointed by the Debian Project leader, himself elected each year (and there's an election coming up if you haven't heard).

Typically, the CTTE will then vote and formulate a decision. But here's the trick: maintainers are still free to do whatever they want after that, in a sense. It's not like the CTTE can just break down doors and force maintainers to type code.

(I won't go into the details of the why of that, but it involves legal issues and, I think, something about the Turing halting problem. Or something like that.)

Anyways. The point is all that is super heavy and no one wants to go there...

(Know-it-all Debian developers, I know you are still reading this anyways and disagree with that statement, but please, please, make it true.)

... but sometimes, packages just get lost. Maintainers get distracted, or busy with something else. It's not that they want to abandon their packages. They love their little fiefdoms. It's just there was a famine or a war or something and everyone died, and they have better things to do than put up fences or whatever.

So clever people in Debian found a better way of handling such problems than waging war in the poor old CTTE's backyard. It's called the Package Salvaging process. Through that process, a maintainer can propose to take over an existing package from another maintainer, if certain conditions are met and a specific process is followed.

Normally, taking over another maintainer's package is basically a war declaration, rarely seen in the history of Debian (yes, I do think it happened!), as rowdy as ours is. But through this process, it seems we have found a fair way of going forward.

The process is basically like this:

  1. file a bug proposing the change
  2. wait three weeks
  3. upload a package making the change, with another week delay
  4. you now have one more package to worry about

Easy right? It actually is! Process! It's magic! It will cure your babies and resurrect your cat!

So how did that go?

It went well!

The old maintainer was actually fine with the change because their team wasn't using the package anymore anyways. He asked to be kept as an uploader, which I was glad to oblige.

(He replied a few months after the deadline, but I wasn't in a rush anyways, so that doesn't matter. It was polite for him to answer, even if, technically, I was already allowed to take it over.)

What happened next is less shiny for me though. I totally forgot about the ITS, even after the maintainer reminded me of it. See, the thing is the ITS doesn't show up on my dashboard at all. So I totally forgot about it (yes, twice).

In fact, the only reason I remembered it was that I got into the process of formulating another ITS (1008753, trocla) and was trying to figure out how to write the email. Then I remembered: "hey wait, I think I did this before!" followed by "oops, yes, I totally did this before and forgot for 9 months".

So, not great. Also, the package is still not in perfect shape. I was able to upload the pending upstream version (1.5.0) to clear out the ITS, basically. And there are already two new upstream releases to upload, so I pushed 1.7.0 to experimental as well, for good measure.

Unfortunately, I still can't enable tests because everything is on fire, as usual.

But at least my kingdom is growing.

Appendix

Just in case someone didn't get the metaphor, I'm not a monarchist promoting feudalism as a practice to manage a community. I do not intend to really "grow my kingdom" and I think the culture around "property" of "packages" is kind of absurd in Debian. I kind of wish it would go away.

Team maintenance, the LowNMU process, and low threshold adoption processes are all steps in the right direction, but they are all opt-in. At least the package salvaging process is somewhat more ... uh... coercive? Or at least it allows the community to step in and do the right thing, in a sense.

We'll see what happens with the coming wars around the tech committee, which are bound to touch on that topic. (Hint: our next drama is called "usrmerge".) Hopefully, LWN will write a brilliant article to sum it up for us so that I don't have to go through the inevitable debian-devel flamewar to figure it out. I already wreaked havoc on the #debian-devel IRC channel asking newbie questions so I won't stir that mud any further for now.

01 April, 2022 01:50AM

Paul Wise

FLOSS Activities March 2022

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

  • Spam: reported 3 Debian bug reports and 53 Debian mailing list posts
  • Debian wiki: RecentChanges for the month
  • Debian BTS usertags: changes for the month
  • Debian screenshots:

Administration

  • Debian servers: investigate wiki mail delivery issue, restart backup director
  • Debian wiki: unblock IP addresses, approve accounts

Communication

  • Forward python-plac test failure issue upstream
  • Participate in Debian Project Leader election discussions
  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

The oci-python-sdk and plac work was sponsored. All other work was done on a volunteer basis.

01 April, 2022 01:27AM

March 31, 2022

Matthew Garrett

ZTA doesn't solve all problems, but partial implementations solve fewer

Traditional network access controls work by assuming that something is trustworthy based on some other factor - for example, if a computer is on your office network, it's trustworthy because only trustworthy people should be able to gain physical access to plug something in. If you restrict access to your services to requests coming from trusted networks, then you can assert that it's coming from a trusted device.

Of course, this isn't necessarily true. A machine on your office network may be compromised. An attacker may obtain valid VPN credentials. Someone could leave a hostile device plugged in under a desk in a meeting room. Trust is being placed in devices that may not be trustworthy.

A Zero Trust Architecture (ZTA) is one where a device is granted no inherent trust. Instead, each access to a service is validated against some policy - if the policy is satisfied, the access is permitted. A typical implementation involves granting each device some sort of cryptographic identity (typically a TLS client certificate) and placing the protected services behind a proxy. The proxy verifies the device identity, queries another service to obtain the current device state (we'll come back to that in a moment), compares the state against a policy, and either passes the request through to the service or rejects it. Different services can have different policies (eg, you probably want a lax policy around whatever's hosting the documentation for how to fix your system if it's being refused access to something for being in the wrong state), and if you want you can also tie it to proof of user identity in some way.

From a user perspective, this is entirely transparent. The proxy is made available on the public internet, DNS for the services points to the proxy, and every time your users try to access the service they hit the proxy instead and (if everything's ok) gain access to it no matter which network they're on. There's no need to connect to a VPN first, and there's no worries about accidentally leaking information over the public internet instead of over a secure link.

It's also notable that traditional solutions tend to be all-or-nothing. If I have some services that are more sensitive than others, the only way I can really enforce this is by having multiple different VPNs and only granting access to sensitive services from specific VPNs. This obviously risks combinatorial explosion once I have more than a couple of policies, and it's a terrible user experience.

Overall, ZTA approaches provide more security and an improved user experience. So why are we still using VPNs? Primarily because this is all extremely difficult. Let's take a look at an extremely recent scenario. A device used by customer support technicians was compromised. The vendor in question has a solution that can tie authentication decisions to whether or not a device has a cryptographic identity. If this was in use, and if the cryptographic identity was tied to the device hardware (eg, by being generated in a TPM), the attacker would not simply be able to obtain the user credentials and log in from their own device. This is good - if the attacker wanted to maintain access to the service, they needed to stay on the device in question. This increases the probability of the monitoring tooling on the compromised device noticing them.

Unfortunately, the attacker simply disabled the monitoring tooling on the compromised device. If device state was being verified on each access then this would be noticed before too long - the last data received from the device would be flagged as too old, and the requests would no longer satisfy any reasonable access control policy. Instead, the device was assumed to be trustworthy simply because it could demonstrate its identity. There's an important point here: just because a device belongs to you doesn't mean it's a trustworthy device.

So, if ZTA approaches are so powerful and user-friendly, why aren't we all using one? There's a few problems, but the single biggest is that there's no standardised way to verify device state in any meaningful way. Remote Attestation can both prove device identity and the device boot state, but the only product on the market that does much with this is Microsoft's Device Health Attestation. DHA doesn't solve the broader problem of also reporting runtime state - it may be able to verify that endpoint monitoring was launched, but it doesn't make assertions about whether it's still running. Right now, people are left trying to scrape this information from whatever tooling they're running. The absence of any standardised approach to this problem means anyone who wants to deploy a strong ZTA has to integrate with whatever tooling they're already running, and that then increases the cost of migrating to any other tooling later.

But even device identity is hard! Knowing whether a machine should be given a certificate or not depends on knowing whether or not you own it, and inventory control is a surprisingly difficult problem in a lot of environments. It's not even just a matter of whether a machine should be given a certificate in the first place - if a machine is reported as lost or stolen, its trust should be revoked. Your inventory system needs to tie into your device state store in order to ensure that your proxies drop access.

And, worse, all of this depends on you being able to put stuff behind a proxy in the first place! If you're using third-party hosted services, that's a problem. In the absence of a proxy, trust decisions are probably made at login time. It's possible to tie user auth decisions to device identity and state (eg, a self-hosted SAML endpoint could do that before passing through to the actual ID provider), but that's still going to end up providing a bearer token of some sort that can potentially be exfiltrated, and will continue to be trusted even if the device state becomes invalid.

ZTA doesn't solve all problems, and there isn't a clear path to it doing so without significantly greater industry support. But a complete ZTA solution is significantly more powerful than a partial one. Verifying device identity is a step on the path to ZTA, but in the absence of device state verification it's only a step.

31 March, 2022 11:06PM

Russell Coker

AMT/MEBX on Debian

I’ve just been playing with Intel’s Active Management Technology (AMT) [1] which is also known as Management Engine Bios Extension (MEBX).

Firstly a disclaimer, using this sort of technology gives remote access to your system at a level that allows in some ways overriding the OS. If this gets broken then you have big problems. Also all the code that matters is non-free. Please don’t comment on this post saying that AMT is bad, take it as known that it has issues and that people are forced to use it anyway.

I tested this out on a HP Z420 workstation. The first thing is to enable AMT via Intel “MEBX”; the default password is “admin”. On first use you are compelled to set a new password which must be 8+ characters containing upper and lower case letters, a number, and a punctuation character.

The Debian package “amtterm” (which needs the package “libsoap-lite-perl”) has basic utilities for AMT. The amttool program connects to TCP port 16992 and the amtterm program connects to TCP port 16994. Note that these programs seem a little rough; you can get Perl errors (as opposed to deliberate help messages) if you enter bad command-line parameters. They basically work but could do with some improvement.

If you use DHCP for the IP address the DHCP hostname will be “DESKTOP-$AssetID” and you can find the IP address by requesting an alert be sent to the sysadmin.
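
Another option for finding the machine on your LAN is to scan for the standard AMT ports; this is just a sketch, adjust the address range to match your network.

nmap -p 16992-16995 192.168.0.0/24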

Here are some examples of amttool usage:

# get AMT info
AMT_PASSWORD="$PASS" amttool $IP
# reset the system and redirect BIOS messages to serial over lan
AMT_PASSWORD="$PASS" amttool reset bios
# access serial over lan console
amtterm -p "$PASS" $IP

The following APT configuration enables the Ubuntu package wsmancli which had some features not in any Debian packages last time I checked.

deb http://us.archive.ubuntu.com/ubuntu/ bionic-updates universe
deb http://us.archive.ubuntu.com/ubuntu/ bionic universe
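
If you add those lines it’s worth also pinning the Ubuntu archive to a low priority so that apt only uses it for packages (such as wsmancli) that have no Debian version. A sketch of such a preferences file (the file name is arbitrary):

# /etc/apt/preferences.d/ubuntu
Package: *
Pin: release o=Ubuntu
Pin-Priority: 100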

This Cyberciti article has information on accessing KVM over AMT [2], I haven’t tried to do that yet.

31 March, 2022 03:51AM by etbe

March 30, 2022

Ulrike Uhlig

How do kids conceive the internet?

I wanted to understand how kids between 10 and 18 conceive the internet. Surely, we have seen a generation that we call “digital natives” grow up with the internet. Now, there is a younger generation who grows up with pervasive technology, such as smartphones, smart watches, virtual assistants and so on. And only a few of them have parents who work in IT or engineering…

Pervasive technology contributes to the idea that the internet is immaterial

With their search engine website design, Google has put in place an extremely simple and straightforward user interface. Since then, designers and psychologists have worked on making user interfaces more and more intuitive to use. The buzzwords are “usability” and “user experience design”. Besides this optimization of visual interfaces, haptic interfaces have evolved as well, specifically on smartphones and tablets where hand gestures have replaced more clumsy external haptic interfaces such as a mouse. And beyond interfaces, the devices themselves have become smaller and slicker. While in our generation many people have experienced opening a computer tower or a laptop to replace parts, with the side effect of seeing the parts the device is physically composed of, the new generation of end user devices makes this close to impossible, essentially transforming these devices into black boxes and further contributing to the idea that the internet these devices are used to access is something entirely intangible.

What do kids in 2022 really know about the internet?

So, what do kids of that generation really know about the internet, beyond purely using services they do not control? In order to find out, I decided to interview children between 10 and 18.

I conducted 5 interviews with kids aged 9, 10, 12, 15 and 17, two boys and three girls. Two live in rural Germany, one in a German urban area, and two live in the French capital.

I wrote the questions in a way to stimulate the interviewees to tell me a story each time. I also told them that the interview is not a test and that there are no wrong answers.

Except for the 9 year old, all interviewees possessed both their own smartphone and their own laptop. All of them used the internet mostly for chatting, entertainment (video and music streaming, online games), social media (TikTok, Instagram, Youtube), and instant messaging.

Let me introduce you to their concepts of the internet. That was my first story telling question to them:

If aliens had landed on Earth and would ask you what the internet is, what would you explain to them?

The majority of respondents agreed in their replies that the internet is intangible – while still being a place where one “can do anything” and “everything”.

Before I tell you more about their detailed answers to the above question, let me show you how they visualize “their” internet.

If you had to make a drawing to explain to a person what the internet is, how would this drawing look like?

Each interviewee had some minutes to come up with a drawing. As you will see, that drawing corresponds to what the kids would want an alien to know about the internet and how they are using the internet themselves.

Movies, series, videos

A child's drawing. In the middle there is a screen on which a movie is running. Around the screen there are many people, at least two dozen. The words 'film', 'series', 'network', 'video' are written and arrows point from these words to the screen. There's also a play icon.

The youngest respondent, a 9 year old girl, drew a screen with lots of people around it and the words “film, series, network, video”, as well as a “play” icon. She said that she mostly uses the internet to watch movies. She was the only one who used a shared tablet and smartphone that belonged to her family, not to herself. And she would explain the net like this to an alien:

"Internet is a… er… one cannot touch it… it‘s an, er… [I propose the word ‚idea‘], yes it‘s an idea. Many people use it not necessarily to watch things, but also to read things or do other stuff."

User interface elements

There is a magnifying glass icon, a play icon and speech bubbles drawn with a pencil.

A 10 year old boy represented the internet by recalling user interface elements he sees every day in his drawing: a magnifying glass (search engine), a play icon (video streaming), speech bubbles (instant messaging). He would explain the internet like this to an alien:

"You can use the internet to learn things or get information, listen to music, watch movies, and chat with friends. You can do nearly anything with it."

Another planet

Pencil drawing that shows a planet with continents. The continents are named: H&M, Ebay, Google, Wikipedia, Facebook.

A 12 year old girl imagines the internet as a second, intangible planet where Google, Wikipedia, Facebook, Ebay, or H&M are continents that one enters.

"And on [the] Ebay [continent] there‘s a country for clothes, and ,trousers‘, for example, would be a federal state in that country."

Something that was unique about this interview was that she told me she had an email address but she never writes emails. She only has an email account to receive confirmation emails, for example when doing online shopping, or when registering to a service and needing to confirm one‘s address. This is interesting because it‘s an anti-spam measure that might become outdated with a generation that uses email less or not at all.

Home network

Kid's drawing: there are three computer towers and next to each there are two people. The first couple is sad, the second couple is smiling, the last one is surprised. Each computer is connected to a router, two of them by cable, one by wifi.

A 15 year old boy knew that his family‘s devices are connected to a home router (Freebox is a router from the French ISP Free) but had no picture of how the rest of the internet works. When I asked him about what would be behind the router, on the other side, he said what‘s behind is like a black hole to him. However, he was the only interviewee who did actually draw cables, wifi waves, a router, and the local network. His drawing is even extremely precise; it just lacks the cable connecting the router to the rest of the internet.

Satellite internet

This is another very simple drawing: On the top left, there's planet Earth and there are lines indicating that Earth is a sphere. Around Earth there are two big satellites reaching most of Earth. On the left, below, there are three icons representing social media services on the internet: Snapchat, Instagram, TikTok. On the right, there are simplified drawings of possibilities which the internet offers: person-to-person connection, email (represented by envelopes), calls (represented by an old-style telephone set).

A 17 year old girl would explain the internet to an alien as follows:

"The internet goes around the entire globe. One is networked with everyone else on Earth. One can find everything. But one cannot touch the internet. It‘s like a parallel world. With a device one can look into the internet. With search engines, one can find anything in the world, one can phone around the world, and write messages. [The internet] is a gigantic thing."

This interviewee stated as the only one that the internet is huge. And while she also is the only one who drew the internet as actually having some kind of physical extension beyond her own home, she seems to believe that internet connectivity is based on satellite technology and wireless communication.

Imagine that a wise and friendly dragon could teach you one thing about the internet that you’ve always wanted to know. What would you ask the dragon to teach you about?

A 10 year old boy said he‘d like to know “how big are the servers behind all of this”. That‘s the only interview in which the word “server” came up.

A 12 year old girl said “I would ask how to earn money with the internet. I always wanted to know how this works, and where the money comes from.” I love the last part of her question!

The 15 year old boy for whom everything behind the home router is out of his event horizon would ask “How is it possible to be connected like we are? How does the internet work scientifically?”

A 17 year old girl said she‘d like “to learn how the darknet works, what hidden things are there? Is it possible to get spied on via the internet? Would it be technically possible to influence devices in a way that one can listen to secret or telecommanded devices?”

Lastly, I wanted to learn about what they find annoying, or problematic about the internet.

Imagine you could make the internet better for everyone. What would you do first?

Asked what she would change if she could, the 9 year old girl advocated for a global usage limit of the internet in order to protect the human brain. Also, she said, her parents spend way too much time on their phones and people should rather spend more time with their children.

Three of the interviewees agreed that they see way too many advertisements and two of them would like ads to disappear entirely from the web. The other one said that she doesn‘t want to see ads, but that ads are fine if she can at least click them away.

The 15 year old boy had different ambitions. He told me he would change:

"the age of access to the internet. More and more younger people access the internet ; especially with TikTok there is a recommendation algorithm that can influcence young people a lot. And influencing young people should be avoided but the internet does it too much. And that can be negative. If you don‘t yet have a critical spirit, and you watch certain videos you cannot yet moderate your stance. It can influence you a lot. There are so many things that have become indispensable and that happen on the internet and we have become dependent. What happens if one day it doesn‘t work anymore? If we connect more and more things to the net, that‘s not a good thing."

The internet - Oh, that’s what you mean!

On a sidenote, my first interview attempt was with an 8 year old girl from my family. I asked her if she uses the internet and she said no, so I abandoned interviewing her. Some days later, while talking to her, she offered to look something up on Google, using her smartphone. I said: “Oh, so you are using the internet!” She replied: “Oh, that‘s what you‘re talking about?”

I think she knows the word “Google” and she knows that she can search for information with this Google thing. But it appeared that she doesn‘t know that the Google search engine is located somewhere else on the internet and not on her smartphone. I concluded that for her, using the services on the smartphone is as “natural” as switching on a light in the house: we also don’t think about where the electricity comes from when we do that.

What can we learn from these few interviews?

Unsurprisingly, social media, streaming, entertainment, and instant messaging are the main activities kids undertake on the internet. They are completely at the mercy of advertisements in apps and on websites, not knowing how to get rid of them. They interact on a daily basis with algorithms that are unregulated and known to perpetuate discrimination and to create filter bubbles, without necessarily being aware of it. The kids I interviewed act as mere service users and seem to be mostly confined to specific apps or websites.

All of them perceived the internet as being something intangible. Only the older interviewees perceived that there must be some kind of physical expansion to it: the 17 year old girl by drawing a network of satellites around the globe, the 15 year old boy by drawing the local network in his home.

To be continued…

30 March, 2022 10:00PM by Ulrike Uhlig

Bits from Debian

Lenovo Platinum Sponsor of DebConf22

We are very pleased to announce that Lenovo has committed to supporting DebConf22 as a Platinum sponsor. This is the fourth year in a row that Lenovo is sponsoring The Debian Conference at the highest tier!

As a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office and data center solutions, Lenovo understands how critical open systems and platforms are to a connected world.

With this commitment as Platinum Sponsor, Lenovo is helping to make our annual conference possible, directly supporting the progress of Debian and Free Software and strengthening the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much Lenovo, for your support of DebConf22!

Become a sponsor too!

DebConf22 will take place from July 17th to 24th, 2022 at the Innovation and Training Park (ITP) in Prizren, Kosovo, and will be preceded by DebCamp, from July 10th to 16th.

And DebConf22 is still accepting sponsors! Interested companies and organizations may contact the DebConf team through [email protected], and visit the DebConf22 website at https://debconf22.debconf.org/sponsors/become-a-sponsor.

30 March, 2022 09:00AM by The Debian Publicity Team

March 29, 2022

Dirk Eddelbuettel

RcppBDT 0.2.5: Maintenance

A minor maintenance release for the RcppBDT package is now on CRAN.

The RcppBDT package is an early adopter of Rcpp and was one of the first packages utilizing Boost and its Date_Time library. The now more widely-used package anytime is a direct descendant of RcppBDT. Thanks again for the heads-up!

This release mostly deals with a one-definition rule violation detected by link-time optimisation (which can be enabled when configuring R itself at build time with --enable-lto). I confused myself into thinking Rcpp Modules may be at fault, but Iñaki was a little more awake than myself and noticed that I only needed to carry the (common) header RcppBDT.h into the file toPOSIXct.cpp added last summer.

The NEWS entry follows:

Changes in version 0.2.5 (2022-03-29)

  • Ensure consistent compilation by ensuring RcppBDT.h is included in all files, this addresses an LTO/ODR issue

  • Correct one declaration in init.c

  • Minor additional cleanups

Courtesy of my CRANberries, there is also a diffstat report for this release.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

29 March, 2022 11:13PM

Jeremy Bicha

How to install a bunch of debs

Recently, I needed to check if a regression in Ubuntu 22.04 Beta was triggered by the mesa upgrade. Ok, sounds simple, let me just install the older mesa version.

Let’s take a look.

Oh, wow, there are about 24 binary packages (excluding the packages for debug symbols) included in mesa!

Because it’s no longer published in Ubuntu 22.04, we can’t use our normal apt way to install those packages. And downloading those one by one and then installing them sounds like too much work.

Step Zero: Prerequisites

If you are an Ubuntu (or Debian!) developer, you might already have ubuntu-dev-tools installed. If not, it has some really useful tools!

$ sudo apt install ubuntu-dev-tools

Step One: Create a Temporary Working Directory

Let’s create a temporary directory to hold our deb packages. We don’t want to get them mixed up with other things.

$ mkdir mesa-downgrade; cd mesa-downgrade

Step Two: Download All the Things

One of the useful tools is pull-lp-debs. The first argument is the source package name. In this case, I next need to specify what version I want; otherwise it will give me the latest version which isn’t helpful. I could specify a series codename like jammy or impish but that won’t give me what I want this time.

$ pull-lp-debs mesa 21.3.5-1ubuntu2

By the way, there are several other variations on pull-lp-debs:

  • pull-lp-source – downloads source package from Launchpad.
  • pull-lp-debs – downloads debs package(s) from Launchpad.
  • pull-lp-ddebs – downloads dbgsym/ddebs package(s) from Launchpad.
  • pull-lp-udebs – downloads udebs package(s) from Launchpad.
  • pull-debian-* – same as pull-lp-* but for Debian packages.

I use the LP and Debian source versions frequently when I just want to check something in a package but don’t need the full git repo.

Step Three: Install Only What We Need

This command allows us to install just what we need.

$ sudo apt install --only-upgrade --mark-auto ./*.deb

--only-upgrade tells apt to only install packages that are already installed. I don’t actually need all 24 packages installed; I just want to change the versions for the stuff I already have.

--mark-auto tells apt to keep these packages marked in dpkg as automatically installed. This allows any of these packages to be suggested for removal once there isn’t anything else depending on them. That’s useful if you don’t want to have old libraries installed on your system in case you do manual installation like this frequently.

Finally, the apt install syntax has a quirk: It needs a path to a file because it wants an easy way to distinguish from a package name. So adding ./ before filenames works.

I guess this is a bug. apt should be taught that libegl-mesa0_21.3.5-1ubuntu2_amd64.deb is a file name not a package name.

Step Four: Cleanup

Let’s assume that you installed old versions. To get back to the current package versions, you can just upgrade like normal.

$ sudo apt dist-upgrade

If you do want to stay on this unsupported version a bit longer, you can specify which packages to hold:

$ sudo apt-mark hold

And you can use apt-mark list and apt-mark unhold to see what packages you have held and release the holds. Remember you won’t get security updates or other bug fixes for held packages!
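
If you're not sure which packages those are, one option (a rough sketch, assuming you're still in the mesa-downgrade directory with the downloaded debs) is to read the package names out of the deb files and hold them all:

$ for f in ./*.deb; do dpkg-deb -f "$f" Package; done | xargs sudo apt-mark hold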

And when you’re done with the debs we download, you can remove all the files:

$ cd .. ; rm -ri mesa-downgrade

Bonus: Downgrading back to supported

What if you did the opposite and installed newer stuff than is available in your current release? Perhaps you installed from jammy-proposed and you want to get back to jammy? Here's the syntax for libegl-mesa0

Note the /jammy suffix on the package name.

$ sudo apt install libegl-mesa0/jammy

But how do you find these packages? Use apt list

Here’s one suggested way to find them:

$ apt list --installed --all-versions | grep 'local]' --after-context 1

Finally, I should mention that apt is designed to upgrade packages not downgrade them. You can break things by downgrading. For instance, a database could upgrade its format to a new version but I wouldn’t expect it to be able to reverse that just because you attempt to install an older version.

29 March, 2022 09:55PM by Jeremy Bicha

Raphaël Hertzog

Join Freexian to help improve Debian

Freexian has set itself new ambitious goals in support of Debian and we would like to expand our team to help us reach those goals. We have drafted a mission statement to clarify our purpose and our values, and we hope to be able to attract talented software developers, entrepreneurs and Debian experts from our community.

Freexian’s mission is to help Debian evolve to be the leading Linux distribution and a model to follow in the free software world.

We want to achieve that by enabling passionate contributors to spend most of their time working on Debian and free software, combining lucrative work in support of enterprise customers, core contributions to Debian’s processes, and personal goals.

Extract from Freexian’s mission statement

If Debian is an important part of your life, if improving Debian is something that you enjoy doing, if you have enough skills and Debian expertise to match some of the roles that we have described on our Join us page, then we would love to hear from you at [email protected], so we can figure out a way to work together.

29 March, 2022 10:29AM by Raphaël Hertzog

Jacob Adams

A Lesson in Shortcuts

(The below was written by Rob Pike, copied here for posterity from The Wayback Machine)

Long ago, as the design of the Unix file system was being worked out, the entries . and .. appeared, to make navigation easier. I’m not sure but I believe .. went in during the Version 2 rewrite, when the file system became hierarchical (it had a very different structure early on). When one typed ls, however, these files appeared, so either Ken or Dennis added a simple test to the program. It was in assembler then, but the code in question was equivalent to something like this:

   if (name[0] == '.') continue;

This statement was a little shorter than what it should have been, which is

   if (strcmp(name, ".") == 0 || strcmp(name, "..") == 0) continue;

but hey, it was easy.

Two things resulted.

First, a bad precedent was set. A lot of other lazy programmers introduced bugs by making the same simplification. Actual files beginning with periods are often skipped when they should be counted.

Second, and much worse, the idea of a “hidden” or “dot” file was created. As a consequence, more lazy programmers started dropping files into everyone’s home directory. I don’t have all that much stuff installed on the machine I’m using to type this, but my home directory has about a hundred dot files and I don’t even know what most of them are or whether they’re still needed. Every file name evaluation that goes through my home directory is slowed down by this accumulated sludge.

I’m pretty sure the concept of a hidden file was an unintended consequence. It was certainly a mistake.

How many bugs and wasted CPU cycles and instances of human frustration (not to mention bad design) have resulted from that one small shortcut about 40 years ago?

Keep that in mind next time you want to cut a corner in your code.

(For those who object that dot files serve a purpose, I don’t dispute that but counter that it’s the files that serve the purpose, not the convention for their names. They could just as easily be in $HOME/cfg or $HOME/lib, which is what we did in Plan 9, which had no dot files. Lessons can be learned.)

29 March, 2022 12:00AM

March 28, 2022

Russell Coker

Feedburner Seems to be Dying

Many years ago Feedburner was a useful service. It proxied the RSS feed of your blog and gave you analytics of what happened with it. Now feeds using Feedburner randomly give HTTP error 404s. The Feedburner Twitter account is inactive and recommends that people Tweet at Google instead. It seems that Google wants to get rid of the service and random 404s probably aren’t a high priority for them.

I’ve just gone through the config for Planet Linux Australia [1] and changed as many Feedburner URLs as possible to direct feed URLs. I did this by loading the Feedburner feed, getting the URL for the site, and then guessing the feed URL (usually just appending “/feed” to the domain name).
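
If you don’t want to guess, a rough sketch like this (with $SITE being the blog’s front page URL) will usually find the feed URL that the site itself advertises:

curl -sL "$SITE" | grep -oiE '<link[^>]+type="application/(rss|atom)\+xml"[^>]*>' | head -n 1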

I recommend that everyone abandon Feedburner, it’s not reliable enough to use and doesn’t seem to have any active support.

28 March, 2022 11:00PM by etbe

Antoine Beaupré

What is going on with web servers

I stumbled upon this graph recently, which is w3techs.com's graph of "Historical yearly trends in the usage statistics of web servers". It seems I hadn't looked at it in a long while, because I was surprised on many levels:

  1. Apache is now second, behind Nginx, since ~2022 (so that's really new at least)

  2. Cloudflare "server" is third ahead of the traditional third (Microsoft IIS) - I somewhat knew that Cloudflare was hosting a lot of stuff, but I somehow didn't expect to see it there at all for some reason

  3. I had to look up what LiteSpeed was (and it's not a bike company): it's a drop-in replacement (!?) of the Apache web server (not a fork, a bizarre idea, but one which seems to be gaining a lot of speed recently, possibly because of its support for QUIC and HTTP/3)

So there. I'm surprised. I guess the stats should be taken with a grain of salt because they only partially correlate with Netcraft's numbers which barely mention LiteSpeed at all. (But they do point at a rising share as well.)

Netcraft also puts Nginx's first place earlier in time, around April 2019, which is about when Microsoft's IIS took a massive plunge down. That is another thing that doesn't map with w3techs' graphs at all either.

So it's all lies and statistics, basically. Moving on.

Oh and of course, the two most popular web servers, regardless of the source, are packaged in Debian. So while we're working on statistics and just making stuff up, I'm going to go ahead and claim all of this stuff runs on Linux and that BSD is dead. Or something like that.

28 March, 2022 02:11PM

Dirk Eddelbuettel

RcppCNPy 0.2.11

A minor maintenance release of the RcppCNPy package arrived on CRAN three days ago, but we skipped announcing it right then.

RcppCNPy provides R with read and write access to NumPy files thanks to the cnpy library by Carl Rogers along with Rcpp for the glue to R.

One of the vignettes created an issue at CRAN with one of the Python modules used, so we simply switched to pre-made vignettes just as we do for a few other packages. Other small changes that had accumulated since the previous release were a new section in the reticulate vignette, some more documentation on types, and some updates to continuous integration and badges.

Changes in version 0.2.11 (2022-03-24)

  • The reticulate vignette has new section on 3d-arrays.

  • Added clarification to the manual page that the default types are 32-bit integer and 64-bit double (as we are working with R here).

  • Several updates have been made to the CI system; it now runs r-ci.

  • The README.md was updated with some new badges.

  • The vignettes are now pre-made to avoid any external dependencies.

CRANberries also provides a diffstat report for the latest release. As always, feedback is welcome and the best place to start a discussion may be the GitHub issue tickets page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

28 March, 2022 10:58AM

Russell Coker

Hangouts Replacement

Google is currently in the process of killing Hangouts. Last year Hangouts was quite a nice IM system with integrated video chat and voice calling. Now they have decided to kill it and replace it with “Google Chat” and “Google Meet” both of which are integrated with the Gmail app on Android. To start getting people off the old platform they have disabled video and audio chats with more than 2 people in Hangouts. To do a video call you have to use Meet which has a worse user interface and isn’t integrated with text chat, so if in a text discussion someone says “let’s have a video call” you have to open a new app. Meet also doesn’t appear to have a facility to notify group members that someone has joined a group call so it’s required that Chat (or something else) is used to tell people they can join Meet.

Many of my relatives use Hangouts because they are forced to have it installed on their Android phones and because it worked quite well. Now it doesn’t work well and will soon be going away. So another option is needed.

I’m considering Matrix as a replacement. Matrix has a good feature set and is being worked on a lot. The video conferencing is through a connection to a Jitsi server and is well integrated giving functionality more like Hangouts than Chat/Meet.

For the LUV Matrix server the URL https://luv.asn.au/.well-known/matrix/client has the following contents:

{
  "m.homeserver": {
    "base_url": "https://luv.asn.au"
  },
  "jitsi": {
    "preferredDomain": "jitsi.perthchat.org"
  },
  "im.vector.riot.jitsi": {
    "preferredDomain": "jitsi.perthchat.org"
  }
}

This specifies the Jitsi server to be used for chats started from that Matrix server. The PerthChat.org people seem to be leading the way for self hosted Matrix in Australia. Note that other people shouldn’t link to their Jitsi server without discussing it with them first. I only included real data because it’s published on the web so there’s no point in keeping it secret.
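
You can fetch that file from any homeserver to check what it publishes (piping through jq is optional, just for readability):

curl -s https://luv.asn.au/.well-known/matrix/client | jq .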

The Flounder free software users’ group [1] uses Matrix a lot. We will probably discuss Matrix at the next meeting on Saturday.

There is also Element Call [2] which is apparently more integrated with Matrix (and also newer and possibly buggier). Jitsi works and we can change to a different service easily enough at a later time.

28 March, 2022 08:08AM by etbe

March 27, 2022

Russ Allbery

Review: A Song for a New Day

Review: A Song for a New Day, by Sarah Pinsker

Publisher: Berkley
Copyright: September 2019
ISBN: 1-9848-0259-3
Format: Kindle
Pages: 372

Luce Cannon was touring with a session band when the shutdown began. First came the hotel evacuation in the middle of the night due to bomb threats against every hotel in the state. Then came the stadium bombing just before they were ready to go on stage. Luce and most of the band performed anyway, with a volunteer crew and a shaken crowd. It was, people later decided, the last large stage show in the United States before the congregation laws shut down public gatherings. That was the end of Luce's expected career, and could have been the end of music, or at least public music. But Luce was stubborn and needed the music.

Rosemary grew up in the aftermath: living at home with her parents well away from other people, attending school virtually, and then moving seamlessly into a virtual job for Superwally, the corporation that ran essentially everything. A good fix for some last-minute technical problems with StageHoloLive's ticketing system got her an upgraded VR hoodie and complimentary tickets to the first virtual concert she'd ever attended. She found the experience astonishing, prompting her to browse StageHoloLive job openings and then apply for a technical job and, on a whim, an artist recruiter role. That's how Rosemary found herself, quite nerve-wrackingly, traveling out into the unsafe world to look for underground musicians who could become StageHoloLive acts.

A Song for a New Day was published in 2019 and had a moment of fame at the beginning of 2020, culminating in the Nebula Award for best novel, because it's about lockdowns, isolation, and the suppression of public performances. There's even a pandemic, although it's not a respiratory disease (it's some variety of smallpox or chicken pox) and is only a minor contributing factor to the lockdowns in this book. The primary impetus is random violence.

Unfortunately, the subsequent two years have not been kind to this novel. Reading it in 2022, with the experience of the past two years fresh in my mind, was a frustrating and exasperating experience because the world setting is completely unbelievable. This is not entirely Pinsker's fault; this book was published in 2019, was not intended to be about our pandemic, and therefore could not reasonably predict its consequences. Still, it required significant effort to extract the premise of the book from the contradictory evidence of current affairs and salvage the pieces of it I still enjoyed.

First, Pinsker's characters are the most astonishingly incurious and docile group of people I've seen in a recent political SF novel. This extends beyond the protagonists, where it could arguably be part of their characterization, to encompass the entire world (or at least the United States; the rest of the world does not appear in this book at all so far as I can recall). You may be wondering why someone bombs a stadium at the start of the book. If so, you are alone; this is not something anyone else sees any reason to be curious about. Why is random violence spiraling out of control? Is there some coordinated terrorist activity? Is there some social condition that has gotten markedly worse? Race riots? Climate crises? Wars? The only answer this book offers is a completely apathetic shrug. There is a hint at one point that the government may have theories that they're not communicating, but no one cares about that either.

That leads to the second bizarre gap: for a book that hinges on political action, formal political structures are weirdly absent. Near the end of the book, one random person says that they have been inspired to run for office, which so far as I can tell is the first mention of elections in the entire book. The "government" passes congregation laws shutting down public gatherings and there are no protests, no arguments, no debate, but also no suppression, no laws against the press or free speech, no attempt to stop that debate. There's no attempt to build consensus for or against the laws, and no noticeable political campaigning. That's because there's no need. So far as one can tell from this story, literally everyone just shrugs and feels sad and vaguely compliant. Police officers exist and enforce laws, but changing those laws or defying them in other than tiny covert ways simply never occurs to anyone. This makes the book read a bit like a fatuous libertarian parody of a docile populace, but this is so obviously not the author's intent that it wouldn't be satisfying to read even as that.

To be clear, this is not something that lasts only a few months in an emergency when everyone is still scared. This complete political docility and total incuriosity persists for enough years that Rosemary grows up within that mindset.

The triggering event was a stadium bombing followed by an escalating series of random shootings and bombings. (The pandemic in the book only happens after everything is locked down and, apart from adding to Rosemary's agoraphobia and making people inconsistently obsessed with surface cleanliness, plays little role in the novel.) I lived through 9/11 and the Oklahoma City bombing in the US, other countries have been through more protracted and personally dangerous periods of violence (the Troubles come to mind), and never in human history has any country reacted to a shock of violence (or, for that matter, disease) like the US does in this book. At points it felt like one of those SF novels where the author is telling an apparently normal story and all the characters turn out to be aliens based on spiders or bats.

I finally made sense of this by deciding that the author wasn't using sudden shocks like terrorism or pandemics as a model, even though that's what the book postulates. Instead, the model seems to be something implicitly tolerated and worked around: US school shootings, for instance, or the (incorrect but widespread) US belief in a rise of child kidnappings by strangers. The societal reaction here looks less like a public health or counter-terrorism response and more like suburban attitudes towards child-raising, where no child is ever left unattended for safety reasons but we routinely have school shootings at a scale no other country does. We have been willing to radically (and ineffectually) alter the experience of childhood due to fears of external threat, and that's vaguely and superficially similar to the premise of this novel.

What I think Pinsker still misses (and which the pandemic has made glaringly obvious) is the immense momentum of normality and the inability of adults to accept limitations on their own activities for very long. Even with school shootings, kids go to school in person. We now know that parts of society essentially collapse if they don't, and political pressure becomes intolerable. But by using school shootings as the model, I managed to view Pinsker's setup as an unrealistic but still potentially interesting SF extrapolation: a thought experiment that ignores countervailing pressures in order to exaggerate one aspect of society to an extreme.

This is half of Pinsker's setup. The other half, which made less of a splash because it didn't have the same accident of timing, is the company Superwally: essentially "what if Amazon bought Walmart, Google, Facebook, Netflix, Disney, and Live Nation." This is a more typical SF extrapolation that left me with a few grumbles about realism, but that I'll accept as a plot device to talk about commercialization, monopolies, and surveillance capitalism. But here again, the complete absence of formal political structures in this book is not credible. Superwally achieves an all-pervasiveness that in other SF novels results in corporations taking over the role of national governments, but it still lobbies the government in much the same way and with about the same effectiveness as Amazon does in our world. I thought this directly undermined some parts of the end of the book. I simply did not believe that Superwally would be as benign and ineffectual as it is shown here.

Those are a lot of complaints. I found reading the first half of this book to be an utterly miserable experience and only continued reading out of pure stubbornness and completionism. But the combination of the above-mentioned perspective shift and Pinsker's character focus did partly salvage the book for me.

This is not a book about practical political change, even though it makes gestures in that direction. It's primarily a book about people, music, and personal connection, and Pinsker's portrayal of individual and community trust in all its complexity is the one thing the book gets right. Rosemary's character combines a sort of naive arrogance with self-justification in a way that I found very off-putting, but the pivot point of the book is the way in which Luce and her community extend trust to her anyway, as part of staying true to what they believe.

The problem that I think Pinsker was trying to write about is atomization, which leads to social fragmentation into small trust networks with vast gulfs between them. Luce and Rosemary are both characters who are willing to bridge those gulfs in their own ways. Pinsker does an excellent job describing the benefits, the hurt, the misunderstandings, the risk, and the awkward process of building those bridges between communities that fundamentally do not understand each other. There's something deep here about the nature of solidarity, and how you need both people like Luce and people like Rosemary to build strong and effective communities. I've kept thinking about that part.

It's also helpful for a community to have people who are curious about cause and effect, and who know how a bill becomes a law.

It's hard to sum up this book, other than to say that I understand why it won a Nebula but it has deep world-building flaws that have become far more obvious over the past two years. Pinsker tries hard to capture the feeling of live music for both the listener and the performer and partly succeeded even for me, which probably means others will enjoy that part of the book immensely. The portrayal of the difficult dynamics of personal trust was the best part of the book for me, but you may have to build scaffolding and bracing for your world-building disbelief in order to get there.

On the whole, I think A Song for a New Day is worth reading, but maybe not right now. If you do read it now, tell yourself at the start that this is absolutely not about the pandemic and that everything political in this book is a hugely simplified straw-man extrapolation, and hopefully you'll find the experience less frustrating than I found it.

Rating: 6 out of 10

27 March, 2022 03:58AM

Andrew Cater

Imminent release for the media images for Debian 10.12 and 11.3 20220327 0010

OK - so it wasn't quite all done in one day - and since today is TZ change day in the UK, it might actually run into the TZ bump, but I suspect that it will all be done very soon now. Very few glitches - everybody cheerful with what's been done.

I did spot someone in IRC who had been reading the release notes - which is always much appreciated. Lots of security fixes overall in the last couple of months but just a fairly normal time, I think.

Thanks to the team behind all of this: the ftpmasters, the press team and everyone else involved in making Debian more secure. This is the last blog for this one - there will be another point release along in about three months or so.

27 March, 2022 12:10AM by Andrew Cater ([email protected])

Reproducible Builds (diffoscope)

diffoscope 209 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 209. This version includes the following changes:

* Update R test fixture for R 4.2.x series. (Closes: #1008446)
* Update minimum version of Black to prevent test failure on Ubuntu jammy.

You can find out more by visiting the project homepage.

27 March, 2022 12:00AM

March 26, 2022

Andrew Cater

Part way through testing Debian media images 20220326 1555UTC - Found a new useful utility

For various obscure reasons, I have a mirror of Debian in one room and the main laptop and so on that I use in another. The mirror is connected to a fast Internet line and has a 1Gb Ethernet cable into the back directly from the router; the laptop and everything else - not so much: they are wired locally, but depend on a WiFi link across the property. One end is fast - one end runs like a snail.

Steve suggested I use a different tool to make images directly on the mirror machine - jigit. Slightly less polished than jigdo but - if you're on the same machine - blazingly fast. I just used it to make the Blu-Ray sized .iso and was very pleasantly surprised. 

jigit-mkimage -j [jigdo file] -t [template file] -m Debian=[path to mirror of Debian] -o [output filename]
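
A concrete invocation for one of the Bullseye Blu-ray images might look like this (the file names here are only illustrative - use whichever jigdo/template pair you downloaded, and your own mirror path):

jigit-mkimage -j debian-11.3.0-amd64-BD-1.jigdo \
    -t debian-11.3.0-amd64-BD-1.template \
    -m Debian=/srv/mirror/debian \
    -o debian-11.3.0-amd64-BD-1.iso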

Another nice surprise for me - I have a horrible old Lenovo Ideapad. It's one of the Bay Trail Intel machines with a 32 bit UEFI and a 64 bit processor. I rescued it from the junk heap. Reinstalling it with an image today fixed an issue I had with slow boot and has turned it into an adequate machine for web browsing.

All in all, I've done relatively few tests so far - but it's been a good day, as ever.

More later.



26 March, 2022 10:15PM by Andrew Cater ([email protected])

Debian media team - testing and releasing Debian 11.3 - 20220326 1243UTC

And back to relative normality: the usual suspects are in Cambridge. It's a glorious day across the UK and we're spending it indoors with laptops :)

We'll also be releasing a point release of Buster as a wrap up of recent changes.

Debian 10 should move from full support to LTS on August 14th - one year after the release of Debian 11 - and there will be a final point release of Buster somewhere around that point.

All seems to be behaving itself well.

Thanks to all for the hard work that goes into preparing each release and especially the security fixes of which there seem to be loads lately.



26 March, 2022 08:31PM by Andrew Cater ([email protected])

Still testing Debian media images 20220326 2026UTC- almost finished 11.3 - Buster starting soon

 And we're working through quite nicely.


It's been a long, long day so far and we're about 1/2 way through :)


Shout out to Isy, Sledge and RattusRattus in Cambridge and also smcv.

Two releases in a day is a whole bunch :)

26 March, 2022 08:27PM by Andrew Cater ([email protected])

March 25, 2022

Reproducible Builds (diffoscope)

diffoscope 208 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 208. This version includes the following changes:

[ Brent Spillner ]
* Add graceful handling for UNIX sockets and named pipes.
  (Closes: reproducible-builds/diffoscope#292)
* Remove a superfluous log message and reformat comment lines.

[ Chris Lamb ]
* Reformat various files to satisfy current version of Black.

You can find out more by visiting the project homepage.

25 March, 2022 12:00AM

March 24, 2022

Ingo Juergensmann

New Server – NVMe Issues

My current server is somewhat aged. I bought it new in July 2014 with a 6-core Xeon E5-2630L, 32 GB RAM and 4x 3.5″ hot-swappable drives. Gladly I had the opportunity to extend the memory to 128 GB RAM at no additional cost by using memory from my ex-employer. It also has 4x 2 TB WD Red HDDs with 5400 rpm hooked up to the SATA backplane, but unfortunately only two of them are SATA-3 with 6 Gbit/s.

The new server is a used/refurbished Supermicro server with 2x 14-core Xeon E5-2683 and 256 GB RAM and 4x 3.5″ hot-swappable drives. It also came with a Hardware-RAID SAS/SATA 8-port controller with BBU. I also ordered two slim drive kits (MCP-220-81504-0N & MCP-220-81506-0N) to be able to use 2x 3.5″ slots for rotational HDDs as a cheap storage. Right now I added 2x 128 GB Supermicro SATA DOMs, 4x WD Red 4 TB SSDs and a Sonnet Fusion 4×4 Silent and 4x 1 TB Seagate Firecuda 520 NVMe disks.

And here the issue starts:

The NVMes should be capable of 4-5 GB/s, but they are connected to a PCIe 3.0 x16 port via the Sonnet Fusion 4×4, which itself features a PCIe bridge, so bifurcation is not necessary.

When doing some tests with bonnie++ I get around 1 GB/s transfer rates out of a RAID10 setup with all 4 NVMes. In fact, regardless of the RAID level, there are only transfer rates of about 1-1.2 GB/s with bonnie++. (All software RAIDs with mdadm.)
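
For context, a rough sketch of what such a test setup might look like - the device names, chunk size, filesystem and mount point below are examples, not the exact values used:

# create a software RAID10 across the four NVMe disks (example values)
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    --layout=f2 --chunk=512 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
# put a filesystem on top and run bonnie++ against it
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/test
bonnie++ -d /mnt/test -u root -c 4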

Also, when constructing a RAID, each NVMe gives around 300-600 MB/s in sync speed – with one exception: RAID1.

Regardless of how many NVMe disks are in a RAID1 setup, the sync speed is up to 2.5 GB/s for each of the NVMe disks. So the lower transfer rates with bonnie++ or other RAID levels shouldn't be limited by bus speed nor by CPU speed. Alas, atop shows up to 100% CPU usage for all tests. I even tested

In my understanding, RAID10 should perform similarly to RAID1 in terms of syncing, and better in the bonnie++ tests (up to 2x write and 4x read speed compared to a single disk).

For the bonnie++ tests I even made some tests that are available here. You can find the test parameters listed in the hostname column: Baldur is the hostname, then followed by the layout (near-2, far-2, offset-2), chunk size and concurrency of bonnie++. In the end there was no big impact of the chunk size of the RAID.

So, now I’m wondering what the reason for the “slow” performance of those 4x NVMe disks is? Bus speed of the PCIe 3.0 x16 shouldn’t be the cause, because I assume that the software RAID will need to transfer the blocks in RAID1 as well as in RAID10 over the bus. Same goes for the CPU: the amount of CPU work should be roughly the same for RAID1 and for RAID10. RAID10 should even have an advantage because the blocks only need to be synced to 2 disks in a stripe set.

Bonnie++ tests are a different topic for sure. But when testing reading with dd from the md-devices I “only” get around 1-1.5 GB/s as well. Even when using LVM RAID instead of LVM on top of md RAID.
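
A sequential read test of that kind could look like the following; the device name and size are just examples:

# read 32 GB sequentially from the md device, bypassing the page cache
dd if=/dev/md0 of=/dev/null bs=1M count=32768 iflag=direct status=progress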

All NVMe disks are already set to 4k and IO scheduler is set to mq-deadline.
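
Both settings are easy to check per disk via sysfs (nvme0n1 standing in for each of the four disks):

# logical block size - should report 4096 for a 4k-formatted namespace
cat /sys/block/nvme0n1/queue/logical_block_size
# show the active IO scheduler, and set it if needed
cat /sys/block/nvme0n1/queue/scheduler
echo mq-deadline > /sys/block/nvme0n1/queue/scheduler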

Is there anything I could do to improve the performance of the NVMe disks? On the other hand, pure transfer rates are not that important to a server that runs a dozen VMs. Here the improved IOPS performance over rotational disks is the clear gain. But I'm still curious if I could get maybe 2 GB/s out of a RAID10 setup with the NVMe disks. Then again, having two independent RAID1 setups for the MariaDB and PostgreSQL databases might be a better choice than a single RAID10 setup?

24 March, 2022 09:49AM by ij

March 23, 2022

hackergotchi for Matthew Garrett

Matthew Garrett

AMD's Pluton implementation seems to be controllable

I've been digging through the firmware for an AMD laptop with a Ryzen 6000 that incorporates Pluton for the past couple of weeks, and I've got some rough conclusions. Note that these are extremely preliminary and may not be accurate, but I'm going to try to encourage others to look into this in more detail. For those of you at home, I'm using an image from here, specifically version 309. The installer is happy to run under Wine, and if you tell it to "Extract" rather than "Install" it'll leave a file sitting in C:\\DRIVERS\ASUS_GA402RK_309_BIOS_Update_20220322235241 which seems to have an additional 2K of header on it. Strip that and you should have something approximating a flash image.
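
If you want to follow along, the extraction boils down to something like the following - the file names are made up, and the 2K header size is the one mentioned above:

# strip the 2048-byte header from the file the installer extracted
dd if=ASUS_GA402RK_309_extracted.bin of=flash.img bs=2048 skip=1
# scan the resulting image for UTF-16LE strings
strings -e l flash.img | less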

Looking for UTF16 strings in this reveals something interesting:

Pluton (HSP) X86 Firmware Support
Enable/Disable X86 firmware HSP related code path, including AGESA HSP module, SBIOS HSP related drivers.
Auto - Depends on PcdAmdHspCoreEnable build value
NOTE: PSP directory entry 0xB BIT36 have the highest priority.
NOTE: This option will NOT put HSP hardware in disable state, to disable HSP hardware, you need setup PSP directory entry 0xB, BIT36 to 1.
// EntryValue[36] = 0: Enable, HSP core is enabled.
// EntryValue[36] = 1: Disable, HSP core is disabled then PSP will gate the HSP clock, no further PSP to HSP commands. System will boot without HSP.

"HSP" here means "Hardware Security Processor" - a generic term that refers to Pluton in this case. This is a configuration setting that determines whether Pluton is "enabled" or not - my interpretation of this is that it doesn't directly influence Pluton, but disables all mechanisms that would allow the OS to communicate with it. In this scenario, Pluton has its firmware loaded and could conceivably be functional if the OS knew how to speak to it directly, but the firmware will never speak to it itself. I took a quick look at the Windows drivers for Pluton and it looks like they won't do anything unless the firmware wants to expose Pluton, so this should mean that Windows will do nothing.

So what about the reference to "PSP directory entry 0xB BIT36 have the highest priority"? The PSP is the AMD Platform Security Processor - it's an ARM core on the CPU package that boots before the x86. The PSP firmware lives in the same flash image as the x86 firmware, so the PSP looks for a header that points it towards the firmware it should execute. This gives a pointer to a "directory" - a list of different object types and where they're located in flash (there's a description of this for slightly older AMDs here). Type 0xb is treated slightly specially. Where most types contain the address of where the actual object is, type 0xb contains a 64-bit value that's interpreted as enabling or disabling various features - something AMD calls "soft fusing" (Intel have something similar that involves setting bits in the Firmware Interface Table). The PSP looks at the bits that are set here and alters its behaviour. If bit 36 is set, the PSP tells Pluton to turn itself off and will no longer send any commands to it.

So, we have two mechanisms to disable Pluton - the PSP can tell it to turn itself off, or the x86 firmware can simply never speak to it or admit that it exists. Both of these imply that Pluton has started executing before it's shut down, so it's reasonable to wonder whether it can still do stuff. In the image I'm looking at, there's a blob starting at 0x0069b610 that appears to be firmware for Pluton - it contains chunks that appear to be the reference TPM2 implementation, and it broadly decompiles as valid ARM code. It should be viable to figure out whether it can do anything in the face of being "disabled" via either of the above mechanisms.

Unfortunately for me, the system I'm looking at does set bit 36 in the 0xb entry - as a result, Pluton is disabled before x86 code starts running and I can't investigate further in any straightforward way. The implication that the user-controllable mechanism for disabling Pluton merely disables x86 communication with it rather than turning it off entirely is a little concerning, although (assuming Pluton is behaving as a TPM rather than having an enhanced set of capabilities) skipping any firmware communication means the OS has no way to know what happened before it started running even if it has a mechanism to communicate with Pluton without firmware assistance. In that scenario it'd be viable to write a bootloader shim that just faked up the firmware measurements before handing control to the OS.

The bit 36 disabling mechanism seems more solid? Again, it should be possible to analyse the Pluton firmware to determine whether it actually pays attention to a disable command being sent. But even if it chooses to ignore that, if the PSP is in a position to just cut the clock to Pluton, it's not going to be able to do a lot. At that point we're trusting AMD rather than trusting Microsoft, but given that you're also trusting AMD to execute the code you're giving them to execute, it's hard to avoid placing trust in them.

Overall: I'm reasonably confident that systems that ship with Pluton disabled via setting bit 36 in the soft fuses are going to disable it sufficiently hard that the OS can't do anything about it. Systems that give the user an option to enable or disable it are a little less clear in that respect, and it's possible (but not yet demonstrated) that an OS could communicate with Pluton anyway. However, if that's true, and if the firmware never communicates with Pluton itself, the user could install a stub loader in UEFI that mimics the firmware behaviour and leaves the OS thinking everything was good when it absolutely is not.

So, assuming that Pluton in its current form on AMD has no capabilities outside those we know about, the disabling mechanisms are probably good enough. It's tough to make a firm statement on this before I have access to a system that doesn't just disable it immediately, so stay tuned for updates.

23 March, 2022 08:42AM

March 22, 2022

hackergotchi for Ulrike Uhlig

Ulrike Uhlig

Workshops about anger, saying NO, and mapping one’s capacities and desires

For the second year in a row, I proposed some workshops at the feminist hackers assembly at the remote C3. I’m sharing them here because I believe they might be useful to others.

Anger workshop

Based on my readings about the subject and a mediation training, I created a first workshop about dealing with one’s own anger for the feminist hackers assembly in 2020. Many women who attended said they recognized themselves in what I was talking about. I created the exercises in the workshop with the goal of getting participants to share and self-reflect in small groups. I’m not giving out solutions, instead proposals on how to deal with anger come from the participants themselves. (I added the last two content pages to the file after the workshop.) This is why this workshop is always different, depending on the group and what they want to share. The first time I did this workshop was a huge success and so I created an improved version for the assembly of 2021.

Angry womxn* workshop

The act of saying NO

We often say yes, despite wanting to say no, out of a sense of duty, or because we learned that we should always be nice and helpful, and that our own needs are best served last. Many people don’t really know how to say no. Sarah Cooper, a former Google employee herself, makes fun of this in her fabulous book “How to Be Successful Without Hurting Men’s Feelings” (highly recommended read!):

A drawing of a woman who says: How I say yes: I'd love to. How I say no: sure.

That’s why a discussion space about saying NO did not seem out of place at the feminist hackers assembly :) I based my workshop on the original, created by the Institute of War and Peace Reporting and distributed through their holistic security training manual.

I like this workshop because sharing happens in small groups and has an immediately felt effect. Several people reported that the exercises allowed them to identify the exact moment when they had said yes to something despite really having wanted to say no. The exercises from the workshop can easily be done with a friend or trusted person, and they can even be done alone by writing them down, although the effect in writing might be less pronounced.

The act of saying NO workshop

Mapping capacities and desires

Based on discussions with a friend, whose company uses SWOT analysis (strengths—weaknesses—opportunities—threats) to regularly check in with their employees, and to allow employees to check in with themselves, I created a similar tool for myself which I thought would be nice to share with others. It’s a very simple self-reflection that can help map out what works well, what doesn’t work so well and where one wants to go in the future. I find it important to not use this tool narrow-mindedly only regarding work skills and expertise. Instead, I think it’s useful to also include soft skills, hobbies, non-work capacities and whatever else comes to mind in order to create a truer map.

Fun fact: During the assembly, a bunch of participants reported that they found it hard to distinguish between things they don’t like doing and things they don’t know how to do.

Mapping capacities and desires

Known issues

One important feedback point I got is that people felt the time for the exercises in all three workshops could have been longer. In case you want to try out these workshops, you might want to take this into account.

22 March, 2022 11:00PM by Ulrike Uhlig

hackergotchi for Tollef Fog Heen

Tollef Fog Heen

DNSSEC, ssh and VerifyHostKeyDNS

OpenSSH has this very nice setting, VerifyHostKeyDNS, which when enabled, will pull SSH host keys from DNS, and you no longer need to either trust on first use, or copy host keys around out of band.

Naturally, trusting unsecured DNS is a bit scary, so this requires the record to be signed using DNSSEC. This has worked for a long time, but then broke, seemingly out of the blue. Running ssh -vvv gave output similar to

debug1: found 4 insecure fingerprints in DNS
debug3: verify_host_key_dns: checking SSHFP type 1 fptype 2
debug3: verify_host_key_dns: checking SSHFP type 4 fptype 2
debug1: verify_host_key_dns: matched SSHFP type 4 fptype 2
debug3: verify_host_key_dns: checking SSHFP type 4 fptype 1
debug1: verify_host_key_dns: matched SSHFP type 4 fptype 1
debug3: verify_host_key_dns: checking SSHFP type 1 fptype 1
debug1: matching host key fingerprint found in DNS

even though the zone was signed, the resolver was checking the signature and I even checked that the DNS response had the AD bit set.

The fix was to add options trust-ad to /etc/resolv.conf. Without this, glibc will discard the AD bit from any upstream DNS servers. Note that you should only add this if you actually have a trusted DNS resolver. I run unbound on localhost, so if somebody can do a man-in-the-middle attack on that traffic, I have other problems.
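
For reference, a minimal sketch of the pieces involved - the host name is a placeholder, and the nameserver line assumes a local validating resolver such as the unbound instance mentioned above:

# /etc/resolv.conf - only add trust-ad if you actually trust the resolver
nameserver 127.0.0.1
options trust-ad

# ~/.ssh/config (or pass -o VerifyHostKeyDNS=yes on the command line)
Host *
    VerifyHostKeyDNS yes

# print SSHFP records for a host, to be added to its DNSSEC-signed zone
ssh-keygen -r host.example.org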

22 March, 2022 07:30PM

March 21, 2022

hackergotchi for Gunnar Wolf

Gunnar Wolf

Long, long, long live Emacs after 39 years

Reading Planet Debian (see, Sam, we are still having a conversation over there? 😉), I read Anarcat’s 20+ years of Emacs. And… well, should I brag, er, contribute to the discussion? Of course, why not?

Emacs is the first computer program I can name that I ever learnt to use to do something minimally useful. 39 years ago.


From the Space Cadet keyboard that (obviously…) influenced Emacs’ early design

The Emacs editor was born, according to Wikipedia, in 1976, the same year as myself. I am clearly not among its first users. It was already a well-established citizen when I first learnt it; I am fortunate to be the son of a Physics researcher at UNAM. My father used to take me to his institute after he noticed how I was attracted to computers; we would usually spend some hours there between 7 and 11PM on Friday nights. His institute had a computer room where they had very sweet gear: Some 10 Heathkit terminals quite similar to this one:

The terminals were connected (via individual switches) to both a PDP-11 and a Foonly F2 computer. The room also had a beautiful thermal printer, a beautiful Tektronix vectorial graphics output terminal, and some other stuff. The main use for my father was to typeset some books; he had recently (1979) published Integral Transforms in Science and Engineering (that must be my first mention in scientific literature), and I remember he was working on the proceedings of a conference he held in Oaxtepec (the account he used in the system was oax, not his usual kbw, which he lent me). He was also working on Manual de Lenguaje y Tipografía Científica en Castellano, where you can see some examples of TeX; due to a hardware crash, the book has the rare privilege of being a direct copy of the output of the thermal printer: It was not possible to produce a higher resolution copy for several years… But it is fun and interesting to see what we were able to produce with in-house tools back in 1985!

So, what could he teach me so I could use the computers while he worked? TeX, of course. No, no LaTeX (that was published in 1984). LaTeX is a set of macros developed initially by Leslie Lamport, used to make TeX easier; TeX was developed by Donald Knuth, and if I have this information correct, it was Knuth himself who installed and demonstrated TeX in the Foonly computer, during a visit to UNAM.

Now, after 39 years hammering at Emacs buffers… Have I grown extra fingers? Nope. I cannot even write decent elisp code, and can barely read it. I do use org-mode (a lot!) and love it; I have written basically five books, many articles and lots of presentations and minor documents with it. But I don’t read my mail or handle my git from Emacs. I could say, I’m a relatively newbie after almost four decades.

Four decades

When we got a PC in 1986, my father got the people at the Institute to get him memacs (micro-emacs). There was probably a ten-year period when I barely used any emacs, but always recognized it. My fingers have memorized a dozen or so movement commands, and a similar number of file management commands.

And yes, Emacs and TeX are still the main tools I use day to day.

21 March, 2022 05:45PM

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (January and February 2022)

The following contributor got his Debian Developer account in the last two months:

  • Francisco Vilmar Cardoso Ruviaro (vilmar)

The following contributors were added as Debian Maintainers in the last two months:

  • Lu YaNing
  • Mathias Gibbens
  • Markus Blatt
  • Peter Blackman
  • David da Silva Polverari

Congratulations!

21 March, 2022 04:00PM by Jean-Pierre Giraud

Antoine Beaupré

20+ years of Emacs

I enjoyed reading this article named "22 years of Emacs" recently. It's kind of fascinating, because I realised I don't exactly know for how long I've been using Emacs. It's lost in the mists of history. If I had to venture a guess, it was back in the "early days", which in that history is mapped around 1996-1997, when I installed my very own "PC" with FreeBSD 2.2.x and painstakingly managed to make XFree86 run on it.

Modelines. Those were the days... But I digress.

I am old...

The only formal timestamp I can put is that my rebuilt .emacs.d git repository has its first commit in 2002. Some people reading this may be born after that time. This means I'm at least significantly older than those people, to put things gently.

Clever history nerds will notice that the commit is obviously fake: Git itself did not exist until 2005. But ah-ah! I was already managing my home directory with CVS in 2001! I converted that repository into git some time in 2009, and therefore you can see all my embarrassing history, including changes from two decades ago.

That includes my first known .emacs file which is just bizarre to read right now: 200 lines, most of which are "customize" stuff. Compare with the current, 1000+ lines init.el which is also still kind of a mess, but actually shares very little with the original, thankfully.

All this to say that in those years (decades, really) of using Emacs, I have had a very different experience than credmp, who wrote packages, sent patches, and got name-dropped by other developers. My experience is just struggling to keep up with everything, in general, but also in Emacs.

... and Emacs is too fast for me

It might sound odd to say, but Emacs is actually moving pretty fast right now. A lot of new packages are coming out, and I can hardly keep up.

  • I am not using org mode, but did use it for time (and task) tracking for a while (and for invoicing too, funky stuff).

  • I am not using mu4e, but maybe I'm using something better (notmuch) and yes, I am reading my mail in Emacs, which I find questionable from a security perspective. (Sandboxing untrusted inputs? Anyone?)

  • I am using magit, but only when coding, so I do end up using git on the command line quite a bit anyways.

  • I do have which-key enabled, and reading about it reminded me I wanted to turn it off because it's kind of noisy and I never remember I can actually use it for anything. Or, in other words, I don't even remember the prefix key or, when I do, there's too many possible commands after for it to be useful.

  • I haven't setup lsp-mode, let alone Eglot, which I just learned about reading the article. I thought I would be super shiny and cool by setting up LSP instead of the (dying?) elpy package, but I never got around to it. And now it seems lsp-mode is uncool and I should really do eglot instead, and that doesn't help.

    UPDATE: I finally got tired and switched to lsp-mode. The main reason for choosing it over eglot is that it's in Debian (and eglot is not). (Apparently, eglot has more chance of being upstreamed, "when it's done", but I guess I'll cross that bridge when I get there.) lsp-mode feels slower than elpy but I haven't done any of the performance tuning and this will improve even more with native compilation (see below).

    I already had lsp-mode partially setup in Emacs so I only had to do this small tweak to switch and change the prefix key (because s-l or mod is used by my window manager). I also had to pin LSP packages to bookworm here and here.

  • I am not using projectile. It's on some of my numerous todo lists somewhere, surely. I suspect it's important to getting my projects organised, but I still live halfway between the terminal and Emacs, so it's not quite clear what I would gain.

  • I had to ask what native compilation was or why it mattered the first time I heard of it. And when I saw it again in the article, I had to click through to remember.

Overall, I feel there's a lot of cool stuff in Emacs out there. But I can't quite tell what's the best of which. I can barely remember which completion mechanism I use (company, maybe?) or what makes my mini-buffer completion work the way it does. Everything is lost in piles of customize and .emacs hacks that is constantly changing. Because a lot is in third-party packages, there are often many different options and it's hard to tell which one we should be using.

... or at least fast enough

And really, Emacs feels fast enough for me. When I started, I was running Emacs on a Pentium I, 166MHz, with 8MB of RAM (eventually upgraded to 32MB, whoohoo!). Back in those days, the joke was that EMACS was an acronym for "Eight Megs, Always Scratching" and now that I write this down, I realize it's actually "Eight Megs, and Constantly Swapping", which doesn't sound as nice because you could actually hear Emacs running on those old hard drives back in the days. It would make a "scratching" noise as the hard drive heads would scramble maniacally to swap pages in and out of swap to make room for the memory-hungry editor.

Now Emacs is pretty far down the list of processes in top(1) regardless of how you look at it. It's using 97MB of resident memory and close to 400MB of virtual memory, which does sound like an awful lot compared to my first computer... But it's absolutely nothing compared to things like Signal-desktop, which somehow manages to map a whopping 20.5GB virtual memory. (That's twenty Gigabytes of memory for old timers or time travelers from the past, and yes, that is now a thing.) I'm not exactly sure how much resident memory it uses (because it forks multiple processes), probably somewhere around 300MB of resident memory. Firefox also uses gigabytes of that good stuff, also spread around the multiple processes, per tab.

Emacs "feels" super fast. Typing latency is noticeably better in Emacs than my web browser, and even beats most terminal emulators. It gets a little worse when font-locking is enabled, unfortunately, but it's still feels much better.

And all my old stuff still works in Emacs, amazingly. (Good luck with your old Netscape or ICQ configuration from 2000.)

I feel like an oldie, using Emacs, but I'm really happy to see younger people using it, and learning it, and especially improving it. If anything, one direction I would like to see it go is closer to what web browsers are doing (yes, I know how bad that sounds) and get better isolation between tasks.

An attack on my email client shouldn't be able to edit my Puppet code, and/or all files on my system, for example. And I know, fundamentally, that's a really hard challenge in Emacs. But if you're going to treat your editor as your operating system (or vice versa, I lost track of where we are now that there's an Emacs Window Manager, which I do not use), at least we should get that kind of security.

Otherwise I'll have to find a new mail client, and that's really something I try to limit to once a decade or so.

21 March, 2022 03:08AM

March 20, 2022

Joerg Jaspert

Another shell script moved to rust

Shell? Rust!

Not the first shell script I took and made a rust version of, but probably my largest yet. This time I took my little tm (tmux helper) tool which is (well, was) a bit more than 600 lines of shell, and converted it to Rust.

I got most of the functionality done now, only one major part is missing.

What’s tm?

tm started as a tiny shell script to make handling tmux easier. The first commit in git was in July 2013, but I started writing and using it in 2011. It started out as a kind-of wrapper around ssh, opening tmux windows with an ssh session on some other hosts. It quickly gained support to open multiple ssh sessions in one window, telling tmux to synchronize input (send input to all targets at once), which is great when you have a set of machines that ought to get the same commands.

tm vs clusterssh / mussh

In spirit it is similar to clusterssh or mussh, allowing you to run the same command on many hosts at the same time. clusterssh sets out to open new terminals (xterm) per host and gives you an input line that it sends everywhere. mussh appears to take your command and then send it to all the hosts. Both have disadvantages in my opinion: clusterssh opens lots of xterm windows, and you can not easily switch between multiple sessions; mussh just seems to send things over ssh and be done.

tm instead “just” creates a tmux session, telling it to ssh to the targets, possibly setting the tmux option to send input to all panes. And leaves all the rest of the handling to tmux. So you can

  • detach a session and reattach later easily,
  • use tmux great builtin support for copy/paste,
  • see all output, modify things even for one machine only,
  • “zoom” in to one machine that needs just ONE bit different (cssh can do this too),
  • let colleagues also connect to your tmux session, when needed,
  • easily add more machines to the mix, if needed,
  • and all the other extra features tmux brings.

More tm

tm also supports just attaching to existing sessions as well as killing sessions, mostly for laziness (less to type than using tmux directly).

At some point tm gained support for setting up sessions according to some “session file”. It knows two formats now; one is simple and mostly a list of hostnames to open synchronized sessions for. This may contain LIST commands, which let tm execute that command, the expected output being a list of hostnames (or more LIST commands) for the session. That, combined with the replacement part, lets us have one config file that opens a set of VMs our Ganeti cluster runs, selected by tag. It is simply a LIST command asking for VMs tagged with the replacement arg and up. Very handy. Or also “all VMs on host X”.

The second format is basically “free form tmux commands”. Mostly a collection of “command-line tmux calls, just drop the tmux in front”.

Both of them support a crude variable replacement.

Conversion to Rust

Some while ago I started playing with Rust and it somehow ‘clicked’; I do like it. My local git tells me that I tried starting off with Go in 2017, but that apparently did not work out. Funny, since everything I read says that Rust ought to be harder to learn.

So by now I have most of the functionality implemented in the Rust version, even if I am sure that the code isn’t a good Rust example. I’m learning, after all, and already have adjusted big parts of it, multiple times, whenever I learn (and understand) something more - and am also sure that this will happen again…

Compatibility with old tm

It turns out that my goal of staying compatible with the behaviour of the old shell script does make some things rather complicated. For example, the LIST commands in session config files: in shell I just execute those commands, and shell deals with variable/parameter expansion; I just set IFS to newline only and read in what I get back. Simple. Because shell is doing a lot of things for me.

Now, in Rust, it is a different thing at all:

  • Properly splitting the line into shell words, taking care of quoting (you can’t simply split on whitespace); there is shlex.
  • Expanding specials like ~ and $HOME (there is home_dir).
  • Supporting environment variables in general: tm has some that adjust its behaviour, which shell can simply use globally. I used lazy_static for a similar effect - they aren’t going to change at runtime ever, anyway.

Properly supporting the commandline arguments also turned out to be a bit more work. Rust apparently has multiple crates supporting this; I settled on clap, but as tm supports “getopts”-style as well as free-form arguments (subcommands in clap), it takes a bit to get that interpreted right.

Speed

Speed is most of the time entirely unimportant for a tool like tm (opening a tmux with one to a handful of ssh connections to some places is not exactly hard or time consuming), but there are situations where one can notice that it’s calling out to tmux over and over again, for every single bit to do, and that just takes time: configurations that open sessions to 20 or more hosts at the same time especially lag in setup time. (My largest setup goes to 443 panes in one window.) The compiled Rust version is so much faster there, it’s just great. Nice side effect, that is. And yes, in the end it is also “only” driving tmux; still, it takes less than half the time to do so.

Code, Fun parts

As this is still me learning to write Rust, I am sure the code has lots to improve. Some of which I will sure find on my own, but if you have time, I love PRs (or just mails with hints).

Github

Also the first time I used Github Actions to see how it goes. Letting it build, test, run clippy and also run a code coverage tool (Yay, more than 50% covered…) on it. Unsure my tests are good, I am not used to writing tests for code, but hey, coverage!

Up next

I do have to implement the last missing feature, which is reading the other config file format. A little scared, as that means somehow translating those lines into correct calls within the tmux_interface I am using, not sure that is easy. I could be bad and just shell out to tmux on it all the time, but somehow I don’t like the thought of doing that. Maybe (ab)using the control mode, but then, why would I use tmux_interface, so trying to handle it with that first.

Afterwards I want to gain a new command, to save existing sessions and be able to recreate them easily. Shouldn’t be too hard, tmux has a way to get at that info, somewhere.

20 March, 2022 12:23PM

March 18, 2022

hackergotchi for Bits from Debian

Bits from Debian

DebConf22 registration and call for proposals are open!

DebConf22 banner open registration

Registration for DebConf22 is now open. The 23rd edition of DebConf will take place from July 17th to 24th, 2022 at the Innovation and Training Park (ITP) in Prizren, Kosovo, and will be preceded by DebCamp, from July 10th to 16th.

Along with the registration, the DebConf content team announced the call for proposals. The deadline to submit a proposal to be considered in the main schedule is April 15th, 2022 23:59:59 UTC (Friday).

DebConf is an event open to everyone, no matter how you identify yourself or how others perceive you. We want to increase visibility of our diversity and work towards inclusion in the Debian Project, drawing our attendees from people just starting their Debian journey to seasoned Debian Developers, as well as active contributors in different areas like packaging, translation, documentation, artwork, testing, specialized derivatives, user support and many others. In other words, all are welcome.

To register for the event, log into the registration system and fill out the form. You will be able to edit and update your registration at any point. However, in order to help the organizers have a better estimate of how many people will attend the event, we would appreciate it if you could access the system and confirm (or cancel) your participation in the conference as soon as you know if you will be able to come. The last day to confirm or cancel is July 1st, 2022 23:59:59 UTC. If you don't confirm or you register after this date, you can still come to DebConf22 but we cannot guarantee availability of accommodation, food and swag (t-shirt, bag, and so on).

For more information about registration, please visit registration information.

Submitting an event

You can now submit an event proposal. Events are not limited to traditional presentations or informal sessions (BoFs): we welcome submissions of tutorials, performances, art installations, debates, or any other format of event that you think would be of interest to the Debian community.

Regular sessions may either be 20 or 45 minutes long (including time for questions), other kinds of sessions (workshops, demos, lightning talks, and so on) could have different durations. Please choose the most suitable duration for your event and explain any special requests.

In order to submit a talk, you will need to create an account on the website. We suggest that Debian Salsa account holders (including DDs and DMs) use their Salsa login when creating an account. However, this isn't required, as you can sign up with an e-mail address and password.

Bursary for travel, accommodation and meals

In an effort to widen the diversity of DebConf attendees, the Debian Project allocates a part of the financial resources obtained through sponsorships to pay for bursaries (travel, accommodation, and/or meals) for participants who request this support when they register.

As resources are limited, we will examine the requests and decide who will receive the bursaries. They will be directed:

  • To active Debian contributors.
  • To promote diversity: newcomers to Debian and/or DebConf, especially from under-represented communities.

Giving a talk, organizing an event or helping during DebConf22 is taken into account when deciding upon your bursary, so please mention these in your bursary application.

For more information about bursaries, please visit applying for a bursary to DebConf.

Attention: the registration for DebConf22 will be open until the conference starts, but the deadline to apply for bursaries using the registration form is May 1st, 2022 23:59:59 UTC. This deadline is necessary so that the organizers have time to analyze the requests, and so that successful applicants can prepare for the conference.

DebConf would not be possible without the generous support of all our sponsors, especially our Platinum Sponsors Lenovo and Infomaniak.

DebConf22 is accepting sponsors; if you are interested, or think you know of others who would be willing to help, please get in touch!

18 March, 2022 08:10PM by The Debian Publicity Team

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Debian Clojure Team Sprint --- May 13-14th 2022

I'm happy to announce the Debian Clojure Team will hold a remote sprint from May 13th to May 14th 2022.

The goal of this sprint is to improve various aspects of the Clojure ecosystem in Debian. As such, everyone is welcome to participate!

Here are a few items we are planning to work on, in no particular order:

  • Update leiningen to the latest upstream version, to let some libraries in experimental migrate to unstable.
  • Work towards replacing our custom Clojure script with upstream's and package clj | clojure-cli.
  • Update clojure to the latest upstream version.
  • Work on debugging autopkgtest failures on a bunch of puppetlabs-* libraries.
  • Work on lintian tags for the Clojure Team.

You can register for the sprint on the Debian Wiki. We are planning to ask the DPL for a food budget. If you plan on joining and want your food to be sponsored, please register before April 2nd.

18 March, 2022 06:45PM by Louis-Philippe Véronneau

Enrico Zini

Context-dependent logger in Python

This is a common logging pattern in Python, to have loggers related to module names:

import logging

log = logging.getLogger(__name__)


class Bill:
    def load_bill(self, filename: str):
        log.info("%s: loading file", filename)

I often however find myself wanting to have loggers related to something context-dependent, like the kind of file that is being processed. For example, I'd like to log bill loading when it is done by the expenses module, and not when it is done by the printing module.

I came up with a little hack that keeps the same API as before, and allows propagating a context-dependent logger to the code being called:

# Call this file log.py
from __future__ import annotations
import contextlib
import contextvars
import logging

_log: contextvars.ContextVar[logging.Logger] = contextvars.ContextVar('log', default=logging.getLogger())


@contextlib.contextmanager
def logger(name: str):
    """
    Set a default logger for the duration of this context manager
    """
    old = _log.set(logging.getLogger(name))
    try:
        yield
    finally:
        _log.reset(old)


def debug(*args, **kw):
    _log.get().debug(*args, **kw)


def info(*args, **kw):
    _log.get().info(*args, **kw)


def warning(*args, **kw):
    _log.get().warning(*args, **kw)


def error(*args, **kw):
    _log.get().error(*args, **kw)

And now I can do this:

from . import log

# …
    with log.logger("expenses"):
        bill = load_bill(filename)


# This code did not change!
class Bill:
    def load_bill(self, filename: str):
        log.info("%s: loading file", filename)

18 March, 2022 10:53AM

March 17, 2022

hackergotchi for Gunnar Wolf

Gunnar Wolf

Speaking about the OpenPGP WoT on LibrePlanet this Saturday

So, LibrePlanet, the FSF’s conference, is coming!

I much enjoyed attending this conference in person in March 2018. This year I submitted a talk again, and it got accepted — of course, given the conference is still 100% online, I doubt I will be able to go 100% conference-mode (I hope to catch a couple of other talks, but… well, we are all eager to go back to how things were before 2020!)

Anyway, what is my talk about?

My talk is titled Current challenges for the OpenPGP keyserver network. Is there a way forward?. The abstract I submitted follows:

Many free projects use OpenPGP encryption or signatures for various important tasks, like defining membership, authenticating participation, asserting identity over a vote, etc. The Web-of-Trust upon which its operation is based is a model many of us hold dear, allowing for a decentralized way to assign trust to the identity of a given person.

But both the Web-of-Trust model and the software that serves as a basis for the above mentioned uses are at risk due to attacks on the key distribution protocol (not on the software itself!)

With this talk, I will try to bring awareness to this situation, to some possible mitigations, and present some proposals to allow for the decentralized model to continue to thrive towards the future.

I am on the third semester of my PhD, trying to somehow keep a decentralized infrastructure for the OpenPGP Web of Trust viable and usable for the future. While this is still in the early stages of my PhD work (and I still don’t have a solution to present), I will talk about what the main problems are… and will sketch out the path I intend to develop.

What is the relevance? Mainly, I think, that many free software projects use the OpenPGP Web of Trust for their identity definitions… Are we anachronistic? Are we using tools unfit for this century? I don’t think so. I think we are in time to fix the main sore spots for this great example of a decentralized infrastructure.

When is my talk scheduled?

This Saturday, 2022.03.19, at:

  • GMT / UTC time: 19:25–20:10
  • Conference schedule time (EDT/GMT-4): 15:25–16:10
  • Mexico City time (GMT-6): 13:25–14:10

How to watch it?

The streams are open online. I will be talking in the Saturn room; feel free to just appear there and watch! The FSF asks people to register for the conference beforehand (https://my.fsf.org/civicrm/event/info?reset=1&id=99), in order to be able to participate actively (i.e. ask questions and the like). Of course, you might be interested in other talks – take a look at the schedule!

LibrePlanet keeps a video archive of their past conferences, and this talk will be linked from there. Of course, I will link to the recording once it is online.

17 March, 2022 04:55PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rcpp 1.0.8.3: Hotfixing Hotfix

rcpp logo

An even newer hot-fix release 1.0.8.3 of Rcpp follows the 1.0.8.2 release of a few days ago and got to CRAN this morning. A Debian upload will follow shortly, and Windows and macOS binaries will appear at CRAN in the next few days. This release again breaks with the six-months cycle started with release 1.0.5 in July 2020. When we addressed the CRAN request in 1.0.8.2 we forgot to dial testing down to their desired level (as ‘three-part’ release numbers do automagically for us, whereas ‘four-part’ do not). This is now taken care of, along with the hot-fix that was in 1.0.8.2 already.

Rcpp has become the most popular way of enhancing R with C or C++ code. Right now, around 2522 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 239 in BioConductor.

The full list of details for these two interim releases (and hence all changes accumulated since the last regular release, 1.0.8 in January) follows.

Changes in Rcpp hotfix release version 1.0.8.3 (2022-03-14)

  • Changes in Rcpp API:

    • Accomodate C++98 compilation by adjusting attributes.cpp (Dirk in #1193 fixing #1192)

    • Accomodate newest compilers replacing deprecated std::unary_function and std::binary_function with std::function (Dirk in #1202 fixing #1201 and CRAN request)

  • Changes in Rcpp Documentation:

    • Adjust one overflowing column (Bill Denney in #1196 fixing #1195)
  • Changes in Rcpp Deployment:

    • Accomodate four digit version numbers in unit test (Dirk)

    • Do not run complete test suite to limit test time to CRAN preference (Dirk)

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bugs reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under rcpp tag at StackOverflow which also allows searching among the (currently) 2843 previous questions.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

17 March, 2022 12:13PM

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, February 2022

A Debian LTS logo

Every month we review the work funded by Freexian’s Debian LTS offering. Please find the report for February below.

Debian project funding

  • In February Raphaël and the LTS team worked on a survey of Debian developers meant to solicit ideas for improvements in the Debian project at large. You can see the results of the initial discussion here, in the list of ideas, of which there are already over 30.
  • The full survey is due to be emailed to Debian Developers shortly.
  • In February € 2250 was put aside to fund Debian projects.

Debian LTS contributors

In February, 12 contributors were paid to work on Debian LTS; their reports are available below. If you're interested in participating in the LTS or ELTS teams, we welcome participation from the Debian community. Simply get in touch with Jeremiah or Raphaël if you are interested in participating.

Evolution of the situation

In February we released 24 DLAs.

The security tracker currently lists 61 packages with a known CVE and the dla-needed.txt file has 26 packages needing an update.

You can find out more about the Debian LTS project via the following video:

Thanks to our sponsors

Sponsors that joined recently are in bold.

17 March, 2022 11:32AM by Raphaël Hertzog

March 16, 2022

Michael Ablassmeier

python logging messages and exit codes

Everyone knows that an application exit code should change based on the success, error or maybe warnings that happened during execution.

Lately I came across some Python code that was structured the following way:

#!/usr/bin/python3
import sys
import logging

def warnme():
    # something bad happens
    logging.warning("warning")
    sys.exit(2)

def evil():
    # something evil happens
    logging.error("error")
    sys.exit(1)

def main():
    logging.basicConfig(
        level=logging.DEBUG,
    )   

    [..]

The situation was a little bit more complicated: some functions in other modules also exited the application, so sys.exit() calls were distributed across lots of modules and files.

Exiting the application in some random function of another module is something I don't consider nice coding style, because it makes it hard to track down errors.

I expect:

  • exit code 0 on success
  • exit code 1 on errors
  • exit code 2 on warnings
  • warnings or errors shall be logged in the function where they actually happen: with a suitable format option, the logging module will show the function name, which is nice for debugging.
  • one function that exits accordingly, preferably main()

How to do better?

As the application is using the logging module, we have a single point to collect warnings and errors that might happen across all modules. This works by passing a custom handler to the logging module which tracks emitted messages.

Here's a small example:

#!/usr/bin/python3
import sys
import logging

class logCount(logging.Handler):
    class LogType:
        def __init__(self):
            self.warnings = 0
            self.errors = 0

    def __init__(self):
        super().__init__()
        self.count = self.LogType()

    def emit(self, record):
        if record.levelname == "WARNING":
            self.count.warnings += 1
        if record.levelname == "ERROR":
            self.count.errors += 1
            
def infome():
    logging.info("hello world")

def warnme():
    logging.warning("help, an warning")

def evil():
    logging.error("yikes")

def main():
    EXIT_WARNING = 2
    EXIT_ERROR = 1
    counter = logCount()
    logging.basicConfig(
        level=logging.DEBUG,
        handlers=[counter, logging.StreamHandler(sys.stderr)],
    )
    infome()
    warnme()
    evil()
    if counter.count.errors != 0:
        raise SystemExit(EXIT_ERROR)
    if counter.count.warnings != 0:
        raise SystemExit(EXIT_WARNING)

if __name__ == "__main__":
    main()
python3 count.py ; echo $?
INFO:root:hello world
WARNING:root:help, an warning
ERROR:root:yikes
1

This also makes it easy to define policies like the following (see the sketch below):

  • hey, got 2 warnings, change exit code to error?
  • got 3 warnings, but no --strict passed, ignore those, exit with success!
  • etc.
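
For example, a hypothetical --strict option could be wired into the exit logic roughly like this (a sketch reusing the logCount handler from the example above; the flag name and the exact policy are made up):

#!/usr/bin/python3
# Sketch: decide the exit code from the collected counters, with a made-up
# --strict flag that controls whether warnings matter at all.
import argparse
import sys
import logging

def main():
    EXIT_WARNING = 2
    EXIT_ERROR = 1
    parser = argparse.ArgumentParser()
    parser.add_argument("--strict", action="store_true",
                        help="let warnings change the exit code")
    args = parser.parse_args()

    counter = logCount()  # the handler class from the example above
    logging.basicConfig(
        level=logging.DEBUG,
        handlers=[counter, logging.StreamHandler(sys.stderr)],
    )

    # ... application code emitting log messages ...

    if counter.count.errors != 0:
        raise SystemExit(EXIT_ERROR)
    if counter.count.warnings != 0 and args.strict:
        # warnings only affect the exit code when --strict was passed
        raise SystemExit(EXIT_WARNING)
    # without --strict, warnings are ignored and we exit 0

if __name__ == "__main__":
    main()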

16 March, 2022 12:00AM

March 15, 2022

Kunal Mehta

How to mirror the Russian Wikipedia with Debian and Kiwix

It has been reported that the Russian government has threatened to block access to Wikipedia for documenting narratives that do not agree with the official position of the Russian government.

One of the anti-censorship strategies I've been working on is Kiwix, an offline Wikipedia reader (and plenty of other content too). Kiwix is free and open source software developed by a great community of people that I really enjoy working with.

With threats of censorship, traffic to Kiwix has increased fifty-fold, with users from Russia accounting for 40% of new downloads!

You can download copies of every language of Wikipedia for offline reading and distribution, as well as hosting your own read-only mirror, which I'm going to explain today.

Disclaimer: depending on where you live, it may be illegal or get you in trouble with the authorities to rehost Wikipedia content; please be aware of your digital and physical safety before proceeding.

With that out of the way, let's get started. You'll need a Debian (or Ubuntu) server with at least 30GB of free disk space. You'll also want to have a webserver like Apache or nginx installed (I'll share the Apache config here).

First, we need to download the latest copy of the Russian Wikipedia.

$ wget 'https://download.kiwix.org/zim/wikipedia/wikipedia_ru_all_maxi_2022-03.zim'

If the download is interrupted or fails, you can use wget -c $url to resume it.

Next let's install kiwix-serve and try it out. If you're using Ubuntu, I strongly recommend enabling our Kiwix PPA first.

$ sudo apt update
$ sudo apt install kiwix-tools
$ kiwix-serve -p 3004 wikipedia_ru_all_maxi_2022-03.zim

At this point you should be able to visit http://yourserver.com:3004/ and see the Russian Wikipedia. Awesome! You can use any available port, I just picked 3004.

Now let's use systemd to daemonize it so it runs in the background. Create /etc/systemd/system/kiwix-ru-wp.service with the following:

[Unit]
Description=Kiwix Russian Wikipedia

[Service]
Type=simple
User=www-data
ExecStart=/usr/bin/kiwix-serve -p 3004 /path/to/wikipedia_ru_all_maxi_2022-03.zim
Restart=always

[Install]
WantedBy=multi-user.target

Now let's start it and enable it at boot:

$ sudo systemctl start kiwix-ru-wp
$ sudo systemctl enable kiwix-ru-wp

Since we want to expose this on the public internet, we should put it behind a more established webserver and configure HTTPS.

Here's the Apache httpd configuration I used:

<VirtualHost *:80>
        ServerName ru-wp.yourserver.com

        ServerAdmin webmaster@localhost
        DocumentRoot /var/www/html

        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined

        <Proxy *>
                Require all granted
        </Proxy>

        ProxyPass / http://127.0.0.1:3004/
        ProxyPassReverse / http://127.0.0.1:3004/
</VirtualHost>

Put that in /etc/apache2/sites-available/kiwix-ru-wp.conf and run:

$ sudo a2ensite kiwix-ru-wp
$ sudo systemctl reload apache2

Finally, I used certbot to enable HTTPS on that subdomain and redirect all HTTP traffic over to HTTPS. This is an interactive process that is well documented so I'm not going to go into it in detail.
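
With the Apache plugin for certbot installed, the whole thing is roughly two commands (using the subdomain from the virtual host above):

$ sudo apt install certbot python3-certbot-apache
$ sudo certbot --apache -d ru-wp.yourserver.com

certbot will then obtain the certificate and can optionally set up the HTTP-to-HTTPS redirect for you.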

You can see my mirror of the Russian Wikipedia, following these instructions, at https://ru-wp.legoktm.com/. Anyone is welcome to use it or distribute the link, though I am not committing to running it long-term.

This is certainly not a perfect anti-censorship solution: the copy of Wikipedia that Kiwix provides became out of date the moment it was created, and the setup described here will require you to manually update the service when the new copy is available next month.

Finally, if you have some extra bandwidth, you can also help seed this as a torrent.

15 March, 2022 01:02AM by legoktm

March 14, 2022

Sam Hartman

Nostalgia for Blogging

Recently, I migrated this blog from Livejournal over to Dreamwidth. As part of the process, I was looking back at my blog entries from around 2007 or so.

I miss those days. I miss the days when blogging was more of an interactive community. Comments got exchanged, and at least among my circle of friends people wrote thoughtful, well-considered entries. There was introspection into what was going on in people's lives, as well as technical stuff, as well as just keeping up with people who were important in my life.

Today, we have some of the same thought going into things like Planet Debian, but it's a lot less interactive. Then we have things like Facebook, Twitter, and the more free alternatives. There's interactivity, but it feels like everything has to fit into the length of a single tweet. So it is a lot faster paced and a lot less considered. I find I don't belong to that fast-paced social media as much as I did to the blogs of old.




14 March, 2022 12:51AM

March 12, 2022

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

kitty rxvt-like config

kitty is a terminal with some nice features (I particularly like the focus on low latency, and the best-in-class support for emoji) but with a rather unusual default configuration. Since everybody's opinions are bad, I will offer my own configuration so far to get a bit closer to classic terminals' defaults:

# If you're running GNOME with Wayland, you may or may not want to uncomment
# this to get your normal window decorations back (this may or may not be
# better in the future; see https://github.com/kovidgoyal/kitty/issues/3284)
# linux_display_server x11

# This is pretty much the only non-xterm choice I make; fixed just isn't
# suitable for high-DPI screens. I also install Noto Color Emoji, which will
# be used for fallback for flags etc.
font_family      Noto Mono
font_size        12
italic_font      auto
bold_italic_font auto

# kitty doesn't support bold-as-bright (which is on purpose, but I'm not
# really a fan); see https://github.com/kovidgoyal/kitty/issues/197.
# In any case, that means we'll need a bold font. 
bold_font        Noto Sans Mono Bold

# Typical terminals don't blink (and it causes wakeups).
cursor_blink_interval 0

# No bell. Be silent.
enable_audio_bell no

# Standard scrolling with pageup/pagedown, and a reasonable scrolling speed
# (this also holds for mouse wheel).
map shift+page_up scroll_page_up
map shift+page_down scroll_page_down
touch_scroll_multiplier 5.0

# I don't want kitty to boot up maximized just because there's some other
# maximized terminal somewhere. 80x24 for life, yo.
remember_window_size no
initial_window_width 80c
initial_window_height 24c

# It's really jarring having spare room at the _top_ if the terminal isn't
# a perfect multiple of the font cell size.
placement_strategy top-left

# Now for the default xterm/rxvt-like colors.
foreground #ffffff
background #000000
# black
color0 #000000
color8 #404040
# red
color1 #CD0000
color9 #FF0000
# green
color2 #00CD00
color10 #00FF00
# yellow
color3 #CDCD00
color11 #FFFF00
# blue
color4 #0000CD
color12 #0000FF
# magenta
color5 #CD00CD
color13 #FF00FF
# cyan
color6 #00CDCD
color14 #00FFFF
# white
color7 #FFFFFF
color15 #FFFFFF

I'm still not 100% sold on default URL behavior (somehow, it doesn't seem to always react on my left-click), and I'd really like those window decorations to be fixed, but apart from that, this is pretty good stuff so far.

12 March, 2022 06:03PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppGSL 0.3.11: Small Maintenance

A new release 0.3.11 of RcppGSL is now on CRAN. The RcppGSL package provides an interface from R to the GNU GSL by relying on the Rcpp package.

This release updates src/Makefile.ucrt to use the RTools42 libraries. Details follow from the NEWS file.

Changes in version 0.3.11 (2022-03-12)

  • The UCRT Makefile was updated

  • Minor edits to README.md were made

Courtesy of CRANberries, a summary of changes in the most recent release is also available.

More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

12 March, 2022 03:30PM

hackergotchi for Thomas Koch

Thomas Koch

lsp-java coming to debian

Posted on March 12, 2022
Tags: debian

The Language Server Protocol (LSP) standardizes communication between editors and so-called language servers for different programming languages. This reduces the old problem that every editor had to implement many different plugins for all different programming languages. With LSP, an editor just needs to talk LSP and can immediately provide typical IDE features.

I already packaged the Emacs packages lsp-mode and lsp-haskell for Debian bullseye. Now lsp-java is waiting in the NEW queue.

I’m always worried about downloading and executing binaries from random places on the internet. It should be a matter of hygiene to only run binaries from official Debian repositories. Unfortunately this is not feasible when programming, and many people don’t see a problem with piping curl into sh several times to set up their programming environment.

I prefer to do such stuff only in virtual machines. With Emacs and LSP I can finally have a lightweight textmode programming environment even for Java.

Unfortunately the lsp-java mode does not yet work over tramp. Once this is solved, I could run emacs on my host and only isolate the code and language server inside the VM.

The next step would be to also keep the code on the host and mount it with Virtio FS in the VM. But so far the necessary daemon is not yet in Debian (RFP: #1007152).

In detail, I uploaded these packages:

12 March, 2022 02:55PM

Waiting for a STATE folder in the XDG basedir spec

Posted on February 18, 2014

The XDG Base Directory specification proposes default homedir folders for the categories DATA (~/.local/share), CONFIG (~/.config) and CACHE (~/.cache). One category, however, is missing: STATE. This category has been requested several times but nothing has happened.

Examples for state data are:

  • history files of shells, repls, anything that uses libreadline
  • logfiles
  • state of application windows on exit
  • recently opened files
  • last time application was run
  • emacs: bookmarks, ido last directories, backups, auto-save files, auto-save-list

The missing STATE category is especially annoying if you’re managing your dotfiles with a VCS (e.g. via VCSH) and you care to keep your homedir tidy.

If you’re as annoyed as me about the missing STATE category, please voice your opinion on the XDG mailing list.

Of course it’s a very long way until applications really use such a STATE directory. But without a common standard it will never happen.

12 March, 2022 02:55PM

shared infrastructure coop

Posted on February 5, 2014

I’m working in a very small web agency with 4 employees, one of them part time and our boss who doesn’t do programming. It shouldn’t come as a surprise, that our development infrastructure is not perfect. We have many ideas and dreams how we could improve it, but not the time. Now we have two obvious choices: Either we just do nothing or we buy services from specialized vendors like github, atlassian, travis-ci, heroku, google and others.

Doing nothing does not work for me. But just buying all this stuff doesn’t please me either. We’d depend on proprietary software, lock-in effects or one-size-fits-all offerings. Another option would be to find other small web shops like us, form a cooperative and share essential services. There are thousands of web shops in the same situation like us and we all need the same things:

  • public and private Git hosting
  • continuous integration (Jenkins)
  • code review (Gerrit)
  • file sharing (e.g. git-annex + webdav)
  • wiki
  • issue tracking
  • virtual windows systems for Internet Explorer testing
  • MySQL / Postgres databases
  • PaaS for PHP, Python, Ruby, Java
  • staging environment
  • Mails, Mailing Lists
  • simple calendar, CRM
  • monitoring

As I said, all of the above is available as commercial offerings. But I’d prefer the following to be satisfied:

  • The infrastructure itself should be open (but not free of charge), like the OpenStack Project Infrastructure as presented at LCA. I especially like how they review their puppet config with Gerrit.

  • The process to become an admin for the infrastructure should work much the same like the process to become a Debian Developer. I’d also like the same attitude towards quality as present in Debian.

Does something like that already exists? There already is the German cooperative hostsharing which is kind of similar but does provide mainly hosting, not services. But I’ll ask them next after writing this blog post.

Is your company interested in joining such an effort? Does it sound silly?

Comments:

Sounds promising. I already answered by mail. Dirk Deimeke (Homepage) on 16.02.2014 08:16 Homepage: http://d5e.org

I’m sorry for accidentally removing a comment that linked to https://mayfirst.org while moderating comments. I’m really looking forward to another blogging engine… Thomas Koch on 16.02.2014 12:20

Why? What are you missing? I have been using s9y for 9 years now. Dirk Deimeke (Homepage) on 16.02.2014 12:57

12 March, 2022 02:55PM

Petter Reinholdtsen

Publish Hargassner wood chip boiler state to MQTT

Recently I had a look at a Hargassner wood chip boiler, and at what kind of free software can be used to monitor and control it. The boiler can be connected to some cloud service via what the producer calls an Internet Gateway, which seems to be a computer connecting to the boiler and passing the information gathered to the cloud. I discovered the boiler controller gets an IP address on the local network and listens on TCP port 23, providing status information as a text line of numbers. It also provides an HTTP server listening on port 80, but I have not yet figured out what it can do besides return an error code.

If I am to believe the various free software implementations talking to such boilers, the interpretation of the line of numbers differs between boiler types and the software version on the boiler. By comparing the list of numbers on the front panel of the boiler with the numbers returned via TCP, I have been able to figure out several of the numbers, but there is a lot left to understand. I've located several temperature measurements and hours-running values, as well as oxygen measurements and counters.

I decided to write a simple parser in Python for the values I have figured out so far, and a simple MQTT injector publishing both the interpreted and the unknown values on an MQTT bus to make collecting and graphing simpler. The end result is available from the hargassner2mqtt project page on gitlab. I very much welcome patches extending the parser to understand more values, boiler types and software versions. I do not expect many free software developers to have gotten their hands on such a unit to experiment with, but it would be fun if others find this project useful too.
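
The core of such an injector boils down to something like the following sketch. This is not the actual hargassner2mqtt code: the boiler address, the MQTT topics and the field position are made up, and it assumes the paho-mqtt Python module is installed.

#!/usr/bin/python3
# Sketch: read the status line from the boiler controller on TCP port 23 and
# republish the values on an MQTT bus. Field positions are hypothetical; the
# real hargassner2mqtt project knows which number means what for which boiler.
import socket
import paho.mqtt.client as mqtt

BOILER_HOST = "192.0.2.10"   # made-up address of the boiler controller
MQTT_HOST = "localhost"

client = mqtt.Client()
client.connect(MQTT_HOST)
client.loop_start()

with socket.create_connection((BOILER_HOST, 23)) as sock:
    for line in sock.makefile("r"):
        values = line.split()
        # publish every raw value under its index ...
        for idx, value in enumerate(values):
            client.publish(f"hargassner/raw/{idx}", value)
        # ... plus one (hypothetical) interpreted value
        if len(values) > 3:
            client.publish("hargassner/boiler_temperature", values[3])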

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

12 March, 2022 05:30AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rcpp 1.0.8.2: Hotfix release per CRAN request

rcpp logo

A new hot-fix release 1.0.8.2 of Rcpp just got to CRAN. It will also be uploaded to Debian shortly, and Windows and macOS binaries will appear at CRAN in the next few days. This release breaks with the six-months cycle started with release 1.0.5 in July 2020 as CRAN desired an update to silence nags from the newest clang version which turned a little loud over a feature deprecated in C++11 (namely std::unary_function() and std::binary_function()). This was easy to replace with std::function() which we did. The release also contains a minor bugfix relative to 1.0.8 and C++98 builds, and minor correction to one pdf vignette. The release was fully tested by us and CRAN as usual against all reverse dependencies.

Rcpp has become the most popular way of enhancing R with C or C++ code. Right now, around 2519 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 239 in BioConductor.

The full list of details for this interim release follows.

Changes in Rcpp hotfix release version 1.0.8.2 (2022-03-10)

  • Changes in Rcpp API:

    • Accommodate C++98 compilation by adjusting attributes.cpp (Dirk in #1193 fixing #1192)

    • Accommodate newest compilers replacing deprecated std::unary_function and std::binary_function with std::function (Dirk in #1202 fixing #1201 and CRAN request)

  • Changes in Rcpp Documentation:

    • Adjust one overflowing column (Bill Denney in #1196 fixing #1195)
  • Changes in Rcpp Deployment:

    • Accommodate four digit version numbers in unit test (Dirk)

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under the rcpp tag at StackOverflow, which also allows searching among the (currently) 2843 previous questions.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

12 March, 2022 12:00AM

March 11, 2022

hackergotchi for Santiago García Mantiñán

Santiago García Mantiñán

tcpping-nmap a substitute for tcpping based on nmap

I was about to set up tcpping-based monitoring on smokeping, but then I discovered it was based on tcptraceroute, which on Debian comes setuid root; the alternative is to use sudo, so, any way you put it... this runs with root privileges.

I didn't like what I saw, so, I said... couldn't we do this with nmap without needing root?

And so I started to write a little script that could mimic what tcpping and tcptraceroute were outputting, but using nmap.

The result is tcpping-nmap, which does exactly that. The only little thing is that nmap only outputs milliseconds, while tcpping gets down to microseconds.
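
The underlying idea is roughly the following (a heavily simplified sketch, not the actual tcpping-nmap script, which takes care of mimicking the exact output format):

#!/bin/sh
# Sketch: measure TCP "ping" latency to a port with nmap, no root required.
# nmap prints the latency it measured while probing the port.
HOST="$1"
PORT="${2:-80}"
while true; do
    nmap -Pn -p "$PORT" "$HOST" 2>/dev/null \
        | sed -n 's/^Host is up (\(.*\) latency).*/\1/p'
    sleep 1
done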

Hope you enjoy it :-)

11 March, 2022 11:20PM by Santiago García Mantiñán ([email protected])

March 10, 2022

hackergotchi for Holger Levsen

Holger Levsen

20220310-Debian-Reunion-Hamburg-2022

Debian Reunion Hamburg 2022 from May 23 to 30

As last year, there will be a Debian Reunion Hamburg 2022 event taking place at the same location as in previous years, from May 23rd until the 30th.

This is just a preliminary announcement to get the word out, that this event will happen, so you can ponder attending. The wiki page has more information and some fine folks have even already registered!

A few things still need to be sorted out, eg a call for papers and a call for sponsors. If you want to help with that or have questions about the event, please reach out via #debconf-hamburg on irc.oftc.net or via the debconf-hamburg mailinglist.

I'm very much looking forward to meeting some of you again soon and getting to know some others for the first time! Yay. It's been a long time...

10 March, 2022 12:04PM

Michael Ablassmeier

fscom switch shell

fs.com S5850 and S8050 series switches have a secret mode which lets you enter a regular shell from the switch CLI, like so:

hostname # start shell
Password:

The command and password are not documented by the manufacturer, so I wondered whether it is possible to extract that password from the firmware. After all: it's my device, and I want to have access to all the features!

Download the latest firmware image for those switch types and let binwalk do its magic:

$ wget https://img-en.fs.com/file/user_manual/s5850-and-s8050-series-switches-fsos-v7-2-5r-software.zip
$ binwalk FSOS-S5850-v7.2.5.r.bin -e

This will extract a regular cpio archive, including the switch root FS:

$ file 2344D4 
2344D4: ASCII cpio archive (SVR4 with no CRC)
$ cpio --no-absolute-filenames -idv < 2344D4

The extracted files include the passwd file with hashes:

$ cat etc/passwd
root:$1$ZrdxfwMZ$1xAj.S6emtA7gWD7iwmmm/:0:0:root:/root:/bin/sh
nms:$1$nUbsGtA7$5omXOHPNK.ZzNd5KeekUq/:0:0:root:/ftp:/bin/sh

Let john do its job:

$ wget https://github.com/brannondorsey/naive-hashcat/releases/download/data/rockyou.txt
$ sudo john etc/passwd  --wordlist=rockyou.txt
<the_password>   (nms)
<the_password>   (root)
2g 0:00:04:03 100% 0.008220g/s 58931p/s 58935c/s 58935C/s nancy..!*!hahaha!*!

That's it (I won't reveal the password here, but well: it's an easy one ;))

Now have fun poking around on your switch's firmware:

hostname # start shell
Password: <the_password>
[root@hostname /mnt/flash]$ ps axw
  PID TTY      STAT   TIME COMMAND
    1 ?        Ss     0:29 init
    2 ?        S      0:06 [kthreadd]
 [..]
[root@hostname /mnt/flash]$ uname -a
Linux hostname 2.6.35-svn37723 #1 Thu Aug 22 20:43:19 CST 2019 ppc unknow

Even though the good things won't work, I guess it's time to update the firmware anyway:

[root@hostname /mnt/flash]$ tcpdump -pni vlan250
tcpdump: can't get TPACKET_V3 header len on packet socket: Invalid argument

10 March, 2022 12:00AM

March 09, 2022

hackergotchi for Jonathan Dowland

Jonathan Dowland

Broken webcam aspect ratio

picture of my Sony RX100-III camera

Sony RX100-III, relegated to a webcam

Sometimes I have remote meetings with Google Meet. Unlike with the other video-conferencing services that I use (Bluejeans, Zoom), my video was stretched out of proportion under Google Meet with Firefox. I haven't found out why this was happening, but I did figure out a work-around.

Thanks to Daniel Silverstone, Rob Kendrick, Gregor Herrmann and Ben Allen for pointing me in the right direction!

Hardware

The lovely Sony RX-100 mk3 that I bought in 2015 has spent most of its life languishing unused. During the Pandemic, once I was working from home all the time, I decided to press-gang it into service as a better-quality webcam. Newer models of this camera — the mark 4 onwards — have support for a USB mode called "PC Remote", which effectively makes them into webcams. Unfortunately my mark 3 does not support this, but it does have HDMI out, so I picked up a cheap "HDMI to USB Video Capture Card" from eBay.

Video modes

Before: wrong aspect ratio

Before: wrong aspect ratio

This device offers a selection of different video modes over a webcam interface. I used qv4l2 to explore the different modes. It became clear that the camera was outputting a signal at 16:9, but the modes on offer from the dongle were for a range of different aspect ratios. The picture for these other ratios was not letter or pillar-boxed, but stretched to fit.

I also noticed that the modes which had the correct aspect ratio were at very low framerates: 1920x1080@5fps, 1360x768@8fps, 1280x720@10fps. It felt to me that I would look unnatural at such a low framerate. The most promising mode was close to the right ratio, 720x480 and 30 fps.
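
The same information can also be listed from the command line with v4l2-ctl from the v4l-utils package (adjust the device number to wherever the dongle shows up on your system):

v4l2-ctl -d /dev/video0 --list-formats-ext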

Software

After: corrected aspect ratio

After: corrected aspect ratio

My initial solution is to use the v4l2loopback kernel module, which provides a virtual loop-back webcam interface. I can write video data to it from one process, and read it back from another. Loading it as follows:

modprobe v4l2loopback exclusive_caps=1

The option exclusive_caps configures the module into a mode where it initially presents a write-only interface, but once a process has opened a file handle, it then switches to read-only for subsequent processes. Assuming there are no other camera devices connected at the time of loading the module, it will create /dev/video0.1

I experimented briefly with OBS Studio, the very versatile and feature-full streaming tool, which confirmed that I could use filters on the source video to fix the aspect ratio, and emit the result to the virtual device. I don't otherwise use OBS, though, so I achieve the same result using ffmpeg:

ffmpeg -s 720x480 -i /dev/video1 -r 30 -f v4l2 -vcodec rawvideo \
    -pix_fmt yuyv422 -s 720x405 /dev/video0

The source options are to select the source video mode I want. The codec and pixel formats are to match what is being emitted (I determined that using ffprobe on the camera device). The resizing is triggered by supplying a different size to the -s parameter. I think that is equivalent to explicitly selecting a "scale" filter, and there might be other filters that could be used instead (to add pillar boxes for example).

This worked just as well. In Google Meet, I select the Virtual Camera, and Google Meet is presented with only one video mode, in the correct aspect ratio, and no configurable options for it, so it can't misbehave.

Future

I'm planning to automate the loading (and unloading) of the module and starting the ffmpeg process in response to the real camera device being plugged or unplugged, using systemd events and services. (I don't leave the camera plugged in all the time due to some bad USB behaviour I've experienced if I do so.) If I get that working, I will write a follow-up.
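
One possible shape for that automation, sketched here with made-up vendor and product IDs for the capture dongle, is a udev rule that pulls in a dedicated systemd service:

# /etc/udev/rules.d/99-hdmi-capture.rules -- the IDs below are placeholders
ACTION=="add", SUBSYSTEM=="video4linux", ATTRS{idVendor}=="1234", ATTRS{idProduct}=="abcd", TAG+="systemd", ENV{SYSTEMD_WANTS}+="webcam-fixup.service"

The hypothetical webcam-fixup.service would then load v4l2loopback and start the ffmpeg process described above.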


  1. you can request a specific device name/number with another module option.

09 March, 2022 02:21PM

François Marier

Using a Streamzap remote control with MythTV on Debian Bullseye

After upgrading my MythTV machine to Debian Bullseye and MythTV 31, my Streamzap remote control stopped working correctly: the up and down buttons were working, but the OK button wasn't.

Here's the complete solution that made it work with the built-in kernel support (i.e. without LIRC).

Button re-mapping

Since some of the buttons were working, but not others, I figured that the buttons were probably not mapped to the right keys.

Inspired by these old v4l-utils-based instructions, I made my own custom keymap by copying the original keymap:

cp /lib/udev/rc_keymaps/streamzap.toml /etc/rc_keymaps/

and then modifying it to adapt it to what MythTV needs. This is what I ended up with:

<span class="createlink"><a href="/blog.cgi?do=create&amp;from=posts%2Fusing-streamzap-remote-with-mythtv-debian-bullseye&amp;page=protocols" rel="nofollow">?</a>protocols</span>
name = "streamzap"
protocol = "rc-5-sz"
[protocols.scancodes]
0x28c0 = "KEY_0"
0x28c1 = "KEY_1"
0x28c2 = "KEY_2"
0x28c3 = "KEY_3"
0x28c4 = "KEY_4"
0x28c5 = "KEY_5"
0x28c6 = "KEY_6"
0x28c7 = "KEY_7"
0x28c8 = "KEY_8"
0x28c9 = "KEY_9"
0x28ca = "KEY_ESC"
0x28cb = "KEY_MUTE"
0x28cc = "KEY_UP"
0x28cd = "KEY_RIGHTBRACE"
0x28ce = "KEY_DOWN"
0x28cf = "KEY_LEFTBRACE"
0x28d0 = "KEY_UP"
0x28d1 = "KEY_LEFT"
0x28d2 = "KEY_ENTER"
0x28d3 = "KEY_RIGHT"
0x28d4 = "KEY_DOWN"
0x28d5 = "KEY_M"
0x28d6 = "KEY_ESC"
0x28d7 = "KEY_L"
0x28d8 = "KEY_P"
0x28d9 = "KEY_ESC"
0x28da = "KEY_BACK"
0x28db = "KEY_FORWARD"
0x28dc = "KEY_R"
0x28dd = "KEY_PAGEUP"
0x28de = "KEY_PAGEDOWN"
0x28e0 = "KEY_D"
0x28e1 = "KEY_I"
0x28e2 = "KEY_END"
0x28e3 = "KEY_A"

Note that the keycodes can be found in the kernel source code.

With my own keymap in place at /etc/rc_keymaps/streamzap.toml, I changed /etc/rc_maps.cfg to have the kernel driver automatically use it:

--- a/rc_maps.cfg
+++ b/rc_maps.cfg
@@ -126,7 +126,7 @@
 *      rc-real-audio-220-32-keys real_audio_220_32_keys.toml
 *      rc-reddo                 reddo.toml
 *      rc-snapstream-firefly    snapstream_firefly.toml
-*      rc-streamzap             streamzap.toml
+*      rc-streamzap             /etc/rc_keymaps/streamzap.toml
 *      rc-su3000                su3000.toml
 *      rc-tango                 tango.toml
 *      rc-tanix-tx3mini         tanix_tx3mini.toml

Button repeat delay

To adjust the delay before button presses are repeated, I followed these old out-of-date instructions on the MythTV wiki and put the following in /etc/udev/rules.d/streamzap.rules:

ACTION=="add", ATTRS{idVendor}=="0e9c", ATTRS{idProduct}=="0000", RUN+="/usr/bin/ir-keytable -s rc0 -D 1000 -P 250"

Note that the -d option has been replaced with -s in the latest version of ir-keytable.

To check that the Streamzap is indeed detected as rc0 on your system, use this command:

$ ir-keytable 
Found /sys/class/rc/rc0/ with:
    Name: Streamzap PC Remote Infrared Receiver (0e9c:0000)
    Driver: streamzap
    Default keymap: rc-streamzap
...

Make sure you don't pass the -c option to ir-keytable, or else it will clear the keymap set via /etc/rc_maps.cfg, removing all of the button mappings.

09 March, 2022 12:00AM

March 07, 2022

Ayoyimika Ajibade

Progress Report!! Modifying Expectations... 📝

Wait! It feels like just yesterday that I was accepted as an Outreachy intern, and the first half of the internship is already finished😲. How time flies when you are having a good time🎃

As part of the requirements for the final application during the Outreachy contribution period, I needed to provide a timeline for achieving the goal of my Outreachy task, which is transitioning dependencies to node16 and webpack5. My mentors pointed out that the packages depending on webpack and nodejs combined are so numerous that it is impossible to finish them all within the space of three months, but we have steps to guide us through the entire process and achieve most of our goals, which are ➡

  • Find a list of packages(failing rebuild and testing with autopkgtest, of reverse dependencies of webpack and nodejs) to fix.
  • See if new upstream versions are available that support nodejs16 and webpack5 respectively.
  • See if the new upstream version works and doesn't fail while rebuilding or testing with autopkgtest.
  • Report bug 🐞 in Debian if any fails to rebuild or test with autopkgtest.
  • Forward bugs upstream if needed.
  • Fix packages and forward patches.

As of this writing (though a little late🕔), we have successfully rebuilt all reverse dependencies of webpack5 and split the JavaScript modules equally between me and my co-intern, as ruby💎 packages also depend on webpack; that is a total of 44 packages. We filed bug reports on the Debian bug tracking system for the failing packages; the original maintainer or uploader of the package to the Debian archive, mostly Debian developers, also gets a mail referencing the package's bug 🐞 report. Sometimes the uploader who receives the bug report decides to help out, fix the package, and forward the patch upstream if need be. We have also filed issues on upstream repositories, mostly via github👆, where some maintainers respond and create a PR to solve those errors, while others are plainly averse to the whole idea. A PR from the upstream developer is cherry-picked and a patch is created by us to incorporate the code into our own packaging repository. When an upstream maintainer rejects such an issue or doesn't respond, we take it upon ourselves to fix the package. The total number of packages that are successfully updated and ready to be merged is 10, while 12 packages remain on my own end to be updated.

One of the most challenging packages to update so far was prop-types, as it runs its large test suite using jest at a lower version, 19.0.2, compared to the version in Debian, which is 27.5.1. Updating and migrating its APIs and methods to the Debian version was very challenging; after lots of googling, testing solutions from StackOverflow, trial and error, and reading documentation, we eventually made progress with the help of my mentor, my co-intern, and the whole community. It's so funny that when I got it working I said to myself: phew😅😌 it's not rocket science, why couldn't I figure it out sooner🤷‍♀️

I initially proposed that I would be halfway done with the project by now. I guess the reason I am not able to achieve some of our goals, which were finishing up the webpack packages and moving on to transitioning some of the nodejs packages, is DEBUGGING. Yes, DEBUGGING! where you can never predict what the solution is. Is the problem coming from Debian? From dependencies of the package you are working on? An upstream bug, or dependencies of dependencies of the package you are working on? So many questions to answer. You can't easily find a solution to a bug, as it takes time to try out so many guesses (educated guesses, mostly), or even to try all the solutions from StackOverflow and still make no viable progress. Obviously, you cannot really know enough about something to plan for it unless you get right into it.

If I had to start again, one thing I would do differently is to truly understand how JavaScript packages work under the hood: how they handle the interactions between packages, and some of the dos and don'ts of transpiling, bundling, testing, etc.

I guess my unrealistic goals need to be modified, because some drawbacks that were not envisaged popped up and I underestimated the complexity of the tasks; this means reducing the number of packages to update in the nodejs transition from what I had planned😢

My major focus for the second half of the internship is to fix the bugs and errors I discover, file bug reports to seek help from co-maintainers or developers, file issues upstream and close those whose bugs are already resolved for the remaining 12 packages, and ultimately upload all the reverse dependencies successfully. I will also dive into the transition to nodejs16.

Thanks for stopping by🙏

07 March, 2022 01:09PM by Ayoyimika Ajibade

March 05, 2022

Thorsten Alteholz

My Debian Activities in February 2022

FTP master

This month I accepted 484 and rejected 73 packages. The overall number of packages that got accepted was 495.

The overall number of rejected packages was 76, which is about 15% of the uploads to NEW. While most of the maintainers do a great job when creating their debian/copyright, others are a bit lax. Unfortunately those people seem to be more enthusiastic when fighting for changes in NEW processing or even removing NEW.

One argument in discussions about NEW is that the copyright verification of packages can be done by the community after accepting the packages in the archive.
Last month I did not get any hint that such checks have been done by anybody. As the past has already shown several times, these community-based checks simply do not exist.

So in the end poorly maintained copyright information will rot in the archive and I am not sure that this really corresponds with the Debian Social Contract.

Debian LTS

This was my ninety-second month of doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 40h. During that time I did LTS and normal security uploads of:

  • [DLA 2928-1] htmldoc security update for three CVEs
  • [#1004049] buster-pu: zziplib debdiff was approved and package uploaded
  • [#1004050] bullseye-pu: zziplib debdiff was approved and package uploaded
  • [#1004055] buster-pu: debdiff was approved and package uploaded
  • [#1006493] bullseye-pu: htmldoc/1.9.11-4+deb11u2
  • [#1006494] buster-pu: htmldoc/1.9.3-1+deb10u3
  • [#1006550] buster-pu: tiff/4.1.0+git191117-2~deb10u4
  • [#1006551] bullseye-pu: tiff/4.2.0-1+deb11u1

Unfortunately salsa went down at the end of the month, so several planned uploads did not happen and have to be delayed to March.

I also continued to work on security support for golang packages. Further I worked on packages in NEW on security-master and injected missing sources. Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the forty-fourth ELTS month.

During my allocated time I uploaded:

  • ELA-567-1 for apache2
  • ELA-567-2 for apache2
  • ELA-568-1 for ksh
  • ELA-569-1 for tiff
  • ELA-570-1 for htmldoc

Further I worked on cyrus-sasl but did not do an upload yet.

Last but not least I did some days of frontdesk duties.

Debian Printing

As announced last month I uploaded a new version of cups.

Altogether I uploaded new upstream versions or improved packaging of:

Debian Astro

This month I uploaded new upstream versions or improved packaging of:

Other stuff

This month I uploaded new upstream versions or improved packaging of:

05 March, 2022 12:43PM by alteholz

Reproducible Builds

Reproducible Builds in February 2022

Welcome to the February 2022 report from the Reproducible Builds project. In these reports, we try to round-up the important things we and others have been up to over the past month. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.


Jiawen Xiong, Yong Shi, Boyuan Chen, Filipe R. Cogo and Zhen Ming Jiang have published a new paper titled Towards Build Verifiability for Java-based Systems (PDF). The abstract of the paper contains the following:

Various efforts towards build verifiability have been made to C/C++-based systems, yet the techniques for Java-based systems are not systematic and are often specific to a particular build tool (eg. Maven). In this study, we present a systematic approach towards build verifiability on Java-based systems.


GitBOM is a flexible scheme to track the source code used to generate build artifacts via Git-like unique identifiers. Although the project has been active for a while, the community around GitBOM has now started running weekly community meetings.


The paper by Chris Lamb and Stefano Zacchiroli is now available in the March/April 2022 issue of IEEE Software. Titled Reproducible Builds: Increasing the Integrity of Software Supply Chains (PDF), the abstract of the paper contains the following:

We first define the problem, and then provide insight into the challenges of making real-world software build in a “reproducible” manner, that is, when every build generates bit-for-bit identical results. Through the experience of the Reproducible Builds project making the Debian Linux distribution reproducible, we also describe the affinity between reproducibility and quality assurance (QA).


In openSUSE, Bernhard M. Wiedemann posted his monthly reproducible builds status report.


On our mailing list this month, Thomas Schmitt started a thread around the SOURCE_DATE_EPOCH specification related to formats that cannot avoid embedding potentially timezone-specific timestamps. (Full thread index.)


The Yocto Project is pleased to report that its core metadata (OpenEmbedded-Core) is now reproducible for all recipes (100% coverage) after issues with newer languages such as Golang were resolved. This was announced in their recent Year in Review publication. It is of particular interest for security updates, so that systems can have specific components updated while reducing the risk of other unintended changes and making the sections of the system that change very clear for audit.

The project is now also making heavy use of “equivalence” of build output to determine whether further items in builds need to be rebuilt or whether cached previously built items can be used. As mentioned in the article above, there are now public servers sharing this equivalence information. Reproducibility is key in making this possible and effective to reduce build times/costs/resource usage.


diffoscope

diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 203, 204, 205 and 206 to Debian unstable, as well as made the following changes to the code itself:

  • Bug fixes:

    • Fix a file(1)-related regression where Debian .changes files that contained non-ASCII text were not identified as such, therefore resulting in seemingly arbitrary packages not actually comparing the nested files themselves. The non-ASCII parts were typically in the Maintainer field or in the changelog text. [][]
    • Fix a regression when comparing directories against non-directories. [][]
    • If we fail to scan using binwalk, return False from BinwalkFile.recognizes. []
    • If we fail to import binwalk, don’t report that we are missing the Python rpm module! []
  • Testsuite improvements:

    • Add a test for recent file(1) issue regarding .changes files. []
    • Use our assert_diff utility where we can within the test_directory.py set of tests. []
    • Don’t run our binwalk-related tests as root or fakeroot. The latest version of binwalk has some new security protection against this. []
  • Codebase improvements:

    • Drop the _PATH suffix from module-level globals that are not paths. []
    • Tidy some control flow in Difference._reverse_self. []
    • Don’t print a warning to the console regarding NT_GNU_BUILD_ID changes. []

In addition, Mattia Rizzolo updated the Debian packaging to ensure that diffoscope and diffoscope-minimal packages have the same version. []


Vagrant Cascadian wrote to the debian-devel mailing list after noticing that the binutils source package contained unreproducible logs in one of its binary packages. Vagrant expanded the discussion to one about all kinds of build metadata in packages and outlines a number of potential solutions that support reproducible builds and arbitrary metadata.

Vagrant also started a discussion on debian-devel after identifying a large number of packages that embed build paths via RPATH when building with CMake, including a list of packages (grouped by Debian maintainer) affected by this issue. Maintainers were requested to check whether their package still builds correctly when passing the -DCMAKE_BUILD_RPATH_USE_ORIGIN=ON directive.
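
For a package built with dh and the CMake buildsystem, such a test build could pass the directive along these lines in debian/rules (just a sketch of the mechanism, not a snippet from the discussion):

# debian/rules fragment: pass the directive through to CMake
override_dh_auto_configure:
	dh_auto_configure -- -DCMAKE_BUILD_RPATH_USE_ORIGIN=ON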

On our mailing list this month, kpcyrd announced the release of rebuilderd-debian-buildinfo-crawler a tool to parse the Packages.xz Debian package index file, attempts to discover the right .buildinfo file from buildinfos.debian.net and outputs it in a format that can be understood by rebuilderd. The tool, which is available on GitHub, solves a problem regarding correlating Debian version numbers with their builds.

bauen1 provided two patches for debian-cd, the software used to make Debian installer images. This involved passing --invariant and -i deb00001 to mkfs.msdos(8) and avoided embedding timestamps into the gzipped Packages and Translations files. After some discussion, the patches in question were merged and will be included in debian-cd version 3.1.36.

Roland Clobus wrote another in-depth status update about status of ‘live’ Debian images, summarising the current situation that “all major desktops build reproducibly with bullseye, bookworm and sid”.

The python3.10 package was uploaded to Debian by doko, fixing an issue where .pyc files were not reproducible because the elements in frozenset data structures were not ordered reproducibly. This meant that creating a bit-for-bit reproducible Debian chroot which included .pyc files was not possible. As of writing, the only remaining unreproducible part of a standard chroot is man-db, but Guillem Jover has a patch for update-alternatives which will likely be part of the next release of dpkg.

Elsewhere in Debian, 139 reviews of Debian packages were added, 29 were updated and 17 were removed this month adding to our knowledge about identified issues. A large number of issue types have been updated too, including the addition of captures_kernel_variant, erlang_escript_file, captures_build_path_in_r_rdb_rds_databases, captures_build_path_in_vo_files_generated_by_coq and build_path_in_vo_files_generated_by_coq.


Website updates

There were quite a few changes to the Reproducible Builds website and documentation this month as well, including:

  • Chris Lamb:

  • Daniel Shahaf:

    • Try a different Markdown footnote content syntax to work around a rendering issue. [][][]
  • Holger Levsen:

    • Make a huge number of changes to the Who is involved? page, including pre-populating a large number of contributors who cannot be identified from the metadata of the website itself. [][][][][]
    • Improve linking to sponsors in sidebar navigation. []
    • drop sponsors paragraph as the navigation is clearer now. []
    • Add Mullvad VPN as a bronze-level sponsor. [][]
  • Vagrant Cascadian:


Upstream patches

The Reproducible Builds project attempts to fix as many currently-unreproducible packages as possible. February’s patches included the following:


Testing framework

The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:

  • Daniel Golle:

    • Update the OpenWrt configuration to not depend on the host LLVM, adding lines to the .config seed to build LLVM for eBPF from source. []
    • Preserve more OpenWrt-related build artifacts. []
  • Holger Levsen:

    • Temporarily use a different Git tree when building OpenWrt as our tests had been broken since September 2020. This was reverted after the patch in question was accepted by Paul Spooren into the canonical openwrt.git repository the next day.
    • Various improvements to debugging OpenWrt reproducibility. [][][][][]
    • Ignore useradd warnings when building packages. []
    • Update the script to powercycle armhf architecture nodes to add a hint for nodes named virt-*. []
    • Update the node health check to also fix failed logrotate and man-db services. []
  • Mattia Rizzolo:

    • Update the website job after contributors.sh script was rewritten in Python. []
    • Make sure to set the DIFFOSCOPE environment variable when available. []
  • Vagrant Cascadian:

    • Various updates to the diffoscope timeouts. [][][]

Node maintenance was also performed by Holger Levsen [] and Vagrant Cascadian [].


Finally…

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

05 March, 2022 11:17AM

March 04, 2022

Reproducible Builds (diffoscope)

diffoscope 207 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 207. This version includes the following changes:

* Fix a gnarly regression when comparing directories against non-directories.
  (Closes: reproducible-builds/diffoscope#292)
* Use our assert_diff utility where we can within test_directory.py

You find out more by visiting the project homepage.

04 March, 2022 12:00AM

Abiola Ajadi

Outreachy-And it’s a wrap!

Outreachy Wrap-up

Project Improve Debian Continuous Integration UX
Project Link: https://www.outreachy.org/outreachy-december-2021-internship-round/communities/debian/#improve-debian-continuous-integration-ux
Code Repository: https://salsa.debian.org/ci-team/debci
Mentors: Antonio Terceiro, Paul Gevers and Pavit Kaur

About the project

Debci exists to make sure packages work correctly after an update. It does this by testing all of the packages that have tests written for them, to make sure they work and nothing is broken. This project entails making improvements to the platform to make it easier to use and maintain.

Deliverables of the project:

  • Package landing page displaying pending jobs
  • web frontend: centralize job listings in a single template
  • self-service: request test form forgets values when validation fails
  • Improvement to status

Work done

Package landing page displaying pending jobs

Previously, jobs that were pending were not displayed on the package page. Working on this, I added a feature to display pending jobs on the package landing page. This task also revealed that the same block of code was repeated in different files, which led to the next task. Image of package page showing pending jobs

Merge request

web frontend: centralize job listings in a single template

Jobs are listed on various pages such as status packages, status alerts, status failing, history, and so on. The same code was repeated on these pages to list the jobs; I worked on refactoring it and created a single template for job listing so it can be used anywhere it's needed. I also wrote a test for the feature I added.
Merge request

self service: request test form forgets values when validation fails

Originally, when one tried to request a test and it failed with an error, the form did not remember the values that were typed into the package name, suite and other fields. This fix ensures the form remembers the values entered even when it throws an error. Image of request test page. N.B.: the form checks all architectures on page load.
merge request

Improvement to status

Originally the status pages were rendered as static HTML pages, but I converted them to be generated dynamically and wrote endpoints for each page. Since most of the status pages contain a list of jobs, I modified them to use the template I created for job listing. Previously, the status pages had a filter mechanism such as All, Latest 50, etc., which wasn't paginated. I removed this mechanism, added filters by architecture and suite to these pages, and also added pagination. Last but not least, I wrote tests for these implementations on the status pages. Image of Status failing page

merge request:
first task
second task

Major take-aways

I learnt a lot during my internship but most importantly I learnt how to:

  • write tests in Ruby, and how writing tests is an important aspect of software development
  • maintain good coding practices; paying attention to commit messages, indentation, etc. are areas I developed in writing code
  • make contributions in the Ruby programming language.

Acknowledgement

I cannot end this without saying thank you to my mentors Antonio Terceiro, Paul Gevers, and Pavit Kaur for their constant support and guidance throughout the entire duration of this internship. It has been a pleasure interacting with and learning from everyone.

Review

Outreachy has helped me feel more confident about open source, especially during the application phase. I had to reach out to the community I was interested in and ask questions on how to get started. The informal chats week was awesome: I was able to build my network and have interesting conversations with amazing individuals in open source. To round up: always ask questions and do not be afraid of making a mistake; as one of the Outreachy blog post topics says, Everyone struggles!, but never give up!

04 March, 2022 12:00AM by Abiola Ajadi ([email protected])