September 16 2020

Console-bound systemd services, the right way

Marek Szuba (marecki) September 16, 2020, 17:40

Let’s say that you need to run on your system some sort of server software which, instead of daemonising, has a command console permanently attached to standard input. Let us also say that said console is the only way for the administrator to interact with the service, including requesting its orderly shutdown – whoever wrote it has not implemented any signal handling, so sending SIGTERM to the service process causes it to simply drop dead, potentially losing data in the process. And finally, let us say that the server in question is proprietary software, so it isn’t really possible for you to fix any of the above in the source code (yes, I am talking about a specific piece of software – which, by the way, is very much alive and kicking as of late 2020). What do you do?

According to the collective wisdom of the World Wide Web, the answer to this question is “use a terminal multiplexer like tmux or screen”, or at the very least a stripped-down variant of the same, such as dtach. OK, that sort of works – but what if you want to run it as a proper system-managed service under e.g. OpenRC? The answer of the Stack Exchange crowd: have your init script invoke the terminal multiplexer. Oooooookay, how about under systemd, which actually prefers that the services it manages do not daemonise by themselves? Nope, still “use a terminal multiplexer”.

What follows is my attempt to run a service like this under systemd more efficiently and elegantly, or at least with no extra dependencies beyond basic Unix shell commands.

Let us have a closer look at what systemd does with the standard I/O of processes it spawns. The man page systemd.exec(5) tells us that this is controlled by the directives StandardInput, StandardOutput and StandardError. By default the former is assigned to null while the latter two get piped to the journal; there are, however, quite a few other options. According to the documentation, here is what systemd allows us to connect to standard input:

    • we are not interested in null (for obvious reasons) or any of the tty options (the whole point of this exercise is to run fully detached from any terminals);
    • data would work if we needed to feed some commands to the service when it starts, but is useless for triggering a shutdown;
    • file looks promising – just point it to a FIFO on the file system and we’re all set – but it doesn’t actually take care of creating the FIFO for us. While we could in theory work around that by invoking mkfifo (and possibly chown, if the service is to run as a specific user) in ExecStartPre – a sketch of that workaround follows this list – let’s see if we can find a better option;
    • socket “is valid in socket-activated services only” and the corresponding socket unit must “have Accept=yes set”. What we want is the opposite, i.e. for the service to create its socket;
    • finally, there is fd – which seems to be exactly what we need. According to the documentation, all we have to do is write a socket unit creating a FIFO with appropriate ownership and permissions, make it a dependency of our service using the Sockets directive, and assign the corresponding named file descriptor to standard input.
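
For reference, a minimal sketch of how that rejected ExecStartPre workaround might look – assuming mkfifo lives in /usr/bin and reusing the FIFO path and the pcd user from the units below:

[Service]
User=pcd
Group=pcd
# Have systemd create /run/proprietarycrapd, owned by pcd, before ExecStartPre runs
RuntimeDirectory=proprietarycrapd
RuntimeDirectoryMode=0700
# Hand-made FIFO; the '-' prefix keeps an "already exists" failure from being fatal
ExecStartPre=-/usr/bin/mkfifo -m 0600 /run/proprietarycrapd/pcd.control
StandardInput=file:/run/proprietarycrapd/pcd.control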

Let’s try it out. To begin with, our socket unit “proprietarycrapd.socket”. Note that I have successfully managed to get this to work using unit templates as well – %i expansion works fine both here and when specifying unit or file-descriptor names in the service unit – but in order to avoid any possible confusion caused by the fact that socket-activated services explicitly require being defined with templates, I have based my example on static units:

[Unit]
Description=Command FIFO for proprietarycrapd

[Socket]
ListenFIFO=/run/proprietarycrapd/pcd.control
DirectoryMode=0700
SocketMode=0600
SocketUser=pcd
SocketGroup=pcd
RemoveOnStop=true

Apart from the fact that the unit in question has no [Install] section (which makes sense given that we want this socket to only be activated by the corresponding service, not by systemd itself), there is nothing out of the ordinary here. Note that since we haven’t used the FileDescriptorName directive, systemd will apply its default behaviour and give the file descriptor associated with the FIFO the name of the socket unit itself.

And now, our service unit “proprietarycrapd.service”:

[Unit]
Description=proprietarycrap daemon
After=network.target

[Service]
User=pcd
Group=pcd
Sockets=proprietarycrapd.socket
StandardInput=socket
StandardOutput=journal
StandardError=journal
ExecStart=/opt/proprietarycrap/bin/proprietarycrapd
ExecStop=/usr/local/sbin/proprietarycrapd-stop

[Install]
WantedBy=multi-user.target

StandardInput=socket??? Whatever’s happened to StandardInput=fd:proprietarycrapd.socket??? Here is an odd thing. If I use the latter on my system, the service starts fine and gets the FIFO attached to its standard input – but when I try to stop the service the journal shows “Failed to load a named file descriptor: No such file or directory”, the ExecStop command is not run and systemd immediately fires a SIGTERM at the process. No idea why. Anyway, through trial and error I have found out that StandardInput=socket not only works fine in spite of being used in a service that is not socket-activated but actually does exactly what I wanted to achieve – so that is what I have ended up using.
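
A nice side effect: once the service is running, anything written to the FIFO ends up on the server’s console, so the administrator can still issue commands by hand from a root shell – for instance, with a hypothetical status command:

# echo 'status' > /run/proprietarycrapd/pcd.control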

Which brings us to the final topic, the ExecStop command. There are three reasons why I have opted for putting all the commands required to shut the server down in a shell script:

    • first and foremost, writing the shutdown command to the FIFO will return right away even if the service takes time to shut down. systemd sends SIGTERM to the unit process as soon as the last ExecStop command has exited, so we have to follow the echo with something that waits for the server process to finish (see below);
    • systemd does not execute Exec commands in a shell, so simply running echo > /run/proprietarycrapd/pcd.control doesn’t work – we would have to wrap the echo call in an explicit invocation of a shell;
    • between the two reasons above and the fact that the particular service for which I have created these units actually requires several commands in order to execute an orderly shutdown, I have decided that putting all those commands in a script file instead of cramming them into the unit would be much cleaner.

The shutdown script itself is mostly unremarkable, so I’ll only quote the bit responsible for waiting for the server to actually shut down. At present I am still looking for a way of doing this in a blocking fashion without adding more dependencies (wait only works on child processes of the current shell, the server in question does not create any lock files to which I could attach inotifywait, and attaching the latter to the relevant directory in /proc does not work) but in the meantime, the loop

while kill -0 "${MAINPID}" 2> /dev/null; do
    sleep 1s
done

keeps the script ticking along until either the process has exited or the script has timed out (see the TimeoutStopSec directive in systemd.service(5)) and systemd has killed both it and the service itself.
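
Putting the pieces together, such a stop script boils down to something along these lines – with a hypothetical shutdown console command standing in for whatever the server actually expects:

#!/bin/sh
# ask the server to shut down via its command console
echo 'shutdown' > /run/proprietarycrapd/pcd.control

# then wait for the main process to actually exit; if this takes longer than
# TimeoutStopSec, systemd will kill both this script and the service itself
while kill -0 "${MAINPID}" 2> /dev/null; do
    sleep 1s
done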

Acknowledgements: with many thanks to steelman for having figured out the StandardInput=socket bit in particular and having let me bounce my ideas off him in general.

September 15 2020

Distribution kernel for Gentoo

Gentoo News (GentooNews) September 15, 2020, 5:00

The Gentoo Distribution Kernel project is excited to announce that our new Linux Kernel packages are ready for a wide audience! The project aims to create a better Linux Kernel maintenance experience by providing ebuilds that can be used to configure, compile, and install a kernel entirely through the package manager as well as prebuilt binary kernels. We are currently shipping three kernel packages:

  • sys-kernel/gentoo-kernel - providing a kernel with genpatches applied, built using the package manager with either a distribution default or a custom configuration
  • sys-kernel/gentoo-kernel-bin - prebuilt version of gentoo-kernel, saving time on compiling
  • sys-kernel/vanilla-kernel - providing a vanilla (unmodified) upstream kernel

All the packages install the kernel as part of the package installation process — just like the rest of your system! More information can be found in the Gentoo Handbook and on the Distribution Kernel project page. Happy hacking!
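
For instance, switching to the prebuilt kernel is a single emerge away (assuming its keywords are accepted on your architecture):

# emerge --ask sys-kernel/gentoo-kernel-bin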

Larry with Tux as cowboy

September 09 2020

New Packages site features

Gentoo News (GentooNews) September 09, 2020, 5:00

Our packages.gentoo.org site has recently received major feature upgrades thanks to the continued efforts of Gentoo developer Max Magorsch (arzano). Highlights include:

  • Tracking Gentoo bugs of specific packages (Bugzilla integration)
  • Tracking available upstream package versions (Repology integration)
  • QA check warnings for specific packages (QA reports integration)

Additionally, an experimental command-line client for packages.gentoo.org named “pgo” is in preparation, in particular also for our users with accessibility needs.

Gentoo in a package

September 07 2020

py3status v3.29

Alexys Jacob (ultrabug) September 07, 2020, 20:39

Almost 5 months after the latest release (thank you COVID) I’m pleased and relieved to have finally packaged and pushed py3status v3.29 to PyPi and Gentoo portage!

This release comes with a lot of interesting contributions from quite a bunch of first-time contributors so I thought that I’d thank them first for a change!

Thank you contributors!
  • Jacotsu
  • lasers
  • Marc Poulhiès
  • Markus Sommer
  • raphaunix
  • Ricardo Pérez
  • vmoyankov
  • Wilmer van der Gaast
  • Yaroslav Dronskii

So what’s new in v3.29?

Two new exciting modules are in!

  • prometheus module: to display your promQL queries on your bar
  • watson module: for the watson time-tracking tool

Then some interesting bug fixes and enhancements are to be noted

  • py3.requests: return empty json on remote server problem fix #1401
  • core modules: remove deprecated function, fix type annotation support (#1942)

Some modules also got improved

  • battery_level module: add power consumption placeholder (#1939) + support more battery paths detection (#1946)
  • do_not_disturb module: change pause default from False to True
  • mpris module: implement broken chromium mpris interface workaround (#1943)
  • sysdata module: add {mem,swap}_free, {mem,swap}_free_unit, {mem,swap}_free_percent + try to use default intel/amd sensors first
  • google_calendar module: fix imports for newer google-python-client-api versions (#1948)

Next version of py3status will certainly drop support for EOL Python 3.5!

September 05 2020

Portage 3.0 stabilized

Gentoo News (GentooNews) September 05, 2020, 5:00

We have good news! Gentoo’s Portage project has recently stabilized version 3.0 of the package manager.

What’s new? Well, this third version of Portage removes support for Python 2.7, which has been an ongoing effort across the main Gentoo repository by Gentoo’s Python project during the 2020 year (see this blog post).

In addition, due to a user provided patch, updating to the latest version of Portage can vastly speed up dependency calculations by around 50-60%. We love to see our community engaging in our software! For more details, see this Reddit post from the community member who provided the patch. Stay healthy and keep cooking with Gentoo!

Skating Larry

July 20 2020

Updated Gentoo RISC-V stages

Andreas K. Hüttel (dilfridge) July 20, 2020, 16:28
I finally got around to updating the experimental riscv stages. You can find the result on our webserver. All stages use the rv64gc instruction set; there is a multilib stage with both lp64 and lp64d support, and there are non-multilib stages for both lp64 and lp64d ABIs. Please test, and report bugs if anything doesn't work.
As for the technical details, the stages are built using qemu-user on a big and beefy Gentoo amd64 AWS instance. We are currently working on automating that process, so that riscv (and potentially also arm and others) gets the same level of support as amd64 and friends. Thanks a lot to Amazon for the credits via their open source promotional program!

July 07 2020

Gentoo on Android 64-bit release

Gentoo News (GentooNews) July 07, 2020, 5:00

Gentoo Project Android is pleased to announce a new 64-bit release of the stage3 Android prefix tarball. This is a major release after 2.5 years of development, featuring gcc-10.1.0, binutils-2.34 and glibc-2.31. Enjoy Gentoo in your pocket!

gentoo-android logo

July 04 2020

gentoo tinderbox

Agostino Sarubbo (ago) July 04, 2020, 13:03

If you are visiting this page, it is very likely that the software you maintain has been analyzed by my tinderbox system.

What is a tinderbox?

It is a machine that compiles packages 24/7, aiming to find build failures, test failures, QA issues and so on in the Portage tree.
It can be differentiated into:

– tinderbox
– ci


TINDERBOX:

It compiles the entire portage tree against a particular change like:
– a new version of compiler/libc/linker
– a new C/CXX/LD FLAG
– a different toolchain like clang/llvm/lld
– and so on

In short it uses uncommon but supported settings and looks for breakage.

CI:

It is a continuous-integration system: it compiles packages right after they have been touched in gentoo.git.

The CI system uses a standard set of settings, so if you get a bug report from it, it is very likely that the failure is reproducible for users too.

What are the rules you should know about when you see a report from these systems?

1) The reports are filed automatically.
2) Because of the above, it is not possible for me to put the exact error in the bug summary; a general error is used instead.
3) For the same reason, the maintainer is encouraged to set an appropriate summary at their convenience.
4) Common additional logs (like test-suite.log testlog.txt CMakeOutput.log CMakeError.log LastTest.log config.log testsuite.log autoconf.out) are automatically attached; because of point 1, if you need something else please ask for it.
5) If you ask for another log, I have to stop the tinderbox service, so there may be a delay between your request and my reaction.
6) There may be an internal reference between round brackets on the “Discovered on” line. This is for me, to understand where that failure was reproduced.
7) If you see ‘ci’ as the internal reference after you pushed a fix, it is very probable that the bug still exists, or that there is another failure in the same ebuild phase. Please inspect the build log closely. Point 8 may help you with that.
8) At the beginning of the build log a git SHA of the repository at the time of emerging is provided. For convenience there is a link.
9) To avoid making a separate attachment on Bugzilla, the beginning of the build log also contains the ‘emerge --info’ output; please check it DEEPLY to understand the system configuration and what differs with respect to a more ‘standard’ system.
10) If you see a compressed build log, it is because the plain-text version exceeds the limit on our Bugzilla (1 MB).
11) This system is not perfect. There may be duplicates or invalid bugs.
12) My best suggestion is to try to reproduce the issue on an empty stage3 (or in Docker for convenience).
13) When you close the bug with a resolution other than RESOLVED/FIXED, please do not be cryptic.
14) If new points are added, there may be a note like “Valid from YY:MM:DD”.

How to fix the common errors:

1) Compile/build failure:
It depends on the error. Please get in touch with upstream if you are unsure.

2) Test failure:
It depends on the error. Please get in touch with upstream if you are unsure.

3) CFLAGS/LDFLAGS not respected:
You can touch the build system or inject the flags in the ebuild where possible. There are a lot of examples in the tracker.
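
A minimal, hypothetical sketch of the second approach for a package driven by a hand-written Makefile, using the toolchain-funcs eclass to pass the user's compiler and flags through emake:

inherit toolchain-funcs

src_compile() {
	# override whatever the Makefile hardcodes with the user's toolchain and flags
	emake CC="$(tc-getCC)" CFLAGS="${CFLAGS}" LDFLAGS="${LDFLAGS}"
}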

4) -Wformat-security failure:
TBD

5) Metainfo installed in /usr/share/appdata:
Install metainfo files into /usr/share/metainfo instead of /usr/share/appdata.

6) Python modules that are not byte-compiled:
TBD

7) Unrecognized configure options:
Remove the configure options from the ebuild where possible. Sometimes there are false positives related to the option passed to configure in subdirectories.

8) Compressed manpages and documentation:
Decompress documentation and install it as plain text.

9) Icon cache not updated:
TBD

10) Deprecated configure.in:
TBD

11) .desktop files that do not pass validation:
TBD

12) Path that should be created at runtime:
TBD

13) Libraries that lack NEEDED entries:
TBD

14) Libraries that lack a SONAME:
TBD

15) Text relocation:
TBD

16) Toolchain binaries called directly (cc/gcc/g++/c++/nm/ar/ranlib/cpp/ld/strip/objcopy/objdump/size/as/strings/readelf and so on):
TBD

17) Files with name not encoded with UTF-8:
TBD

18) Files with broken symlink:
TBD

19) Commands that do not exist:
TBD

20) Pkg-config files with wrong LDFLAGS:
TBD

21) Pre-stripped files:
TBD

22) File collision:
TBD

23) Compile failure if CPP is set to CC -E:
TBD

24) Compile failure with -fno-common:
TBD

25) Files with unresolved SONAME dependencies:
TBD

26) Files that contain insecure RUNPATHs:
TBD

27) Files installed into unexpected paths:
TBD

28) LD usage instead of CC/CXX:
TBD

29) Link failure with LLD because of /usr/lib:
TBD

30) Compilation in src_install phase:
TBD

31) Automake usage in maintainer-mode:
TBD

32) Mimeinfo cache not updated when .desktop files with MimeType are installed:
TBD

33) Broken png files installed:
TBD

34) Mime-info files installed without updating the mime-info cache:
TBD

35) Udev rules installed into wrong directory:
TBD

Official Gentoo Docker images

Gentoo News (GentooNews) July 04, 2020, 5:00

Did you already know that we have official Gentoo Docker images available on Docker Hub?! The most popular one is based on the amd64 stage. Images are created automatically; you can peek at the source code for this on our git server. Thanks to the Gentoo Docker project!

docker logo

June 04 2020

Baïkal (CalDAV) 0.7.0 in Gentoo

Nathan Zachary (nathanzachary) June 04, 2020, 3:30

Just this past week, the new version of Baïkal (0.7.0)—a PHP CalDAV and CardDAV server based on Sabre—was released, and one of the key changes was that support was added for more modern versions of PHP (like 7.4).

Since my personal Gentoo server is running the ~amd64 branch, I had to wait for this release in order to get my CalDAV server up and running. For the most part, installing Baïkal 0.7.0 was a straightforward process, but there were a couple of “gotchas” along the way.

The first (and most confusing) problem came after the installation/initial configuration when I tried to access my newly-created user’s calendar via the URL:

https://dav.MYDOMAIN.com/html/dav.php/calendar/MYUSERNAME/default

I knew that something was wrong when it wouldn’t even prompt me for credentials. Instead, the logs indicated the following error message:

[Tue Jun 02 14:13:05.529805 2020] [proxy_fcgi:error] [pid 32165:tid 139743908050688] [client 71.81.87.208:38910] AH01071: Got error 'PHP message: LogicException: Requested uri (/html/dav.php) is out of base uri (/s/html/dav.php/) in /var/www/domains/MYDOMAIN/dav/htdocs/vendor/sabre/http/lib/Request.php:184

I couldn’t figure out where the “/s/” before the “/html” portion was coming from, but that was certainly the cause of the error message. I filed an issue for it, and though I still don’t know the source of the problem, I was able to work around it by adding a trailing slash to the DocumentRoot for that particular vhost:

# pwd && diff -Nut dav.MYDOMAIN.conf.PRE-20200602_docroot dav.MYDOMAIN.conf
/etc/apache2/vhosts.d/includes
--- dav.MYDOMAIN.conf.PRE-20200602_docroot	2020-06-02 17:23:20.246281195 -0400
+++ dav.MYDOMAIN.conf	2020-06-02 17:20:59.892270352 -0400
@@ -1,7 +1,7 @@
-	DocumentRoot "/var/www/domains/MYDOMAIN/dav/htdocs"
+	DocumentRoot "/var/www/domains/MYDOMAIN/dav/htdocs/"

After solving that strange problem, I was at least prompted for credentials when I accessed the calendar URL from above. After logging in, I ran into one more problem, though:

Class 'XMLWriter' not found

This problem was much easier to fix. I simply needed to add the ‘xmlwriter‘ USE flag to dev-lang/php (I also added ‘xmlreader‘ for good measure), emerge it again, and restart PHP-FPM. Other distributions (like CentOS) will likely need to install the ‘php-xml’ package (or something similar).
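
On Gentoo that roughly translates to something like the following (the package.use file name is arbitrary, and the PHP-FPM init script is usually slot-specific, e.g. php-fpm-php7.4):

# echo 'dev-lang/php xmlreader xmlwriter' >> /etc/portage/package.use/php
# emerge --ask --oneshot dev-lang/php
# rc-service php-fpm-php7.4 restart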

After that fix, I am happy to report that Baïkal 0.7.0 is working beautifully, and I have my calendars synced across all my devices. I personally use Thunderbird with Lightning on my computers, and a combination of DAVx5 with Simple Calendar Pro on my Android devices.

May 10 2020

200th Gentoo Council meeting

Gentoo News (GentooNews) May 10, 2020, 5:00

Way back in 2005, the reorganization of Gentoo led to the formation of the Gentoo Council, a steering body elected annually by the Gentoo developers. Forward 15 years, and today we had our 200th meeting! (No earth shaking decisions were taken today though.) The logs and summaries of all meetings can be read online on the archive page.

council group photo

May 08 2020

Reviving Gentoo Bugday

Gentoo News (GentooNews) May 08, 2020, 5:00

Reviving an old tradition, the next Gentoo Bugday will take place on Saturday 2020-06-06. Let’s contribute to Gentoo and fix bugs! We will focus on two topics in particular:

  • Adding or improving documentation on the Gentoo wiki
  • Fixing packages that fail with -fno-common (bug #705764)

Join us on channel #gentoo-bugday, freenode IRC, for real-time help. See you on 2020-06-06!

bug outline

April 16 2020

Why I stopped fuzzing research

Agostino Sarubbo (ago) April 16, 2020, 17:53

If you followed me in the past, you may have noticed that I stopped fuzzing research. During this time many people have asked me why…so instead of repeating the same answer every time, why not write a few lines about it…

While fuzz research was in my case fully automated, if you want to do a nice job you should:
– Communicate with upstream by making an exhaustive bug-report;
– Publish an advisory that collects all the needed info (affected versions, fixed version, commit fix, reproducer, poc, and so on) otherwise you force each downstream maintainer to do that by himself.

What happens in the majority of cases instead?
– When there is no ticketing system, upstream maintainers do not answer your emails but fix the issues silently, so if you aren’t familiar with the code or if you don’t have time for investigations, you don’t have enough data to post. Even if you had time and you knew the code, you could still make a mistake; so why take the responsibility of pointing out commit fixes and so on?
– If you pass the above step, you have to request a CVE. In the past it was enough to publish on oss-security and you would get a CVE from a member of the Mitre team. Nowadays you have to fill out a request that includes all the mentioned data and………wait 😀

If you pass the above two points and publish your advisory, what’s the next step? Stay tuned and wait for duplicates 😀.

Let’s see a real example:
In the past I did fuzzing research on audiofile. Here is a screenshot of the issues without any words in the search field:

Do you see anything strange? Yeah there is clearly a duplicate.
I’m showing this image to point out the fact that, in order to avoid the duplicate, it would have been enough to look a little further below, so I am wondering:
if you are able to compile the software, use ASAN and use AFL, why aren’t you able to do a simple search to check whether the issue has already been filed?
For now, the only answer that I can think of is: everyone is hungry to find security issues and be the discoverer of a CVE.
Let’s clarify: if you find security issues by fuzzing you are not a security researcher at all and you will not be more palatable to the cybersecurity world. You are just creating CVE confusion for the rest of us.

On the other side, dear Mitre: you force us to fill out an exhaustive request so, since you have all the data, why are you mistakenly assigning CVEs to already-reported issues?

The first few times I saw these duplicates, I tried to report them but, unfortunately, it’s not my job and I found it very hard to do because of the sheer volume.

So, in short, I stopped fuzzing research because due to the current state of things, it’s a big waste of time.

April 15 2020

Spam increase due to SpamAssassin Bayes database not available for scanning

Nathan Zachary (nathanzachary) April 15, 2020, 20:04

For approximately the past six weeks or so, I’ve noticed an uptick in the amount of spam getting through (and delivered) on my primary mail server. At first the increase in false negatives (meaning spam not getting flagged as such) wasn’t all that bad, so I didn’t think much of it. However, starting last week and especially this week, the increase was so dramatic that it prompted me to look further into the problem.

I started by looking through my SpamAssassin and amavis settings to make sure that nothing was blatantly wrong, but nothing stood out as having recently changed. I made sure that I had the required Perl modules for all of SpamAssassin’s filtering, and again, nothing had recently changed. Coming up empty-handed, I decided to take a look at some headers for an email that came through even though it was spam:

X-Spam-Status: No, score=1.502 required=3.2 tests=[
	DKIM_SIGNED=0.1, DKIM_VALID=-0.1, FREEMAIL_FORGED_FROMDOMAIN=0.25,
	FREEMAIL_FROM=0.001, HEADER_FROM_DIFFERENT_DOMAINS=0.25,
	HTML_MESSAGE=0.001]

I thought that possibly something had changed with how SpamAssassin assigned points for various tests, and temporarily dropped the score required for spam flagging to 1.4. Delving more deeply, though, I found that the point assignments had not changed, so I reverted to 3.2 and kept investigating. After looking again, I noticed that one key test wasn’t showing in the ‘X-Spam-Status’ header for this email: Bayesian filtering. Normally, there would be some type of reference to ‘BAYES_%=#’ (where % represents the chance that the Bayesian filter thought that the message could be spam and # represents the score assigned to the email based on that chance) in the spam header. However, it was no longer showing up, which indicated to me that the Bayes filters weren’t running.

I then started with some basic Bayes troubleshooting steps, and found some clues. By analysing the output of spamassassin -D --lint, I saw that there could be a problem with the Bayes database:

Apr 14 22:10:41.875 [20701] dbg: plugin: loading Mail::SpamAssassin::Plugin::Bayes from @INC
Apr 14 22:10:42.061 [20701] dbg: config: fixed relative path: /var/lib/spamassassin/3.004004/updates_spamassassin_org/23_bayes.cf
Apr 14 22:10:42.061 [20701] dbg: config: using "/var/lib/spamassassin/3.004004/updates_spamassassin_org/23_bayes.cf" for included file
Apr 14 22:10:42.061 [20701] dbg: config: read file /var/lib/spamassassin/3.004004/updates_spamassassin_org/23_bayes.cf
Apr 14 22:10:42.748 [20701] dbg: plugin: Mail::SpamAssassin::Plugin::Bayes=HASH(0x55f1700749b8) implements 'learner_new', priority 0
Apr 14 22:10:42.748 [20701] dbg: bayes: learner_new self=Mail::SpamAssassin::Plugin::Bayes=HASH(0x55f1700749b8), bayes_store_module=Mail::SpamAssassin::BayesStore::DBM
Apr 14 22:10:42.759 [20701] dbg: bayes: learner_new: got store=Mail::SpamAssassin::BayesStore::DBM=HASH(0x55f170db5ba8)
Apr 14 22:10:42.759 [20701] dbg: plugin: Mail::SpamAssassin::Plugin::Bayes=HASH(0x55f1700749b8) implements 'learner_is_scan_available', priority 0
Apr 14 22:10:42.759 [20701] dbg: bayes: tie-ing to DB file R/O /var/amavis/.spamassassin/bayes_toks
Apr 14 22:10:42.760 [20701] dbg: bayes: tie-ing to DB file R/O /var/amavis/.spamassassin/bayes_seen
Apr 14 22:10:42.760 [20701] dbg: bayes: found bayes db version 3
Apr 14 22:10:42.760 [20701] dbg: bayes: DB journal sync: last sync: 1586894330
Apr 14 22:10:42.760 [20701] dbg: bayes: not available for scanning, only 0 ham(s) in bayes DB < 200
Apr 14 22:10:42.760 [20701] dbg: bayes: untie-ing
Apr 14 22:10:42.764 [20701] dbg: bayes: tie-ing to DB file R/O /var/amavis/.spamassassin/bayes_toks
Apr 14 22:10:42.764 [20701] dbg: bayes: tie-ing to DB file R/O /var/amavis/.spamassassin/bayes_seen
Apr 14 22:10:42.765 [20701] dbg: bayes: found bayes db version 3
Apr 14 22:10:42.765 [20701] dbg: bayes: DB journal sync: last sync: 1586894330
Apr 14 22:10:42.765 [20701] dbg: bayes: not available for scanning, only 0 ham(s) in bayes DB < 200
Apr 14 22:10:42.765 [20701] dbg: bayes: untie-ing

In particular, the following lines indicated a problem to me:

Apr 14 22:10:42.760 [20701] dbg: bayes: not available for scanning, only 0 ham(s) in bayes DB < 200
<snip>
Apr 14 22:10:42.765 [20701] dbg: bayes: not available for scanning, only 0 ham(s) in bayes DB < 200
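
As a side note, sa-learn can confirm those counts directly: run as the user that owns the Bayes database (here presumably the amavis user), it prints the number of ham and spam messages the database has learned:

# sa-learn --dump magic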

Thinking back, I remembered that there were some changes to the amavis implementation in Gentoo that caused me problems in late February of 2020. One of those changes was relocating the amavis user’s home/runtime directory from /var/amavis/ to /var/lib/amavishome/. That’s when I saw it in the debugging output:

Apr 14 22:10:42.759 [20701] dbg: bayes: tie-ing to DB file R/O /var/amavis/.spamassassin/bayes_toks
Apr 14 22:10:42.760 [20701] dbg: bayes: tie-ing to DB file R/O /var/amavis/.spamassassin/bayes_seen

The directory for the Bayes database shouldn’t be /var/amavis/.spamassassin/bayes* any longer, but instead should be /var/lib/amavishome/.spamassassin/bayes*. I made that change:

# grep bayes_path /etc/mail/spamassassin/local.cf 
bayes_path /var/lib/amavishome/.spamassassin/bayes

and restarted both amavis and spamd, and now I could see the Bayes filter in the ‘X-Spam-Status’ header:

X-Spam-Status: Yes, score=18.483 required=3.2 tests=[BAYES_99=5.75,
	BAYES_999=8, FROM_SUSPICIOUS_NTLD=0.499,
	FROM_SUSPICIOUS_NTLD_FP=0.514, HTML_MESSAGE=0.001,
	HTML_OFF_PAGE=0.927, PDS_OTHER_BAD_TLD=1.999, RDNS_NONE=0.793]

After implementing the change for the Bayes database location in amavis, I have seen the false negative level drop back to where it used to be. 🙂

Cheers,
Zach

April 14 2020

py3status v3.28 – goodbye py2.6-3.4

Alexys Jacob (ultrabug) April 14, 2020, 14:51

The newest version of py3status starts to enforce the deprecation of Python 2.6 to 3.4 (included) initiated by Thiago Kenji Okada more than a year ago and orchestrated by Hugo van Kemenade via #1904 and #1896.

Thanks to Hugo, I discovered a nice tool by @asottile to update your Python code base to recent syntax sugars called pyupgrade!

Debian buster users might be interested in the installation war story that @TRS-80 kindly described and the final (and documented) solution found.

Changelog since v3.26
  • drop support for EOL Python 2.6-3.4 (#1896), by Hugo van Kemenade
  • i3status: support read_file module (#1909), by @lasers thx to @dohseven
  • clock module: add “locale” config parameter to change time representation (#1910), by inemajo
  • docs: update debian instructions fix #1916
  • mpd_status module: use currentsong command if possible (#1924), by girst
  • networkmanager module: allow using the currently active AP in formats (#1921), by Benoît Dardenne
  • volume_status module: change amixer flag ordering fix #1914 (#1920)

Thank you contributors
  • Thiago Kenji Okada
  • Hugo van Kemenade
  • Benoît Dardenne
  • @dohseven
  • @inemajo
  • @girst
  • @lasers
