Fedora People

OpenWRT and setting up 802.11r Wi-Fi

Posted by Guillaume Kulakowski on November 25, 2020 12:52 PM

The OpenWRT adventure continues, this time with the setup of 802.11r Wi-Fi. First of all, what is 802.11r Wi-Fi? To simplify to the extreme, it means being able to move from one Wi-Fi access point to another without any interruption. To picture it: I am in my living room, connected to the access point the […]

The post OpenWRT and setting up 802.11r Wi-Fi appeared first on Guillaume Kulakowski's blog.

How to install the NVIDIA drivers on Fedora 33 with Hybrid Switchable Graphics

Posted by Mohammed Tayeh on November 25, 2020 07:27 AM

This is a guide on how to install the NVIDIA proprietary drivers on Fedora 33 with Hybrid Switchable Graphics (Intel + NVIDIA GeForce).

Back up important files before you start the installation. This is of course at your own risk: graphics cards, components and monitors differ, and some combinations might produce totally unexpected results.

Identify your NVIDIA graphics card

lspci -vnn | grep VGA

The output of the above command will look like this:

00:02.0 VGA compatible controller [0300]: Intel Corporation UHD Graphics [8086:9bc4] (rev 05) (prog-if 00 [VGA controller])
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU116M [GeForce GTX 1660 Ti Mobile] [10de:2191] (rev a1) (prog-if 00 [VGA controller])

Enable RPM Fusion

sudo dnf install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

Update your system

sudo dnf update

It is recommended to reboot your system after the update.

Install the NVIDIA driver

Install akmod-nvidia:

sudo dnf install gcc kernel-headers kernel-devel akmod-nvidia xorg-x11-drv-nvidia xorg-x11-drv-nvidia-libs xorg-x11-drv-nvidia-libs.i686

Install CUDA support:

sudo dnf install xorg-x11-drv-nvidia-cuda

The drivers are now installed. Rebuild the kernel modules and regenerate the initramfs with the commands below, then reboot:

sudo akmods --force
sudo dracut --force

And that's it: the switch to the NVIDIA GPU happens automatically when needed.
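
To verify that offloading works, one quick check (not part of the original guide; glxinfo is provided by the glx-utils package on Fedora) is to request PRIME render offload for a single program:

# Ask this one program to render on the NVIDIA GPU via PRIME render offload
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"
# The output should name the GeForce card instead of the Intel GPU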

Note: this guide disables Wayland; only X.org is supported.

To check the NVIDIA processes, run the nvidia-smi command:

nvidia-smi
Wed Nov 25 09:51:36 2020       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 455.45.01    Driver Version: 455.45.01    CUDA Version: 11.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce GTX 166...  Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   33C    P8     4W /  N/A |      5MiB /  5944MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1776      G   /usr/libexec/Xorg                   4MiB |
+-----------------------------------------------------------------------------+

Screenshots

NVIDIA X Server Settings, nvidia-smi output, and the Fedora "About" page.

Fedora program update: 2020-48

Posted by Fedora Community Blog on November 24, 2020 09:22 PM
Fedora Program Manager weekly report on Fedora Project development and progress

Here’s your report of what has happened in Fedora this week. Elections voting is open through 3 December. Fedora 31 has reached end of life. EPEL 6 will reach end-of-life on Monday. There will be no FPgM office hours this week (25 November) due to PTO. Announcements Calls for Participation Help wanted Upcoming meetings Releases […]

The post Fedora program update: 2020-48 appeared first on Fedora Community Blog.

Web interfaces for your syslog server – an overview

Posted by Peter Czanik on November 24, 2020 12:29 PM

This is the 2020 edition of my most read blog entry about syslog-ng web-based graphical user interfaces (web GUIs). Many things have changed in the past few years. In 2011, only a single logging as a service solution was available, while nowadays, I regularly run into others. Also, while some software disappeared, the number of logging-related GUIs is growing. This is why in this post, I will mostly focus on generic log management and open source instead of highly specialized software, like SIEMs.

Why is grep not enough?

Centralized event logging has been an important part of IT for many years for many reasons. Firstly, it is more convenient to browse logs in a central location rather than viewing them on individual machines. Secondly, central storage is also more secure. Even if logs stored locally are altered or removed, you can still check the logs on the central log server. Finally, compliance with different regulations also makes central logging necessary.

System administrators often prefer to use the command line. Utilities such as grep and AWK are powerful tools, but complex queries can be completed much faster with logs indexed in a database and a web interface. In the case of large amounts of messages, a web-based database solution is not just convenient, it is a necessity. With tens of thousands of incoming messages per second, the indexes of log databases still give Google-like response times even for the most complex queries, while traditional text-based tools are not able to scale as efficiently.

Why still syslog-ng?

Much of the software used for log analysis comes with its own log aggregation agents. So why should you still use syslog-ng? As organizations grow, the IT staff starts to diversify. Separate teams are created for operations, development and security, each with its own specialized needs in log analysis. And even the business side often needs log analysis as an input for business decisions. You can quickly end up with 4-5 different log analysis and aggregation systems running in parallel and working from the very same log messages.

This is where syslog-ng can come handy: creating a dedicated log management layer, where syslog-ng collects all of the log messages centrally, does initial basic log analysis, and feeds all the different log analysis software with relevant log messages. This can save you time and resources in multiple ways:

  • You only have to learn one tool instead of many.

  • Only a single tool to push through security and operations teams.

  • Fewer computing resources are used on the clients.

  • Logs travel only once over the network.

  • Long term archival in a single location with syslog-ng instead of using multiple log analysis software.

  • Filtering on the syslog-ng side can save significantly on the hardware costs of the log analysis software, and also on licensing in case of a commercial solution.

The syslog-ng application can collect both system and application logs, and can be installed both as a client and a server. Thus, you have a single application to install for log management everywhere on your network. It can reliably collect and transport huge amounts of log messages, parse (“look into”) your log messages, enrich them with geographical location and other extra data, making filters and thus, log routing, much more accurate.

Logging as a Service (LaaS)

A couple of years ago, Loggly was the pioneer of logging as a service (LaaS). Today, there are many other LaaS providers (Papertrail, Logentries, Sumo Logic, and so on) and syslog-ng works perfectly with all of them.

Structured fields and name-value pairs in logs are increasingly important, as they are easier to search, and it is easier to create meaningful reports from them. The more recent IETF RFC 5424 syslog standard supports structured data, but it is still not in widespread use.

People started to use JSON embedded into legacy (RFC 3164) syslog messages. The syslog-ng application can send JSON-formatted messages – for example, you can convert the following messages into structured JSON messages:

  • RFC5424-formatted log messages.

  • Windows EventLog messages received from the syslog-ng Agent for Windows application.

  • Name-value pairs extracted from a log message with PatternDB or the CSV parser.

Loggly and other services can receive JSON-formatted messages, and make them conveniently available from the web interface.

A number of LaaS providers are already supported by syslog-ng out of the box. If your service of choice is not yet directly supported, the following blog can help you create a new LaaS destination: https://www.syslog-ng.com/community/b/blog/posts/how-to-use-syslog-ng-with-laas-and-why

Some non-syslog-ng-based solutions

Before focusing on the solutions with syslog-ng at their heart, I would like to say a few words about the others, some of which were included in the previous edition of this blog.

LogAnalyzer from the makers of Rsyslog was a simple, easy to use PHP application a few years ago. While it has developed quite a lot, recently I could not get it to work with syslog-ng. Some of the popular monitoring tools have syslog support to some extent, for example, Nagios, Cacti and several others. I have tested some of these, and have even sent patches and bug reports to enhance their syslog-ng support, but syslog is clearly not their focus, just one of the possible inputs.

The ELK stack (Elasticsearch + Logstash + Kibana) and Graylog2 have become popular recently, but they have their own log collectors instead of syslog-ng, and syslog is just one of many log sources. Syslog support is quite limited both in performance and protocol support. They recommend using file readers for collecting syslog messages, but that increases complexity, as it is an additional software on top of syslog(-ng), and filtering still needs to be done on the syslog side. Note that syslog-ng can send logs to Elasticsearch natively, which can greatly simplify your logging architecture.

Collecting and displaying metrics data

You can collect metrics data using syslog-ng. Examples include netdata or collectd. You can send the collected data to Graphite or Elasticsearch. Graphite has its own web interface, while you can use Kibana to query and visualize data collected to Elasticsearch.

Another option is to use Grafana. Originally, it was developed as an alternative web interface to the Graphite databases, but now it can also visualize data from many more data sources, including Elasticsearch. It can combine multiple data sources to a single dashboard and provides fine-grained access control.

Loki by Grafana is one of the latest applications that lets you aggregate and query log messages, and of course, to visualize logs using Grafana. It does not index the contents of log messages, only the labels associated with logs. This way, processing and storing log messages requires less resources, making Loki more cost-effective. Promtail, the log collector component of Loki, can collect log messages using the new, RFC 5424 syslog protocol. Learn here how syslog-ng can send its log messages to Loki.

Splunk

One of the most popular web-based interfaces for log messages is Splunk. A returning question is whether to use syslog-ng or Splunk. Well, the issue is a bit of apples vs. oranges: they do not replace, but rather complement each other. As I already mentioned in the introduction, syslog-ng is good at reliably collecting and processing huge amounts of data. Splunk, on the other hand, is good at analyzing log messages for various purposes. Learn more about how you can integrate syslog-ng with Splunk from our white paper!

Syslog-ng based solutions

Here I show a number of syslog-ng based solutions. While every software described below is originally based on syslog-ng Open Source Edition (except for One Identity’s own syslog-ng Store Box (SSB)), there are already some large-scale deployments available also with syslog-ng Premium Edition as their syslog server.

  • The syslog-ng application and SSB focus on generic log management tasks and compliance.

  • LogZilla focuses on logs from Cisco devices.

  • Security Onion focuses on network and host security.

  • Recent syslog-ng releases are also able to store log messages directly into Elasticsearch, a distributed, scalable database system popular in DevOps environments, which enables the use of Kibana for analyzing log messages.

Benefits of using syslog-ng PE with these solutions include the logstore, a tamper-proof log storage (even if it means that your logs are stored twice), Windows support, and enterprise grade support.

LogZilla

LogZilla is the commercial reincarnation of one of the oldest syslog-ng web GUIs: PHP-Syslog-NG. It provides the familiar user interface of its predecessor, but also includes many new features. The user interface supports Cisco Mnemonics, extended graphing capabilities, and e-mail alerts. Behind the scenes, LDAP integration, message de-duplication, and indexing for quick searching were added for large datasets.

Over the past few years, it received many small improvements. It became faster, and role-based access control was added, as well as the live tailing of log messages. Of course, all these new features come with a price; the free edition, which I have often recommended for small sites with Cisco logs, is completely gone now.

A few years ago, a complete rewrite became available with many performance improvements under the hood and a new dashboard on the surface. Development never stopped, and now LogZilla can parse and enrich log messages, and can also automatically respond to events.

Therefore, it is an ideal solution for a network operations center (NOC) full of Cisco devices.

Web site: http://logzilla.net/

Security Onion

One of the most interesting projects utilizing syslog-ng is Security Onion, a free and open source Linux distribution for threat hunting, enterprise security monitoring, and log management. It is utilizing syslog-ng for log collection and log transfer, and uses the Elastic stack to store and search log messages. Even if you do not use its advanced security features, you can still use it for centralized log collection and as a nice web interface for your logs. But it is also worth getting acquainted with its security monitoring features, as it can provide you some useful insights about your network. Best of all, Security Onion is completely free and open source, with commercial support available for it.

You can learn more about it at https://www.syslog-ng.com/community/b/blog/posts/syslog-ng-and-security-onion

Elasticsearch and Kibana

Elasticsearch is gaining momentum as the ultimate destination for log messages. There are two major reasons for this:

  • You can store arbitrary name-value pairs coming from structured logging or message parsing.

  • You can use Kibana as a search and visualization interface.

The syslog-ng application can send logs directly into Elasticsearch. We call this an ESK stack (Elasticsearch + syslog-ng + Kibana).

Learn how you can simplify your logging to Elasticsearch by using syslog-ng: https://www.syslog-ng.com/community/b/blog/posts/logging-to-elasticsearch-made-simple-with-syslog-ng

syslog-ng Store Box (SSB)

SSB is a log management appliance built on syslog-ng Premium Edition. SSB adds a powerful indexing engine, authentication and access control, customized reporting capabilities, and an easy-to-use web-based user interface.

Recent versions introduced AWS and Azure cloud support and horizontal scalability using remote logspaces. The new content-based alerting can send an e-mail alert whenever a match between the contents of a log message and a search expression is found.

SSB is really fast when it comes to indexing and searching log data. To put this scalability in context, the largest SSB appliance stores up to 10 terabytes of uncompressed, raw logs. With SSB’s current indexing performance of 100,000 events per second, that equates to approximately 8.6 billion logs per day or 1.7 terabytes of log data per day (calculating with an average event size of 200 bytes). Using compression, a single, large SSB appliance could store approximately one month of log data for an enterprise generating 1.7 terabytes of event data a day. This compares favorably to other solutions that require several nodes for collecting this amount of messages, and even more additional nodes for storing them. While storing logs to the cloud is getting popular, on-premise log storage is still a lot cheaper for a large amount of logs.

The GUI makes searching logs, configuring and managing the SSB easy. The search interface allows you to use wildcards and Boolean operators to perform complex searches, and drill down on the results. You can gain a quick overview and pinpoint problems fast by generating ad-hoc charts from the distribution of the log messages.

Configuring the SSB is done through the user interface. Most of the flexible filtering, classification and routing features in the syslog-ng Open Source and Premium Editions can be configured with the UI. Access and authentication policies can be set to integrate with Microsoft Active Directory, LDAP and RADIUS servers. The web interface is accessible through a network interface dedicated to the management traffic. This management interface is also used for backups, sending alerts, and other administrative traffic.

SSB is a ready-to-use appliance, which means that no software installation is necessary. It is easily scalable, because SSB is available both as a virtual machine and as a physical appliance, ranging from entry-level servers to multiple-unit behemoths. For mission critical applications, you can use SSB in High Availability mode. Enterprise-level support for SSB and syslog-ng PE is also available.

Read more about One Identity’s syslog-ng and SSB products here.

Request evaluation version / callback.

12/20 Elections for the Council, FESCo and Mindshare open for a few more days

Posted by Charles-Antoine Couret on November 24, 2020 12:17 PM

Since the Fedora Project is community-driven, part of the membership of the following bodies must be renewed: Council, FESCo and Mindshare. And it is the contributors who decide. Each candidate of course has a platform and a track record that they want to put forward during their term to steer the Fedora Project in certain directions. I invite you to study the proposals of the different candidates.

I voted

To vote, you need an active FAS account and to make your choice on the voting site. You have until Friday, December 4 at 1 a.m. French time to do so, so don't wait too long.

By the way, as with the choice of the supplemental wallpapers, you can pick up a badge if you click on a link in the interface after taking part in a vote.

I will take this opportunity to summarize the role of each of these committees, to clarify how decisions are made in the Fedora Project and also to illustrate its community-driven nature.

Council

The Council is what you might call the project's grand council. It is Fedora's highest decision-making body. The Council defines the long-term goals of the Fedora Project and helps organize the project to reach them. This happens notably through discussions that are open and transparent to the community.

It also manages the financial side. This covers in particular the budgets allocated to organize events, produce goodies, and fund initiatives that help meet those goals. Finally, it is responsible for settling major interpersonal conflicts within the project, as well as the legal matters related to the Fedora trademark.

The roles within the Council are complex.

Members with full voting rights

First there is the FPL (Fedora Project Leader), who leads the Council and is de facto the representative of the project. The role involves keeping the Council's agenda and discussions on track, representing the Fedora Project as a whole, and helping to reach consensus during debates. This role is held by a Red Hat employee, chosen with the consent of the Council itself.

There is also the FCAIC (Fedora Community Action and Impact Coordinator), who acts as the liaison between the community and Red Hat to facilitate and encourage cooperation. As with the FPL, this position is held by a Red Hat employee with the approval of the Council.

There are two seats dedicated to technical representation and to the more marketing / ambassador side of the project. These two seats are filled by appointment within the bodies dedicated to those activities: FESCo and Mindshare. They are community seats, but only those committees decide who holds them.

That leaves two fully open community seats, for which anyone can run or vote. They represent the other areas of activity, such as translation or documentation, as well as the community voice in the broadest possible sense. It is for one of these seats that voting is open this week!

Members with partial voting rights

A Diversity Advisor is appointed by the FPL, with the support of the Council, to foster the inclusion within the project of groups that are most often discriminated against. Their goal is to define programs to address this issue and to resolve the related conflicts that may arise.

A Fedora Program Manager handles the schedule of the various Fedora releases. They make sure deadlines are respected, and track features and test cycles. They also act as the Council's secretary. This role is held by a Red Hat employee, again with the approval of the Council.

FESCo

FESCo (the Fedora Engineering Steering Committee) is a body made up entirely of elected members, fully dedicated to the technical side of the Fedora Project.

In particular, it handles the following topics:

  • New features of the distribution;
  • Sponsors for the packager role (those who can then mentor a newcomer);
  • The creation and management of SIGs (Special Interest Groups) to organize teams around specific topics;
  • The packaging procedure for packages.

The chair of this group rotates. Its 9 members are elected for one year, with each election renewing half of the body. This time, 5 seats are up for renewal.

Mindshare

Mindshare is an evolution of FAmSCo (the Fedora Ambassadors Steering Committee), which it replaces. It is the equivalent of FESCo for the more human side of the project. While FESCo is mostly concerned with packagers, this committee focuses instead on ambassadors and new contributors.

Here is a sample of the areas it is responsible for, inherited from FAmSCo:

  • Growing the number of ambassadors through mentoring;
  • Encouraging the creation and development of more local communities, such as the French community for example;
  • Tracking the events that ambassadors take part in;
  • Allocating resources to the various communities or activities, according to needs and interest;
  • Handling conflicts between ambassadors.

And its new responsibilities:

  • Communication between teams, especially between engineering and marketing;
  • Motivating contributors to get involved in different working groups;
  • Welcoming new contributors to guide them, and trying to foster the inclusion of people who are often under-represented in Fedora (women, people from outside the US and Europe, students, etc.);
  • Managing the marketing team.

The committee has 9 members: a chair, 2 from the ambassadors, one from design and web, one from documentation, one from marketing, one from CommOps, and the last two are elected. It is for one of these last seats that the vote is open.

End of life of Fedora 31

Posted by Charles-Antoine Couret on November 24, 2020 08:37 AM

On Tuesday, November 24, 2020, Fedora 31 was declared end of life.

What does that mean?

One month after the release of Fedora version n, here Fedora 33, version n-2 (so Fedora 31) is declared end of life.

That month gives users time to upgrade, which means that on average a release is officially maintained for about 13 months.

Indeed, the end of life of a release means that it no longer receives updates and no more bugs will be fixed. For security reasons, with vulnerabilities left unpatched, users of Fedora 31 and earlier are strongly advised to upgrade to Fedora 33 or 32.

What should you do?

If you are affected, you need to upgrade your systems. You can download more recent CD or USB images.

It is also possible to upgrade without reinstalling, via DNF or GNOME Software.
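
For reference, the typical DNF system upgrade sequence looks roughly like the following sketch (not from the original article; adjust --releasever to the release you are targeting):

# Bring the current release fully up to date first
sudo dnf upgrade --refresh
# Install the system-upgrade plugin if it is not already present
sudo dnf install dnf-plugin-system-upgrade
# Download the Fedora 33 packages, then reboot into the upgrade
sudo dnf system-upgrade download --releasever=33
sudo dnf system-upgrade reboot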

GNOME Software should also have notified you with a pop-up about the availability of Fedora 32 or 33. Feel free to start the upgrade that way.

How to automate a deploy with GitHub actions via SSH

Posted by Mohammed Tayeh on November 24, 2020 08:07 AM

Introduction

GitHub Actions is an API for cause and effect on GitHub: orchestrate any workflow, based on any event, while GitHub manages the execution, provides rich feedback, and secures every step along the way.

In this article, we will be exploring a hands-on approach to managing your CD processes using GitHub Actions via SSH.

The workflow:

  1. Connect to VPS via SSH
  2. Move to project directory
  3. git pull the new changes
  4. execute any necessary command

Prerequisites

  • A GitHub account. If you don’t have one, you can sign up here
  • A server with SSH access
  • Basic knowledge of writing valid YAML
  • Basic knowledge of GitHub and Git

Configuring workflows

We should create a YAML file under .github/workflows/, for example .github/workflows/ci.yml, and add this code to the file:

name: CI

on: [push]

jobs:
  deploy:
    if: github.ref == 'refs/heads/master'
    runs-on: [ubuntu-latest]
    steps:
      - uses: actions/checkout@v1
      - name: Push to server
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.SERVER_IP }}
          username: ${{ secrets.SERVER_USERNAME }}
          key: ${{ secrets.KEY }}
          passphrase: ${{ secrets.PASSPHRASE }} 
          script: cd ${{ secrets.PROJECT_PATH }} && git pull

After adding this file, go to Settings -> Secrets and add the secrets SERVER_IP, SERVER_USERNAME, KEY, PASSPHRASE and PROJECT_PATH. (Screenshot: GitHub repository secrets settings.)
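
If you do not already have a key pair for this, a minimal sketch for creating a dedicated deploy key could look like the following (the file name github-deploy-key and the user/host are placeholders):

# Generate a dedicated key pair for the workflow
ssh-keygen -t ed25519 -f github-deploy-key -C "github-actions-deploy"
# Install the public half for the deploy user on the server
ssh-copy-id -i github-deploy-key.pub deploy-user@your-server
# Paste the contents of the private file (github-deploy-key) into the KEY secret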

Note: you can use a password instead of keys; just replace the key and passphrase lines in the workflow file with password: ${{ secrets.PASSWORD }} and add the password to the secrets.

I use GitHub secrets to keep sensitive information hidden.

You can also add more commands to the script line as needed.

The next time we push to the master branch, the changes will automatically be deployed to our server.

(Screenshot: the GitHub Actions job run.)

Hoarding AD groups through wbinfo

Posted by Ingvar Hagelund on November 24, 2020 07:44 AM

In a samba setup where users and groups are fetched from Active Directory to be used in a unix/linux environment, AD may prevent the Samba winbind tools like wbinfo from recursing into its group structure. You may get groups and users and their corresponding gids and uids, but you may not get the members of a group.

It is usually possible to do the opposite, that is, probe a user object and get the groups that user is a member of. Here is a little script that collects all users, probing AD for the groups of each and every user, and sorting and putting it together. In Perl, of course.

https://github.com/ingvarha/groupmembers
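
The linked script is written in Perl; as a rough shell sketch of the same idea (not the author's code, and assuming winbind is plugged into NSS so that id resolves AD users), the inversion could look like this:

# For every AD user known to winbind, list the groups the user belongs to,
# then invert the mapping into "group: member member ...".
# Group names containing spaces would need extra care; this is only a sketch.
for user in $(wbinfo -u); do
    for group in $(id -Gn "$user" 2>/dev/null); do
        echo "$group $user"
    done
done | sort -u | awk '{m[$1] = m[$1] " " $2} END {for (g in m) print g ":" m[g]}'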

Site and blog migration

Posted by Adam Williamson on November 24, 2020 12:36 AM

So I've been having an adventurous week here at HA Towers: I decided, after something more than a decade, I'm going to get out of the self-hosting game, as far as I can. It makes me a bit sad, because it's been kinda cool to do and I think it's worked pretty well, but I'm getting to a point where it seems silly that a small part of me has to constantly be concerned with making sure my web and mail servers and all the rest of it keep working, when the services exist to do it much more efficiently. It's cool that it's still possible to do it, but I don't think I need to actually do it any more.

So, if you're reading this...and I didn't do something really weird...it's not being served to you by a Fedora system three feet from my desk any more. It's being served to you by a server owned by a commodity web hoster...somewhere in North America...running Lightspeed (boo) on who knows what OS. I pre-paid for four years of hosting before realizing they were running proprietary software, and I figured what the hell, it's just a web server serving static files. If it starts to really bug me I'll move it, and hopefully you'll never notice.

All the redirects for old Wordpress URLs should still be in place, and also all URLs for software projects I used to host here (fedfind etc) should redirect to appropriate places in Pagure and/or Pypi. Please yell if you see something that seems to be wrong. I moved nightlies and testcase_stats to the Fedora openQA server for now; that's still a slightly odd place for them to be, but at least it's in the Fedora domain not on my personal domain, and it was easiest to do since I have all the necessary permissions, putting them anywhere else would be more work and require other people to do stuff, so this is good enough for now. Redirects are in place for those too.

I've been working on all the other stuff I self-host, too. Today I set up all the IRC channels I regularly read in my Matrix account and I'm going to try using that setup for IRC instead of my own proxy (which ran bip). It seems to work okay so far. I'm using the Quaternion client for now, as it seems to have the most efficient UI layout and isn't a big heavy wrapper around a web client. Matrix is a really cool thing, and it'd be great to see more F/OSS projects adopting it to lower barriers to entry without compromising F/OSS principles; IRC really is getting pretty creaky these days, folks. There's some talk about both Fedora and GNOME adopting Matrix officially, and I really hope that happens.

I also set up a Kolab Now account and switched my contacts and calendar to it, which was nice and easy to do (download the ICS files from Radicale, upload them to Kolab, switch my accounts on my laptops and phone, shut down the Radicale server, done). I also plan to have it serve my mail, but that migration is going to be the longest and most complicated as I'll have to move several gigs of mail and re-do all my filters. Fun!

I also refreshed my "desktop" setup; after (again) something more than a decade having a dedicated desktop PC I'm trying to roll without one again. Back when I last did this, I got to resenting the clunky nature of docking at the time, and also I still ran quite a lot of local code compiles and laptops aren't ideal for that. These days, though, docking is getting pretty slick, and I don't recall the last time I built anything really chunky locally. My current laptop (a 2017 XPS 13) should have enough power anyhow, for the occasional case. So I got me a fancy Thunderbolt dock - yes, from the Apple store, because apparently no-one else has it in stock in Canada - and a 32" 4K monitor and plugged the things into the things and waited a whole night while all sorts of gigantic things I forgot I had lying around my home directory synced over to the laptop and...hey, it works. Probably in two months I'll run into something weird that's only set up on the old desktop box, but hey.

So once I have all this wrapped up I'm aiming to have substantially fewer computers lying around here and fewer Sysadmin Things taking up space in my brain. At the cost of being able to say I run an entire domain out of a $20 TV stand in my home office. Ah, well.

Oh, I also bought a new domain as part of this whole thing, as a sort of backup / staging area for transitions and also possibly as an alternative vanity domain. Because it is sometimes awkward telling people yes, my email address is happyassassin.net, no, I'm not an assassin, don't worry, it's a name based on a throwaway joke from university which I probably wouldn't have picked if I knew I'd be signing up for bank accounts with it fifteen years later. So if I do start using it for stuff, here is your advance notice that yeah, it's me. This name I just picked to be vaguely memorable and hopefully to be entirely inoffensive, vaguely professional-sounding, and composed of sounds that are unambiguous when read over an international phone line to a call centre in India. It doesn't mean anything at all.

fwupd 1.5.2

Posted by Richard Hughes on November 23, 2020 04:36 PM

The last few posts I did about fwupd releases were very popular, so I’ll do the same thing again: I’ve just tagged fwupd 1.5.2 – This release changes a few things:

  • Add a build time flag to indicate if packages are supported – this would be set for “traditional” package builds done by the distro, and unset by things like the Fedora COPR build, the Flatpak or Snap bundles. There are too many people expecting that the daily snap or flatpak packages represent the “official fwupd” and we wanted to make it clear to people using these snapshots that we’ve done basically no QA on the snapshots.
  • A plugin for the Pinebook Pro laptop has been added, although it needs further work from PINE64 before it will work correctly. At the moment there’s no way of getting the touchpad version, or finding out which keyboard layout is installed so we can tag the correct firmware file. It’s nearly there and is still very useful for playing with the hardware on the PB Pro.
  • Components can now set the icon from the metadata from the LVFS, if supported by the fwupd plugin. This allows us to tag “generic” ESRT devices as things like EC devices, or, ahem, batteries.
  • I’ve been asked by a few teams, including the Red Hat Edge team, the CoreOS team and also by Google to switch from libsoup to libcurl for downloading data – as this reduces the image size by over 5MB. Even NetworkManager depends on libcurl now, and this seemed like a sensible thing to do given fwupd is now being used in so many different places.
  • Fall back to FAT32 internal partitions for detecting ESP, as some users were complaining that fwupd did not properly detect their ESP that didn’t have the correct partition GUID set. Although I think fixing the GUID is the right thing to do, the system firmware also falls back, and pragmatically so should we.
  • Fix detection of ColorHug version on older firmware versions, which was slightly embarrassing as ColorHug is one of the devices in the device regression tests, but we were not testing an old enough firmware version to detect this bug.
  • Fix reading BCM57XX vendor and device ids from firmware – firmware for the Talos II machine is already on the LVFS and can replace the non-free firmware there in almost all situations now.
  • For this release we had to improve synaptics-mst reliability when writing data, which was found occasionally when installing firmware onto a common dock model. A 200ms delay is the difference between success and failure, which although not strictly required seemed pragmatic to add.
  • Fix replugging the MSP430 device which was the last device that was failing a specific ODM QA. This allows us to release a ton of dock firmware on the LVFS.
  • Fix a deadlock seen when calling libfwupd from QT programs. This was because we were calling a sync method from threads without a context, which we’ve now added.
  • In 1.5.0 we switched to the async libfwupd by default, and accidentally dropped the logic to only download the remote metadata as required. Most users only need to download the tiny .jcat file every day, and the much larger .xml.gz is only downloaded if the signature has changed in the last 24h. Of course, it’s all hitting the CDN, but it’s not nice to waste bandwidth for no reason.
  • As Snap is bundling libfwupd with gnome-software now, we had to restore recognizing GPG and PKCS7 signature types. This allows a new libfwupd to talk to an old fwupd daemon which is something we’d not expected before.
  • We’re also now setting the SMBIOS chassis type to portable if a DeviceTree battery exists, although I’d much rather see a ChassisType in the DT specification one day. This allows us to support HSI on platforms like the PineBook Pro, although the number of tests is still minimal without more buy-in from ARM.
  • We removed the HSI update and attestation suffixes; we decided they complicated the HSI specification and didn’t really fit in. Most users won’t even care and the spec is explicitly WIP so expect further changes like this in the future.
  • If you’re running 1.5.0 or 1.5.1 you probably want to update to this release now as it fixes a hard-to-debug hang we introduced in 1.5.0. If you’re running 1.4.x you might want to let the libcurl changes settle, although we’ve been using it without issue for more than a week on a ton of hardware here. Expect 1.5.3 in a few weeks time, assuming we’re all still alive by then. :)

    Gnome Asia summit 2020

    Posted by Robbi Nespu on November 23, 2020 04:04 PM

    24-26 November 2020 @ https://events.gnome.org/event/24

    I thought it might be too late to tell anyone about this event since registration seemed to have closed, but it looks like registration is still open. Anyway, let's see the schedule:

    Some of the segments offer good topics and caught my eye:

    Gnome Asia Summit 2020 will start tomorrow and the conference will be online. This event is sponsored by GitLab and openSUSE.

    Musical Midi Accompaniment: First Tune

    Posted by Adam Young on November 23, 2020 02:16 AM

    Here is a tune I wrote called “Standard Deviation” done as an accompaniment track using MMA. This is a very simplistic interpretation that makes no use of dynamics, variations in the BossaNova Groove, or even decent repeat logic. But it compiles.

    Here’s the MMA file.

    Slightly Greater than one Standard Deviation from the Mean:

    Episode 225 – Who is responsible if IoT burns down your house?

    Posted by Josh Bressers on November 23, 2020 12:01 AM

    Josh and Kurt talk about the safety and liability of new devices. What happens when your doorbell can burn down your house? What if it’s your fault the doorbell burned down your house? There isn’t really any prior art for where our devices are taking us, who knows what the future will look like.

    Listen: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_225_Who_is_responsible_if_IoT_burns_down_your_house.mp3

    Show Notes

    On Safety Razors and Technology

    Posted by Michel Alexandre Salim on November 23, 2020 12:00 AM

    (Photo: Rex Ambassador safety razor)

    On safety razors

    I recently switched over from the ubiquitous cartridge razors to double-edge safety razors. The original impetus was not finding a non-charging base for my GilletteLabs Heated Razor - the battery in the stem made it too wide for most razor holders - and noticing that a lot of reviews swear by various safety razors.

    I ended up buying the Rex Ambassador [1] a few months ago - and then held off on actually using it, telling myself I needed to learn how to properly use it first. In the end I told myself I would stop using my Gillette the day after the US Presidential Election, and start using the safety razor the morning after the US Presidential Election was finally called – which was Sunday the 8th, with a nice 5-day stubble to test it on.

    The first shave went surprisingly smoothly; the next few shaves ended with some minor mishaps - cockiness and distraction getting in the way - but overall there is no way I'm going back to cartridge razors after this. Feeling more in control, getting a closer shave, no plastic waste to dispose of – and hey, a much lower total cost of ownership!

    … and technology

    There seems to be a parallel here between the world of personal care and that of technology:

    • most people are trapped on proprietary, heavily marketed solutions (cartridge razors, proprietary operating systems, apps and services)
    • these proprietary solutions are at first glance more user friendly
    • the more open solutions have a steeper learning curve but are eventually more empowering
    • vendor lock-in
    • the incentives for the manufacturers/vendors and customers/users are not aligned

    Think Windows on one side, vs Linux (and the BSDs) on the other (with macOS initially being in the middle and increasingly swaying to becoming even more constraining than Windows). Think proprietary gaming consoles and mobile IAP-chasing games, vs game platforms that encourage participation like TIC-80 and LÖVE. Think US-centric proprietary social networks (Facebook, Twitter) and services (Dropbox, Google Suite) vs distributed social networks (Mastodon, Pleroma, Diaspora etc.) and self-hosted services (Nextcloud, Cryptpad etc.).

    What are most people sacrificing to the altar of promised convenience? Literally both time and money: our attention, higher costs; also our autonomy (you’re locked in) and our privacy (… so platform owners can mine your attention and monetize what they observe of your behavior).

    If you believe in capitalism, this is bad news. If you don’t it’s even worse.

    So what can we do?

    Part of the solution is regulatory. In the EU, a recent ECJ ruling requires EU companies to stop using US-based cloud services to host data from EU citizens. This could help push the adoption of more open, user-empowering, privacy-friendly alternatives.

    But in other jurisdictions like the US, regulation might be a long time coming, except maybe in California (plus the companies we’re trying to unshackle users from are mostly US-based). So a lot of the solution has to be bottom up.

    We simply need to lower barriers to entry, both actual and perceived, to using the platforms we’re championing. Some involve compromises (e.g. Flatpak is a great way to abstract away the differences between Linux distributions, to the point that it’s easier to install proprietary apps, including Steam – which improves the availability of games on Linux despite, yes, being proprietary). Some involve corporate backing (e.g. Fedora on Lenovo laptops). A lot would involve being more welcoming to newcomers, and bridging the actual usability gaps there are.

    It’s hard enough to overcome incumbency and the network effect. Let’s not make it harder for ourselves.

    This post is day 5 of my #100DaysToOffload challenge. Visit https://100daystooffload.com to get more info, or to get involved.

    Have a comment on one of my posts? Start a discussion in my public inbox by sending an email to ~michel-slm/[email protected] [ mailing list etiquette]

    Posts are also tooted to @[email protected]

    1. Not a product placement, honest!

    Musical Midi Accompaniment: Understanding the Format

    Posted by Adam Young on November 22, 2020 07:52 PM

    Saxophone is a solo instrument. Unless you are into the sounds of Saxophone multiphonics, harmony requires playing with some other instrument. For Jazz, this tends to be a rhythm section of Piano, Bass, and Drums. As a kid, my practicing (without a live Rhythm section) required playing along with pre-recordings of tunes. I had my share of Jamie Aebersold records.

    Nowadays, the tool of choice for most Jazz musicians, myself included, is iReal Pro. A lovely little app for the phone. All of the Real Book tunes have had their chord progressions posted and generated. The format is simple enough.

    But it is a proprietary app. While I continue to support and use it, I am also looking for alternatives that let me get more involved. One such tool is Musical MIDI Accompaniment. I’m just getting started with it, and I want to keep my notes here.

    First is just getting it to play. Whether you get the tarball or check it out from Git, there is a trick that you need to do in order to even play the examples: regenerate the libraries.

    ./mma.py -G
    

    That allows me to generate a MIDI file from a file in the MMA Domain Specific Language (DSL), which is also called MMA. I downloaded the backing track for I've Got You Under My Skin from https://www.mellowood.ca/mma/examples/examples.html and, once I regenerated the libraries with the above command, was able to run:

    ./mma.py ~/Downloads/ive-got-you-under-my-skin.mma
    Creating new midi file (120 bars, 4.57 min / 4:34 m:s): '/home/ayoung/Downloads/ive-got-you-under-my-skin.mid'
    

    Which I can then play with timidity.
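
    For example, assuming the timidity package is installed:

    # Play the generated MIDI file through TiMidity++
    timidity ~/Downloads/ive-got-you-under-my-skin.mid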

    The file format is not quite as simplistic as iReal Pro, but does not look so complex that I won’t be able to learn it.

    There are examples of things that look like real programming. Begin and End Blocks.

    Line Numbers. This is going to give me flashbacks to coding in Basic on my C64…not such an unpleasant set of memories. And musical ones at that.

    OK, let's take this apart. Here are the first few lines:

    // I've Got You Under My Skin
    
    Tempo 105
    Groove Metronome2-4
    
    	z * 2
    

    Comments are double slashes. The title is just for documentation.

    Tempo is in BPM.

    Groove Metronome2-4 says to use a Groove. “Grooves, in some ways, are MMA's answer to macros … but they are cooler, easier to use, and have a more musical name,” says the manual. So, somewhere we have inherited a Groove called Metronome-something. Is the 2-4 part of the name? It looks like it. I found this in the library:

    lib/stdlib/metronome.mma:97:DefGroove Metronome2-4 A very useful introduction. On bar one we have hits on beats 1 and 3; on bar two hits on beats 1, 2, 3 and 4.

    This is based on a leader counting off the time in the song. If you play the MIDI file, you can hear the cowbell effect used to count off.

    z * 2 is the way of saying that this extends for 2 measures.

    The special sequences, “-” or “z”, are also the equivalent of a rest or “tacet” sequence. For example, in defining a 4 bar sequence with a bass pattern on the first 3 bars and a walking bass on bar 4 you might do something like:

    If you already have a sequence defined5.2 you can repeat or copy the existing pattern by using a single “*” as the pattern name. This is useful when you are modifying an existing sequence.

    The next block is the definition of a section he calls Solo. This is a Track.

    Begin Solo
    	Voice Piano2
    	Octave 4
    	Harmony 3above
    	Articulate 90
    	Accent 1 20
     	Volume f
    End
    

    I think that the expectation is that you get the majority of the defaults from the Groove, and customize the Solo track.


    As a general rule, MELODY tracks have been designed as a “voice” to accompany a predefined form defined in a GROOVE—it is a good idea to define MELODY parameters as part of a GROOVE. SOLO tracks are thought to be specific to a certain song file, with their parameters defined in the song file.

    So if this were a Melody track definition, it would be ignored, and the track from the Rhumba base would be used instead.

    The next section defines what is done overall.

    Keysig 3b
    
    
    Groove Rhumba
    Alltracks SeqRnd Off
    Bass-Sus Sequence -		// disable the strings
    
    Cresc pp mf 4
    

    Keysig directive can be found here. This will generate a MIDI KeySignature event. 3b means 3 flats in the midi spec. Major is assumed if not specified. Thus this is the key of E Flat.

    The Groove Rhumba directive is going to drive most of the song. The definitions for this Groove can be found under the standard library I might tear apart a file like this one in a future post.

    The next two lines specify how the Groove is to be played. SeqRnd inserts randomness into the sequencing, to make it more like a live performance. This directive shuts down the randomness.

    Bass-Sus Sequence – seems to be defining a new, blank sequence. The comment implies that it is shutting off the strings. I have to admit, I don’t quite understand this. I’ve generated the file with this directive commented out and detected no differences. Since Bass-Sus is defined in the Bossa Nova Groove under the standard library, I’m tempted to think this is a copy-pasta error. Note that it defines “Voice Strings” and I think that is what he was trying to disable. I suspect a git history will show the Bass-Sus getting pulled out of the Rhumba file.

    Cresc pp mf 4 grows the volume from pianissimo (very soft) to mezzo-forte (medium loud) over 4 bars. Since no track is specified, it applies to the master volume.

    // 4 bar intro
    
    1 	Eb		{4.g;8f;2e;}
    2 	Ab      {4.a;8g;2f;}
    3 	Gm7     {1g;}
    4 	Bb7     {2b;}
    
    Delete Solo
    

    Now we start seeing the measures. The numbers are optional, and just for human readers to keep track.
    Measure 1 is an E flat chord. The braces delineate a Riff line. The 4 means a quarter note. The period after it makes it dotted, half again as long, or the equivalent of 3 tied eighth notes. The note played is a g, adjusted for the octave appropriate to the voice. This is followed by an eighth note f and a half note e. This adds up to a full measure: 3/8 + 1/8 + 4/8.

    After the four bar intro, the solo part is deleted, and the normal Rhumba patterns take effect.

    The next line is a Repeat directive, which is paired with the repeatending directive on line 129 and repeatend directives on line 135. This says that measures 5-60 should be repeated once, first and second ending style.

    The Groove changes many times during the song, and I think this leads to the one bug I noticed: the time keeps changing, speeding up and slowing down. I think these match up with the Groove changes, but I am not yet certain.

    It should be fairly easy to translate one of my songs into this format.

    OpenWRT behind a Freebox: IPv6, DMZ and Bridge

    Posted by Guillaume Kulakowski on November 22, 2020 05:02 PM

    Although I am the very recent and happy owner of a Freebox Pop, I have chosen to keep delegating the management of my network and of my Wi-Fi, not to the Pop, but to OpenWRT. For me the advantages are the following: more control over the rules […]

    The post OpenWRT behind a Freebox: IPv6, DMZ and Bridge appeared first on Guillaume Kulakowski's blog.

    Fedora program update: 2020-47

    Posted by Fedora Community Blog on November 20, 2020 09:51 PM
    Fedora Program Manager weekly report on Fedora Project development and progress

    Here’s your report of what has happened in Fedora this week. Elections voting is open through 3 December. Fedora 31 will reach end-of-life on Tuesday. EPEL 6 will reach end-of-life on 30 November. Announcements Calls for Participation Help wanted Upcoming meetings Releases Announcements Elections voting CfPs Conference Location Date CfP Balkan FLOSStival 2020 virtual 5-6 […]

    The post Fedora program update: 2020-47 appeared first on Fedora Community Blog.

    Scaling Flathub 100x

    Posted by Alexander Larsson on November 20, 2020 04:06 PM

    Flatpak relies on OSTree to distribute apps. This means that flatpak repositories, such as Flathub, are really just OSTree repositories. At the core of an OSTree repository is the summary file, which describes the content of the repository.  This is similar to the metadata that “apt-get update” downloads.

    Every time you do a flatpak install it needs the information in the summary file. The file is cached between operations, but any time the repository changes the local copy needs to be updated.

    This can be pretty slow, with Flathub having around 1000 applications (times 4 architectures). In addition, the more applications there are, the more likely it is that one has been updated since the last time which means you need to update.

    This isn't yet a major problem for Flathub, but it's just a matter of time before it is, as apps keep getting added.

    This is particularly problematic if we want to add new architectures, as that multiplies the number of applications.

    So, the last month I’ve been working in OSTree and Flatpak to solve this by changing the flatpak repository format. Today I released Flatpak 1.9.2 which is the first version to support the new format, and Flathub is already serving it (and the old format for older clients).
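
    As a quick check on the client side (a sketch, assuming your Flathub remote is named flathub), you can confirm the flatpak version and refresh the remote's metadata:

    # The new format needs flatpak 1.9.2 or newer on the client
    flatpak --version
    # Refresh the appstream metadata from the Flathub remote
    flatpak update --appstream flathub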

    The new format is not only more efficient, it is also split by architecture, meaning each user only downloads a subset of the total summary. Additionally, there is a delta-based incremental update method for updating from a previous version.

    Here are some data for the latest Flathub summary:

    • Original summary: 6.6M (1.8M compressed)
    • New (x86-64) summary: 2.7M (554k compressed)
    • Delta from previous summary: 20k

    So, if you’re able to use the delta, then it needs 100 times less network bandwidth compared to the original (compressed) summary and will be much faster.

    Also, this means we can finally start looking at supporting other architectures in Flathub, as doing so will not inconvenience users of the major architectures.

    To the future and beyond!

    We can’t move forward by looking back

    Posted by Josh Bressers on November 19, 2020 03:24 PM

    For the last few weeks Kurt and I have been having a lively conversation about security ratings scales. Is CVSS good enough? What about the Microsoft scale? Are there other scales we should be looking at? What’s good, what’s missing, what should we be talking about.

    There’s been a lot of back and forth and different ideas, over the course of our discussions I’ve come to realize an important aspect of security which is we don’t look forward very often. What I mean by this is there is a very strong force in the world of security to use prior art to drive our future decisions. Except all of that prior art is comically out of date in the world of today.

    An easy example are existing security standards. All of the working groups that build the standards, and ideas the working groups bring to the table, are using ideas from the past to solve problems for the future. You can argue that standards are at best a snapshot of the past, made in the present, to slow down the future. I will elaborate on that “slow down the future” line in a future blog post, for now I just want to focus on the larger problem.

    It might be easiest to use an example, I shall pick on CVSS. The vast majority of ideas and content in a standard such as CVSS is heavily influenced by what once was. If you look at how CVSS scores things, it’s clear a computer in a datacenter was in mind for many of the metrics. That was fine a decade ago, but it’s not fine anymore. Right now anyone overly familiar with CVSS is screaming “BUT CVSS DOESN’T MEASURE RISK IT MEASURES SEVERITY”, which I will say: you are technically correct, nobody cares, and nobody uses it like this. Sit down. CVSS is a perfect example of the theory being out of touch with reality.

    Am I suggesting CVSS has no value? I am not. In its current form CVSS has some value (it should have a lot more). It's all we have today, so everyone is using it, and it's mostly good enough in the same way you can drive a nail with a rock. I have a suspicion it won't be turned into something truly valuable because it is a standard based on the past. I would like to say we should take this into account when we use CVSS, but nobody will. The people doing the work don't have time to care about improving something that's mostly OK, and the people building the standards don't do the work, so it's sort of like a Mexican standoff, but one where nobody showed up.

    There are basically two options for CVSS: don’t use it because it doesn’t work properly, or use it and just deal with the places it falls apart. Both of those are terrible options. There’s little chance it’s going to get better in the near future. There is a CVSSv4 design document here. If you look at it, does it look like something describing a modern cloud based architecture? They’ve been working on this for almost five years; do you remember what your architecture looked like even a year ago? For most of us in the world of IT a year is a lifetime now. Looking backwards isn’t going to make anything better.

    OK, I’ve picked on CVSS enough. The real reason to explain all of this is to change the way we think about problems. Trying to solve problems we already had in the past won’t help with problems we have today, or will have in the future. I think this is more about having a different mindset than security had in the past. If you look at the history of infosec and security, there has been a steady march of progress, but much of that progress has been slower than the forward movement of IT in general. What’s holding us back?

    Let’s break this down into People, Places, and Things

    People

    I used the line above: "The people doing the work don't have time to care, and the people building the standards don't do the work". What I mean by this is there are plenty of people doing amazing security work. We don't hear about them very often though, because they're busy working. Go talk to someone building detection rules for their SIEM; those are the people making a difference. They don't have time to work on the next version of CVSS. They probably don't even have the time to file a bug report against an open source project they use. There are many people in this situation in the security world. They are doing amazing work and getting zero credit. These are the heroes we need.

    But we have the heroes we deserve. If you look at many of the people working on standards, and giving keynotes, and writing blogs (oh hi), a lot of them live in a world that no longer exists. I willingly admit I used to live in a world that didn't exist. I had an obsession with problems nobody cared about because I didn't know what anyone was really doing. I didn't understand cloud, or detection, or compliance, or really anything new. Working at Elastic and seeing what our customers are accomplishing in the world of security has been a life changing experience. It made me realize some of those people I thought were leaders weren't actually making the world a better place. They were desperately trying to keep the world in a place where they were relevant and which they could understand.

    Places

    One of my favorite examples these days is the fact that cloud won, but a lot of people are still talking about data centers or “hybrid cloud” or some other term that means owning a computer. A data center is a place. Places don’t exist anymore, at least not for the people making a difference. Now there are reasons to have a data center, just like there are reasons to own an airplane. Those reasons are pretty niche and solve a unique problem. We’re not worried about those niche problems today.

    How many of our security standards focus on having a computer in a room, in a place? Too many. Why doesn’t your compliance document ask about the seatbelts on your airplane? Because you don’t own an airplane, just like you don’t (or shouldn’t) own a server. The world changed, security is still catching up. There are no places anymore. Trying to secure a server in a room isn’t actually helping anyone.

    Things

    Things is one of the most interesting topics today. How many of us have corporate policies that say you can only access company systems from your laptop, while connected to a VPN, and wearing a hat? Or some other draconian rule. Then how many of us have email on our personal phones? But that's not a VPN, or a hat, or a laptop! Trying to secure a device is silly because there are a near-infinite number of devices and possible problems.

    We used to think about securing computers. Servers, desktops, laptops, maybe a router or two. Those are tangible things that exist. We can look at them, we can poke them with a stick, we can unplug them. We don't have real things to protect anymore and that's a challenge. It's hard to think about protecting something that we can't hold in our hand. The world has changed in such a way that the "things" we care about aren't even things anymore.

    The reality is we used to think of things as objects we use, but things of today are data. Data is everything now. Every service, system, and application we use is just a way to understand and move around data. How many of our policies and ideas focus on computers that don’t really exist instead of the data we access and manipulate?

    Everything new is old again

    I hope the one lesson you take away from all of this is to be wary of leaning on the past. The past contains lessons, not directions. Security exists in a world unlike any we’ve ever seen, the old rules are … old. But it’s also important to understand that even what we think of as a good idea today might not be a good idea tomorrow.

    Progress is ahead of you, not behind.

    Acer Aspire Switch 10 E SW3-016's and SW5-012's and S1002's horrible EFI firmware

    Posted by Hans de Goede on November 19, 2020 09:28 AM
    Recently I acquired an Acer Aspire Switch 10 E SW3-016; this device was the main reason for writing my blog post about the shim boot loop. The EFI firmware of this device is bad in a number of ways:

    1. It considers its eMMC unbootable unless its ESP contains an EFI/Microsoft/Boot/bootmgfw.efi file.

    2. But it will actually boot EFI/Boot/bootx64.efi ! (wait what? yes really)

    3. It will only boot from a USB disk connected to its micro-USB connector, not from the USB-A connector on the keyboard-dock.

    4. You must first set a BIOS admin password before you can disable secure-boot (which is necessary to boot home-built kernels without doing your own signing)

    5. Last but not least it has one more nasty "feature": it detects whether the OS being booted is Windows, Android or unknown, and updates the ACPI DSDT based on this!

    Some more details on the OS detection misfeature. The ACPI "Device (SDHB)" node for the MMC controller connected to the SDIO wifi module contains:

            Name (WHID, "80860F14")
            Name (AHID, "INT33BB")


    Depending on which OS the BIOS thinks it is booting, it renames one of these 2 to _HID. This is weird given that it will only boot if EFI/Microsoft/Boot/bootmgfw.efi exists, but it still does this. Worse, it looks at the actual contents of EFI/Boot/bootx64.efi for this. It seems that that file must be signed, otherwise the firmware goes into OS-unknown mode and keeps the 2 above DSDT bits as-is, so there is no _HID defined for the wifi's mmc controller and thus no wifi. I hit this issue when I replaced EFI/Boot/bootx64.efi with grubx64.efi to break the bootloop. grubx64.efi is not signed, so the DSDT as Linux saw it contained the above AML code. Using the proper workaround for the bootloop from my previous blog post, this bit of the DSDT morphs into:

            Name (_HID, "80860F14")
            Name (AHID, "INT33BB")


    And the wifi works.

    The Acer Aspire Switch 10 E SW3-016's firmware also triggers an actual bug / issue in Linux' ACPI implementation, causing the bluetooth to not work. This is discussed in much detail here. I have a patch series fixing this here.

    And the older Acer Aspire Switch 10 SW5-012's and S1002's firmware has some similar issues:

    1. It considers its eMMC unbootable unless its ESP contains an EFI/Microsoft/Boot/bootmgfw.efi file

    2. These models will actually always boot the EFI/Microsoft/Boot/bootmgfw.efi file, so that is somewhat more sensible.

    3. On the SW5-012 you must first set a BIOS admin password before you can disable secure-boot.

    4. The SW5-012 is missing an ACPI device node for the PWM controller used for controlling the backlight brightness. I guess that the Windows i915 gfx driver just directly pokes the registers (which are in a whole other IP block), rather than relying on a separate PWM driver as Linux does. Unfortunately there is no way to fix this, other than using a DSDT overlay. I have a DSDT overlay for the v1.20 BIOS (and only for the v1.20 BIOS) available for this here.

    Because of 1. and 2. you need to take the following steps to get Linux to boot on the Acer Aspire Switch 10 SW5-012 or the S1002:

    1. Rename the original bootmgfw.efi (so that you can chainload it in the multi-boot case)

    2. Replace bootmgfw.efi with shimia32.efi

    3. Copy EFI/fedora/grubia32.efi to EFI/Microsoft/Boot

    This assumes that you have the files from a 32 bit Windows install in your ESP already.
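
    Put concretely, and assuming the ESP is mounted at /boot/efi (adjust the path if yours differs), the three steps above come down to something like this, run as root:

    cd /boot/efi/EFI/Microsoft/Boot
    mv bootmgfw.efi bootmgfw-orig.efi                   # 1. keep the original so it can still be chainloaded
    cp /boot/efi/EFI/fedora/shimia32.efi bootmgfw.efi   # 2. the firmware will now start shim
    cp /boot/efi/EFI/fedora/grubia32.efi .              # 3. grub next to shim in EFI/Microsoft/Boot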

    Release of osbuild-composer 25

    Posted by OSBuild Project on November 19, 2020 12:00 AM

    We are happy to announce that we released osbuild-composer 25. It now supports building RHEL 8.4. 🤗

    Below you can find the official change log, compiled by Ondřej Budai. Everyone is encouraged to upgrade!


    • Composer now supports RHEL 8.4! Big thanks to Jacob Kozol! If you want to build RHEL 8.4 using Composer API or Composer API for Koji, remember to pass “rhel-84” as a distribution name.

    • Composer can now be started without the Weldr API. If you need it, start osbuild-composer.socket before osbuild-composer.service is started. Note that cockpit-composer starts osbuild-composer.socket, so this change is backward compatible.

    • When a Koji call failed, both osbuild-composer and osbuild-worker errored out. This is now fixed.

    • The dependency on osbuild in the spec file is now moved to the worker subpackage. This was a mistake that could cause the worker to use an incompatible version of osbuild.

    • As always, testing got some upgrades. This time, mostly in the way we build our testing RPMs.

    Contributions from: Jacob Kozol, Lars Karlitski, Ondřej Budai, Tom Gundersen

    — Liberec, 2020-11-19

    Release of koji-osbuild 3

    Posted by OSBuild Project on November 19, 2020 12:00 AM

    We are happy to announce that we released koji-osbuild 3, our new project to integrate osbuild-composer with koji, the build and tracking system primarily used by the Fedora Project and Red Hat.

    Below you can find the official change log, compiled by Christian Kellner.


    • Ship tests in the koji-osbuild-tests package. The tests got reworked so that they can be installed and run from the installation. This will be useful for reverse dependency testing, i.e. testing the plugins from other projects like composer, as well as in gating tests.

    • Add the ability to skip tagging. A new command line option, --skip-tag, is added, which translates into a new field in the options for the hub and builder. If that option is present, the builder plugin will skip the tagging step.

    • builder plugin: the compose status is attached to the koji task as compose-status.json and updated whenever it is fetched from composer. This makes it possible to follow the individual image builds.

    • builder plugin: The new logs API, introduced in composer version 24, is used to fetch and attach build logs as well as the koji init/import logs.

    • builder plugin: Support for dynamic build ids, i.e. don't use the koji build id returned from the compose request API call but the new koji_build_id field included in the compose status response. This makes koji-osbuild depend on osbuild-composer 24!

    • test: lots of improvements to the tests and ci, e.g. using the quay mirror for the postgres container or matching the container versions to the host.

    Contributions from: Christian Kellner, Lars Karlitski, Ondřej Budai

    — Berlin, 2020-11-19

    Release of cockpit-composer 26

    Posted by OSBuild Project on November 19, 2020 12:00 AM

    We are happy to announce the release of cockpit-composer 26. This release has no major new features, but contains useful fixes.

    Below you can find the official change log, compiled by Jacob Kozol. Everyone is encouraged to upgrade!


    • Add additional form validation for the Create Image Wizard
    • Improve page size dropdown styling
    • Update minor NPM dependencies
    • Improve code styling
    • Improve test reliability

    Contributions from: Jenn Giardino, Jacob Kozol, Martin Pitt, Sanne Raymaekers, Xiaofeng Wang

    — Berlin, 2020-11-19

    Fedora 33 elections voting now open

    Posted by Fedora Community Blog on November 19, 2020 12:00 AM

    Voting in the Fedora 33 elections is now open. Go to the Elections app to cast your vote. Voting closes at 23:59 UTC on Thursday 3 December. Don’t forget to claim your “I Voted” badge when you cast your ballot. Links to candidate interviews are below. Fedora Council There is one seat open on the Fedora Council. Tom Callaway […]

    The post Fedora 33 elections voting now open appeared first on Fedora Community Blog.

    RTL (bidi) in Nextcloud (Farsi)

    Posted by Ahmad Haghighi on November 19, 2020 12:00 AM

    Security and confidentiality of information, and where, how and by which person or organization our data is stored, matters to many individuals and companies/organizations; in other words, it is a real concern. For this reason (plus other reasons such as foreign sanctions or the state of domestic regulations), companies and individuals find that the best option is to manage and store their data themselves and to truly and fully own it.

    Without a doubt, Nextcloud (https://nextcloud.com) is one of the best cloud solutions available: not only does it offer a wealth of free features and apps, but most importantly it is free software (libre) and has an active, dynamic and growing team.

    Since this post is not about introducing Nextcloud, I will leave it at that; you can find further information on the official Nextcloud website at nextcloud.com.

    Setting up a personal Nextcloud server does not require many or heavy resources, so you do not have to run a business or a company to consider setting up your own personal cloud server. If you care about the security, ownership and confidentiality of your data, you can get an inexpensive server with modest resources and use it for yourself and your family without any problems.

    Fixing bidirectional (Bidi) text in Nextcloud

    To do this, first go to the Apps section, search for Custom CSS, then install and enable it.

    nextcloud-bidi-custom-css-app

    Then go to Settings and, from the Administration section, open the theme settings (Theming):

    nextcloud-bidi-custom-css-settings

    Now enter the CSS you want applied across the whole server in this field and press Save. The changes take effect once saved.

    The suggested snippet to use as the Custom CSS:

    p,h1,div,span,a,ul,h2,h3,h4,li,input {
        direction: ltr;
        unicode-bidi: plaintext;
        text-align: initial;
    }
    

    Attached screenshots

    nextcloud-bidi-deck
    nextcloud-bidi-talk

    Mindshare election: Interview with Nasir Hussain (nasirhm)

    Posted by Fedora Community Blog on November 18, 2020 11:55 PM

    This is a part of the Mindshare Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 19 November and closes promptly at 23:59:59 UTC on Thursday, 3 December 2020. Interview with Nasir Hussain Fedora Account: nasirhm IRC: nasirhm, nasirhm[m] (found in fedora-i3 #fedora-mindshare #fedora-badges #fedora-mote #fedora-noc #fedora-admin #fedora-devel) Fedora User Wiki Page […]

    The post Mindshare election: Interview with Nasir Hussain (nasirhm) appeared first on Fedora Community Blog.

    Mindshare election: Interview with Till Maas (till)

    Posted by Fedora Community Blog on November 18, 2020 11:55 PM

    This is a part of the Mindshare Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 19 November and closes promptly at 23:59:59 UTC on Thursday, 3 December 2020. Interview with Till Maas Fedora Account: till IRC: tyll (found in #fedora-devel, #fedora-de, #nm, #nmstate, #systemroles and others) Fedora User Wiki Page Questions […]

    The post Mindshare election: Interview with Till Maas (till) appeared first on Fedora Community Blog.

    Council Election: Interview with Till Maas (till)

    Posted by Fedora Community Blog on November 18, 2020 11:50 PM

    This is a part of the Council Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 19 November and closes promptly at 23:59:59 UTC on Thursday, 3 December 2020. Interview with Till Maas Fedora Account: till IRC: tyll (found in #fedora-devel, #fedora-de, #nm, #nmstate, #systemroles and others) Fedora User Wiki Page Questions […]

    The post Council Election: Interview with Till Maas (till) appeared first on Fedora Community Blog.

    Council Election: Interview with Tom Callaway (spot)

    Posted by Fedora Community Blog on November 18, 2020 11:50 PM

    This is a part of the Council Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 19 November and closes promptly at 23:59:59 UTC on Thursday, 3 December 2020. Interview with Tom Callaway Fedora Account: spot IRC: spot (found in #fedora, #fedora-devel) Fedora User Wiki Page Questions Why are you running for […]

    The post Council Election: Interview with Tom Callaway (spot) appeared first on Fedora Community Blog.

    FESCo election: Interview with Fabio Valentini (decathorpe)

    Posted by Fedora Community Blog on November 18, 2020 11:45 PM

    This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 19 November and closes promptly at 23:59:59 UTC on Thursday, 3 December 2020. Interview with Fabio Valentini Fedora Account: decathorpe IRC: decathorpe (found in #fedora-devel, #fedora-java, #fedora-rust, #fedora-meeting*) Fedora User Wiki Page Questions Why do you […]

    The post FESCo election: Interview with Fabio Valentini (decathorpe) appeared first on Fedora Community Blog.

    FESCo election: Interview with Zbigniew Jędrzejewski-Szmek (zbyszek)

    Posted by Fedora Community Blog on November 18, 2020 11:45 PM

    This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 19 November and closes promptly at 23:59:59 UTC on Thursday, 3 December 2020. Interview with Zbigniew Jędrzejewski-Szmek Fedora Account: zbyszek IRC: zbyszek (found in fedora-devel, #systemd, #fedora-python, #fedora-neuro) Fedora User Wiki Page Questions Why do you […]

    The post FESCo election: Interview with Zbigniew Jędrzejewski-Szmek (zbyszek) appeared first on Fedora Community Blog.

    FESCo election: Interview with Kevin Fenzi (kevin)

    Posted by Fedora Community Blog on November 18, 2020 11:45 PM

    This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 19 November and closes promptly at 23:59:59 UTC on Thursday, 3 December 2020. Interview with Kevin Fenzi Fedora Account: kevin IRC: nirik (found in #fedora-admin, #fedora-noc, #fedora-apps, #fedora-devel, #fedora, #fedora-arm #fedora-releng, #fedora-phone, #fedora-council, etc…) Fedora User […]

    The post FESCo election: Interview with Kevin Fenzi (kevin) appeared first on Fedora Community Blog.

    FESCo election: Interview with David Cantrell (dcantrell)

    Posted by Fedora Community Blog on November 18, 2020 11:45 PM

    This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 19 November and closes promptly at 23:59:59 UTC on Thursday, 3 December 2020. Interview with David Cantrell Fedora Account: dcantrell IRC: dcantrell (found in #fedora-devel, #fedora-qa, #fedora-ambassadors and other channels as needed. Quite often people will […]

    The post FESCo election: Interview with David Cantrell (dcantrell) appeared first on Fedora Community Blog.

    FESCo election: Interview with Miro Hrončok (churchyard)

    Posted by Miro Hrončok on November 18, 2020 11:45 PM

    This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 19 November and closes promptly at 23:59:59 UTC on Thursday, 3 December 2020. Interview with Miro Hrončok Fedora Account: churchyard IRC: mhroncok (found in #fedora-python #fedora-3dprinting #fedora-devel #fedora-ambassadors #fedora-cs) Fedora User Wiki Page Questions Why do […]

    The post FESCo election: Interview with Miro Hrončok (churchyard) appeared first on Fedora Community Blog.

    Keystone and Cassandra: Parity with SQL

    Posted by Adam Young on November 18, 2020 09:41 PM

    Look back at our Pushing Keystone over the Edge presentation from the OpenStack Summit. Many of the points we make are problems faced by any application trying to scale across multiple datacenters. Cassandra is a database designed to deal with this level of scale. So Cassandra may well be a better choice than MySQL or another RDBMS as a datastore for Keystone. What would it take to enable Cassandra support for Keystone?

    Let's start with the easy part: defining the tables. Let's look at how we define the Federation back end for SQL. We use SQLAlchemy to handle the migrations; we will need something comparable for the Cassandra Query Language (CQL), but we also need to translate the table definitions themselves.

    Before we create the tables, we need to create a keyspace. I am going to make separate keyspaces for each of the subsystems in Keystone: Identity, Assignment, Federation, and so on. Here's the Federated one:

    CREATE KEYSPACE keystone_federation WITH replication = {'class': 'NetworkTopologyStrategy', 'datacenter1': '3'}  AND durable_writes = true;
    

    The Identity provider table is defined like this:

        idp_table = sql.Table(
            'identity_provider',
            meta,
            sql.Column('id', sql.String(64), primary_key=True),
            sql.Column('enabled', sql.Boolean, nullable=False),
            sql.Column('description', sql.Text(), nullable=True),
            mysql_engine='InnoDB',
            mysql_charset='utf8')
        idp_table.create(migrate_engine, checkfirst=True)
    

    The comparable CQL to create a table would look like this:

    CREATE TABLE identity_provider (id text PRIMARY KEY , enables boolean , description text);
    

    However, when I describe the schema to view the table definition, we see that there are many tuning and configuration parameters that are defaulted:

    CREATE TABLE federation.identity_provider (
        id text PRIMARY KEY,
        description text,
        enables boolean
    ) WITH additional_write_policy = '99p'
        AND bloom_filter_fp_chance = 0.01
        AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
        AND cdc = false
        AND comment = ''
        AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
        AND compression = {'chunk_length_in_kb': '16', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
        AND crc_check_chance = 1.0
        AND default_time_to_live = 0
        AND extensions = {}
        AND gc_grace_seconds = 864000
        AND max_index_interval = 2048
        AND memtable_flush_period_in_ms = 0
        AND min_index_interval = 128
        AND read_repair = 'BLOCKING'
        AND speculative_retry = '99p';
    

    I don’t know Cassandra well enough to say if these are sane defaults to have in production. I do know that someone, somewhere, is going to want to tweak them, and we are going to have to provide a means to do so without battling the upgrade scripts. I suspect we are going to want to only use the short form (what I typed into the CQL prompt) in the migrations, not the form with all of the options. In addition, we might want an if not exists  clause on the table creation to allow people to make these changes themselves. Then again, that might make things get out of sync. Hmmm.

    There are three more entities in this back end:

    CREATE TABLE federation_protocol (id text, idp_id text, mapping_id text,  PRIMARY KEY(id, idp_id) );
    cqlsh:federation> CREATE TABLE mapping (id text primary key, rules text,    );
    CREATE TABLE service_provider ( auth_url text, id text primary key, enabled boolean, description text, sp_url text, RELAY_STATE_PREFIX  text);
    

    One thing that is interesting is that we will not be limiting the ID fields to 32, 64, or 128 characters. There is no performance benefit to doing so in Cassandra, nor is there any way to enforce the length limits. From a Keystone perspective, there is not much value either; we still need to validate the UUIDs in Python code. We could autogenerate the UUIDs in Cassandra, and there might be some benefit to that, but it would diverge from the logic in the Keystone code, and explode the test matrix.

    There is only one foreign key in the SQL section; the federation protocol has an idp_id that points to the identity provider table. We’ll have to accept this limitation and ensure the integrity is maintained in code. We can do this by looking up the Identity provider before inserting the protocol entry. Since creating a Federated entity is a rare and administrative task, the risk here is vanishingly small. It will be more significant elsewhere.

    For access to the database, we should probably use Flask-CQLAlchemy. Fortunately, Keystone is already a Flask based project, so this makes the two projects align.

    For migration support, it looks like the best option out there is cassandra-migrate.

    An effort like this would best be started out of tree, with an expectation that it would be merged in once it had shown a degree of maturity. Thus, I would put it into a namespace that would not conflict with the existing keystone project. The python imports would look like:

    from keystone.cassandra import migrations
    from keystone.cassandra import identity
    from keystone.cassandra import federation
    

    This could go in its own git repo and be separately pip installed for development. The entrypoints would be registered such that the configuration file would have entries like:

    [application_credential]
    driver = cassandra

    Any tuning of the database could be put under a [cassandra] section of the conf file, or tuning for individual sections could be in keys prefixed with cassandra_ in the appropriate sections, such as the application_credential section shown above.

    It might be interesting to implement a Cassandra token backend and use the default_time_to_live value on the table to control the lifespan and automate the cleanup of the tables. This might provide some performance benefit over the fernet approach, as the token data would be cached. However, the drawbacks due to token invalidation upon change of data would far outweigh the benefits unless the TTL was very short, perhaps 5 minutes.

    Just making it work is one thing. In a follow on article, I’d like to go through what it would take to stretch a cluster from one datacenter to another, and to make sure that the other considerations that we discussed in that presentation are covered.

    Feedback?

    Installing Fedora on the NVIDIA Jetson nano

    Posted by Peter Robinson on November 18, 2020 06:36 PM

    Nvidia launched the Jetson Nano Developer Kit in March 2019; since then there have been a few minor refreshes, including a just-announced, cheaper 2GB model. I received the original 4GB rev A device shortly after they were launched.

    Over the last year or so, as part of my role at Red Hat, I started working with some of the NVidia Tegra team to improve support for the Jetson devices. This work has been wide ranging and, though it's taken a while, with Fedora 33 we're starting to see the fruits of that collaboration. The first is improved support for the Jetson Nano. The official L4T (Linux 4 Tegra) Jetson Nano images look a lot like an Android phone, with numerous partitions across the mSD card. This makes it harder to support a generic Linux distribution like Fedora, as distributions assume a certain amount of control over the storage, so while it was certainly possible to get Fedora to run on these devices it generally wasn't for the faint of heart. As of the recent L4T releases (you definitely want R32.4.4), it's now a supported option to flash all the firmware to the onboard SPI flash, enabling the use of the entire mSD card for the OS of your choice, which as we all know will be Fedora 😉, but the instructions here should be adaptable to work for any distribution.

    We do it in two stages: first we flash the new firmware to the SPI over the micro USB port, then we prepare the Fedora OS for the mSD card. For the first stage you'll need the latest L4T Release R32.4.4 and the Fedora U-Boot builds installed locally.

    Before we get started you’ll need the following:

    • A USB-A to micro USB cable for flashing
    • A HDMI monitor and a USB keyboard
    • A jumper, a jumper wire or something to close the connection on the FRC pins for recovery mode
    • A 3.3v USB Serial TTY (optional)
    • An appropriate 5v barrel PSU (optional)

    If you wish to use a serial TTY there's a good guide here for connecting it to the RevA nano; the RevB has two camera connectors, so the serial console headers have moved to near the mSD card slot. The command to see serial output is:

    screen /dev/ttyUSB0 115200

    So let’s get started with flashing the firmware. This step with the firmware on the SPI doesn’t have to be done often. First we’ll extract the L4T release and get all the bits installed that we need to flash the firmware:

    sudo dnf install -y usbutils uboot-images-armv8 arm-image-installer
    tar xvf ~/Downloads/Tegra210_Linux_R32.4.4_aarch64.tbz2
    cd Linux_for_Tegra
    cp /usr/share/uboot/p3450-0000/u-boot.bin bootloader/t210ref/p3450-porg/
    

    Next, based on instructions from the NVidia Jetson Nano Quick Start Guide, we need to put the Jetson Nano into Force Recovery Mode (FRC) to prepare for flashing the firmware:

    1. Ensure that your Jetson Nano Developer Kit is powered off. There's no need for an mSD card at this point, we're just writing to the SPI flash.
    2. Connect the Micro-USB OTG cable to the Micro USB port on the Nano. Don’t plug it into the host computer just yet.
    3. Enable Force Recovery mode by placing a jumper across the FRC pins of the Button Header on the carrier board.
      a. For carrier board revision A02, these are pins 3 and 4 of Button Header (J40) which is located near the camera header.
      b. For carrier board revision B01, these are pins 9 and 10 of Button Header (J50), which is located on the edge of the carrier board under the Jetson module.
    4. Only if you wish to use a separate PSU place a jumper across J48 to enable use of a DC power adapter.
    5. Connect a DC power adapter to J25. The developer kit powers on automatically and enters Force Recovery mode. Note it may be possible to do this with USB power but I’ve not tested it.
    6. Remove the jumper from the FRC pins of the Button Header.
    7. See if you can see the Jetson Nano is in recovery mode by running:
      lsusb | grep -i nvidia

    Now we can actually flash the firmware (make sure you’re still in the Linux_for_Tegra directory):

    sudo ./flash.sh p3448-0000-max-spi external

    You will see a lot of output as the command runs, and if you have a serial TTY you’ll see some output there but eventually you’ll be returned to the command prompt and the system will reset. If you have a HDMI monitor attached you’ll see the NVidia logo pop up, if you have a serial console you’ll see a bunch of output and eventually the output of U-Boot and the associated U-Boot prompt.

    Now that we have the firmware flashed, we can prepare Fedora for the mSD card. Download the Fedora Workstation for aarch64 raw image. You can of course also use the XFCE, Minimal or Server images. Put the mSD card in a reader and, after unmounting any filesystems, run the following command (look at the help for other options around users/ssh-keys):

    sudo arm-image-installer --media=/dev/XXX --resizefs --target=none --image=~/Downloads/Fedora-Workstation-33-1.3.aarch64.raw.xz
    

    Note you need to replace XXX with the right device, and the target is set to none as we're not writing any firmware to the mSD card.

    Once that completes you should be able to pop the mSD card into your Jetson Nano and reset the device and see it boot. You will see all the output if you have a serial console attached. If you’re using HDMI it may take a little while once the NVidia logo disappears for the GNOME first user setup to appear.

    Also note that while a lot of things work on this device, like the nouveau driver for display, it's not perfect yet and we're actively working to fix and improve the support for the Jetson Nano; most of these fixes will come via the standard Fedora update mechanism. If you have queries please engage in the usual ways via the mailing list or #fedora-arm on Freenode.

    How to fix Linux EFI secure-boot shim bootloop issue

    Posted by Hans de Goede on November 18, 2020 09:05 AM
    How to fix the Linux EFI secure-boot shim bootloop issue seen on some systems.

    Quite a few Bay- and Cherry-Trail based systems have bad firmware which completely ignores any efibootmgr-set boot options. They basically completely reset the boot order, doing some sort of auto-detection at boot. Some of these will even give an error about their eMMC not being bootable unless the ESP has an EFI/Microsoft/Boot/bootmgfw.efi file!

    Many of these end up booting EFI/Boot/bootx64.efi unconditionally every boot. This will cause a boot loop since when Linux is installed EFI/Boot/bootx64.efi is now shim. When shim is started with a path of EFI/Boot/bootx64.efi, shim will add a new efibootmgr entry pointing to EFI/fedora/shimx64.efi and then reset. The goal of this is so that the firmware's F12 bootmenu can be used to easily switch between Windows and Linux (without chainloading which breaks bitlocker). But since these bad EFI implementations ignore efibootmgr stuff, EFI/Boot/bootx64.efi shim will run again after the reset and we have a loop.

    There are 2 ways to fix this loop:

    1. The right way: Stop shim from trying to add a bootentry pointing to EFI/fedora/shimx64.efi:

    rm EFI/Boot/fbx64.efi
    cp EFI/fedora/grubx64.efi EFI/Boot


    The first command will stop shim from trying to add a new efibootmgr entry (it calls fbx64.efi to do that for it); instead it will try to execute grubx64.efi from the directory from which it was executed, so we must put a grubx64.efi in the EFI/Boot dir, which the second command does. Do not use the livecd EFI/Boot/grubx64.efi file for this, as I did at first; that one searches for its config and env under EFI/Boot, which is not what we want.

    Note that upgrading shim will restore EFI/Boot/fbx64.efi. To avoid this you may want to backup EFI/Boot/bootx64.efi, then do "sudo rpm -e shim-x64" and then restore the backup.
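
    As a rough sketch of that, assuming the ESP is mounted at /boot/efi:

    # back up the current bootx64.efi, remove the shim package, then restore the backup
    sudo cp /boot/efi/EFI/Boot/bootx64.efi /root/bootx64.efi.bak
    sudo rpm -e shim-x64
    sudo cp /root/bootx64.efi.bak /boot/efi/EFI/Boot/bootx64.efi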

    2. The wrong way: Replace EFI/Boot/bootx64.efi with a copy of EFI/fedora/grubx64.efi

    This is how I used to do this until hitting the scenario which caused me to write this blog post. There are 2 problems with this:

    2a) This requires disabling secure-boot (which I could live with so far)
    2b) Some firmwares change how they behave, exporting a different DSDT to the OS depending on whether EFI/Boot/bootx64.efi is signed or not (even with secure boot disabled), and their behavior is totally broken when it is not signed. I will post another rant ^W blogpost about this soon. For now let's just say that you should use workaround 1. from above, since it simply is a better workaround.

    Note, for better readability the above text uses bootx64, shimx64, fbx64 and grubx64 throughout. When using a 32 bit EFI (which is typical on Bay Trail systems) you should replace these with bootia32, shimia32, fbia32 and grubia32. Note that 32 bit EFI Bay Trail systems should still use a 64 bit Linux distro; the firmware being 32 bit is a weird Windows-related thing.

    Also note that your system may use another key than F12 to show the firmware's bootmenu.

    CoreOS install via Live ISO --copy-network

    Posted by Dusty Mabe on November 18, 2020 12:00 AM
    A couple of us recently gave an update to our Customer Experience team at Red Hat on the improvements that were made in Red Hat CoreOS for OpenShift 4.6. My part of the presentation focused on the new Live ISO that is now used for Fedora/Red Hat CoreOS installations and also the improvements that we made for being able to copy the install environment networking configuration into the installed system via coreos-installer --copy-network.
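
    For illustration only (the target device and Ignition URL below are placeholders, not taken from the talk), the flag is passed to coreos-installer from the live environment roughly like this:

    # install to /dev/sda and copy the live environment's NetworkManager
    # connection profiles into the installed system
    sudo coreos-installer install /dev/sda \
        --copy-network \
        --ignition-url https://example.com/config.ign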

    Online Meetings: The Temptation to Censor Tricky Questions

    Posted by Daniel Pocock on November 17, 2020 10:50 PM

    Early in 2020, at the outset of the pandemic, the UN's special rapporteur on torture and other cruel, inhumane or degrading treatment or punishment, Professor Nils Melzer of Switzerland, spoke out about the growing problem of cybertorture.

    I could immediately relate to this. I had been a volunteer mentor in the Google Summer of Code since 2013. I withdrew from the program at an acutely painful time for my family, losing two family members in less than a year. Within a week, Stephanie Taylor, the head of the program at Google, was sending me threats and insults. Taylor's Open Source Program Office at Google relies on hundreds of volunteers like me to do work for them.

    Everybody else in my life, my employer, friends and other non-profit organizations that I contribute to responded with compassion and sympathy. Taylor and her associates chose threats and insults. Taylor had chosen to take the pain from my personal life and drag it into my professional life. Why does Google use events like this to hurt volunteers and our families? Despite exercising my rights under the GDPR and asking her to stop this experiment and get out of my life, Taylor continues to sustain it.

    The UN's Forum on Business and Human Rights is taking place this week. It is online due to the pandemic. In the session about accountability and remedies for victims of human rights abuse, my experience with Google and people like Taylor was at the front of my mind. I'm not the only one thinking about Google as a bunch of gangsters: a British parliamentary report and US Department of Justice investigation has also used terms like digital gangster and unlawful to describe the way that people like this are operating.

    Yet when I entered the UN's online event and asked a very general question about the connection between Professor Melzer's analysis and Google's modus operandi, the question vanished. I posted a subsequent question asking why my query was censored and it was immediately subject to censorship. This is the golden rule of censorship: don't ask about censorship. I never received any correspondence or complaints about the question.

    united nations, censorship

    Article 19 of the Universal Declaration of Human Rights proclaims the right to free speech. Within the first year of the pandemic, the UN has already set that aside, not wanting to offend the Googlers who have planted lobbyists everywhere in the corridors of power. If somebody asked the same question in a real world event at the UN in Geneva or New York, would a trap door open up underneath them and make them disappear? Or would members of the panel and the audience need to contemplate Professor Melzer's work on cybertorture seriously?

    Some people suggested the spam-filter may have been triggered by my name. This is simply offensive and once again, there is no parallel to this at a real world event.

    If you wish to participate in the final day of the forum, you can use the following links:

    This incident emphasizes the extent to which online events are being scripted and choreographed to look like spontaneous discussions while, in reality, they are maintaining the status quo.

    This demonstrates a huge difference between real world events and online events. In a real world event, when somebody stands up to ask a question, the chair of the meeting or the panel has no forewarning about the question. Online events change that dramatically. Observers may not know which questions were really avoided.

    When technology gives leaders the opportunity to simply avoid difficult questions, they are tempted to do so, whether it is in the annual meeting of a local bridge club or a UN forum.


    Fedora 33 : Smokeping tool.

    Posted by mythcat on November 17, 2020 08:03 PM
    Smokeping is a latency measurement tool. It sends test packets out to the net and measures the amount of time they need to travel from one place to the other and back. SmokePing consists of a daemon process which organizes the latency measurements and a CGI which presents the graphs.
    It is presented as a web page; a demo (the Customers.SALAG example) can be seen online.
    [root@desk mythcat]# dnf search SmokePing
    Last metadata expiration check: 0:12:18 ago on Tue 17 Nov 2020 08:45:57 PM EET.
    =============================== Name Matched: SmokePing ===============================
    smokeping.noarch : Latency Logging and Graphing System
    [root@desk mythcat]# dnf install smokeping.noarch
    Last metadata expiration check: 0:12:24 ago on Tue 17 Nov 2020 08:45:57 PM EET.
    Dependencies resolved.
    =======================================================================================
    Package Architecture Version Repository Size
    =======================================================================================
    Installing:
    smokeping noarch 2.7.3-2.fc33 fedora 564 k
    Installing dependencies:
    fedora-logos-httpd noarch 30.0.2-5.fc33 fedora 15 k
    fping x86_64 5.0-1.fc33 fedora 38 k
    httpd x86_64 2.4.46-1.fc33 fedora 1.4 M
    httpd-filesystem noarch 2.4.46-1.fc33 fedora 14 k
    httpd-tools x86_64 2.4.46-1.fc33 fedora 83 k
    libdbi x86_64 0.9.0-16.fc33 fedora 50 k
    mod_fcgid x86_64 2.3.9-21.fc33 fedora 77 k
    mod_http2 x86_64 1.15.14-2.fc33 fedora 152 k
    perl-CGI noarch 4.50-4.fc33 fedora 198 k
    perl-CGI-Fast noarch 2.15-6.fc33 fedora 18 k
    perl-Config-Grammar noarch 1.13-6.fc33 fedora 29 k
    perl-FCGI x86_64 1:0.79-5.fc33 fedora 47 k
    perl-Net-DNS noarch 1.21-5.fc33 fedora 356 k
    perl-Net-Telnet noarch 3.04-15.fc33 fedora 62 k
    perl-Path-Tiny noarch 0.114-3.fc33 fedora 67 k
    perl-SNMP_Session noarch 1.13-25.fc33 fedora 67 k
    perl-Unicode-UTF8 x86_64 0.62-13.fc33 fedora 26 k
    rrdtool x86_64 1.7.2-14.fc33 fedora 569 k
    rrdtool-perl x86_64 1.7.2-14.fc33 fedora 43 k

    Transaction Summary
    =======================================================================================
    Install 20 Packages

    Total download size: 3.8 M
    Installed size: 11 M
    Is this ok [y/N]: y
    Downloading Packages:
    ...
    Complete!
    [root@desk mythcat]# dnf install lighttpd
    ...
    Complete!

    [root@desk mythcat]# dnf install lighttpd-fastcgi
    ...
    Complete!
    Most users use the smokeping service:
    sudo service smokeping start
    sudo service smokeping status
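    On Fedora you can do the same with systemd directly, which also lets you enable the daemon at boot (this assumes the smokeping.service unit shipped by the package):
    sudo systemctl enable --now smokeping.service
    sudo systemctl status smokeping.service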
    You can adjust your setup by editing the configuration file:
    [mythcat@desk ~]$ sudo vi /etc/smokeping/config
    I left this file unchanged and ran these commands:
    [mythcat@desk ~]$ sudo smokeping --check
    Configuration file '/etc/smokeping/config' syntax OK.
    [mythcat@desk ~]$ sudo smokeping --debug
    ### Compiling alert detector pattern 'someloss'
    ### >0%,*12*,>0%,*12*,>0%
    ...
    Smokeping version 2.007003 successfully launched.
    Not entering multiprocess mode with '--debug'. Use '--debug-daemon' for that.
    FPing: probing 3 targets with step 300 s and offset 118 s.
    FPing: Executing /usr/sbin/fping -C 20 -q -B1 -r1 -4 -i10 planet.fedoraproject.org fedoraproject.org
    docs.fedoraproject.org
    FPing: Got fping output: 'planet.fedoraproject.org : 165 166 167 167 165 172 165 165 165 164 168 165 166 165
    165 164 165 164 171 165'
    FPing: Got fping output: 'fedoraproject.org : 77.8 75.6 75.0 68.3 69.1 73.6 71.1 71.1 69.0 67.5
    69.9 69.5 70.6 70.8 76.9 76.0 70.8 70.6 72.1 68.3'
    FPing: Got fping output: 'docs.fedoraproject.org : 171 165 165 165 180 164 170 164 164 165 163 171
    169 160 170 167 166 166 166 164'
    Calling RRDs::update(/var/lib/smokeping/rrd/Ping/FedoraprojectOrg.rrd --template uptime:loss:median:
    ping1:ping2:ping3:ping4:ping5:ping6:ping7:ping8:ping9:ping10:ping11:ping12:ping13:ping14:ping15:
    ping16:ping17:ping18:ping19:ping20 1605642550:U:0:7.0800000000e-02:6.7500000000e-02:
    ...
    1.6700000000e-01:1.6800000000e-01:1.7100000000e-01:1.7200000000e-01)

    Fedora 33 : Install PyGame 2.0 on Fedora.

    Posted by mythcat on November 17, 2020 08:00 PM
    Today I will show you how to install the Python PyGame version 2.0 package with Python version 3.9 on the Fedora 33 distro. Let's install all the Fedora packages needed for this Python package:
    [root@desk pygame]# dnf install SDL2-devel.x86_64 
    ...
    Installed:
    SDL2-devel-2.0.12-4.fc33.x86_64

    Complete!
    [root@desk pygame]# dnf install SDL2_ttf-devel.x86_64
    ...
    Installed:
    SDL2_ttf-2.0.15-6.fc33.x86_64 SDL2_ttf-devel-2.0.15-6.fc33.x86_64

    Complete!
    [root@desk pygame]# dnf install SDL2_image-devel.x86_64
    ...
    Installed:
    SDL2_image-2.0.5-5.fc33.x86_64 SDL2_image-devel-2.0.5-5.fc33.x86_64

    Complete!
    [root@desk pygame]# dnf install SDL2_mixer-devel.x86_64
    ...
    Installed:
    SDL2_mixer-2.0.4-7.fc33.x86_64 SDL2_mixer-devel-2.0.4-7.fc33.x86_64

    Complete!
    [root@desk pygame]# dnf install SDL2_gfx-devel.x86_64
    ...
    Installed:
    SDL2_gfx-1.0.4-3.fc33.x86_64 SDL2_gfx-devel-1.0.4-3.fc33.x86_64

    Complete!
    [root@desk pygame]# dnf install portmidi-devel.x86_64
    ...
    Installed:
    portmidi-devel-217-38.fc33.x86_64

    Complete!
    Use these commands to clone it from GitHub and install it:
    [mythcat@desk ~]$ git clone https://github.com/pygame/pygame
    Cloning into 'pygame'...
    remote: Enumerating objects: 4, done.
    remote: Counting objects: 100% (4/4), done.
    remote: Compressing objects: 100% (4/4), done.
    remote: Total 38509 (delta 0), reused 0 (delta 0), pack-reused 38505
    Receiving objects: 100% (38509/38509), 17.78 MiB | 11.66 MiB/s, done.
    Resolving deltas: 100% (29718/29718), done.
    [mythcat@desk ~]$ cd pygame/
    [mythcat@desk pygame]$ python3.9 setup.py install --user


    WARNING, No "Setup" File Exists, Running "buildconfig/config.py"
    Using UNIX configuration...


    Hunting dependencies...
    SDL : found 2.0.12
    FONT : found
    IMAGE : found
    MIXER : found
    PNG : found
    JPEG : found
    SCRAP : found
    PORTMIDI: found
    PORTTIME: found
    FREETYPE: found 23.4.17

    If you get compiler errors during install, double-check
    the compiler flags in the "Setup" file.
    ...
    copying docs/pygame_tiny.gif -> build/bdist.linux-x86_64/egg/pygame/docs
    creating build/bdist.linux-x86_64/egg/EGG-INFO
    copying pygame.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying pygame.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying pygame.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying pygame.egg-info/entry_points.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying pygame.egg-info/not-zip-safe -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying pygame.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
    writing build/bdist.linux-x86_64/egg/EGG-INFO/native_libs.txt
    creating dist
    creating 'dist/pygame-2.0.1.dev1-py3.9-linux-x86_64.egg' and adding 'build/bdist.linux-x86_64/egg' to it
    removing 'build/bdist.linux-x86_64/egg' (and everything under it)
    Processing pygame-2.0.1.dev1-py3.9-linux-x86_64.egg
    creating /home/mythcat/.local/lib/python3.9/site-packages/pygame-2.0.1.dev1-py3.9-linux-x86_64.egg
    Extracting pygame-2.0.1.dev1-py3.9-linux-x86_64.egg to /home/mythcat/.local/lib/python3.9/site-packages
    Adding pygame 2.0.1.dev1 to easy-install.pth file

    Installed /home/mythcat/.local/lib/python3.9/site-packages/pygame-2.0.1.dev1-py3.9-linux-x86_64.egg
    Processing dependencies for pygame==2.0.1.dev1
    Finished processing dependencies for pygame==2.0.1.dev1
    Let's test it:
    [mythcat@desk pygame]$ ls
    build dist examples README.rst setup.cfg src_c test
    buildconfig docs pygame.egg-info Setup setup.py src_py
    [mythcat@desk pygame]$ python3.9
    Python 3.9.0 (default, Oct 6 2020, 00:00:00)
    [GCC 10.2.1 20200826 (Red Hat 10.2.1-3)] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import pygame
    pygame 2.0.1.dev1 (SDL 2.0.12, python 3.9.0)
    Hello from the pygame community. https://www.pygame.org/contribute.html
    >>>

    Fedora 33 : Upgrade from Fedora 32.

    Posted by mythcat on November 17, 2020 08:00 PM
    It is recommended to put SELinux into the disabled state. To do this, edit the /etc/selinux/config file:
    sudo vi /etc/selinux/config
    Set SELINUX to disabled:
    SELINUX=disabled
    These commands will prepare Fedora 32 for the upgrade:
    [root@desk mythcat]# dnf config-manager --set-disabled "*"
    [root@desk mythcat]# dnf repolist
    [root@desk mythcat]# dnf config-manager --set-enabled updates
    [root@desk mythcat]# dnf repolist
    repo id repo name
    fedora Fedora 32 - x86_64
    updates Fedora 32 - x86_64 - Updates
    [root@desk mythcat]# dnf upgrade --refresh
    ...
    [root@desk mythcat]# dnf install dnf-plugin-system-upgrade
    ...
    Let's upgrade to the new Fedora 33 packages:
    [root@desk mythcat]# dnf system-upgrade download --releasever=33 --allowerasing
    Before you continue ensure that your system is fully upgraded by running "dnf --refresh upgrade".
    Do you want to continue [y/N]: y
    ...
    file /usr/bin/ocamlprof.byte conflicts between attempted installs of ocaml-4.11.1-1.fc33.i686
    and ocaml-4.11.1-1.fc33.x86_64
    ...
    [root@desk mythcat]# dnf remove ocaml
    ...
    I tried the update again:
    [root@desk mythcat]# dnf system-upgrade download --releasever=33 --allowerasing
    ...
    Running transaction check
    Transaction check succeeded.
    Running transaction test
    Transaction test succeeded.
    Complete!
    Download complete! Use 'dnf system-upgrade reboot' to start the upgrade.
    To remove cached metadata and transaction use 'dnf system-upgrade clean'
    The downloaded packages were saved in cache until the next successful transaction.
    You can remove cached packages by executing 'dnf clean packages'.
    [root@desk mythcat]# dnf system-upgrade reboot
    After reboot and system upgrade the new Fedora 33 is ready to use.
    If another upgrade exists, then use the dnf tool:
    [sudo] password for mythcat: 
    [root@desk mythcat]# dnf upgrade
    Last metadata expiration check: 1:58:19 ago on Sat 31 Oct 2020 04:06:13 PM EET.
    Dependencies resolved.
    ================================================================================
    Package Arch Version Repository Size
    ================================================================================
    Upgrading:
    dnf noarch 4.4.0-3.fc33 updates 445 k
    dnf-data noarch 4.4.0-3.fc33 updates 46 k
    libdnf x86_64 0.54.2-3.fc33 updates 604 k
    php-symfony-polyfill noarch 1.19.0-1.fc33 updates 57 k
    python3-dnf noarch 4.4.0-3.fc33 updates 410 k
    python3-hawkey x86_64 0.54.2-3.fc33 updates 112 k
    python3-libdnf x86_64 0.54.2-3.fc33 updates 775 k
    unixODBC x86_64 2.3.9-1.fc33 updates 460 k
    yum noarch 4.4.0-3.fc33 updates 43 k

    Transaction Summary
    ================================================================================
    Upgrade 9 Packages

    Total download size: 2.9 M
    Is this ok [y/N]: y
    ...
    Upgraded:
    dnf-4.4.0-3.fc33.noarch dnf-data-4.4.0-3.fc33.noarch
    libdnf-0.54.2-3.fc33.x86_64 php-symfony-polyfill-1.19.0-1.fc33.noarch
    python3-dnf-4.4.0-3.fc33.noarch python3-hawkey-0.54.2-3.fc33.x86_64
    python3-libdnf-0.54.2-3.fc33.x86_64 unixODBC-2.3.9-1.fc33.x86_64
    yum-4.4.0-3.fc33.noarch

    Complete!
    [root@desk mythcat]# uname -a
    Linux desk 5.8.16-300.fc33.x86_64 #1 SMP Mon Oct 19 13:18:33 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
    A job very well done by the Fedora team.

    Use SSH keys for authentication

    Posted by Mohammed Tayeh on November 17, 2020 02:01 PM

    Set up your first SSH keys

    Use SSH keys for authentication without a password when you are connecting to your server, for a simple and secure login process.

    To generate a new SSH key

    [root@server ~]$ ssh-keygen 
    Generating public/private rsa key pair.
    Enter file in which to save the key (/root/.ssh/id_rsa): 
    Enter passphrase (empty for no passphrase): 
    Enter same passphrase again: 
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    SHA256:kxPyLTxxqwobFXoOxxxABaDD0xxnZzCB6xxxf38 root@server
    The key's randomart image is:
    +---[RSA 2048]----+
    |=+==*            |
    |xo.o =           |
    |+oo.O .          |
    |=o.* * o         |
    |.x. = X S        |
    | .   O *         |
    |    o = o        |
    |     + qo.x.   P |
    |    . .xx+o..o.  |
    +----[SHA256]-----+
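
    As an optional aside (not part of the original walkthrough), if both your client and server support it you may prefer an Ed25519 key over the default RSA one; the comment string below is just a placeholder:

    ssh-keygen -t ed25519 -C "you@example.com"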
    
    

    First way: Copy the public key to your server using the following command

    [root@server ~]$ ssh-copy-id root@<instance_ip>
    /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
    The authenticity of host '<instance_ip> (<instance_ip>)' can't be established.
    ECDSA key fingerprint is SHA256:aF/iyxxxKqx1LUyM/uyr/xxxxxxxxxxx.
    ECDSA key fingerprint is MD5:xx:c3:xx:48:b4:ef:xx:e4:58:a4:xx:14:c1:xx:c5:af.
    Are you sure you want to continue connecting (yes/no)? yes
    /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    root@<instance_ip>'s password: 
    
    Number of key(s) added: 1
    
    Now try logging into the machine, with:   "ssh 'root@<instance_ip>'"
    and check to make sure that only the key(s) you wanted were added.
    
    

    Second way: Download the public key to your server using GitHub or GitLab

    • Upload your key to GitHub or GitLab: Settings -> SSH keys -> New SSH key
    • After uploading, the SSH key can be fetched from GitHub or GitLab over HTTPS

    Now you can import the SSH key using the curl command:

    [root@server ~]$ curl -L https://github.com/tayeh.keys >> ~/.ssh/authorized_keys
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  1315  100  1315    0     0   6015      0 --:--:-- --:--:-- --:--:--  6032
    

    Note: this way will import all the keys on your GitHub account.

    Now you can access your server without a password; try:

    ssh root@<instance_ip>
    

    Turn off password authentication

    With SSH key authentication in place, you can disable password authentication for SSH to prevent brute-forcing. Open the SSH configuration file:

    vim /etc/ssh/sshd_config
    

    Search for PasswordAuthentication and PermitRootLogin and change them to:

    PasswordAuthentication no
    PermitRootLogin without-password
    

    Restart the SSH service

    systemctl restart sshd
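
    If the service fails to restart, check the configuration syntax first; sshd -t prints nothing when the file is valid:

    sshd -t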
    

    Conclusions

    Remember to always keep your private keys safe.

    Finding the real source IP: using the PROXY protocol with syslog-ng

    Posted by Peter Czanik on November 17, 2020 12:04 PM

    Until now, collecting logs behind proxies or load balancers required some compromises. You either trusted the host information included in the log messages, or you could only see the proxy as the sender host. Starting with syslog-ng 3.30 there is a third option available: using the PROXY protocol. While not an official Internet standard, it is supported by a number of popular software products, like HAProxy. Other software can be extended to use it, like F5 load balancers using iRules. This way crucial information about the original network connection is not lost, but is forwarded to the server by the proxy.

    From this blog you can learn about the PROXY protocol, how to enable it in the syslog-ng configuration, and how to send test messages using loggen directly and through HAProxy.

    Before you begin

    You need at least syslog-ng version 3.30 (or syslog-ng PE 7.0.23, the commercial version) to use PROXY protocol support. Most Linux distributions still carry older versions. You can find information about unofficial 3rd party syslog-ng repositories with up-to-date syslog-ng packages at https://www.syslog-ng.com/3rd-party-binaries. At the moment these versions are not yet released, so I used git snapshot packages for testing.

    In my blog I will show you a simple configuration for HAProxy, as it is available for free and included in most Linux distributions. I ran my tests on three separate openSUSE virtual machines: one for the client sending logs, one for HAProxy and one for the syslog-ng server. But you can use any platform that HAProxy and syslog-ng support, and you can actually run all three on a single host.

    The PROXY protocol

    Before we take a deep dive into syslog-ng configuration, let’s take a closer look at the PROXY protocol. The PROXY protocol was created by the HAProxy developers and its specification is available on their website: http://www.haproxy.org/download/1.8/doc/proxy-protocol.txt. While it resembles an RFC, it is not an official standard, yet many devices and software support it because it solves a very common problem. When a TCP connection goes through a proxy or a load balancer, the original TCP information, including the source IP address, is lost. The PROXY protocol makes sure that this information reaches the servers behind the proxy. The PROXY protocol has two versions: the first version is text-based, while the second version forwards information in binary form. The syslog-ng implementation supports the first version of the protocol.

    When the text-based first version of the PROXY protocol is enabled, the proxy starts each new TCP connection with a similar line:

    PROXY TCP4 192.168.0.1 192.168.0.11 56324 443

    As you can see, it starts with a fixed text – PROXY – followed by the version of the TCP protocol, and the source and destination IP addresses and ports of the connection. The PROXY protocol is not auto-detected and does not involve any heuristics. If the first line does not follow this format, the server side rejects the connection. The only exception is when the proxy sends the following line:

    PROXY UNKNOWN

    What happens here depends on the implementation: some servers simply reject these connections, but most accept them, including syslog-ng.

    Configuring syslog-ng

    Append the following configuration snippet to your syslog-ng.conf or place it in a file with .conf extension under the /etc/syslog-ng/conf.d/ directory, if syslog-ng on your host is configured to use it.

    source s_tcp_pp {
        network(
            port(7777)
            ip(0.0.0.0)
            transport("proxied-tcp")
        );
    };
    
    destination d_file {
        file("/var/log/pp.log" template("$(format-json --scope nv-pairs)\n"));
    };
    
    log {
        source(s_tcp_pp);
        destination(d_file);
    };

    The source listens on port 7777 and expects incoming connections to use the PROXY protocol. You can also use encrypted connections here by replacing “proxied-tcp” with “proxied-tls” and adding the TLS-related options, just like with a regular encrypted source.
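
    For reference, here is a minimal sketch of what such an encrypted variant could look like; the certificate and key paths are placeholders, and peer verification is relaxed for testing only:

    source s_tls_pp {
        network(
            port(7778)
            ip(0.0.0.0)
            transport("proxied-tls")
            tls(
                key-file("/etc/syslog-ng/server.key")
                cert-file("/etc/syslog-ng/server.crt")
                peer-verify(optional-untrusted)
            )
        );
    };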

    The file destination uses JSON formatting. This way you can see the name-value pairs created by this source: PROXIED_SRCPORT, PROXIED_SRCIP, PROXIED_IP_VERSION, PROXIED_DSTPORT and PROXIED_DSTIP. Note that these name-value pairs are not created with PROXY UNKNOWN.

    Finally, the log statement connects the source and destination into a pipeline together. Reload syslog-ng for the new configuration to take effect.
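
    Depending on your setup, either of the following should do the trick (the second assumes a systemd service named syslog-ng, which is what most distribution packages ship):

    syslog-ng-ctl reload            # uses the control socket of the running instance
    systemctl reload syslog-ng      # on systemd-based distributions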

    Testing with loggen

    The easiest way to test the above configuration is to use the loggen utility of syslog-ng. First try to send a few logs without enabling PROXY protocol support:

    loggen -i -S localhost 7777

    You will not find any new log messages in the new file destination. However, /var/log/messages will contain messages similar to these (if logging of the internal() source is enabled):

    Nov  6 16:16:07 localhost syslog-ng[891]: PROXY proto header with invalid header length; max_parsable_length='216', max_length_by_spec='108', length='255', header='<38>2020-11-06T16:16:07 localhost prg00000[1234]: seq: 0000000000, thread: 0000, runid: 1604675767, stamp: 2020-11-06T16:16:07 PADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADD\x0a<38>2020-11-06T16:16:07 localhost prg00000[1234]: seq: 0000000001, thread: 0000, runid: 1604675767, stamp: 2020-11-06T16:16:07 [...] PADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADD\x0a-client.c'
    Nov  6 16:16:07 localhost syslog-ng[891]: Error parsing PROXY protocol header;
    Nov  6 16:16:07 localhost syslog-ng[891]: Syslog connection closed; fd='16', client='AF_INET(127.0.0.1:41066)', local='AF_INET(0.0.0.0:7777)'

    It means that loggen did not use the PROXY header and thus the connection was rejected. Let’s try again, this time using the new -H option of loggen:

    loggen -i -S -H localhost 7777

    This time /var/log/messages shows a successful connection:

    Nov  6 16:37:25 localhost syslog-ng[891]: Initializing PROXY protocol source driver; driver='0x560b2fb9b310'
    Nov  6 16:37:25 localhost syslog-ng[891]: Syslog connection accepted; fd='16', client='AF_INET(127.0.0.1:41068)', local='AF_INET(0.0.0.0:7777)'
    Nov  6 16:37:25 localhost syslog-ng[891]: PROXY protocol header parsed successfully;
    Nov  6 16:37:29 localhost syslog-ng[891]: Syslog connection closed; fd='16', client='AF_INET(127.0.0.1:41068)', local='AF_INET(0.0.0.0:7777)'

    And in /var/log/pp.log you will find similar messages:

    {"SOURCE":"s_tcp_pp","PROXIED_SRCPORT":"7075","PROXIED_SRCIP":"192.168.1.48","PROXIED_IP_VERSION":"4","PROXIED_DSTPORT":"514","PROXIED_DSTIP":"192.168.1.47","PROGRAM":"prg00000","PID":"1234","MESSAGE":"seq: 0000003961, thread: 0000, runid: 1604677045, stamp: 2020-11-06T16:37:29 PADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADD","LEGACY_MSGHDR":"prg00000[1234]: ","HOST_FROM":"127.0.0.1","HOST":"localhost"}
    {"SOURCE":"s_tcp_pp","PROXIED_SRCPORT":"7075","PROXIED_SRCIP":"192.168.1.48","PROXIED_IP_VERSION":"4","PROXIED_DSTPORT":"514","PROXIED_DSTIP":"192.168.1.47","PROGRAM":"prg00000","PID":"1234","MESSAGE":"seq: 0000003962, thread: 0000, runid: 1604677045, stamp: 2020-11-06T16:37:29 PADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADD","LEGACY_MSGHDR":"prg00000[1234]: ","HOST_FROM":"127.0.0.1","HOST":"localhost"}

    The PROXY protocol related name-value pairs contain random IP addresses and ports by default, but you can specify your own values as well if you want to test your configuration with specific values.
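
    If you prefer to craft the PROXY header by hand instead of relying on loggen, something like the following sketch should also be accepted; the addresses and ports in the header are made up, and the syslog-ng server is assumed to run locally:

    # -N (OpenBSD netcat) closes the connection after EOF; traditional netcat uses -q1 instead
    printf 'PROXY TCP4 192.168.0.1 192.168.0.11 56324 443\r\n<13>Nov  6 16:40:00 myhost test: hello through a proxy\n' | nc -N 127.0.0.1 7777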

    Installing and configuring HAProxy

    HAProxy is part of most Linux distributions and it is also available on FreeBSD. On openSUSE the HAProxy package comes with a sample configuration; all it needs is four lines appended at the end:

    listen sng
      bind *:6666
      mode tcp
      server server1 172.16.167.153:7777 maxconn 32 send-proxy

    Of course, you also need to change the IP address to the address of your syslog-ng server. The above configuration snippet listens on port 6666 and forwards connections to port 7777 on the given IP address. The “mode tcp” setting makes sure that HAProxy handles the connection as a generic TCP connection instead of as an HTTP connection. The “send-proxy” keyword enables the PROXY protocol for this destination; if it is not included, syslog-ng rejects the connections coming from this proxy. Once you have saved the new configuration and reloaded HAProxy, you are ready for testing!
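
    On a systemd-based distribution, reloading typically looks like this (assuming the distribution package ships the usual haproxy unit):

    systemctl reload haproxy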

    Testing through HAProxy

    You are no longer limited to using loggen when testing through HAProxy. You can use any software that can send logs to port 6666 over a TCP connection, but for testing, the generic logger utility and loggen from syslog-ng are the easiest to use.

    logger --tcp --port 6666 --server 172.16.167.139 --rfc3164 this is a test

    or

    loggen -i -S 172.16.167.139 6666

    Of course, replace the IP address with the IP address of your HAProxy server. You should see logs in /var/log/pp.log with real IP addresses of your hosts:

    {"SOURCE":"s_tcp_pp","PROXIED_SRCPORT":"59516","PROXIED_SRCIP":"172.16.167.1","PROXIED_IP_VERSION":"4","PROXIED_DSTPORT":"6666","PROXIED_DSTIP":"172.16.167.139","PROGRAM":"prg00000","PID":"1234","MESSAGE":"seq: 0000002318, thread: 0000, runid: 1604679827, stamp: 2020-11-06T17:23:48 PADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADD","LEGACY_MSGHDR":"prg00000[1234]: ","HOST_FROM":"172.16.167.139","HOST":"localhost"}
    {"SOURCE":"s_tcp_pp","PROXIED_SRCPORT":"59518","PROXIED_SRCIP":"172.16.167.1","PROXIED_IP_VERSION":"4","PROXIED_DSTPORT":"6666","PROXIED_DSTIP":"172.16.167.139","PROGRAM":"czanik","MESSAGE":"This is a test","LEGACY_MSGHDR":"czanik: ","HOST_FROM":"172.16.167.139","HOST":"czplaptop"}

    One more thing

    In your log analysis software, you most likely want to use the real source IP instead of the IP of the proxy server / load balancer. You can train your analytics software about the PROXIED_SRCIP name-value pair, but it is easier to handle this on the syslog-ng side, so I suggest rewriting HOST_FROM with the value of PROXIED_SRCIP. Here is a slightly modified version of the previous configuration with a rewrite rule added to it:

    source s_tcp_pp {
        network(
            port(7777)
            ip(0.0.0.0)
            transport("proxied-tcp")
        );
    };
    
    rewrite r_fixfrom {
        set("$PROXIED_SRCIP", value("HOST_FROM"));
    };
    
    destination d_file {
        file("/var/log/pp.log" template("$(format-json --scope nv-pairs)\n"));
    };
    
    log {
        source(s_tcp_pp);
        rewrite(r_fixfrom);
        destination(d_file);
    };

    When you send another test message, HOST_FROM will now contain the real source IP address instead of the proxy IP address:

    {"SOURCE":"s_tcp_pp","PROXIED_SRCPORT":"52532","PROXIED_SRCIP":"172.16.167.1","PROXIED_IP_VERSION":"4","PROXIED_DSTPORT":"6666","PROXIED_DSTIP":"172.16.167.139","PROGRAM":"czanik","MESSAGE":"This is a test fixed","LEGACY_MSGHDR":"czanik: ","HOST_FROM":"172.16.167.1","HOST":"czplaptop"}

    What is next?

    From this blog you could learn how to configure syslog-ng for the PROXY protocol and how to validate your configuration using loggen directly. I also showed you a very basic HAProxy configuration and an example for sending logs to syslog-ng through HAProxy. This setup was sufficient to test the PROXY protocol, but using a single server in a production environment does not make much sense.

    If you need commercial-level support to integrate syslog-ng with a proxy or load balancer like HAProxy or F5, consider buying syslog-ng PE, which not only provides enterprise support, but also comes with a number of extra features. Do not hesitate to contact us at https://www.syslog-ng.com/products/log-management-software/

    If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik.

    Podman with capabilities on Fedora

    Posted by Fedora Magazine on November 16, 2020 08:00 AM

    Containerization is a booming technology. As many as seventy-five percent of global organizations could be running some type of containerization technology in the near future. Since widely used technologies are more likely to be targeted by hackers, securing containers is especially important. This article will demonstrate how POSIX capabilities are used to secure Podman containers. Podman is the default container management tool in RHEL 8.

    Determine the Podman container’s privilege mode

    Containers run in either privileged or unprivileged mode. In privileged mode, the container uid 0 is mapped to the host’s uid 0. For some use cases, unprivileged containers lack sufficient access to the resources of the host machine. Technologies and techniques including Mandatory Access Control (apparmor, SELinux), seccomp filters, dropping of capabilities, and namespaces help to secure containers regardless of their mode of operation.

    To determine the privilege mode from outside the container:

    $ podman inspect --format="{{.HostConfig.Privileged}}" <container id>

    If the above command returns true then the container is running in privileged mode. If it returns false then the container is running in unprivileged mode.
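
    If you want to check all containers at once, a small sketch like the following should work; it relies only on the standard podman ps and podman inspect subcommands:

    $ podman ps -aq | xargs -I{} podman inspect --format '{{.Name}}: {{.HostConfig.Privileged}}' {}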

    To determine the privilege mode from inside the container:

    $ ip link add dummy0 type dummy

    If this command allows you to create an interface then you are running a privileged container. Otherwise you are running an unprivileged container.

    Capabilities

    Namespaces isolate a container’s processes from arbitrary access to the resources of its host and from access to the resources of other containers running on the same host. Processes within privileged containers, however, might still be able to do things like alter the IP routing table, trace arbitrary processes, and load kernel modules. Capabilities allow one to apply finer-grained restrictions on what resources the processes within a container can access or alter; even when the container is running in privileged mode. Capabilities also allow one to assign privileges to an unprivileged container that it would not otherwise have.

    For example, to add the NET_ADMIN capability to an unprivileged container so that a network interface can be created inside of the container, you would run podman with parameters similar to the following:

    [root@vm1 ~]# podman run -it --cap-add=NET_ADMIN centos
    [root@b27fea33ccf1 /]# ip link add dummy0 type dummy
    [root@b27fea33ccf1 /]# ip link

    The above commands demonstrate how granting the NET_ADMIN capability lets an unprivileged container create a dummy0 interface; without that capability, the ip link add command would fail.

    Currently, there are about 39 capabilities that can be granted or denied. Privileged containers are granted many capabilities by default. It is advisable to drop unneeded capabilities from privileged containers to make them more secure.

    To drop all capabilities from a container:

    $ podman run -it -d --name mycontainer --cap-drop=all centos

    To list a container’s capabilities:

    $ podman exec -it mycontainer capsh --print

    The above command should show that no capabilities are granted to the container.

    Refer to the capabilities man page for a complete list of capabilities:

    $ man capabilities

    Use the capsh command to list the capabilities you currently possess:

    $ capsh --print

    As another example, the below command demonstrates dropping the NET_RAW capability from a container. Without the NET_RAW capability, servers on the internet cannot be pinged from within the container.

    $ podman run -it --name mycontainer1 --cap-drop=net_raw centos
    >>> ping google.com (will output error, operation not permitted)

    As a final example, if your container were to only need the SETUID and SETGID capabilities, you could achieve such a permission set by dropping all capabilities and then re-adding only those two.

    $ podman run -d --cap-drop=all --cap-add=setuid --cap-add=setgid fedora sleep 5 > /dev/null; pscap | grep sleep

    The pscap command shown above should show the capabilities that have been granted to the container.
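
    Another way to check, assuming capsh (from the libcap package) is available, is to read the raw capability bitmask of the container's main process from /proc and decode it:

    $ podman exec mycontainer grep CapEff /proc/1/status
    $ capsh --decode=0000000000000000    # paste the CapEff value printed above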

    I hope you enjoyed this brief exploration of how capabilities are used to secure Podman containers.

    Thank You!

    How to install the Visual Studio Code editor on Fedora Linux

    Posted by Fedora fans on November 16, 2020 06:30 AM

    An editor is one of the most important tools for a developer. One such editor is Visual Studio Code, often abbreviated as VS Code. Visual Studio Code is open-source software developed by Microsoft.

    Some of the features of the Visual Studio Code editor are:

    • debugging
    • Git support
    • syntax highlighting
    • intelligent code completion
    • snippets
    • code refactoring tools

    Visual Studio Code also offers a wide range of extensions that give developers additional capabilities, including code analysis tools such as linters and static analysis.

    The Visual Studio Code editor can be installed on various operating systems; in this post we will install it on Fedora Linux.

     

    Installing VS Code on Fedora:

    To install Visual Studio Code, you can go to its official website, download its binary (rpm) package, and then install it using dnf.

    https://code.visualstudio.com
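
    For example, if you downloaded the rpm into the current directory, the install step could look like this (the file name is only a placeholder):

    # dnf install ./code-<version>.rpm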

    An easier way to install it is to use the repository Microsoft provides for VS Code. To do this, first add the VS Code repository to your system:

    # sh -c 'echo -e "[code]\nname=Visual Studio Code\nbaseurl=https://packages.microsoft.com/yumrepos/vscode\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/vscode.repo'

     

    Then, to install the Visual Studio Code editor, simply run the following command:

    # dnf install code

     

    After the installation you can launch and use it.
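
    You can also start it from a terminal, for example to check the installed version or to open a project directory (the path is just an example):

    $ code --version
    $ code ~/projects/my-app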


     

    The post How to install the Visual Studio Code editor on Fedora Linux first appeared on طرفداران فدورا (Fedora Fans).

    Episode 224 – Are old Android devices dangerous?

    Posted by Josh Bressers on November 16, 2020 12:01 AM

    Josh and Kurt talk about what happens when important root certificates expire on old Android devices? Who should be responsible? How can we fix this? Is this even something we can or should fix? How devices should age is a really hard problem that needs a lot of discussion.

    Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_224_Are_old_Android_devices_dangerous.mp3

    Show Notes

    Transactional email providers for indie businesses

    Posted by Josef Strzibny on November 15, 2020 12:00 AM

    Most businesses need to send an email from time to time. Marketing platforms aside, what if you want to send it on your terms? Here’s a table comparing the common choices for starting businesses and indie makers.

    I looked at leading transactional email providers and compared what they give you for free, what they cost per 10 000 emails/month, 20 000 emails/month, and 50 000 emails/month, and what a dedicated IP address costs. It focuses on the initial costs of a business that is just starting up. I am also including a price for one million emails for fun, profit, and growing B2C startups.

    This table is put together in November 2020. Please let me know if this needs an update.

    Provider   | Free plan                        | 10 000  | 20 000  | 50 000  | 1M         | Address Validation | Dedicated IP address
    Mailgun    | 5000/month for 3 mo.             | $8+     | $16+    | $35+    | $810+      | $35+               | $75+
    SendGrid   | 100/day                          | $14.95+ | $14.95+ | $14.95+ | $449+      | $89.95+ (2500)     | $89.95+
    Postmark   | 100 test emails/month            | $10     | $22.5   | $50     | $535+      | none               | +$50
    Mandrill   | none                             | $34.00+ | $34.00+ | $54.00+ | $734+      | none               | +$29.95
    Sendinblue | 300/day                          | $25     | $39     | $69     | $599       | none               | Enterprise
    Pepipost   | 30000 free for 30 days + 100/day | $25     | $25     | $25     | $599       | none               | $245
    Mailjet    | 200/day                          | $9.65   | $9.65   | $18.95  | Enterprise | none               | $68.95+
    Transmail  | first 10000 emails               | $2.5    | $5      | $12.5   | $250       | none               | on request
    Kingmailer | 100 test emails/month            | $10     | $20     | $50     | $1000      | none               | $5
    Amazon SES | 62000/month                      | $1      | $2      | $5      | $100       | none               | $24.95

    Notes:

    The table compares limits and the minimum payment but does not compare other advantages of higher volumes. Some price points might already trigger a higher plan with more emails.

    Mailgun’s one dedicated IP address starts with the Foundation plan ($35) for 100k emails and up ($75). You might consider going for the growth plan ($80) already with 1000 email address validations included.

    SendGrid starts the email validation at $89.95, but only for 2500 emails.

    Postmark charges for dedicated IP address extra ($50), but for any volume.

    Mandrill is a transactional email service from Mailchimp. Transactional email is available as an add-on to the Standard ($14.99) and Premium plans (I included the Standard plan price for keeping 500 contacts). Mandrill charges extra for a dedicated IP address ($29.95) for any volume.

    Sendinblue offers discounts for yearly payments. Dedicated IP addresses are offered only in the Enterprise plan, which has no published pricing.

    Pepipost has great pricing for higher volumes. The mentioned $25/month actually gives you 150000 emails. 1 million can be as cheap as $171 when paying yearly.

    Mailjet’s free offering inserts a logo. $398.95 gets you 900k emails, after which you have to switch to the Enterprise plan.

    Transmail is a service from Zoho Mail.

    Kingmailer does not support every email type: newsletters, announcements, and marketing emails are not supported. A dedicated IP does not seem to carry any additional price. You can pay with Bitcoin.

    Amazon SES pricing can get a little confusing, like anything in AWS. The 62000 free emails per month apply to projects hosted on Amazon EC2, excluding data transfer fees. In general, you pay $0.10 for every 1000 emails sent, $0.12 per GB of attachments, plus data transfer.

    There are some more services out there to consider, but I wanted to start with the famous and solid ones. Let me know on Twitter what I should add to the list.

    Obtain previous Job ID in Ansible Tower Workflow

    Posted by Fabio Alessandro Locati on November 15, 2020 12:00 AM
    Ansible Tower allows you to create Workflows, which enable you to create complex workflows by putting together multiple Ansible Playbooks. Ansible Tower Workflows can have some simple logics, such as run different Ansible Playbooks based on the outcome (success or failure) of a previous Ansible Playbook run. Sometimes, though, you need to have more information about a previous Ansible Playbook run than just the outcome. I recently found myself in a situation where I had an Ansible Tower Workflow with two Ansible Playbooks into it, where the first one was performing specific tasks.