June 09, 2022

A quick textmode-themed update

Summer is coming and I've got a couple of posts cooking that may turn out mildly interesting, but — time constraints being what they are — in the meantime there's this.

Chafa

I (judiciously, as one might opine) pulled back from posting about every single feature release, but things have kept plodding along quietly. ImageMagick is finally going away as per a buried remark from 2020, which means no more filling up /tmp, no more spawning Inkscape to read in SVGs, and so on. There's also lots of convenience and robustness and whatnot. Go read the release notes.

Text terminals, ANSI art groups, my dumb pet projects: they just won't.

As for eye candy, I guess the new 16/8-color mode qualifies. It's the good old "eight colors, but bold attribute makes foreground bright" trick, which requires a bit of special handling since the quantization step must apply two different palettes.
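For the curious, the trick can be sketched in a few lines of Python (an illustration of the idea, not Chafa's actual implementation): the foreground may use all 16 colors, with the bright half implying the bold attribute, while the background is restricted to the basic 8.

# Sketch of two-palette 16/8 quantization; not Chafa's actual code.
# Classic VGA palette: entries 0-7 are the normal colors, 8-15 the bright ones.
PALETTE = [
    (0, 0, 0), (170, 0, 0), (0, 170, 0), (170, 85, 0),
    (0, 0, 170), (170, 0, 170), (0, 170, 170), (170, 170, 170),
    (85, 85, 85), (255, 85, 85), (85, 255, 85), (255, 255, 85),
    (85, 85, 255), (255, 85, 255), (85, 255, 255), (255, 255, 255),
]

def nearest(rgb, palette):
    # Nearest palette entry by squared RGB distance (a real quantizer would
    # work in a perceptual color space such as DIN99d, as the command below does).
    return min(range(len(palette)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(rgb, palette[i])))

def quantize_cell(fg_rgb, bg_rgb):
    fg = nearest(fg_rgb, PALETTE)      # foreground: all 16 colors allowed
    bg = nearest(bg_rgb, PALETTE[:8])  # background: only the basic 8
    bold = fg >= 8                     # bright foreground implies the bold attribute
    return fg & 7, bg, bold

print(quantize_cell((250, 240, 100), (10, 10, 10)))  # (3, 0, True): bold yellow on black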

With this working, the road to ANSI art scene Naraka nirvana is short: Select code points present in your favorite IBM code page, strip newlines (only if your output is 80 columns wide), and convert Chafa's Unicode output to the target code page. You'll get a file worthy of the .ANS extension, ready for a utility like Ansilove (to those who care: There's some mildly NSFW art in their Examples section. Definitely don't look at it. You've been warned).

Taken together, it goes something like this:

$ chafa -f symbol -c 16/8 -s 80 -w 9 --font-ratio 1 --color-space din99d \
    --symbols space+solid+half+stipple+ascii they_wont.jpg | tr -d \\n | \
    iconv -c -f utf8 -t cp437 > they_wont.ans
$ ansilove -f 80x50 -r they_wont.ans -o top_notch_blog_fodder.png

It's a bit of a screenful, but should get better once I get around to implementing presets.

Finally, I added a new internal symbol range for Latin scripts. It's got about 350 new symbols to work with on top of the ASCII that was already there. Example anim below; might be a good idea to open this one in a separate tab, as browser scaling kind of ruins it.

--fg-only --symbols latin. Input from 30000fps.

Thanks

Apart from the packagers, who are excellent but too numerous to list for fear of leaving anyone out, this time I'd like to thank Lionel Dricot aka Ploum for lots of good feedback. He develops a text mode offline-first browser for Gemini, Gopher, Spartan and the web called Offpunk, and you should check it out.

One more. When huntr.dev came onto my radar for the first time this spring, I admit to being a little bit skeptical. However, they've been a great help, and every interaction I've had with both staff and researchers has been professional, pleasant and highly effective. Big thumbs up. I've more thoughts on this, probably enough for a post of its own. Eventually.

A propos

I came across Aaron A. Reed's project 50 Years of Text Games a while back (via Emily Short's blog, I suspect), and have been following it with interest. He launched his Kickstarter this week and is knocking it out of the park. The selection is a tad heavy on story/IF games (quoth the neckbeard, "grumble grumble, Empire, ZZT, grumble"), but it's really no complaint considering the effort that obviously went into this.

Seems low-risk too (the draft articles are already written and available to read), but I have a 75% miss rate on projects I've backed, so what do I know. Maybe next year it'll be 60%.

June 08, 2022

Apps: Attempt of a status report

This is not an official post from the GNOME Foundation, nor am I part of the GNOME Foundation’s Board, which is responsible for the policies mentioned in this post. However, I wanted to sum up the current situation as I understand it, to let you know what is currently happening around app policies.

Core and Circle

Ideas for (re)organizing GNOME apps have been around for a long time, like with this initiative from 2018. In May 2020, the Board of Directors brought forward the concept of differentiating “official GNOME software” and “GNOME Circle.” One month later the board settled on the GNOME Foundation’s software policy. GNOME Circle was officially launched in November 2020.

With this, there are two categories of software:

    1. Official GNOME software, curated by the release team. This software can use the GNOME brand, the org.gnome app id prefix, and can identify the developers as GNOME. Internally the release team refers to official software as core.

    2. GNOME Circle, curated by the Circle committee. This software is not official GNOME software and cannot use the GNOME trademarks. Projects receive hosting benefits and promotion.

Substantial contribution to the software of either of those categories makes contributors eligible for GNOME Foundation membership.

Those two categories are currently the only ones that exist for apps in GNOME.

Current Status and Outlook

Since the launch of GNOME Circle, no less than 42 apps have joined the project. With Apps for GNOME, we have an up-to-date representation of all apps in GNOME. And more projects benefitting from this structure are under development. Combined with other efforts like libadwaita, new developer docs, and a new HIG, I think we have seen an incredible boost in app quality and development productivity.

Naturally, there remain open issues after such a huge change. App criteria and workflows have to be adapted after collecting our first experiences. We need more clarification on what a “Core” app means to the project. And last but not least, I think we can do better with communicating about these changes.

Hopefully, at the upcoming GUADEC 2022 we will be able to add some cornerstones to get started with addressing the outstanding issues and continue on this successful path. If you want to get engaged or have questions, please let me know. Maybe some questions are already answered below 🙂

Frequently Asked Questions

Why is my favorite app missing?

I often get questions about why an app is absent from apps.gnome.org. The answer is usually that the app just never applied to Circle. So if your favorite app is missing, you may want to ask its developers to apply to GNOME Circle.

What do the “/World” and “/GNOME” GitLab namespaces mean?

I often get asked why an app is not on apps.gnome.org or part of “Core” while its repository resides in /GNOME. However, there is no specific meaning to /GNOME. It’s mostly a historical category and many of the projects in /GNOME have no specific status inside the project. By the way, many GNOME Circle projects are not even hosted on GNOME’s GitLab instance.

New “Core” apps however will be moved to /GNOME.

But can I still use org.gnome in my app id or GNOME as part of my app name?

To be very clear: No. If you are not part of “Core” (Official GNOME software) you can’t. As far as I can see, we won’t require apps to change their app id if they have used it before July 2020.

What about those GNOME games?

We have a bunch of nice little games that were developed within the GNOME project (and that largely still carry legacy GNOME branding). None of them currently has an official status. At the moment, no rules exclude games from becoming part of GNOME Circle. However, most of those games would probably need an overhaul before being eligible. I hope we can take care of them soon. Let me know if you want to help.

June 07, 2022

Introduction

Hello everyone!

I’m Marco Melorio, a 22-year-old Italian computer science student. I’ve been a GNOME user for about 2 years and I’ve quite literally fallen in love with it since then. Last year I started developing Telegrand, a Telegram client built to be well integrated with GNOME, a project I’m really proud of that is gaining quite a bit of interest. That was the moment when I started being more active in the community and also when I started contributing to various GNOME projects.

Fast-forward to today

I’m excited to announce that I’ve been selected for GSoC’22 to implement a media history viewer in Fractal, the Matrix client for GNOME, with the help of my mentor Julian Sparber. More specifically, this is about adding a page to the room info dialog that can display all the media (e.g. images, videos, gifs) sent in the room. This is similar to what is found in other messaging apps, like Telegram, WhatsApp, etc.

I will be posting more in the coming days with details on the implementation and milestones of the project.

Thanks for reading.

Creating your own math-themed jigsaw puzzle from scratch

Don't you just hate it when you get nerd sniped?

I don't either. It is usually quite fun. Case in point, some time ago I came upon this YouTube video:

It is about how a "500 piece puzzle" usually does not have exactly 500 pieces, but slightly more, to make manufacturing easier (see the video for the actual details, which are quite interesting). As I was watching the video I came up with an idea for my own math-themed jigsaw puzzle.

You can probably guess where this is going.

The idea would not leave me alone so I had to yield to temptation and get the damn thing implemented. This is where the problems started. The puzzle required special handling and tighter tolerances than the average jigsaw puzzle made from a custom photo. As a taste of things to come, the final puzzle will only have two kinds of pieces, namely these:


For those who already deciphered what the final result will look like: good job.

As you can probably tell, the printed pattern must be aligned very tightly to the cut lines. If it shifts by even a couple of millimeters, which is common in printing, then the whole thing breaks apart. Another requirement is that I must know the exact piece count beforehand so that I can generate an output image that matches the puzzle cut.

I approached several custom jigsaw puzzle manufacturers and they all told me that what I wanted was impossible and that their manufacturing processes are not capable of such precision. One went so far as to tell me that their print tolerances are corporate trade secrets and so is the cut used. Yes, the cut. Meaning the shapes of the resulting pieces. The one feature that is the same on every custom jigsaw puzzle and thus is known by anyone who has ever bought one of them. That is a trade secret. No, it makes no sense to me either.

Regardless, it seemed like the puzzle could not be created. But, as the old saying goes, all problems are solvable with a sufficient application of public libraries and lasers.

This is a 50 Watt laser cutter and engraver that is freely usable in my local library. This nicely steps around the registration issues because printing and cutting are done at the same time and the machine is actually incredibly accurate (sub-millimeter). The downside is that you can't use color in the image. Color is created by burning so you can only create grayscale images and the shade is not particularly precise, though the shapes are very accurate.

After figuring this out, the procedure became simple. All that was needed was some Python, Cairo and 3mm plywood. Here is the machine doing the engraving.

After the image had been burned, it was time to turn the laser to FULL POWER and cut the pieces. First sideways

then lengthwise.

And here is the final result all assembled up.

This is a 256 piece puzzle showing a Hilbert curve. It is a space-filling curve, that is, it travels through each "pixel" in the image exactly once in a continuous fashion and never intersects itself. As you can (hopefully) tell, there is also a gradient so that the further along the curve you get, the lighter the printing gets. So in theory you could assemble this jigsaw puzzle by first ordering the pieces from darkest to lightest and then just joining the pieces one after the other.
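If you want to generate a similar image yourself, the rendering step can be sketched with pycairo like this (a minimal sketch using the standard Hilbert curve mapping, not the actual script used for the puzzle):

import cairo  # pycairo

def d2xy(n, d):
    # Map distance d along the Hilbert curve to (x, y) on an n*n grid
    # (standard iterative algorithm).
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

N, CELL = 16, 64  # 16 x 16 = 256 pieces
surface = cairo.ImageSurface(cairo.FORMAT_RGB24, N * CELL, N * CELL)
cr = cairo.Context(surface)
for d in range(N * N):
    x, y = d2xy(N, d)
    shade = d / (N * N - 1)  # darkest at the start of the curve, lightest at the end
    cr.set_source_rgb(shade, shade, shade)
    cr.rectangle(x * CELL, y * CELL, CELL, CELL)
    cr.fill()
surface.write_to_png("hilbert.png")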

The piece cut in this puzzle is custom. The "knob" shape is parameterized by a bunch of variables and each cut between two pieces has been generated by picking random values for said parameters. So in theory you could generate an arbitrarily large jigsaw puzzle with this method (it does need to be a square with the side length being a power of two, though).
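As for the randomized knobs, the generator presumably does something along these lines (the parameter names and ranges here are made up for illustration; the real ones are not published):

import random

def random_knob():
    # Each cut between two pieces gets its own random knob; all values are
    # fractions of the edge length and purely illustrative.
    return {
        "neck_width": random.uniform(0.10, 0.16),
        "head_radius": random.uniform(0.08, 0.12),
        "offset": random.uniform(-0.10, 0.10),  # knob position along the edge
        "flip": random.choice([True, False]),   # knob points into either piece
    }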

2022-06-07 Tuesday

  • Planning call; mail chew - got my blog re-located finally to https://meeksfamily.uk/~michael after something of a hiatus caused by the demise of people.gnome.org - that lets me continue to be a dinosaur and generate RSS manually and so on.
  • Why is it always at the very end of the day, when I need to quit and emerge from my lair, that I'm most productive: in the 'stolen time' from my family? Perhaps this is the secret sauce behind the four-day week?

Release (semi-)automation

The time I have available to maintain GNOME Initial Setup is very limited, as anyone who has looked at the commit history will have noticed. I’d love more eyes & hands on this important but easy-to-overlook component, particularly to guide it kindly but firmly into the modern age of GTK 4 and the refreshed HIG.

I found that making a batch of 1–3 releases across different GNOME branches every few months was surprisingly time-consuming and error-prone, even with the pretty comprehensive release process checklist on the GNOME Wiki, so I’ve been periodically trying to automate bits of it away.

Philip Withnall’s gitlab-changelog script makes writing the NEWS file a lot quicker. I taught it to output the human-readable names of each updated translation (a nice additional contribution would be to also include the name of the human who updated the translation) and made it a little smarter about guessing the Git commit range to scan.

Beyond that, I added a Meson run target, maintainer-upload-release pointing at a script which performs some rudimentary coherence checks on the version number, tags the release (using git-evtag if available), atomically pushes the branch and that tag to GNOME GitLab, then copies the source tarball to master.gnome.org. (Apparently it has been almost 12 years since I did something similar in telepathy-gabble, building on the make maintainer-upload-release target that Simon McVittie added in 2008, which is where I borrowed the name.) Maybe other module maintainers may find this script useful too – it’s quite generic.
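For illustration, the coherence check at the start of such a script might look something like this (a sketch with made-up details, not the actual script):

#!/usr/bin/env python3
# Sketch: refuse to release if the requested tag does not match meson.build,
# or if the tag already exists. Names and checks are illustrative.
import re
import subprocess
import sys

meson = open("meson.build").read()
match = re.search(r"version\s*:\s*'([^']+)'", meson)
if not match:
    sys.exit("could not find the project version in meson.build")
version = match.group(1)

tag = sys.argv[1] if len(sys.argv) > 1 else version
if tag != version:
    sys.exit(f"tag {tag} does not match meson.build version {version}")

# Refuse to re-release: the tag must not already exist.
if subprocess.run(["git", "rev-parse", "--verify", "--quiet", tag],
                  capture_output=True).returncode == 0:
    sys.exit(f"tag {tag} already exists")

print(f"ok: ready to release {version}")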

Putting these together, the release flow looks like this:

git switch gnome-42
git pull
../pwithnall/gitlab-changelog/gitlab-changelog.py GNOME/gnome-initial-setup
# Manually edit NEWS to incorporate the changelog, adjusted as needed
# Manually check the version in meson.build
git commit -am 'NEWS for 42.Y'
ninja -C _build dist maintainer-upload-release

Another release-related quality-of-life improvement is to make GitLab CI not only build and test the project (in the vain hope that there might actually be tests!) but also check that the install and gnome-initial-setup-pot targets both work. (At one point or another both have failed at or around release time; now they never will again, famous last words.)

I know none of this is rocket science, but I find it all makes the process quicker and less cumbersome, and it’s stopped me from repeating errors like uploading the wrong version on a few tired evenings. Obviously this could all be taken further: perhaps a manually-invoked CI pipeline that does all this stuff, more checks, etc. But while I’m on this train of thought:

Why do we release GNOME modules one-by-one at all?

The workflow we use to release Endless OS is a bit different to GNOME. Once we merge a change to some module’s Git repository, such as eos-updater or our shrinking branch of GNOME Software, that change embarks on a scenic automated journey that takes it to the next nightly build of the entire OS, both as an OSTree update and as fresh installation media. I use these nightly builds for my daily work, safe in the knowledge that I can roll back to the previous build if necessary.

We don’t make releases of individual modules: instead, when it comes time to release the OS, we trigger a pipeline that (among many other things) pushes the already-built OS update to the production repo, and creates Release_x.y.z tags on each Git repo.

This was quite an adjustment for me at first, compared to lovingly hand-crafting NEWS files and coming up with funny/esoteric release names, but now that I’m used to it it’s hard to go back. Why can’t GNOME do the same?

At this point in the post, we are straying into territory that I have limited first-hand knowledge of. Caveat lector! But here goes:

Thanks to GNOME OS, GNOME already has nightly builds of the entire desktop and apps: so rather than having to build everything yourself, or wait for a development release of GNOME, you can just update & reboot your GNOME OS VM and test the change right there. gnome-build-meta knows how to build every GNOME module; and if you can build the code, it seems a conceptually small step to run ninja dist and the stuff above to publish tags and tarballs for each module.

So you could well imagine on 43.beta release day, someone in the release team could boot the latest GNOME OS nightly, declare it to be Good, and push a button that tags every relevant GNOME module & builds and uploads all the tarballs, and then go back to their day, rather than having to chase down module owners who haven’t quite got around to making the release, fix random build breakages, and so on.

To make this work reliably, I think you’d need every module’s CI to be run through gnome-build-meta, building that MR against the rest of the project, so that g-b-m build failures would be caught before (not after) the offending change lands in the module in question. Seems doable – in Endless we have the equivalent thing managed by a jenkins-job-builder template, the GitHub Pull Request Builder plugin, and a gnarly script.

Continuous integration and deployment are becoming the norm throughout the software industry, for good reasons laid out quite well in articles like Shipping Fast Changes Your Life: the smaller the gap between making a change and it reaching a user, the faster the feedback, and the less costly it is to fix a bug or change course.

The free software movement has historically been ahead of the curve on this, with the “release early, release often” philosophy. And GNOME in particular has used a time-based release process for two decades, allowing major distros to align their schedules to GNOME and get updates into the hands of users quickly, which went some way towards overcoming the fact that GNOME does not own the full pipeline from source code to end users.

Havoc Pennington’s June 2002 email proposing this model has aged rather well, in my opinion, and places a heavy emphasis on the development branch being usable:

The unstable branch must always be dogfood-quality. If testers can’t test it by using it daily, they can’t make the jump. If the unstable branch becomes too unstable, we can’t release it on a reliable schedule, so we have to start breaking the stable branch as a stopgap.

Interestingly the time-based release schedule wiki page states that the schedule should contain:

Regular test release dates, approximately every 2 weeks.

These days, GNOME releases are closer to monthly. In the context of the broader industry where updates reach users multiple times a day, this is starting to look a little less forward-thinking! Of course, continuously deploying an entire OS to production is rather harder than continuously deploying web apps or apps in app stores, if only because the stakes are higher: you need a really robust automatic rollback mechanism to save your users’ plant-based bacon substitute if a new OS build fails to boot, or worse, contains an updater bug that prevents future updates being applied! Still, I believe that a bit of automation would go a long way in allowing module maintainers and the release team alike to spend their scarce mental energy on other things, and allow the project to increase the frequency of releases. What am I missing?

June 06, 2022

2022-06-06 Monday

  • Thought this was a bank holiday, but apparently wrong. Posted things, returned a substantial wrecking bar of the wrong length. Collected medicine for M.
  • Played at Barry Bushel's funeral & caught up with friends & family afterwards.
  • Home, chat with Kendy. Finally managed to catch the focus loss bug on save that drives me up the wall in 22.05. Mail chew.

Playing with the rpi4 CPU/GPU frequencies

In recent days I have been testing how modifying the default CPU and GPU frequencies on the rpi4 affects the performance of our reference Vulkan applications. By default Raspbian uses 1500MHz and 500MHz respectively. But with good heat dissipation (a good fan, the rpi400 heat spreader, etc.) you can play a little with those values.
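For reference, on Raspbian these clocks are usually raised in /boot/config.txt; something like the following matches the 1800/750 configuration used later in this post (the over_voltage line and the exact values are board-dependent assumptions, so treat this as an illustration):

# /boot/config.txt
over_voltage=6   # extra core voltage; may be needed for stability
arm_freq=1800    # CPU clock in MHz (default 1500)
v3d_freq=750     # V3D clock in MHz, the GPU block Vulkan uses (default 500)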

One of the tools we usually use to check performance changes is gfxreconstruct. This tool allows you to record all the Vulkan calls made during the execution of an application, and then replay the captured file. So we have traces of several applications, and we use them to test any hypothetical performance improvement, or to verify that some change doesn’t cause a performance drop.
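For those who haven’t used it, a capture/replay session looks roughly like this (generic gfxreconstruct usage, not our exact setup; the file name is illustrative, as captures get a timestamped name by default):

$ VK_INSTANCE_LAYERS=VK_LAYER_LUNARG_gfxreconstruct ./demo
$ gfxrecon-replay gfxrecon_capture.gfxr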

So, let’s see what we get if we increase the CPU/GPU frequencies, focusing on the Unreal Engine 4 demos, which are the most shader-intensive:

Unreal Engine 4 demos FPS chart

So, as expected, with the higher clock speeds we see a good performance boost of ~10FPS for several of these demos.

Some may wonder why the increase in CPU frequency has so little impact. As I mentioned, we didn’t get those values from the real applications, but from gfxreconstruct traces, which only capture the Vulkan calls. So those replays don’t include tasks like collision detection, user input, etc. that are usually handled on the CPU. Also, as mentioned, all the Unreal Engine 4 demos use really complex shaders, so the “bottleneck” there is the GPU.

Let’s move on from the cold numbers now and test the real applications. Let’s start with the Unreal Engine 4 SunTemple demo, using the default CPU/GPU frequencies (1500/500):

Even if it runs fairly smoothly most of the time at ~24 FPS, there are some places where it dips below 18 FPS. Now let’s see it with the CPU/GPU frequencies increased to 1800/750:

Now the demo runs at ~34 FPS most of the time. The worst dip is ~24 FPS. It is a lot smoother than before.

Here is another example with the Unreal Engine 4 Shooter demo, already increasing the CPU/GPU frequencies:

Here the FPS never dips below 34FPS, staying at ~40FPS most of the time.

It has been around a year and a half since we announced a Vulkan 1.0 driver for Raspberry Pi 4, and since then we have made significant performance improvements, mostly around our compiler stack, that have notably improved some of these demos. In some cases (like the Unreal Engine 4 Shooter demo) we got a 50%-60% improvement (if you want to know more about the compiler work, you can read the details here).

In this post we can see how, on top of that work, taking advantage of increased CPU and GPU frequencies lets us really start to get reasonable framerates in more demanding demos. Even if this is still at low resolutions (for this post all the demos were running at 640×480), it is still great to see this on a Raspberry Pi.

Builder GTK 4 Porting, Part VI

Short update this week given last Monday was Memorial Day in the US. I had a lovely time relaxing in the yard and running errands with my wife Tenzing. We’ve been building such a beautiful home together that it’s nice to just sit back and enjoy it from time to time.

A picture of my yard
A picture of me

GTK

  • Merged some work on debug features for testing RTL vs LTR from Builder. There is a new GTK_DEBUG=invert-text-dir to allow rudimentary testing with alternate text directions.

Builder

  • Landed a new clone design using libadwaita.
  • Fixed rendering of symbolic icons in the gutter for diagnostics, etc
  • Fixed error underlines for spellcheck when dealing with languages where the glyph baseline may change
  • Added a new IdeVcsCloneRequest which can do most of the clone work so the UI bits can be very minimal.
  • Added interfaces to allow for retrieving a list of branches on a remote before you’ve cloned it. Useful for selecting an initial branch, but due to how libgit2 works, we have to create a temporary directory to make it work (and then unlink it). Handy nonetheless.
  • Make gnome-builder --clone work again.
  • Make cloning newcomer applications automatically work again.
  • Made a lot of our popovers use menu styling, despite being backed by GListModel and GtkListView.
  • Even more menuing cleanups. Amazing how each pass of this really tends to clarify things from a user perspective.
  • Made all of the editor menu buttons in the statusbar functional now.
  • New gsetting and preference toggle to set default license for new projects.
  • A new IdeWebkitPage page implementation which is a very rudimentary web-browser. This will end up being re-used by the html-preview, markdown-preview, and sphinx plugins.
  • Removed the glade plugin
  • Fixed presentation of clang completion items.

I’m pretty satisfied with the port of the cloning workflow, but it really needs to have a PTY plumbed through to the peer process so we can get better/more complete information. We’ll see if there is time before 43 though given how much else there is to get done.

All of this effort is helping me get a more complete vision of what I’d like to see out of a GTK 5. Particularly as we start attacking things from a designer tooling standpoint.

A screenshot of Builder with an integrated web-browser
A screenshot of Builder with the clone dialog choosing a branch to clone
A screenshot of Builder with the clone dialog

June 03, 2022

Introductory Post

Hello everyone! 😄

I'm Utkarsh Gandhi, a 20-year-old, second-year B.Tech student. I have been coding for a few years now but had never contributed to an open-source project before. My seniors advised me to participate in GSoC as it is the best way to start contributing to open source projects.

I looked at a lot of organisations, but none of them seemed right. Then finally, GNOME caught my eye. I had been using the GNOME desktop environment and its applications for a year, so this seemed like the perfect opportunity for me to give back to this organisation. I knew this would be the right fit for me.

I started contributing to GNOME (more specifically Nautilus) around mid-February this year. I chose Nautilus as it is one of those applications I use on a daily basis, and it just seemed logical to try and contribute to an app that is really important to me.

As this was the first time I was contributing to an open-source project, I was extremely nervous about how to get started. But the community members were really polite and helpful and gave me a lot of guidance during the first few weeks which made it really easy and fun for me to contribute :D

Fast-forward to June, and I am glad to announce that I have been selected as a contributor by the GNOME Foundation for the "Revamp New Documents Sub-menu" project in Nautilus for GSoC '22. My mentor is @antoniof, who has been extremely helpful ever since I started contributing.

My project aims to design and implement a UI for the New Document creation feature in GNOME Files (Nautilus), which is a UI front-end project.

I'm excited to work on this project and have a chance to give back to this wonderful community. Looking forward to an amazing summer! 🎉

This is my first post for this blog, and I plan to post updates on my progress every week or two on this blog, so stay tuned! 💫

Thank you for reading 😁

#46 Going Mobile

Update on what happened across the GNOME project in the week from May 27 to June 03.

Core Apps and Libraries

GNOME Shell

Core system user interface for things like launching apps, switching windows, system search, and more.

verdre says

News from the Shell team! We’re improving the experience on small screens and things are progressing quickly, GNOME Shell might run on your phone sooner than you think. Check out our blogpost for more information.

WebKitGTK

GTK port of the WebKit rendering engine.

adrian announces

The recent WebKitGTK 2.36.3 release included fixes for a number of security issues that allowed remote code execution. While we are not aware of any of them being exploited, it is nevertheless strongly recommended to update to the latest release.

There are a number of improvements in the multimedia backend as well: GStreamer elements known to be problematic are now explicitly disabled by default, we have enabled capture from devices which can use hardware-accelerated encoding, fixed display capture using PipeWire, and improved streaming video playback.

Software

Lets you install and update applications and system extensions.

Philip Withnall reports

Milan Crha has added support for listing other apps by the same author in gnome-software

Calls

A phone dialer and call handler.

Evangelos reports

Calls can now do encrypted VoIP calls with SIP using SRTP 🎉 Supporting SRTP roughly consisted of two parts:

Please note that we don’t make use of the encryption indicator yet, for more details see this issue where you can also find some nice designs by Sam Hewitt of how things should eventually behave.

GLib

The low-level core library that forms the basis for projects such as GTK and GNOME.

Philip Withnall reports

Benjamin Berg has fixed a nasty deadlock in GFileMonitor in GLib

Circle Apps and Libraries

Gaphor

A simple UML and SysML modeling tool.

Arjan says

Gaphor 2.10.0 was released last week. Among the improvements, Activity diagrams have been extended, model loading has been improved, and Gaphor finally has full drag-and-drop support from the tree view to a diagram.

Authenticator

Simple application for generating Two-Factor Authentication Codes.

Bilal Elmoussaoui says

A new bugfix release of Authenticator is out. The new release also migrates your tokens from the host keyring to inside the sandbox so other sandboxed apps can’t access them.

Third Party Projects

Flatseal

A graphical utility to review and modify permissions of Flatpak applications.

Martín Abente Lahaye says

I am happy to announce the release of Flatseal 1.8.0 🎉. This new release comes with the ability to review and modify global overrides, highlight changes made by users, follow system-level color styles, support for more languages and a few bug fixes.

Amberol

Plays music, and nothing else.

Emmanuele Bassi announces

New Amberol release! Lots of small UI fixes to improve consistency and give better feedback when importing songs in the playlist. Colors and spacing between elements have also been improved, as well as general reliability.

Miscellaneous

federico announces

The accessibility infrastructure modules are now merged: https://viruta.org/accessibility-repos-are-now-merged.html

ranfdev announces

Frustrated by the standard .ui file writing experience (that XML is very verbose) and inspired by the blueprint compiler, I’ve decided to write a custom Domain Specific Language to generate .ui files. Compared to blueprint compiler, this lets you use a complete programming language, with variables and functions, to generate your UI. Check out the project page for more information

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

June 02, 2022

Using Composefs in OSTree

Recently I’ve been looking at what options there are for OSTree-based systems to be fully cryptographically sealed, similar to dm-verity. I really like the efficiency and flexibility of the ostree storage model, but it currently has this one weakness compared to image-based systems. See for example the FAQ in Lennart’s recent blog post about image-based OSes for a discussion of this.

This blog post is about fixing this weakness, but let’s start by explaining the problem.

An OSTree boot works by encoding in the kernel command-line the rootfs to use, like this:

ostree=/ostree/boot.1/centos/993c66dedfed0682bc9471ade483e2f57cc143cba1b7db0f6606aef1a45df669/0

Early on in the boot some code runs that reads this and mounts this directory (called the deployment) as the root filesystem. If you look at this you can see a long hex string. This is actually a sha256 digest from the signed ostree commit, which covers all the data in the directory. At any time you can use this to verify that the deployment is correct, and ostree does so when downloading and deploying. However, once the deployment has been written to disk, it is not verified again, as doing so is expensive.

In contrast, image-based systems using dm-verity compute the entire filesystem image on the server, checksum it with a hash-tree (that allows incremental verification) and sign the result. This allows the kernel to validate every single read operation and detect changes. However, we would like to use the filesystem to store our content, as it is more efficient and flexible.

Luckily, there is something called fs-verity that we can use. It is a checksum mechanism similar to dm-verity, but it works on file contents instead of partition content. Enabling fs-verity on a file makes it immutable and computes a hash-tree for it. From that point on any read from the file will return an error if a change was detected.

fs-verity is a good match for OSTree since all files in the repo are immutable by design. ostree has supported fs-verity for some time now. When it is enabled, the files in the repo get fs-verity applied as they are added. This then propagates to the files in the deployment.

Isn’t this enough then? The files in the root fs are immutable and verified by the kernel.

Unfortunately no. fs-verity only verifies the file content, not the file or directory metadata. This means that a change there will not be detected. For example, it’s possible to change permissions on a file, add a file, remove a file or even replace a file in the deploy directories. Hardly immutable…

What we would like is to use fs-verity to also seal the filesystem metadata.

Enter composefs

Composefs is a Linux filesystem that Giuseppe Scrivano and I have been working on, initially with a goal of allowing deduplication for container image storage. But, with some of the recent changes it is also useful for the OSTree usecase.

The basic idea of composefs is that we have a set of content files and then we want to create directories with files based on them. The way ostree does this is to create an actual directory tree with hardlinks to the repo files. Unfortunately this has certain limitations. For example, the hardlinks share metadata like mtime and permissions, and if these differ we can’t share the content file. It also suffers from not being an immutable representation.

So, instead of creating such a directory, we create a “composefs image”, which is a binary blob that contains all the metadata for the directory (names, structure, permissions, etc) as well as pathnames to the files that have the actual file contents. This can then be mounted wherever you want.

This is very simple to use:

# tree rootfs
rootfs
├── file-a
└── file-b
# cat rootfs/file-a
file-a
# mkcomposefs rootfs rootfs.img
# ls -l rootfs.img
-rw-r--r--. 1 root root 272 Jun 2 14:17 rootfs.img
# mount composefs -t composefs -o descriptor=rootfs.img,basedir=rootfs mnt

At this point the mnt directory is now a frozen version of the rootfs directory. It will not pick up changes to the original directory metadata:

# ls mnt/
file-a file-b
# rm mnt/file-a
rm: cannot remove 'mnt/file-a': Read-only file system
# echo changed > mnt/file-a
bash: mnt/file-a: Read-only file system
# touch rootfs/new-file
# ls rootfs mnt/
mnt/:
file-a file-b

rootfs:
file-a file-b new-file

However, it is still using the original files for content (via the basedir= option), and these can be changed:

# cat mnt/file-a
file-a
# echo changed > rootfs/file-a
# cat mnt/file-a
changed

To fix this we enable the use of fs-verity, by passing the --compute-digest option to mkcomposefs:

# mkcomposefs rootfs --compute-digest rootfs.img
# mount composefs -t composefs -o descriptor=rootfs.img,basedir=rootfs mnt

Now the image will have the fs-verity digests recorded and the kernel will verify these:

# cat mnt/file-a
cat: mnt/file-a: Input/output error
WARNING: composefs backing file 'file-a' unexpectedly had no fs-verity digest

Oops, turns out we didn’t actually use fs-verity on that file. Lets remedy that:

# fsverity enable rootfs/file-a
# cat mnt/file-a
changed

We can now try to change the backing file (although fs-verity only lets us completely replace it). This will fail even if we enable fs-verity on the new file:

# echo try-change > rootfs/file-a
bash: rootfs/file-a: Operation not permitted
# rm rootfs/file-a
# echo try-change > rootfs/file-a
# cat mnt/file-a
cat: mnt/file-a: Input/output error
WARNING: composefs backing file 'file-a' unexpectedly had no fs-verity digest
# fsverity enable rootfs/file-a
# cat mnt/file-a
cat: mnt/file-a: Input/output error
WARNING: composefs backing file 'file-a' has the wrong fs-verity digest

In practice, you’re likely to use composefs with a content-addressed store rather than the original directory hierarchy, and mkcomposefs has some support for this:

# mkcomposefs rootfs --digest-store=content rootfs.img
# tree content/
content/
├── 0f
│   └── e37b4a7a9e7aea14f0254f7bf4ba3c9570a739254c317eb260878d73cdcbbc
└── 76
    └── 6fad6dd44cbb3201bd7ebf8f152fecbd5b0102f253d823e70c78e780e6185d
# mount composefs -t composefs -o descriptor=rootfs.img,basedir=content mnt
# cat mnt/file-b
file-b

As you can see it automatically copied the content files into the store named by the fs-verity digest and enabled fs-verity on all the content files.

Is this enough now? Unfortunately no. We can still modify the rootfs.img file, which will affect the metadata of the filesystem. But this is easy to solve by using fs-verity on the actual image file:

# fsverity enable rootfs.img
# fsverity measure rootfs.img
sha256:b92d94aa44d1e0a174a0c4492778b59171703903e493d1016d90a2b38edb1a21 rootfs.img
# mount composefs -t composefs -o descriptor=rootfs.img,basedir=content,digest=b92d94aa44d1e0a174a0c4492778b59171703903e493d1016d90a2b38edb1a21 mnt

Here we passed the digest of the rootfs.img file to the mount command, which makes composefs verify that the image matches what was expected.

Back to OSTree

That was a long detour into composefs. But how does OSTree use this?

The idea is that instead of checking out a hardlinked directory and passing that on the kernel command line, we build a composefs image, enable fs-verity on it and put its filename and digest on the kernel command line instead.

For additional trust, we also generate the composefs image on the server when building the ostree commit. Then we add the digest of that image to the commit metadata before signing it. Since building the composefs image is fully reproducible, we will get the exact same composefs image on the client and can validate it against the signed digest before using it.
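In command form, the client-side verification is roughly this (reusing the tools shown above; the directory names are illustrative, and the real integration lives inside ostree):

# mkcomposefs deploydir --digest-store=objects rootfs.img
# fsverity digest rootfs.img

The resulting digest must match the one recorded in the signed commit metadata.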

This has been a long post, but now we are at the very end, and we have a system where every bit read from the “root filesystem” is continuously verified against a signed digest passed on the kernel command line. Much like dm-verity, but much more flexible.

The Containers usecase

As I mentioned before, composefs was originally made for a different usecase, namely container image storage. The goal there is that as we unpack container image layers we can drop the content files into a shared directory, and then generate composefs images for the images themselves.

This way identical files between any two installed images will be shared on the local machine. And the sharing would be both on disk and in memory (i.e. in the page cache). This will allow higher density on your cluster, and smaller memory requirements on your edge nodes.

June 01, 2022

Accessibility repositories are now merged

Over the past week I worked on merging the atk and at-spi2-atk repositories into at-spi2-core. A quick reminder of what they do:

  • at-spi2-core: Has the XML definitions of the DBus interfaces for accessibility — what lets a random widget identify itself as having a Button role, or what lets a random text field expose its current text contents to a screen reader. Also has the "registry daemon", which is the daemon that multiplexes applications to screen readers or other accessibility technologies. Also has the libatspi library, which is a hand-written binding to the DBus interfaces, and which is used by...

  • at-spi2-atk: Translates the ATK API into calls to libatspi, to effectively make ATK talk DBus to the registry daemon. This is because...

  • atk: is mostly just a bunch of GObject-based interfaces that programs can implement to make themselves accessible. GTK3, LibreOffice, and Mozilla use it. They haven't yet followed GTK4 or Qt5, which use the DBus interfaces directly and thus avoid a lot of wrappers and conversions.

Why merge the repositories?

at-spi2-core's DBus interfaces, the way the registry daemon works, atk's interfaces and their glue in at-spi2-atk via libatspi... all of these are tightly coupled. You can't make a change in the libatspi API without changing at-spi2-atk, and a change in the DBus interfaces really has to ripple down to everything, but keeping things as separate repositories makes it hard to keep them in sync.

I am still in the process of learning how the accessibility code works, and my strategy to learn a code base, besides reading code while taking notes, is to do a little exploratory refactoring.

However, when I did a little refactoring of a bit of at-spi2-core's code, the tests that would let me see if that refactoring was correct were in another repository! This is old code, written before unit tests in C were doable in a convenient fashion, so it would take a lot more refactoring to get it to a unit-testable state. I need end-to-end tests instead...

... and it is at-spi2-atk that has the end-to-end tests for all the accessibility middleware, not at-spi2-core, which is the module I was working on. At-spi2-atk is the repository that has tests like this:

  • Create a mock accessible application ("my_app").
  • Create a mock accessibility technology ("my_screen_reader").
  • See if the things transferred from the first one to the second one make sense, thus testing the middleware.

By merging the three repositories, and adding a code coverage report for the test suite, we can add a test, change some code, look at the coverage report, and see if the test really exercised the code that we changed.
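Since the merged repository builds with Meson, the coverage report can be produced with the stock Meson machinery; a generic example (not necessarily the exact CI setup):

$ meson setup _build -Db_coverage=true
$ meson test -C _build
$ ninja -C _build coverage-html   # drives lcov/genhtml to build the HTML report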

Changes for distributions

Please see the announcement on discourse.gnome.org.

That coverage report is not accessible!

Indeed, it is pretty terrible. Lcov's genhtml tool creates a giant <pre>, with things like the execution count for each line just delimited with a <span>. Example of lcov's HTML.

(Librsvg's coverage report is pretty terrible as well; grcov's HTML output is a bunch of color-coded <div>. Example of grcov's HTML.)

Does anyone know code coverage tools that generate accessible output?

May 31, 2022

Flatseal 1.8.0

I am happy to announce a new release of Flatseal 🎉. This new release comes with the ability to review and modify global overrides, highlight changes made by users, follow system-level color schemes, support for more languages and a few bugs fixes.

Let’s start with bug fixes. Since Flatpak 1.12.4, removing filesystem permissions with modes in Flatseal caused Flatpak to warn people about the mode being included as part of the override. Justifiably, this confused many. With this release, Flatseal will no longer include these modes, e.g. :ro, when removing filesystem permissions.

Although Flatseal’s main distribution channel is Flatpak, there are people who prefer to install it from their regular package manager. So, I included a fix which handles the creation of the overrides directory. Under Flatpak, this scenario is handled by the permissions themselves.

Moving on to new features, @A6GibKm added support for the new system-level color schemes recently added to GNOME. Plus, he streamlined both shortcuts and documentation windows behavior by making both modal.

The main feature I focused on for this release is the ability to review and modify global overrides, which is actually three features in one. Let me elaborate.

Currently, when people look at a particular application’s permissions they are actually seeing a mix of two things: a) the original permissions that came with that application and b) the permissions changed by them. But there’s a third source of changes that wasn’t taken into account, and that is global overrides.

So, the first part was to make Flatseal aware of these global overrides. That means that, when people look at an application’s permissions, these global overrides need to be accounted for and displayed. With this release, all sources of permissions changes are taken into account. Now, what you see is effectively and exactly what the application can or can’t do.
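For context, a global override is what the flatpak CLI writes when you omit the application ID; for example (org.example.App is a placeholder, and the path assumes a system-wide installation):

$ flatpak override --nofilesystem=home                  # global: applies to all apps
$ flatpak override --nofilesystem=home org.example.App  # per-app override
$ cat /var/lib/flatpak/overrides/global
[Context]
filesystems=!home;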

But this introduces a new problem or, better said, it exacerbates an existing one. It goes like this: people go to Flatseal, select an application, switch a few options and close it. The next day, they go back, select the same application and have absolutely no idea what changed the day before. This only gets worse when introducing global overrides into the mix.

Therefore, the second part was to extend Flatseal’s models to differentiate between original permissions and the two types of overrides, and to expose that information to the UI. Now, with this release, every permission changed by the user or globally is highlighted, as shown above. This includes tooltips to let people know exactly where the change came from.

Lastly, the third part was to expose global overrides themselves in the UI, so people can review and modify them. I tried different approaches for how to expose this but, finally, I let Occam’s razor decide. I exposed global overrides as if they were just another application, under an “All Applications” sticky item in the applications list.

The benefit of this approach is that it’s quite easy to find, and even search, and it’s literally the same interface as for applications. But simplicity comes with a price.

If you’re a heavy user of Flatseal you probably noticed that it only allows you to a) grant permissions that the applications don’t have or b) remove permissions that the applications do have but, with the exception of filesystem permissions, c) it doesn’t allow removing permissions that the applications don’t have in the first place.

Of course, most of the time, this wouldn’t even make sense for a particular application, but it is a limitation when thinking in terms of global overrides. So unfortunately, you can’t go and remove network access for all your applications in one click. At least not just yet.

Moving on to translations, support for the Bulgarian, Danish and Chinese (China) languages was added by @RacerBG, @exponentactivity and @EricZhang456 respectively. Big kudos to @libreajans, @AsciiWolf, @Vistaus, @cho2, @eson57, @daPhipz, @ovari, @TheEvilSkeleton, and @Garbulix for keeping translations up to date.

Moving forward, I would like to revise the backend models to remove some of the limitations I mentioned earlier, polish the global overrides UI, and finally port it to GTK4 and Libadwaita.

Last but not least, special kudos to @rusty-snake for always keeping an eye on newly opened issues and patiently responding to people’s doubts.

ep0: The Journey Begins


Hey! I’m Thejas Kiran P S, a sophomore pursuing my Bachelor’s in Computer Science. I have been selected by the GNOME organization as a GSoC'22 contributor and will be working on Pitivi. Pitivi is a non-linear video editor based on the GStreamer Editing Services library.

My work here will be to improve the Timeline component of the application by solving currently open issues and other bugs (Also introduce some new features maybe? ;)

That’s all for now. See you later!

May 30, 2022

Towards GNOME Shell on mobile

As part of the design process for what ended up becoming GNOME 40 the design team worked on a number of experimental concepts, a few of which were aimed at better support for tablets and other smaller devices. Ever since then, some of us have been thinking about what it would take to fully port GNOME Shell to a phone form factor.

GNOME Shell mockup from 2020, showing a tiling-first tablet shell overview and two phone-sized screens
Concepts from early 2020, based on the discussions at the hackfest in The Hague

It’s an intriguing question because post-GNOME 40, there’s not that much missing for GNOME Shell to work on phones, even if not perfectly. A few of the most difficult pieces you need for a mobile shell are already in place today:

  • Fully customizable app grid with pagination, folders, and drag-and-drop re-ordering
  • “Stick-to-finger” horizontal workspace gestures, which are pretty close to what we’d want on mobile for switching apps
  • Swipe up gesture for navigating to overview and app grid, which is also pretty close to what we’d want on mobile

On top of that, many of the things we’re currently working towards for desktop are also relevant for mobile, including quick settings, the notifications redesign, and an improved on-screen keyboard.

Possible thanks to the Prototype Fund

Given all of this synergy, we felt this is a great moment to actually give mobile GNOME Shell a try. Thanks to the Prototype Fund, a grant program of the German Federal Ministry of Education and Research (BMBF) supporting public interest software, we’ve been working on mobile support for GNOME Shell for the past few months.

Scope

We’re not expecting to complete every aspect of making GNOME Shell a daily driveable phone shell as part of this grant project. That would be a much larger effort because it would mean tackling things like calls on the lock screen, PIN code unlock, emergency calls, a flashlight quick toggle, and other small quality-of-life features.

However, we think the basics of navigating the shell, launching apps, searching, using the on-screen keyboard, etc. are doable in the context of this project, at least at a prototype stage.

Three phone-sized UI mockups, one showing the shell overview with multitasking cards, the second showing the app grid with tiny multitasking cards on top, and the third showing quick toggles with notifications below.
Mockups for some of the main GNOME Shell views on mobile (overview, app grid, system status area)

Of course, making a detailed roadmap for this kind of effort is hard and we will keep adjusting it as things progress and become more concrete, but these are the areas we plan to work on in roughly the order we want to do them:

  • New gesture API: Technical groundwork for the two-dimensional navigation gestures (done)
  • Screen size detection: A way to detect the shell is running on a phone and adjust certain parts of the UI (done)
  • Panel layout: Using the screen size detection, add a separate mobile panel layout, with a different top panel and a new bottom panel for gestures (in progress)
  • Workspaces and multitasking: Make every app a fullscreen “workspace” on mobile (in progress)
  • App Grid layout: Adapt the app grid to the phone portrait screen size, ideally as part of a larger effort to make the app grid work better at various resolutions (in progress)
  • On-screen keyboard: Add a narrow on-screen keyboard mode for mobile portrait
  • Quick settings: Implement the new quick settings designs

Current Progress

One of the main things we want to unlock with this project is the fully semantic two-dimensional navigation gestures we’ve been working towards since GNOME 40. This required reworking gesture recognition at a fairly basic level, which is why most of the work so far has been focused around unlocking this. We introduced a new gesture tracker and had to rewrite a fair amount of the input handling fundamentals in Clutter.

Designing a good API around this took a lot of iterations and there’s a lot of interesting details to get into, but we’ll cover that in a separate deep-dive blogpost about touch gesture recognition in the near future.

Based on the gesture tracking rework, we were able to implement two-dimensional gestures and to improve the experience on touchscreens quite a bit in general. For example, the on-screen keyboard now behaves a lot more like you’re used to from your smartphone.

Here’s a look at what this currently looks like on laptops (highly experimental, the second bar would only be visible on phones):

Some other things that already work or are in progress:

  • Detecting that we’re running on a phone, and disabling/adjusting UI elements based on that
  • A more compact app grid layout that can fit on a mobile portrait screen
  • A bottom bar that can act as a handle for gesture navigation; we’ll definitely need this for mobile but it’s also a potentially interesting future direction for larger screens

Taken together, here’s what all of this looks like on actual phone hardware right now:

Most of this work is not merged into Mutter and GNOME Shell yet, but there are already a few open MRs in case you’d like to dive into the details:

Next Steps

There’s a lot of work ahead, but going forward progress will be faster and more visible because it will be work on the actual UI, rather than on internal APIs. Now that some of the basics are in place we’re also excited to do more testing and development on actual phone hardware, which is especially important for tweaking things like the on-screen keyboard.

Photo of the app grid on a Pinephone Pro leaning against a wood panel.
The current prototype running on a Pinephone Pro sponsored by the GNOME Foundation

Evince, Flatpak, and GTK print previews

Endless OS is distributed as an immutable OSTree snapshot, with apps added & removed with Flatpak (and podman for power users & developers). Although the snapshot is assembled from Debian packages, it’s not really possible to install additional system packages locally, nor to remove them. Over time, we have tried to remove as many apps out of the immutable OS as possible: Flatpak apps are sandboxed and can be updated at a faster cadence than the OS itself, or removed if not needed.

Evince is one such app built into the OS at present. As a PDF viewer, it handles untrusted input in a complex format with libraries that have historically contained vulnerabilities, so is a good candidate for sandboxing and timely updates. While exploring removing it from the OS in favour of the Flatpak version from Flathub, I learned some things that were non-obvious to me about print preview, and which prevented making this change at the time.

Caveats: the notes below are a simplification, but I believe they are broadly accurate for GNOME on Linux. I’m sure people more familiar with GTK and/or printing already know everything here.

Printing from GTK apps

GTK provides API for applications to print documents. This presents the user with a print dialog, with many knobs to control how your document will be printed. That dialog has a Preview button; when you press it, the dialog vanishes and another one appears, showing a preview of your document. You can press Print on that dialog to print the document, or close it to cancel.

Why does the Preview button close the print dialog and open another one? Why does the preview dialog not have any of the knobs from the print dialog, or a way to return to the print dialog?

The answer lies in the documentation for GtkPrintOperation:

By default GtkPrintOperation uses an external application to do print preview.

The default application is Evince! More specifically, it is the following command:

evince --unlink-tempfile --preview --print-settings %s %f

Cribbing from the manpage:

--preview
Run evince as a previewer.
--unlink-tempfile
If evince is run in preview mode, this will unlink the temporary file created by GTK+.
--print-settings %s %f
This sends the full path of the PDF file, f, and the settings of the print dialog, s, to evince.

So when the user chooses to preview the document, GTK asks the application to render the document with the settings from the dialog, generates a PDF, and then invokes Evince to display that PDF. When you press Print in the preview dialog, it is Evince that sends the job to CUPS and thence to the printer.

What if evince is not present on the $PATH? The button is still displayed, but pressing it does nothing and the following is logged to stderr:

sh: 1: exec: evince: not found

There is code in GTK which attempts to handle this case by logging its own warning and then invoking the default PDF viewer on the generated PDF, but it doesn’t work because GLib actually spawns sh, not evince directly, and then returns success because ‘sh’ was successfully launched.

Printing from sandboxed apps

What happens if the application using the GtkPrintOperation API is a Flatpak app? evince is not part of any runtime, so GTK running in the application process cannot invoke it to preview the document. Well, in the general case the app can’t talk directly to CUPS either. So it uses the print portal’s PreparePrint method to prompt the user to choose a printer & settings, then renders the document and sends it to the portal with the Print method. The desktop portal service, which also uses GTK but is running outside the sandbox, presents the print dialog, and invokes evince if needed. All good, nothing too tricky here.

But notice that a sandboxed app is feeding a PDF to an unsandboxed PDF viewer. If the sandboxed app is malicious and can convince a user to print-preview a document, and there is some arbitrary code execution bug in Evince’s PDF library, then you’re in for a bad day.

What if Evince is a Flatpak?

The Flatpak version of Evince does not put an ‘evince’ command onto the $PATH, by design of Flatpak. So if you remove Evince from the OS and install the Flatpak, print preview stops working.

The evince executable inside the org.gnome.Evince Flatpak supports the --preview flag as normal. So you can put something like the following into ~/.config/gtk-4.0/settings.ini:

# https://docs.gtk.org/gtk4/property.Settings.gtk-print-preview-command.html
[Settings]
gtk-print-preview-command=flatpak run --file-forwarding org.gnome.Evince --unlink-tempfile --preview --print-settings @@ %s %f @@

--file-forwarding triggers special handling of the arguments bracketed by @@:

If this option is specified, the remaining arguments are scanned, and all arguments that are enclosed between a pair of ‘@@’ arguments are interpreted as file paths, exported in the document store, and passed to the command in the form of the resulting document path.

And this does indeed cause Evince to be spawned. However Evince can’t print the document. This is because its previewer tries to talk directly to CUPS, and its sandbox does not allow it to talk to CUPS. You might try punching some crude holes in the sandbox:

[Settings]
gtk-print-preview-command=flatpak run --file-forwarding --socket=system-bus --socket=cups org.gnome.Evince --unlink-tempfile --preview --print-settings @@ %s %f @@

and it seems to get a bit further, but by this point you’ve given up and turned your printer off because you want to go to bed.

What next?

I think it’s desirable for a PDF viewer to be sandboxed. I also think it’s desirable for the print previewer in particular to be sandboxed, or else a malicious-but-sandboxed application could trick the user into printing a PDF that exploits some vulnerability in the previewer to run stuff on the host system.

As I write this up, the gtk-print-preview-command override seems more viable than it did when I first looked into this last year. I think at the time, GTK in the GNOME runtime didn’t have the CUPS backend enabled so it couldn’t print even if you punched the relevant sandbox holes, but apparently it does now, so maybe we can make this change after all. It’s a shame I only realised this after spending hours writing this post!

You could also imagine extending the print portal API to allow an external app to be used for the preview without allowing that app to talk directly to CUPS.

(You could gracefully handle Evince not being installed by putting a wrapper script onto the $PATH which invokes Evince if installed or prompts you to install it if not.)
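For instance, something like this hypothetical shim (sketched in Python; the path and behaviour are my own invention, not something that ships anywhere):

#!/usr/bin/env python3
# Hypothetical ~/.local/bin/evince: forward to the Flatpak build if it is
# installed, otherwise tell the user how to get it. A real print-preview
# shim would also need flatpak's --file-forwarding/@@ handling from above.
import os
import subprocess
import sys

installed = subprocess.run(
    ["flatpak", "info", "org.gnome.Evince"],
    capture_output=True,
).returncode == 0

if installed:
    os.execvp("flatpak", ["flatpak", "run", "org.gnome.Evince"] + sys.argv[1:])

sys.exit("Evince is not installed; try: flatpak install flathub org.gnome.Evince")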

May 29, 2022

GNOME Outreachy 2022

GNOME Translation Editor, Road to Gtk4

It's time to move to Gtk4. That could be an easy task for a new project or for a small project without a lot of custom widgets, but gtranslator is old and the migration will require some time.

Some time ago I did the Gtk2 to Gtk3 migration. It was fun, and during the journey we redesigned the interface a bit, but the internals didn't change a lot. Now we can do the same: migrate to Gtk4 and also update the user interface.

Thankfully, I'm not alone this time; the GNOME community is there to help. A couple of months ago, Maximiliano started a series of commits to prepare the project for the Gtk4 migration, and today the Outreachy program starts and we have a great intern to work on this. Afshan Ahmed Khan will be working this summer on the GNOME Translation Editor migration to Gtk4.

Outreachy

The Outreachy program provides internships to work on Free and Open Source Software. This year I proposed the "Migrate GNOME Translation Editor to Gtk4" project and we had a lot of applicants. We had some great contributions during the application phase, and in the end Afshan was selected.

There's now an initial intern blog post, and he is working on the first step: trying to build the project with Gtk4. It's not a simple task, because gtranslator uses a lot of inheritance and there are a lot of widgets in the project.
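To give a flavour of what the port involves, here is a typical example of the kind of change required (an illustrative snippet, not gtranslator code, and sketched in Python for brevity; gtranslator itself is written in C):

import gi
gi.require_version("Gtk", "4.0")
from gi.repository import Gtk

# In Gtk3 every container had a generic add():
#     box.add(child); window.add(box)
# In Gtk4, gtk_container_add() is gone and each container has its own API:
box = Gtk.Box(orientation=Gtk.Orientation.VERTICAL)
box.append(Gtk.Label(label="Hello"))  # Gtk.Box grew append()/prepend()
window = Gtk.Window()
window.set_child(box)                 # Gtk.Window holds a single child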

User Interface redesign?

Once we have the project working with Gtk4 and libadwaita, we can start to think about user interface improvements, and all collaboration here is welcome: if a designer or translator wants to help, don't hesitate to take a look at the current interface and propose some ideas in the corresponding task.

Beginning my GSoC'22 journey with GNOME

It was late at night on the 20th of May. My eyes were glued to my email, waiting for the GSoC'22 results, when I finally received an email that started with a Congratulations message rather than a Thank You for applying message. I was overjoyed when I read the message "Congratulations, your proposal with GNOME Foundation has been accepted!". This post describes my GSoC project and my journey so far with GSoC, the GNOME Foundation, and Open Source.
GSoC @ GNOME

Journey so far

It was my first year of university when I heard that one of my seniors had been accepted for Google Summer of Code. But since I was new to the Computer Science field, I hardly understood any of the terms, such as open-source, git, etc. One and a half years later, when I had some coding experience, I dived into the open-source world with Hacktoberfest. I made my first trivial pull requests during that period. After that, I started looking for organizations to start contributing to, and that's when I came across the GNOME Foundation.

I knew of the GNOME organization because I used many of their products on my Fedora desktop. When I joined their IRC, I was initially afraid to ask any questions, as they might have sounded stupid, but the community was generous enough to answer my stupid questions as well :)

It took me a long time to get the development environment set up. Then I just started looking for a good-first-issue to begin with. Around the same period, the GNOME Foundation announced that they would be participating in GSoC that year. I remembered GSoC from my first year of college, so I started looking through the projects. Out of all of them, Redesigning the Health application UI caught my eye, because I had just won a hackathon where our team built a health application. So a Health-based project had a special place in my heart.

I started working on some beginner issues and started learning Rust alongside. My mentor, Rasmus Thomsen (@Cogitri), was supportive during the entire period. But I was too under-confident in my skills, and eventually I wasn't selected for GSoC.

I took the rejection positively and took some time off to work on my skills and build projects. I started working on those issues again in January, and this time the codebase made much more sense than the last time I tried. I went on to solve a few more issues during this period. I came to know that GNOME was participating once again, and that Health would also take part, to revamp its synchronization feature. I applied once again, but this time I was confident in myself.

And finally, I got the mail saying I had been selected for GSoC. It has been a journey of mixed feelings over the years, but I'm excited about what's next in store.

Introduction to Health

Health is a health and fitness tracking application. It helps the user track and visualize their health indicators better: a user can track their activities and weight progression. The project is created and maintained by Rasmus Thomsen, who is also the mentor of my GSoC project.

Attached below is the screenshot of the Health MainView:

Health

About the Project

My project is titled Reworking Sync Options for Health. It aims to improve the synchronization features of the Health application. Currently, most users have to enter their data manually; Google Fit is the only sync provider present at the moment. We can sync steps and weights from Google Fit into the application.

The current sync feature works as follows:

  1. We pull out the steps from the sync provider.
  2. We convert the steps into a walking activity.

This approach works as long as we would only like to track our walking activity. But, it would be great to pull out actual activities from the sync provider to get a better insight into our Health data.
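A rough sketch of that flow (the data shapes here are invented for illustration; Health itself is written in Rust):

from dataclasses import dataclass
from datetime import date

@dataclass
class Activity:
    kind: str
    day: date
    steps: int

def sync_from_provider(steps_per_day):
    # Each day's step count is flattened into a generic "walking"
    # activity, so a run or a bike ride comes back as plain walking.
    return [Activity("walking", day, steps)
            for day, steps in steps_per_day.items()]

print(sync_from_provider({date(2022, 5, 20): 8500}))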

So my project aims to improve the following Health synchronization features:

  1. Support for syncing actual activities from the sync provider.
  2. Two-way sync support.
  3. Support for multiple sync providers such as Apple HealthKit, NextCloud Health, etc.
  4. A proper User Interface and a way to handle multiple sync providers for individual Health data such as activities, weight, etc.
  5. Setting up a proper model so that different Health data can be added in the future.

If time permits, I would also like to work on support for PineTime companion apps. This way, Health can take care of getting the data to cloud services, and the PineTime companion apps can focus on firmware updates.

Upon completion, this project will solve the major issues Health currently has with synchronization.

Ending notes

I will be updating my blog every two weeks. I have set my goals and milestones accordingly. If you would like to track my journey, keep an eye on the blog for updates, and check the issue board. If you would like to have a look at my proposal, make sure to use it just for reference.

Finally, I would like to express my gratitude to GNOME for believing in me and giving me this opportunity to contribute. I would also like to thank my mentor Rasmus Thomsen for guiding me throughout the journey.

At last, I would like to say that I still have a long way to go. Since I've been given this opportunity to contribute, I plan to stick around and contribute to different GNOME projects as well. But for now, I'm looking forward to a great summer ahead with GSoC.

May 28, 2022

Builder GTK 4 Porting, Part V

Previously Part IV, Part III, Part II, and Part I.

Still working through medicine changes which have wreaked havoc on my sleep, but starting to settle in a bit more.

Template-GLib

Small changes here and there for template-glib to cover more cases for us in our keybindings effort. Improved type comparisons, fixed some embarrassing bugs, improved both GObject Introspection and GType method resolution.

Had some interesting talks with Benjamin about expression language needs within GTK and what things I’ve learned from Template-GLib that could be extracted/rewritten with a particular focus on continuously-evaluating-expressions.

Text Editor

I include gnome-text-editor in these updates because I tend to share code between Builder and g-t-e frequently.

  • Improved session resiliency
  • The Save-As dialog will now suggest filenames based on your current language syntax
  • Tracked down some property orderings which ended up being a GTK bug, so fixed that too
  • Persisted maximized window state to the session object on disk
  • Support to inhibit logout while documents are modified
  • Allow starting a single instance of the app with -s|--standalone like we do with Builder

GTK 4

  • More API strawmen for things we need in Builder
  • Fix some checkbutton annoyances
  • Removed assertions from debug builds during failure cases, converted to g_criticals()

GtkSourceView

  • Updated CI to use a newer Fedora release for more recent wayland protocols and what not
  • More work on source assistants and how measure/present are applied to popovers
  • Improved when and how we show informative tooltips with snippets
  • Add a bunch of “suggested-name” and “suggested-suffix” metadata properties to language specifications so that applications may suggest filenames for Save-As
  • Made Vim emulation of registers global to the application rather than per-view, which makes sharing content between documents actually work as expected
  • Squash some testsuite issues

Builder

Merged a bunch of cleanup commits from the community which is very helpful and appreciated!

I also decided that we’re going to remove all PyGObject plugins from Builder itself. We’ll still have it enabled for third-party plugins, at least for now. Maybe someday we’ll get a GJS back-end for libpeas and we could go that route instead. I’ve spent too much time tracking down bindings issues which made me feel very much like I was still working on MonoDevelop circa 2004. That experience was the whole reason I wrote Builder in C to begin with.

None of our PyGObject plugins are all that complicated so I’ve rewritten most of them in C and had help for a few others. So far that covers: blueprint, clangd, go-langserv (now gpls), intelephense, jedi-language-server, ts-language-server, vala-language-server, buildstream, cargo, copyright, eslint, find-other-file, jhbuild, make, mono, phpize, rstcheck, rubocop, stylelint, and waf.

A few got removed instead for various reasons. That includes gvls (replaced by vala-language-server), rls (replaced by rust-analyzer), and gjs-symbols (to be replaced by ts-language-server eventually).

I added support for two new language servers: bash-language-server and jdtls (Java) although we don’t have any bundling capabilities for them yet with regards to Flatpak.

I’ve landed a new “Create New Project” design which required a bunch of plumbing cleanup and simplification of how templates work. That will help me in porting the meson-templates and make-templates plugins to C too.

A screenshot of the "Create New Project" design

I’ve added quick access to Errors and Warnings in the statusbar so that we can remove it from the (largely hidden) pane within the left sidebar. Particularly I’d like to see someone contribute an addition to limit the list to the current file.

A screenshot of the errors/warnings popover

I updated the support for Sysprof so that it can integrate with Builder’s application runners and new workspace designs. You can now have Sysprof data in pages, which provides a lot more flexibility. Long term I’d like to see us add API hooks in Sysprof so that we can jump from symbol names in the callgraphs to source code.

A screenshot of Sysprof embedded within Builder

We cleaned up how symbolic icons are rendered in the greeter as well as how we show scroll state with a GtkScrolledWindow when you have an AdwHeaderBar.flat.

A screenshot of the greeter window

Our Valgrind plugin got more tweakables from the Run menu to help you do leak detection.

A screenshot of the valgrind menu

Keybindings for “Build and Run” along with various tooling got simplified to be more predictable. Also a lot of work on the menuing structure to be a bit simpler to follow.

A screenshot of the updated Run menu

You can force various a11y settings now to help get developers testing things they might otherwise never test.

A screenshot of the a11y menu containing high-contrast and ltr/rtl controls

Same goes for testing various libadwaita and libhandy settings. Both this and the RTL/LTR settings have a few things that still need to propagate through the stack, but it will happen soon enough.

A screenshot showing the forced-appearance modes for libadwaita/libhandy

Using Sysprof will be a lot easier to tweak going forward now that there are menu entries for a lot of the instruments.

A screenshot of the sysprof menu

A lot of new infrastructure is starting to land, but not really visible at the moment. Of note is the ability to separate build artifacts and runnables. This will give users a lot more control over what gets built by default and what gets run by default.

For example, a lot of people have asked for run support with better environment variable control. This should be trivial going forward. It also allows for us to do the same when it comes to tooling like “run this particular testsuite under valgrind”.

As always, freshest content tends to be found here 🐦 before I manage to find a spare moment to blog.

Gingerblue 6.0.1 with Immediate Ogg Vorbis Audio Encoding

Gingerblue 6.0.1 is Free Music Recording Software for GNOME, available under the GNU General Public License version 3 (or later), that now supports immediate audio recording into compressed Ogg Vorbis encoded files stored in the $HOME/Music/ folder. https://download.gnome.org/sources/gingerblue/6.0/gingerblue-6.0.1.tar.xz

Visit https://www.gingerblue.org/ and https://wiki.gnome.org/Apps/Gingerblue for more information about Gingerblue, the GTK+/GNOME wizard program for free music recording under GNOME 42.

Radio 16.0.43 for GNOME 42 (gnome-radio)

The successor to GNOME Internet Radio Locator for GNOME 42 is available from http://download.gnome.org/sources/gnome-radio-16.0.43.tar.xz and https://wiki.gnome.org/Apps/Radio

New stations in GNOME Radio version 16.0.43 are NRK Folkemusikk (Oslo, Norway), NRK P1+ (Oslo, Norway), NRK P3X (Oslo, Norway), NRK Super (Oslo, Norway), Radio Nordfjord (Nordfjord, Norway), and Radio Ålesund (Ålesund, Norway).

Installation on Debian 11 (GNOME 42) from GNOME Terminal


sudo apt-get install gnome-common gcc git make wget
sudo apt-get install debhelper intltool dpkg-dev-el libgeoclue-2-dev
sudo apt-get install libgstreamer-plugins-bad1.0-dev libgeocode-glib-dev
sudo apt-get install gtk-doc-tools itstool libxml2-utils yelp-tools
sudo apt-get install libchamplain-0.12-dev libchamplain-gtk-0.12
wget http://www.gnomeradio.org/~ole/debian/gnome-radio_16.0.43-1_amd64.deb
sudo dpkg -i gnome-radio_16.0.43-1_amd64.deb

Installation on Fedora Core 36 (GNOME 42) from GNOME Terminal


sudo dnf install http://www.gnomeradio.org/~ole/fedora/RPMS/x86_64/gnome-radio-16.0.43-1.fc36.x86_64.rpm

Installation on Ubuntu 22.04 (GNOME 42) from GNOME Terminal


sudo apt-get install gnome-common gcc git make wget
sudo apt-get install debhelper intltool dpkg-dev-el libgeoclue-2-dev
sudo apt-get install libgstreamer-plugins-bad1.0-dev libgeocode-glib-dev
sudo apt-get install gtk-doc-tools itstool libxml2-utils yelp-tools
sudo apt-get install libchamplain-0.12-dev libchamplain-gtk-0.12
wget http://www.gnomeradio.org/~ole/ubuntu/gnome-radio_16.0.43-1_amd64.deb
sudo dpkg -i gnome-radio_16.0.43-1_amd64.deb

More information about GNOME Radio 16.0.43 is available on http://www.gnomeradio.org/ and http://www.gnomeradio.org/news/

May 27, 2022

GSoC 2022: Introduction

Hello there!

My name is Ignacy Kuchciński and I'm studying computer science at UMCS in Lublin, Poland. I've been making minor contributions to GNOME over the past few years, and among the projects I was looking into was GNOME Files, a.k.a. Nautilus. I learned about GSoC in the #nautilus IRC chat room as I observed the effort to port the Nautilus properties dialog to use GtkBuilder, and I really liked the idea of it: a chance to make a more significant contribution and be a part of an awesome community on a deeper level. Fast-forward two years: I've applied to Nautilus for GSoC'22 and got accepted to help revamp the “New Document” submenu, an adventure I'm very excited to undertake.

The project

GNOME Files, also known as Nautilus, as many of you already know, is a file manager for GNOME. Its goal is to provide the user with a simple way to navigate and manage files.

One of its abilities, the New Document creation feature, is considered part of the core user experience, but its design and implementation have room for improvement, especially in discoverability and usability. There are also regressions caused by the GTK 4 port that remain to be addressed.

For this project the idea is to design and implement a new UI for this feature. The main goals are the following:

1. Exposing an entire tree of possible templates in a single view, instead of nested submenus.

2. Making use of visual representations of each template, such as icons, to help users find what they’re looking for.

3. Always showing the New Documents menu, even if the Templates directory is empty - in that case, offer the user the ability to add new templates, both pre-defined as well as custom.

4. Add the ability to search the list of templates.

5. Add the ability to quickly rename the newly created files.

I'll be working in close cooperation with the Nautilus maintainer Antonio Fernandes, who will be my mentor, and with the GNOME design team, keeping various user studies in mind. Initially, the project was supposed to have only one student working on it. However, in quite an unexpected turn of events, two interns were selected. As a result I'll be working on this project together with Utkarsh Gandhi, whom I congratulate for getting selected as well!

Nevertheless, the fact remains that the initial project was meant for a single person. Fortunately, there is room for expansion: resolving the "no default templates" situation, which has a very big impact on the discoverability and ease of use of this feature. We're still figuring out our strategy, but one possible scenario is one of us focusing on revamping the "New Document" submenu, and the other figuring out how to deal with the lack of initial templates.

Conclusion

I will keep track of my progress on this blog, and you can contact me on the GNOME IRC/Matrix on the #nautilus channel. I'm very excited to take part in this journey among the welcoming community and I look forward to contributing towards it. :)

#45 Timeout!

Update on what happened across the GNOME project in the week from May 20 to May 27.

Core Apps and Libraries

GLib

The low-level core library that forms the basis for projects such as GTK and GNOME.

Philip Withnall says

New g_idle_add_once() and g_timeout_add_once() functions just landed in GLib, courtesy of Emmanuele Bassi. They make it simpler to add an idle or timeout callback which is only ever dispatched once.

Philip Withnall reports

Also in GLib land, Thomas Haller’s work to add automatic NULL-termination support to GPtrArray has just landed

Files

Providing a simple and integrated way of managing your files and browsing your file system.

antoniof reports

Solid improvements are coming to the GTK 4 port of Files. This week, with the help of Corey Berla and advice from Alexander Mikhaylenko, I’ve enhanced the experience for mouse users without hindering future improvements for touch users. As a new feature, now the middle button can be used to open multiple selected files at once.

Third Party Projects

Workbench

A sandbox to learn and prototype with GNOME technologies.

sonnyp says

I added support for previewing templates and signals in Workbench as well as converting back and forth between XML and Blueprint

Furtherance

Track your time without being tracked

Ricky Kresslein reports

Furtherance v1.3.0 was released and now has the ability to autosave and automatically restore autosaves after an improper shutdown. Also, tasks can now be added manually, and task names can be changed for an entire group.

Amberol

Plays music, and nothing else.

Emmanuele Bassi says

Various fixes to Amberol, including a refresh of the main application icon, and tweaks to the styling of elements such as the waveform view, volume control, and loading progress bar.

Miscellaneous

Thib says

GNOME is participating in GSoC again this year, and we have no less than 9 interns! Health, Chromecast support, Pitivi (video editor), Nautilus (file manager), Fractal, and the engagement websites: these are the projects that will receive great contributions this summer!

You can read all the details, welcome the interns, and thank the mentors in this post. Many thanks for contributing to a vibrant project and community.

GNOME Shell Extensions

aunetx reports

Blur my Shell has recently seen some big changes, and a lot more are probably coming soon!

To keep you updated:

  • a color and a noise effect have been added; they can help make the blur more legible and prevent color banding on low-resolution screens, or simply let you have some fun :p
  • a lot of the internal preferences have been changed – this probably resulted in some of you having Blur my Shell preferences reset, sorry about this :(
  • translations have been added for different languages, including French, Chinese, Italian, Spanish, Norwegian and Arabic: if you want to help the project, you can translate the preferences into your own language using Weblate!

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

May 25, 2022

Beginning my Outreachy journey with GNOME

This blog post introduces myself, how I got started as a GNOME contributor, and what my project is about.

About me and my first encounter with GNOME

I am currently in the third year of my integrated Master's program at a tier-3 college in Indore, India. I have been a technophile since childhood. Back in those days, I once installed Kali Linux in a VM (as Google searches always suggested it was easy to hack the neighbour's wifi with Kali Linux 🤣, and cellular internet was expensive back then), but to my surprise Kali Linux looked very complex. So I googled again, and one piece of advice got fixed in my mind: "Kali Linux is complex; start with a simple Linux distro (Ubuntu) and use it instead of Windows" (a simple way to learn a tech thing is to use it). So when I entered college, there was one thing I was determined about, and that was using Linux. I installed Ubuntu and loved how beautiful and responsive it was: just one click and an application opens immediately. Later I came to know that the GUI I was looking at and experiencing was actually the GNOME desktop.
linux vs windows

From GNOME user to GNOME contributor

I then tried other DEs (desktop environments). But sometimes they offered less functionality than I needed to get work done, and other times they provided more features than needed, making them complex and prone to the occasional crash. GNOME is a proper match between the two: the applications in GNOME were simple to use, stable, and provided all the needed functionality.
In one subject at our college, we needed to submit a study report on an open-source application. At that time, I decided to get into the development side of GNOME, although studying something as big as a GNOME application is itself a very big task, and I was a newcomer back then. Fortunately, that was an online semester and most of us submitted reports copied from the internet.
Satisfied

How I got started

I looked at this repo. But going through the official Gtk documentation was still difficult for me, so I went through this book. Some things were still not clear to me; it turned out I needed to learn about GObject, and then it became more difficult. Thankfully, there is Vala, which works with objects similarly to Java and C++. This playlist is a must-watch: after watching it and coding alongside, I developed some confidence, and then, following my principle (use tech to learn it), I tried to write this gtk app (although it is still a work in progress). While working on that application I realized that a lot of code has to be written, so to get some idea of good coding practices and the proper way to structure a project, I started studying the gnome-clocks codebase, and then one fine day I made my first MR.
But not everything is in Vala, and I still wanted to know how GObject works. Then I finally found these two gems 💎: one is chapter 2 of "The Official GNOME 2 Developer's Guide" and the other is this documentation.

Besides that, I also asked many of my questions on the newcomers channel.

How did I get the Outreachy internship?

I made my first contribution to GNOME (!194 to Clocks) around February, and in that month I also filled in my initial Outreachy application. Around the 25th of March I received a mail that my initial application had been accepted. When the project list was finalized, I started contributing to Gtranslator. Then I filled in my final application with my contributions and submitted it. On the 20th of May I was on cloud nine when I saw this mail in my inbox.

mail of acceptance

About my project

My task in this internship is to port Gtranslator from GTK 3 to GTK 4 under the mentorship of Daniel Garcia Moreno.
Gtranslator screenshot
Gtranslator is a GUI program which helps translators translate an application; under the hood, this internationalization happens with the gettext library.
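As a tiny illustration of that pipeline (my own example, not from the post; it assumes a compiled .mo catalog exists under ./locale):

import gettext

# Translators edit .po files in a tool like Gtranslator; once the files
# are compiled to .mo catalogs, the application looks strings up at runtime.
t = gettext.translation("myapp", localedir="locale",
                        languages=["hi"], fallback=True)
_ = t.gettext

print(_("Open file"))  # prints the Hindi translation if the catalog has one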
In this port we plan to achieve these tasks:

  1. Update the GTK version from GTK 3 to GTK 4.
  2. Replace libhandy with libadwaita.
  3. Update custom styles so the app works with the dark theme.
  4. Adapt the app's UI to the GNOME HIG.

--

NOTE: All the books I mentioned can be found online for free with Google searches. Second, I didn't use GNOME Builder while writing the vtodo app; I just followed the YouTube playlist's approach. Third, I didn't read all the books completely, but rather just some specific chapters.

Amberol

In the beginning…

In 1997, I downloaded my first MP3 file. It was linked on a website, and all I had was a 56k modem, so it took me ages to download the nearly 4 megabytes of 128 kbit/s music goodness. Before that file magically appeared on my hard drive, if we exclude a brief dalliance with MOD files, the only music I had on my computer came either in MIDI or in WAV format.

In the nearly 25 years passed since that seminal moment, my music collection has steadily increased in size — to the point that I cannot comfortably keep it in my laptop’s internal storage without cutting into the available space for other stuff and without taking ages when copying it to new machines; and if I had to upload it to a cloud service, I’d end up paying monthly storage fees that would definitely not make me happy. Plus, I like being able to listen to my music without having a network connection — say, when I’m travelling. For these reasons, I have my music collection on a dedicated USB3 drive and on various 128 GB SD cards that I use when travelling, to avoid bumping around a spinning rust drive.

In order to listen to that first MP3 file, I also had to download a music player, and back in 1997 there was this little software called Winamp, which apparently really whipped the llama’s ass. Around that same time I was also dual-booting between Windows and Linux, and, obviously, Linux had its own Winamp clone called x11amp. This means that, since late 1997, I’ve also tested more or less all mainstream, GTK-based Linux music players—xmms, beep, xmms2, Rhythmbox, Muine, Banshee, Lollypop, GNOME Music—and various less mainstream/non-GTK ones—shout out to ma boi mpg123. I also used iTunes on macOS and Windows, but I don’t speak of that.

Turns out that, with the very special exception of Muine, I can’t stand any of them. They are all fairly inefficient when it comes to managing my music collection; or they are barely maintained; or (but, most often, and) they are just iTunes clones—as if cloning iTunes was a worthy goal for anything remotely connected to music, computing, or even human progress in general.

I did enjoy using Banshee, up to a point; it wasn’t overly offensive to my eyes and pointing devices, and had the advantage of being able to minimise its UI without getting in the way. It just bitrotted with the rest of the GNOME 2 platform even before GNOME bumped major version, and it still wasn’t as good as Muine.

A detour: managing a music collection

I’d like to preface this detour with a disclaimer: I am not talking about specific applications; specific technologies/libraries; or specific platforms. Any resemblance to real projects, existing or abandoned, is purely coincidental. Seriously.

Most music management software is, I feel, predicated on the fallacy that the majority of people don’t bother organising their files, and are thus willing to accept a flat storage with complex views built at run time on top of that; while simultaneously being willing to spend a disproportionate amount of time classifying those files—without, of course, using a hierarchical structure. This is a fundamental misunderstanding of human nature.

By way of an example: if we perceive the Universe in a techno-mazdeist struggle between a πνεῦμα which creates fool-proof tools for users; and a φύσις, which creates more and more adept fools; then we can easily see that, for the entirety of history until now, the pneuma has been kicked squarely in the nuts by the physis. In other words: any design or implementation that does not take into account human nature in that particular problem space is bound to fail.

While documents might benefit from additional relations that are not simply inferred by their type or location on the file system, media files do not really have the same constraints. Especially stuff like music or videos. All the tracks of an album are in the same place not because I decided that, but because the artist or the music producers willed it that way; all the episodes of a series are in the same place because of course they are, and they are divided by season because that’s how TV series work; all the episodes of a podcast are in the same place for the same reason, maybe divided by year, or by season. If that structure already exists, then what’s the point of flattening it and then trying to recreate it every time out of thin air with a database query?

The end result of constructing a UI that is just a view on top of a database is that your UI will be indistinguishable from a database design and management tool; which is why all music management software looks very much like Microsoft Access from circa 1997 onwards. Of course you can dress it up however you like, by adding fancy views of album covers, but at the end of the day it’s just an Excel spreadsheet that occasionally plays music.

Another side effect of writing a database that contains the metadata of a bunch of files is that you’ll end up changing the database instead of changing the files; you could write the changes to the files, but reconciling the files with the database is a hard problem, and it also assumes you have read-write access to those files. Now that you have locked your users into your own database, switching to a new application becomes harder, unless your users enjoy figuring out what they changed over time.

A few years ago, before backing up everything in three separate storages, I had a catastrophic failure on my primary music hard drive; after recovering most of my data, I realised that a lot of the changes I made in the early years weren’t written out to music files, but were stored in some random SQLite database somewhere. I am still recovering from that particular disaster.

I want my music player to have read-only access to my music. I don’t want anything that isn’t me writing to it. I also don’t want to re-index my whole music collection just because I fixed the metadata of one album, and I don’t want to lose all my changes when I find a better music player.

Another detour: non-local media

Yes, yes: everyone listens to streamed media these days, because media (and software) companies are speed-running Adam Smith’s The Wealth of Nations and have just arrived at the bit about rentier economy. After all, why should they want to get paid once for something, when media conglomerates can “reap where they never sowed, and demand a rent even for its natural produce”.

You know what streaming services don’t like? Custom, third party clients that they can’t control, can’t use for metrics, and can’t use to serve people ads.

You know what cloud services that offer to host music don’t like? Duplicate storage, and service files that may potentially infringe the IP of a very litigious industry. Plus, of course, third party clients that they can’t use to serve you ads, as that’s how they can operate at all, because this is the Darkest Timeline, and adtech is the modern Moloch to which we must sacrifice as many lives as we can.

You may have a music player that streams somebody’s music collection, or even yours if you can accept the remote service making a mess of it, but you’re always a bad IPO or a bad quarterly revenue report away from losing access to everything.

Writing a music player for fun and no profit

For the past few years I’ve been meaning to put some time into writing a music player, mostly for my own amusement; I also had the idea of using this project to learn the Rust programming language. In 2015 I was looking for a way to read the metadata of music files with Rust, but since I couldn’t find anything decent, I ended up writing the Rust bindings for taglib. I kept noodling at this side project for the following years, but I was mostly hitting the limits of GTK3 when it came to dealing with my music collection; every single iteration of the user interface ended up with a GtkTreeView and a replica of iTunes 1.0.

In the meantime, though, the Rust ecosystem got exponentially better, with lots of crates dedicated to parsing music file metadata; GTK4 got released with new list widgets; libadwaita is available to take care of nice UI layouts; and the Rust bindings for GTK have become one of the most well curated and maintained projects in the language bindings ecosystem.

Another few things that happened in the meantime: a pandemic, a year of unemployment, and zero conferences, all of which pushed me to streaming my free and open source software contributions on Twitch, as a way to break the isolation.

So, after spending the first couple of months of 2022 on writing the beginners tutorial for the GNOME developer documentation website, in March I began writing Amberol, a local-only music player that has no plans of becoming more than that.

Desktop mode

Amberol’s scope sits in the same grand tradition of WinAmp, and while its UI started off as a Muine rip off—down to the same key shortcuts—it has evolved into something that more closely resembles the music player I have on my phone.

Mobile mode

Amberol’s explicit goal is to let me play music on my desktop the same way I typically do when I am using my phone, which is: shuffling all the songs in my music collection; or, alternatively, listening to all the songs in an album or from an artist from start to finish.

Amberol’s explicit non goals are:

  • managing your music collection
  • figuring out your music metadata
  • building playlists
  • accessing external services for stuff like cover art, song lyrics, or the artist’s Wikipedia page

The actual main feature of this application is that it has forced me to figure out how to deal with GStreamer after 15 years.

I did try to write this application in a way that reflects the latest best practices of GTK4:

  • model objects
  • custom view widgets
  • composite widgets using templates
  • property bindings/expressions to couple model/state to its view/representation (see the sketch after this list)
  • actions and actionable widgets
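For the property-binding item flagged above, a minimal sketch (in Python for brevity; Amberol itself is written in Rust, and this example is mine, not Amberol code):

import gi
gi.require_version("Gtk", "4.0")
from gi.repository import GObject, Gtk

# Couple model state (an adjustment's value) to its representation (a
# label's text) with a one-way binding and a transform, instead of
# manually syncing the two in signal handlers.
adjustment = Gtk.Adjustment(lower=0, upper=100, value=42)
label = Gtk.Label()
adjustment.bind_property(
    "value", label, "label",
    GObject.BindingFlags.SYNC_CREATE,
    lambda binding, value: "%d%%" % value,
)
print(label.get_label())  # "42%"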

The ability to rely on libadwaita has allowed me to implement the recoloring of the main window without having to deal with breakage coming from rando style sheets.

The main thing I did not expect was how much of a good fit was Rust in all of this. The GTK bindings are top notch, and constantly improving; the type system has helped me much more than hindering me, a poor programmer whose mind has been twisted by nearly two decades of C. Good idiomatic practices for GTK are entirely within the same ballpark of idiomatic practices for Rust, especially for application development.

On the tooling side, Builder has been incredibly helpful in letting me concentrate on the project: from the basic template for a GNOME application in Rust, to dealing with the build system; from the Flatpak manifest, to running the application under a debugger. My work was basically ready to be submitted to Flathub from day one. I did have some challenges with the AppData validation, mostly caused by appstream-util’s undocumented validation rules, but luckily it’s entirely possible to remove the validation after you deal with the basic manifest.

All in all, I am definitely happy with the results of basically two months of hacking and refactoring, mostly off and on (and with two weeks of COVID in the middle).

Update on Niepce

Here we go. When I started this project in 2006, I had plenty of ideas. I still do, but everything else is in the way, including me.

Since then there have been some amazing apps, like RawTherapee, Darktable, and possibly some others I've missed, fulfilling some of the uses I envisioned for Niepce back then. Not to mention a few other apps that simply disappeared; that's life, it's hard to find maintainers or stay motivated.

Anyway.

Still slowly working on it, far from an MVP.

But here are some recent developments:

  1. Port to Rust: most of the backend, including the database code (but not sqlite3 itself), is written in Rust. I squashed a few bugs in the process, improving the architecture. There are some technical challenges involved in bridging C++ and Rust. Some widgets have been rewritten in Rust, and progressively everything is getting the treatment. Ideally I'd not write a single new feature in C++ anymore, but sadly that's not how it works.
  2. Port to Gtk4: the app started with Gtk2 back then. It was quickly ported to Gtk3 early in the last decade, thanks to the help of Gtkmm. A few weeks ago I ported it to Gtk4. It's still a mess of C++ with Gtkmm and Rust. I do have plans to add libadwaita.
  3. Library import: this one I worked on out of spite between the Rust port and the Gtk4 port, and it's not merged yet. The goal is to import an Adobe Lightroom™ library (up to version 6). I wrote the base code in Rust a while ago, and last November I worked on integrating the crate into Niepce. The current status is that it works, slowly, as there is some serious rearchitecting that needs to happen. For the longer term, I have support for Capture One and Apple Aperture 3 in two other crates. The Aperture 3 crate was actually one of my first practical Rust projects, back in 2015.

To be fair I don't know where this will be going. It's more like permanent construction.

May 24, 2022

Selected for GSoC'22

Welcome, readers!

I'm pleased to share that I've been accepted for Google Summer of Code (GSoC) 2022 under the GNOME Foundation umbrella, on the Pitivi project. This summer I will be updating the project from GTK3 to the latest GTK4 toolkit.

To anyone that wants to be a part of GSoC, I have only one piece of advice: just go for it. Don't wonder whether you can do it or not, don't assume failure before attempting, and don't overthink. I always felt that it was for the best of the best and that I wouldn't be able to clear it; all the big organizations on the GSoC page overwhelmed me. But instead of making it a dream, I made it a life goal. And well, now I'm enjoying it.

I will be posting more blogs here regarding my progress on the project, and some casual ones too.

So be sure to check this page :)

Thank you for reading :D

New Again

Old is new again. Apparently my blog was broken following a PHP upgrade on the server. tl;dr PHP 8 broke things and code had to be changed. I don't do PHP so I had to guess (I can read it, patch it, but it's a lot of tries).

Anyway, I decided to bite the bullet and use a generator. I use Hugo in other places, but I decided to try Zola. In the past I have used Jekyll but ended up having a bad experience with the Ruby install. Hugo works fine; it's written in Go and ends up being properly packaged in Fedora. Zola (see a naming theme?) is written in Rust and also well packaged. So it's 'Les Misérables' or 'La bête humaine'.

Note that the old blog will stay in place, read only.

This is my wlog: Web Log.

More to come.

May 23, 2022

A look in the game design of choose-your-own-adventure books

After a long covid pause I visited my parents' house. I took some things back with me, such as this:

This is a choose your own adventure book that originally belonged to my big sister. For those who don't speak Finnish, the book is called The Haunted Railway Book and it uses the characters from Enid Blyton's Famous Five, which is a well known series of adventure books for children (or maybe YA, don't know, I haven't actually read any of them).

The game itself is fairly involved. Instead of just doing random choices you have multiple items like a map and a compass that you can obtain and lose during the game and even hit points that are visualised as, obviously, food. I figured it might be interesting to work out how the game has been designed. Like all adventure game books the story has been split into different numbered chapters and you progress in the story by going from one chapter to another according to the rules.

In other words the game is a directed acyclic graph. Going through the entire book and writing it out as a Graphviz file, this is what the adventure looks like. The start point is at the top and to win you need to get to the second to last node without losing all your hit points and having the code book in your possession. The gv file is available in this repo for those who want to examine it themselves.
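If you would rather poke at the structure programmatically, a toy traversal is all the analysis really needs (the chapter numbers here are invented, not from the book):

# A gamebook is just a mapping from a chapter to the chapters you may
# jump to. Enumerate every path to the ending and flag any that skip a
# node all players are supposed to pass through.
graph = {
    1: [2, 3],
    2: [50], 3: [50],  # the fan-out chokes back down...
    50: [100],         # ...to a mandatory plot node
    100: [],           # the ending
}

def all_paths(node, end, path=()):
    path = path + (node,)
    if node == end:
        yield path
    for nxt in graph[node]:
        yield from all_paths(nxt, end, path)

shortcuts = [p for p in all_paths(1, 100) if 50 not in p]
print(shortcuts or "no shortcuts found")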

The red nodes are those where you lose one hit point. As you can tell the game really does not go easy on the player. The basic game design principle is simple. At first the decision tree fans out to give a sense of freedom and then chokes back down to a single node or few where important plot points are presented. This pattern then repeats until the story ends. In this way the game designers know that no matter how the player gets to the endgame, they have been told certain facts. Given that this book was designed in the late 70s or early 80s when computers were not that easily accessible this approach makes designing the game a lot easier as each story segment is fully isolated from all others.

Looking closer we can see how the hit point mechanism works.

This is a repeating pattern. The game checks if you have a given item. If you don't have it you lose one hit point. In either case the plot goes on to the same point. Note how the child nodes have very different numbers. One assumes that if they were consecutive the player might read both branches and feel cheated that the event had no real influence on the story.
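As code, the pattern might look like this (chapter numbers and items invented):

def chapter_42(state):
    # "Do you have the compass? If yes, turn to 57; if not, turn to 118."
    return chapter_57 if "compass" in state["inventory"] else chapter_118

def chapter_118(state):
    state["hit_points"] -= 1  # no compass: lose one hit point...
    return chapter_90

def chapter_57(state):
    return chapter_90         # compass in hand: nothing lost...

def chapter_90(state):
    return None               # ...and both branches rejoin the plot here

state = {"inventory": {"map"}, "hit_points": 5}
nxt = chapter_42(state)
while nxt:
    nxt = nxt(state)
print(state["hit_points"])  # 4: the map alone did not save us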

One thing that I wanted to find out was whether the game had any bugs or shortcuts you could take. There do not seem to be any. When I first wrote the graph down there was indeed one path that jumped past several plot nodes. I verified that it was not a transcription error and it did not seem to be. So either the original game had a bug or the Finnish translation had a typo.

Then I re-verified the transcription and found the issue.

In this chapter you need to use a code book to decipher a secret message, which seems to be the letters EJR that translate to "go to chapter 100". However the black triangly thing that is completely separate from the rest of the text (helpful arrow added by me) is actually part of the message and changes the content to "go to chapter 110". With this fixed the transition no longer jumps over plot nodes.  (The chapter text does say that the message is split over two separate bricks, but who reads instructions anyway.)

So what can we learn from this? Graphic design is important and too much whitespace can lead to smugglers escaping the long arm of the law and the short arm of child detectives.

Broader opens: on the relevance of cryptolaw for open lawyers

I’ve been thinking a lot of late about what “libre” and “open” mean to me, in large part by thinking about movements adjacent to open source software, and how open software might learn/borrow from its progeny. I hope to go into that more this summer, but in the meantime, I’m publishing this as a related “just blog it and get it out” note.

A chart from a recent presentation I gave on legal topics adjacent to open. Security, DAOs, smart contracts, and Swiss non-profits, among others, show up here and stem from, or are more salient in, the crypto space.

This post started as an email to a private list, giving my best-case argument for why open lawyers should pay attention to crypto/blockchain/web3 even if it’s apparent that most of these things are at best naive and at worst planet-destroying scams. I think it’s worth sharing in public to get more comments and more thoughts on this attempt to steel man the argument for cryptolaw.

So, why should open-adjacent lawyers take crypto seriously?

The Bad Bits

Liability

I’ll start with a point designed to appeal to the most diehard crypto-haters (and to be clear, that’s mostly me too!) Quite simply, crypto (particularly but not just smart contracts) is an almost ideal test case for anyone who wants to bring product liability into software. For decades now, we’ve argued (essentially) that software is too complicated, and so software developers should not be liable when it breaks. And in any case, how do you figure out who owes what to whom when it breaks?

In crypto, those arguments start to look particularly weak. The code is much simpler, and the damages are very specific and clear: “my money(-like-thing) disappeared!” And of course this breakage happens all the time; I haven’t looked but I assume there’s an entire category on Web 3 Is Going Great devoted just to software bugs.

Given all this, lawyers will be tempted to bring product liability cases, and judges will be tempted to resolve them. If that happens, and it finally brings serious product liability to software, what does that mean for our FAVORITE BLOCKS OF ALL-CAPS TEXT? To put it another way: the worse you think blockchain is, probably the more worried you should be about the impact of blockchain on every free software license.

(For this point, I am indebted to Mark Radcliffe, who has been writing and speaking on this issue for a while.)

Ethics, or lack thereof

To continue again with the “the more you hate crypto, the more you should pay attention” theme: lots of developers also hate crypto! In the past year I have been repeatedly asked about anti-crypto riders to attach to open source licenses. This would of course make them non-OSI-approved, but lots of developers hate crypto even more than they like (or are aware of) OSI. I suspect that if ethics-focused licenses ever really take off, anti-crypto energy may be the main driver. How those licenses are defined will make a difference, possibly in clauses that live well past the current craze.

The Better(?) Bits

Creates lots of code

To turn towards the more positive side: crypto is one of the most dynamic nearly-completely-FOSS sectors out there; it’d be weird for the open community more broadly to ignore them as FOSS consumers and producers. They write a lot of FOSS-licensed code!

As a practical matter, even if you think tokens/blockchain are likely to fail in the long run, it is likely that some of this code will leak back into non-token applications, so its licensing (and related: licensing hygiene) matters.

Governance innovation

Open source communities have traditionally had at least some members interested in how to govern online communities. Many of us who have been around long enough will have participated in more than one discussion of which version of Ranked Choice Voting to use 😭

Crypto is innovating furiously in the online governance space. As with all “innovation”, most of that will turn out to be pointless or actively bad, but a few practical examples show that it is filtering back into open source.

Again, most of that is somewhere between naivete and scam, but not all of it. Open lawyers (and open leaders more generally) should be aware of these ideas and have responses for them.

Taking ‘freedom’ seriously, sometimes

Related to the previous point, as the Cryptographic Autonomy License process reminded us, at least some crypto communities are taking the principles of free software very seriously and extending them to critical future issues like data governance and privacy. If open source wants to move the ball forward on these values and methodologies, the action is by and large in crypto (even if finding the good-faith action is a little bit like finding a needle in a haystack of greed and scams).

‘Smart’ contracts as new legal form?

Smart contracts are mostly garbage (because they are software and most software is garbage) but the interface of this new legal form with existing institutions is both (1) intellectually interesting and (2) has a lot of parallels to early adoption of open licensing as a legal form/tool. I’d love to interview the author of this paper on smart-contracts-as-vending-machines, for example. Similarly, I suspect there is something to be learned from efforts to bring arbitration to the smart contract space.

Money and interest

Finally, there is, quite simply, a pile of crypto-derived money and therefore employment out there. I personally find taking that money pretty distasteful, but lots of friends are taking it in good faith. Every open and open-adjacent lawyer will be getting questions (and many of us will be getting paid for our time) out of that pile of money in coming months and years.

Whether we like it or not, this money — and the people paid by it — will set agendas and influence directions. We need to take it seriously even if we dislike it.

Closing (re-)disclaimer

Again: lots of cryptocurrency/web3 is malicious, and the primary legal response to carbon-emitting proof-of-work should be to figure out how to lock people up for it.

But there are, I suspect, a growing number of overlaps between what “we” do in “open” and what “they” are doing “there”. If we want to be seriously forward-looking, we need to take it seriously even if we hate it.

My Vacation in Android Land

Recently I was pulled into a project to build an Android app at Endless. While Android is Linux, it’s quite a bit different than “traditional” Linux. In our case, we’re trying to assemble a Python app into an Android app using python-for-android (aka, p4a). As you might imagine, this adds a couple more layers to the mix, which is always fun.

Magnets P4A Apps, How Do They Work?

I was pretty mystified by p4a for the first couple weeks, but now that I’ve been down in the weeds for a while I mostly understand how it works.

How your organisation’s travel policy can impact the environment

Following on from updating our equipment policy, we’ve recently also updated our travel policy at the Endless OS Foundation. A major part of this update was to introduce consideration of carbon emissions into the decision making for when and how to travel. I’d like to share what we came up with, as it should be broadly applicable to many other technology organisations, and I’m quite excited that people across the foundation worked to make these changes happen.

Why is this important?

For a technology company or organisation, travel is likely to be the first or second largest cause of emissions from the organisation. The obvious example in free software circles is the emissions from taking a flight to go to a conference, but actually in many cases the annual emissions from commuting to an office by car are comparable. Both can be reduced through an organisation’s policies.

In Endless’ case, the company is almost entirely remote and so commuting is not a significant cause of emissions. Pre-pandemic, air travel caused a bit under a third of the organisation’s emissions. So if there are things we can do to reduce our organisation’s air travel, that would make a significant difference to our overall emissions.

On an individual level, one return transatlantic flight (1.6tCO2e, which is 1.6 tonnes of carbon dioxide equivalent, the unit of global warming potential) is more than half of someone’s annual target footprint, which is 2.8tCO2e for 2030. So not taking a flight is one of the most impactful single actions you can take.

Similarly, commuting 10 miles a day by petrol car, for 227 working days per year, causes annual emissions of about 0.55tCO2e, which is also a significant proportion of a personal footprint when the aim is to limit global warming to 1.5°C. An organisation’s policies and incentives can impact people’s commuting decisions.
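Those figures are easy to sanity-check; the emission factor below is my own assumption (roughly 0.24 kgCO2e per mile is a plausible average for a petrol car), not a number from the policy:

miles_per_day = 10
working_days = 227
kg_co2e_per_mile = 0.24  # assumed average petrol-car emission factor

annual_tonnes = miles_per_day * working_days * kg_co2e_per_mile / 1000
print("%.2f tCO2e per year" % annual_tonnes)  # ~0.54 tCO2e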

Once the emissions from a journey have been made, they can’t be un-made anywhere near as easily or quickly. Reducing carbon emissions now is more impactful than reducing them later.

How did we change the policy?

Previously, Endless’ travel policy was almost entirely focused around minimising financial cost by only allowing employees to choose the cheapest option for a particular travel plan. It had detailed sections on how to minimise cost for flights and private car use, and didn’t really consider other modes of transport.

In the updated policy, financial cost is still a big consideration, but it’s balanced against environmental cost. I’ve included some excerpts from the policy at the bottom of this post, which could be used as the basis for updating your policy.

Due to COVID, not much travel has happened since putting the policy in place, so I can’t share any comparisons of cost and environmental impact before and after applying the policy. The intention is that reducing the number of journeys made will balance slightly increased costs for taking lower-carbon transport modes on other journeys.

The main changes we made to it are:

  • Organise the policy so that it’s written in decision making order: sections cover necessity of travel, then travel planning and approval, then accommodation, then expenses.
  • Critically, the first step in the decision making process is “do you need to travel and what are the alternatives?”. If it’s decided that travel is needed, the next step is to look at how that trip could be combined with other useful activities (meetings or holiday) to amortise the impact of the travel.
  • We give an explicit priority order of modes of travel to choose:
    1. Rail (most preferred)
    2. Shared ground transport (coach/bus, shared taxi)
    3. Private ground transport (taxi, car rental, use of own vehicle)
    4. Air (least preferable)
  • And, following that, a series of rules for how to choose the mode of transport, which gives some guidance about how to balance environmental and financial cost (and other factors):

You should explore travel options in that order, only moving to the next option if any of the following conditions are true:

  • No such option exists for the journey in question
    • e.g. there is no rail/ground link between London and San Francisco
  • This mode of travel, or the duration of time spent traveling via such means, is regarded as unsafe or excessively uncomfortable at that location
    • For example, buses/coaches are considered to be uncomfortable or unsafe in certain countries/regions.
  • The journey is over 6 hours, and the following option reduces the journey time by 2× (or more)
    • We have a duty to protect company time, so you may (e.g.) opt for flying in cases where the travel time is significantly reduced.
    • Even if there is the opportunity for significant time savings, you are encouraged to consider the possibility of working while on the train, even if it works out to be a longer journey.
  • The cost is considered unreasonably/unexpectedly high, but the following option brings expenses within usual norms
    • The regular pricing of the mode of transport can be considered against the distance traveled. If it is disproportionately high, move on to other options.

In summary, we prefer rail and ground transportation to favor low emissions, even if they are not the cheapest options. However, we also consider efficient use of company time, comfort, safety, and protecting ourselves from unreasonably high expenditure. You should explore all these options and considerations and discuss with your manager to make the final decision.

Your turn

I’d be interested to know whether others have similar travel policies, or have better or different ideas — or if you make changes to your travel policy as a result of reading this.

Policy excerpt

GNOME will be mentoring 9 new contributors in Google Summer of Code 2022!

We are happy to announce that GNOME was assigned nine slots for Google Summer of Code projects this year!

GSoC is a program focused on bringing new contributors into open source software development. A number of long-term GNOME developers are former GSoC interns, making the program a very valuable entry point for new members of our project.

In 2022 we will be mentoring the following projects:

Project Title | Contributor(s) | Assigned Mentor(s)
Reworking Sync Options for Health | amankrx | Rasmus Thomsen
Chromecast support for GNOME Network Displays | Anupam Kumar | Claudio Wunder and Benjamin Berg
Pitivi: Port UI to GTK4 | Aryan Kaushik | Alex Băluț and Yatin
Faces of GNOME – Continuing the Development of the Platform | Asmit Malakannawar | Claudio Wunder and Caroline Henriksen
Revamp “New Document” submenu | Ignacy Kuchciński and Utkarsh Gandhi | António Fernandes
Fractal: Media history viewer | Marco Melorio | Julian Sparber
GNOME Websites Framework – Part 2 | Pooja Patel | Claudio Wunder and Caroline Henriksen
Pitivi Timeline Enhancements | Thejas Kiran P S | Alex Băluț and Fabián Orccón

As part of their acceptance into GSoC, contributors are expected to actively participate in the Community Bonding period (May 20 – June 12). The Community Bonding period is intended to help prepare contributors to start contributing at full speed starting June 13.

The new contributors will soon get their blogs added to Planet GNOME making it easy for the GNOME community to get to know them and the projects that they will be working on.

We would like to also thank our mentors for supporting GSoC and helping new contributors enter our project.

If you have any questions, feel free to reply to this Discourse topic or message us privately at [email protected]

** This is a repost from https://discourse.gnome.org/t/announcement-gnome-will-be-mentoring-9-new-contributors-in-google-summer-of-code-2022/9918

May 20, 2022

Status update, 19/05/2022

I had some ambitious travel plans last month – ambitious by 2020’s standards, anyway – and somehow they came off without any major issues or contagions. In Cologne I was amazed to go to Freedom Sounds festival and witness the return of the Singers ATX alongside a host of other ska legends. And then my first ever trip to Italy, where I attended Linux App Summit. It was a treat to be around people who are as enthusiastic as I am about the rather niche topic of Linux desktop apps… but hopefully it’s a topic which is becoming less niche.

Once safely back home, I managed to contract COVID-19, so this status update is going to be fairly short.

Things I spent time on that may be of interest to someone include:

  • a minimal “GUI toolkit” for the monome grid, written partly as another Rust exercise, and partly to make experimental beats. Fun to write. A lot of this was developed offline while travelling, and I discovered Rust’s cargo doc feature, which is an incredible tool for the disconnected hacker. Code dump here.
  • related to the above, packaging the user space driver for the monome grid as a systemd portable service (a rough sketch of both of these follows this list).
  • updating Fedora on my partner’s laptop. This particular laptop was lacking a Wi-Fi driver, but that is now solved in Fedora 36. I can provide yet another datapoint that the GNOME 42 screenshot experience creates joy and happiness.
  • watching the various new synthesizers unveiled at the Superbooth 2022 event.
  • a single experimental beat, plus some production on a new song due in a few months.
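
If you haven’t tried those two things, here’s roughly what they look like in practice (the portable service image and unit names are hypothetical, not my actual packaging):

    # Build and browse local API docs for a crate and all of its dependencies;
    # works offline once the dependencies are already in the local cargo cache:
    $ cargo doc --open

    # Attach a portable service image and start its unit:
    $ portablectl attach ./monome-driver_1.0.raw
    $ systemctl start monome-driver.service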

Announcing the Fedora BIOS boot SIG

Now that FESCo has decided that Fedora will keep supporting BIOS booting, the people working on Fedora's bootloader stack will need help from the Fedora community to keep Fedora booting on systems which require Legacy BIOS to boot.

To help with this, the Fedora BIOS boot SIG (special interest group) has been formed. The main goal of this SIG is to help the Fedora bootloader people by:

  1. Doing regular testing of nightly Fedora N + 1 composes on hardware which only supports BIOS booting

  2. Triaging and/or fixing BIOS boot related bugs.


A [email protected] mailing list and bugzilla account have been created; these will be used to discuss testing results and as assignee / Cc for bootloader bugzillas which are related to BIOS booting.

If you are interested in helping with Fedora BIOS boot support please:

  1. Subscribe to the mailing list

  2. Add yourself to the Members section of the SIG's wiki page

May 16, 2022

v3dv status update 2022-05-16

We haven’t posted updates to the work done on the V3DV driver since we announced the driver becoming Vulkan 1.1 Conformant.

But after reaching that milestone, we’ve been very busy working on more improvements, so let’s summarize the work done since then.

Multisync support

As mentioned in past posts, for the Vulkan driver we tried to focus as much as possible on the userspace part. So we tried to re-use the already existing kernel interface that we had for V3D, used by the OpenGL driver, without modifying or extending it.

This worked fine in general, except for synchronization. The V3D kernel interface only supported one synchronization object per submission. This didn’t map properly to Vulkan synchronization, which is more detailed and complex, and allows defining several semaphores/fences. We initially handled the situation with workarounds, and left some optional features unsupported.

After our 1.1 conformance work, our colleague Melissa Wen started working on adding support for multiple semaphores on the V3D kernel side. Then she also implemented the changes on V3DV to use this new feature. If you want more technical info, she wrote a very detailed explanation on her blog (part1 and part2).

For now the driver has two codepaths that are used depending on whether the kernel supports this new feature or not. That also means that, depending on the kernel, the V3DV driver could expose a slightly different set of supported features.

More common code – Migration to the common synchronization framework

For a while, Mesa developers have been making a great effort to refactor and move common functionality to a single place, so it can be used by all drivers, reducing the amount of code each driver needs to maintain.

During these months we have been porting V3DV to some of that infrastructure, from small bits (common VkShaderModule to NIR code) to a really big one: the common synchronization framework.

As mentioned, the Vulkan synchronization model is really detailed and powerful. But that also means it is complex. V3DV support for Vulkan synchronization included heavy use of threads. For example, V3DV needed to rely on a CPU wait (polling with threads) to implement vkCmdWaitEvents, as the GPU lacked a mechanism for this.

This was common to several drivers. So at some point there were multiple versions of complex synchronization code, one per driver. But, some months ago, Jason Ekstrand refactored Anvil support and collaborated with other driver developers to create a common framework. Obviously each driver would have their own needs, but the framework provides enough hooks for that.

After some gitlab and IRC chats, Jason provided a Merge Request with the port of V3DV to this new common framework, that we iterated and tested through the review process.

Also, with this port we got timeline semaphore support for free. Thanks to this change, the driver has ~1.2k fewer total lines of code (and more features!).

Again, we want to thank Jason Ekstrand for all his help.

Support for more extensions

Since 1.1 was announced, the following extensions have been implemented and exposed:

  • VK_EXT_debug_utils
  • VK_KHR_timeline_semaphore
  • VK_KHR_create_renderpass2
  • VK_EXT_4444_formats
  • VK_KHR_driver_properties
  • VK_KHR_16_bit_storage and VK_KHR_8bit_storage
  • VK_KHR_imageless_framebuffer
  • VK_KHR_depth_stencil_resolve
  • VK_EXT_image_drm_format_modifier
  • VK_EXT_line_rasterization
  • VK_EXT_inline_uniform_block
  • VK_EXT_separate_stencil_usage
  • VK_KHR_separate_depth_stencil_layouts
  • VK_KHR_pipeline_executable_properties
  • VK_KHR_shader_float_controls
  • VK_KHR_spirv_1_4

If you want more details about VK_KHR_pipeline_executable_properties, Iago recently wrote a blog post about it (here).

Android support

Android support for V3DV was added thanks to the work of Roman Stratiienko, who implemented this and submitted Mesa patches. We also want to thank the Android RPi team, and the Lineage RPi maintainer (Konsta), who also created and tested an initial version of that support, which was used as the baseline for the code that Roman submitted. I didn’t test it myself (it’s on my personal TO-DO list), but LineageOS images for the RPi4 are already available.

Performance

In addition to new functionality, we have also been working on improving performance. Most of the focus has been on the V3D shader compiler, as improvements there are shared between the OpenGL and Vulkan drivers.

But one of the features specific to the Vulkan driver (a port to OpenGL is pending) is double buffer mode, which is only available if MSAA is not enabled. This mode splits the tile buffer size in half, so the driver can start processing the next tile while the current one is being stored in memory.

In theory this could improve performance by reducing tile store overhead, so it would be most beneficial when vertex/geometry shaders aren’t too expensive. However, it comes at the cost of reducing tile size, which also causes some overhead of its own.

Testing shows that this helps in some cases (e.g. the Vulkan Quake ports) but hurts in others (e.g. Unreal Engine 4), so for the time being we don’t enable it by default. It can be enabled selectively by adding V3D_DEBUG=db to the environment variables. The idea for the future would be to implement a heuristic that decides when to activate this mode.
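
For example, to try double buffer mode with a single application (vkcube here is just a stand-in for whatever Vulkan program you want to test):

    $ V3D_DEBUG=db vkcube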

FOSDEM 2022

If you are interested in an overview of the improvements and changes to the driver during the last year, we gave a presentation at FOSDEM 2022: “v3dv: Status Update for Open Source Vulkan Driver for Raspberry Pi 4”.

Can we fix bearer tokens?

Last month I wrote about how bearer tokens are just awful, and a week later Github announced that someone had managed to exfiltrate bearer tokens from Heroku that gave them access to, well, a lot of Github repositories. This has inevitably resulted in a whole bunch of discussion about a number of things, but people seem to be largely ignoring the fundamental issue that maybe we just shouldn't have magical blobs that grant you access to basically everything even if you've copied them from a legitimate holder to Honest John's Totally Legitimate API Consumer.

To make it clearer what the problem is here, let's use an analogy. You have a safety deposit box. To gain access to it, you simply need to be able to open it with a key you were given. Anyone who turns up with the key can open the box and do whatever they want with the contents. Unfortunately, the key is extremely easy to copy - anyone who is able to get hold of your keyring for a moment is in a position to duplicate it, and then they have access to the box. Wouldn't it be better if something could be done to ensure that whoever showed up with a working key was someone who was actually authorised to have that key?

To achieve that we need some way to verify the identity of the person holding the key. In the physical world we have a range of ways to achieve this, from simply checking whether someone has a piece of ID that associates them with the safety deposit box all the way up to invasive biometric measurements that supposedly verify that they're definitely the same person. But computers don't have passports or fingerprints, so we need another way to identify them.

When you open a browser and try to connect to your bank, the bank's website provides a TLS certificate that lets your browser know that you're talking to your bank instead of someone pretending to be your bank. The spec allows this to be a bi-directional transaction - you can also prove your identity to the remote website. This is referred to as "mutual TLS", or mTLS, and a successful mTLS transaction ends up with both ends knowing who they're talking to, as long as they have a reason to trust the certificate they were presented with.

That's actually a pretty big constraint! We have a reasonable model for the server - it's something that's issued by a trusted third party and it's tied to the DNS name for the server in question. Clients don't tend to have a stable DNS identity, and that makes the entire thing sort of awkward. But, thankfully, maybe we don't need to? We don't need the client to be able to prove its identity to arbitrary third party sites here - we just need the client to be able to prove it's a legitimate holder of whichever bearer token it's presenting to that site. And that's a much easier problem.

Here's the simple solution - clients generate a TLS cert. This can be self-signed, because all we want to do here is be able to verify whether the machine talking to us is the same one that had a token issued to it. The client contacts a service that's going to give it a bearer token. The service requests mTLS auth without being picky about the certificate that's presented. The service embeds a hash of that certificate in the token before handing it back to the client. Whenever the client presents that token to any other service, the service ensures that the mTLS cert the client presented matches the hash in the bearer token. Copy the token without copying the mTLS certificate and the token gets rejected. Hurrah hurrah hats for everyone.
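
As a concrete sketch of the client-side half (assuming the OpenSSL command line tools; the filenames and CN are arbitrary placeholders). In RFC 8705 terms, the hash ends up in the token as a base64url-encoded SHA-256 of the DER-encoded certificate (the cnf/x5t#S256 claim):

    # Generate a self-signed EC client certificate and private key
    $ openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:P-256 \
        -keyout client.key -out client.crt -days 365 -nodes -subj "/CN=ci-runner"
    # Compute the certificate thumbprint a token issuer would embed in the token
    $ openssl x509 -in client.crt -outform der | openssl dgst -sha256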

Well except for the obvious problem that if you're in a position to exfiltrate the bearer tokens you can probably just steal the client certificates and keys as well, and now you can pretend to be the original client and this is not adding much additional security. Fortunately pretty much everything we care about has the ability to store the private half of an asymmetric key in hardware (TPMs on Linux and Windows systems, the Secure Enclave on Macs and iPhones, either a piece of magical hardware or Trustzone on Android) in a way that avoids anyone being able to just steal the key.

How do we know that the key is actually in hardware? Here's the fun bit - it doesn't matter. If you're issuing a bearer token to a system then you're already asserting that the system is trusted. If the system is lying to you about whether or not the key it's presenting is hardware-backed then you've already lost. If it lied and the system is later compromised then sure all your apes get stolen, but maybe don't run systems that lie and avoid that situation as a result?

Anyway. This is covered in RFC 8705 so why aren't we all doing this already? From the client side, the largest generic issue is that TPMs are astonishingly slow in comparison to doing a TLS handshake on the CPU. RSA signing operations on TPMs can take around half a second, which doesn't sound too bad, except your browser is probably establishing multiple TLS connections to subdomains on the site it's connecting to and performance is going to tank. Fixing this involves doing whatever's necessary to convince the browser to pipe everything over a single TLS connection, and that's just not really where the web is right at the moment. Using EC keys instead helps a lot (~0.1 seconds per signature on modern TPMs), but it's still going to be a bottleneck.

The other problem, of course, is that ecosystem support for hardware-backed certificates is just awful. Windows lets you stick them into the standard platform certificate store, but the docs for this are hidden in a random PDF in a Github repo. Macs require you to do some weird bridging between the Secure Enclave API and the keychain API. Linux? Well, the standard answer is to do PKCS#11, and I have literally never met anybody who likes PKCS#11 and I have spent a bunch of time in standards meetings with the sort of people you might expect to like PKCS#11 and even they don't like it. It turns out that loading a bunch of random C bullshit that has strong feelings about function pointers into your security critical process is not necessarily something that is going to improve your quality of life, so instead you should use something like this and just have enough C to bridge to a language that isn't secretly plotting to kill your pets the moment you turn your back.

And, uh, obviously none of this matters at all unless people actually support it. Github has no support at all for validating the identity of whoever holds a bearer token. Most issuers of bearer tokens have no support for embedding holder identity into the token. This is not good! As of last week, all three of the big cloud providers support virtualised TPMs in their VMs - we should be running CI on systems that can do that, and tying any issued tokens to the VMs that are supposed to be making use of them.

So sure this isn't trivial. But it's also not impossible, and making this stuff work would improve the security of, well, everything. We literally have the technology to prevent attacks like Github suffered. What do we have to do to get people to actually start working on implementing that?


May 15, 2022

Pika Backup 0.4 Released with Schedule Support

Pika Backup is an app focused on backups of personal data. It’s internally based on BorgBackup and provides fast incremental backups.
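
For a flavor of what that means under the hood, this is roughly the shape of the borg CLI that a tool like Pika drives (the repository path and options here are illustrative, not necessarily what Pika actually runs):

    # Create an encrypted repository, then take a deduplicated, incremental snapshot
    $ borg init --encryption=repokey /media/backup/repo
    $ borg create --stats /media/backup/repo::{hostname}-{now} ~/Documents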

Pika Backup version 0.4 has been released today. This release wraps up a year of development work. After the huge jump to supporting scheduled backups and moving to GTK 4 and Libadwaita, I am planning to go back to slightly more frequent and smaller releases. At the same time, well-tested and reliable releases will remain a priority of the project.

Release Overview

The release contains 41 resolved issues, 27 changelog entries, and a codebase that, despite many cleanups, nearly doubled in size. Here is a short rundown of the changes.

  • Ability to schedule regular backups.
  • Support for deleting old archives.
  • Revamped graphical interface including a new app icon.
  • Better compression and performance for backups.
  • Several smaller issues rectified.

You can find a more complete list of changes in Pika’s Changelog.

Thanks!

Pika Backup’s backbone is the BorgBackup software. If you want to support them you can check out their funding page. The money will be well spent.

A huge thank you to everyone who helped with this release. Especially Alexander, Fina, Zander, but also everyone else, the borg maintainers, the translators, the beta testers, people who reported issues or contributed ideas, and last but not least, all the people that gave me so much positive and encouraging feedback!

Resources

May 14, 2022

Crosswords 0.3

I’m pleased to announce Crosswords 0.3. This is the first version that feels ready for public consumption. Unlike the version I announced five months ago, it is much more robust and has some key new features.

New in this version:

  • Available on flathub: After working on it out of the source tree for a long while, I finally got the flatpaks working. Download it and try it out—let me know what you think! I’d really appreciate any sort of feedback, positive or negative.

  • Puzzle Downloaders: This is a big addition. We now support external downloaders and puzzle sets. This means it’s possible to have a tool that downloads puzzles from the internet. It also lets us ship sets of puzzles for the user to solve. With a good set of puzzles, we could even host them on gnome.org. These can be distributed externally, or they can be distributed via flatpak extensions (if not installing locally). I wrapped xword-dl and puzzlepull with a downloader to add some newspaper downloaders, and I’m looking to add more original puzzles shortly.

  • Dark mode: I thought that this was the neatest feature in GNOME 42. We now support both light and dark mode and honor the system setting. CSS is also heavily used to style the app, allowing for future visual modifications and customizations. I’m interested in allowing CSS customization on a per-puzzle-set basis.

  • Hint Button: This is a late addition. It can be used to suggest a random word that fits in the current row. It’s not super convenient, but I also don’t want to make the game too easy! We use Peter Broda’s wordlist as the basis for this.
  • .puz support: Internally we use the unencumbered .ipuz format. This is a pretty decent format and supports a wide variety of crossword features (see the minimal example after this list). But there’s a lot of crosswords out there that use the .puz format, and I know people have large collections of puzzles in that format. I wrote a simple converter to load these.
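
For the curious, here is what a minimal .ipuz crossword can look like, based on my reading of the ipuz spec (a toy 3×3 word square; not necessarily the exact subset Crosswords emits):

    {
      "version": "http://ipuz.org/v2",
      "kind": ["http://ipuz.org/crossword#1"],
      "dimensions": {"width": 3, "height": 3},
      "puzzle": [[1, 2, 3], [4, 0, 0], [5, 0, 0]],
      "clues": {
        "Across": [[1, "Feline pet"], [4, "___ you ready?"], [5, "Half of twenty"]],
        "Down": [[1, "Feline pet"], [2, "Exist"], [3, "Half of twenty"]]
      },
      "solution": [["C", "A", "T"], ["A", "R", "E"], ["T", "E", "N"]]
    }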

Next steps

I hope to release this a bit more frequently now that we have gotten to this stage. Next on my immediate list:

  • Revisit clue formats; empirically, crossword files in the wild play a little fast and loose with escaping and formatting (e.g. random entities and underline-escaping).
  • Write a Puzzle Set manager that will let you decide which downloaders/puzzles to show, as well as launch gnome-software to search for more.
  • Document the external Puzzle Set format to allow others to package up games.
  • Fix any bugs that are found.

I also plan to work on the Crossword editor and get that ready for a release on flathub. The amazing-looking AdwEntryRow should make parts of the design a lot cleaner.

But, no game is good without being fun! I am looking to expand the list of puzzles available. Let me know if you write crosswords and want to include them in a puzzle set.

Thanks

I couldn’t have gotten this release out without lots of help. In particular:

  • Federico, for helping to refactor the main code and for lots of great advice
  • Nick, for explaining how to get apps into flathub
  • Alexander, for his help with getting toasts working with a new behavior
  • Parker, for patiently explaining to me how the world of Crosswords worked
  • The folks on #gtk for answering questions and producing an excellent toolkit
  • And most importantly Rosanna, for testing everything and for her consistent cheering and support

Download on FLATHUB