Insulating a suspended timber floor

In a departure from my normal blogging, this post is going to be about how I’ve retrofitted insulation to some of the flooring in my house and improved its airtightness. This has resulted in a noticeable increase in room temperature during the cold months.

Setting the scene

The kitchen floor in my house is a suspended timber floor, built over a 0.9m tall sealed cavity (concrete skim floor, brick walls on four sides, air bricks). This design is due to the fact the kitchen is an extension to the original house, and it’s built on the down-slope of a hill.

The extension was built around 1984, shortly before the UK building regulations changed to (basically) require insulation. This meant that the floor was literally some thin laminate flooring, a 5mm underlay sheet beneath it, 22mm of chipboard, and then a ventilated air cavity at outside temperature (which, in winter, is about 4°C).

In addition to that, there were 10mm gaps around the edge of the chipboard, connecting the outside air directly with the air in the kitchen. The kitchen is 3×5m, so its roughly 16m perimeter gives a total air gap of around 0.16m². That’s equivalent to leaving a window open all year round. The room has been this way for about 36 years! The UK needs a better solution for ongoing improvement and retrofit of buildings.

I established all this initial information fairly easily by taking the kickboards off the kitchen units and looking into the corners of the room; and by drilling a 10mm hole through the floor and threading a small camera (borescope) into the cavity beneath.

Making a plan

The fact that the cavity was 0.9m high and in good structural shape meant that adding insulation from beneath was a fairly straightforward choice. Another option (which would have been the only option if the cavity was shallower) would have been to remove the kitchen units, take up all the floorboards, and insulate from above. That would have been a lot more disruptive and labour intensive. Interestingly, the previous owners of the house had a whole new kitchen put in, and didn’t bother (or weren’t advised) to add insulation at the same time. A very wasted opportunity.

I cut an access hatch in one of the floorboards, spanning between two joists, and scuttled into the cavity to measure things more accurately and check the state of things.

Under-floor cavity before work began (but after a bit of cleaning)

The joists are 145×45mm, which gives an obvious 145mm depth of insulation which can be added. Is that enough? Time for some calculations.

I chose several potential insulation materials, then calculated the embodied carbon cost of insulating the floor with them, the embodied financial cost of them, and the net carbon and financial costs of heating the house with them in place (over 25 years). I made a number of assumptions, documented in the workings spreadsheet, largely due to the lack of EPDs for different components. Here are the results:

| Heating scenario | Insulation assembly | U-value of floor assembly (W/m2K) | Energy loss to floor (W) | Net cost over 25 years (£) | Net carbon cost over 25 years (kgCO2e) |
| --- | --- | --- | --- | --- | --- |
| Current gas tariff (3.68p/kWh, 0.22kgCO2e/kWh) | Current floor | 2.60 | 382 | 3080 | 17980 |
| | Thermojute 160mm | 0.22 | 32 | 730 | 1700 |
| | Thermoflex 160mm | 0.21 | 30 | 860 | 1450 |
| | Thermojute 300mm | 0.12 | 18 | 1020 | 1190 |
| | Thermoflex 240mm | 0.11 | 17 | 910 | 820 |
| | Mineral wool 160mm | 0.24 | 35 | 540 | 1680 |
| ASHP estimate (13.60p/kWh, 0.01kgCO2e/kWh) | Current floor | (as above) | (as above) | 11370 | 1140 |
| | Thermojute 160mm | (as above) | (as above) | 1420 | 290 |
| | Thermoflex 160mm | (as above) | (as above) | 1520 | 110 |
| | Thermojute 300mm | (as above) | (as above) | 1410 | 410 |
| | Thermoflex 240mm | (as above) | (as above) | 1280 | 80 |
| | Mineral wool 160mm | (as above) | (as above) | 1290 | 150 |
| Average future estimate (hydrogen grid) (8.40p/kWh, 0.30kgCO2e/kWh) | Current floor | (as above) | (as above) | 7020 | 25090 |
| | Thermojute 160mm | (as above) | (as above) | 1060 | 2290 |
| | Thermoflex 160mm | (as above) | (as above) | 1170 | 2010 |
| | Thermojute 300mm | (as above) | (as above) | 1200 | 1520 |
| | Thermoflex 240mm | (as above) | (as above) | 1090 | 1140 |
| | Mineral wool 160mm | (as above) | (as above) | 890 | 2320 |

Costings for different floor assemblies; see the spreadsheet for full details

In retrospect, I should also have considered multi-layer insulation options, such as a 20mm layer of closed-cell foam beneath the chipboard, and a 140mm layer of vapour-open insulation below that. More on that below.

In the end, I went with 160mm of Thermojute, packed between the joists and held in place with a windproof membrane stapled to the underside of the joists. This has a theoretical U-value of 0.22W/m2K and hence an energy loss of 32W over the floor area. Over 25 years, with a new air source heat pump (which I don’t have, but it’s a likelihood soon), the net carbon cost of this floor (embodied carbon + heating loss through the floor) should be at most 290kgCO2e, of which around 190kgCO2e is the embodied cost of the insulation. Without changing the heating system it would be around 1700kgCO2e.
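
As a rough sanity check on that energy loss figure (the numbers here are mine, not the spreadsheet's: they assume the 15m² floor area and an average temperature difference of around 10K between the kitchen and the under-floor cavity):

\[ Q = U \times A \times \Delta T \approx 0.22\,\mathrm{W/m^2K} \times 15\,\mathrm{m^2} \times 10\,\mathrm{K} \approx 33\,\mathrm{W} \]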

The embodied cost of the insulation is an upper bound: I couldn’t find an embodied carbon figure for Thermojute, but its natureplus certification puts an upper bound on what’s allowed. It’s likely that the actual embodied cost is lower, as the jute is recycled in a fairly simple process.

Three things swung the decision: the availability of Thermojute over Thermoflex, the joist loading limiting the depth of insulation I could install, and the ease of not having to support insulation installed below the depth of the joists.

This means that the theoretical performance of the floor is not Passivhaus standard (around 0.10–0.15W/m2K), although this is partially mitigated by the fact that the kitchen is not a core part of the house, and is separated from it by a cavity wall and some tight doors, which means it should not be a significant heat sink for the rest of the house when insulated. It’s also regularly heated by me cooking things.

Hopefully the attention to detail when installing the insulation, and the careful tracing of airtightness and windtightness barriers through the design should keep the practical performance of the floor high. The windtightness barrier is to prevent wind-washing of the insulation from below. The airtightness barrier is to prevent warm, moisture-laden air from the kitchen escaping into the insulation and building structure (particularly, joists), condensing there (as they’re now colder due to the increased insulation) and causing damp problems. An airtightness barrier also prevents convective cooling around the floor area, and reduces air movement which, even if warm, increases our perception of cooling.

I did not consider thermal bridging through the joists. Perhaps I should have done?

Insulation installation

Installation was done over a number of days and evenings, sped up by the fact the UK was in lockdown at the time and there was little else to do.

Cross sections of the insulation details

The first step in installation was to check the blockwork around each joist end and seal that off to reduce draughts from the wall cavity into the insulation. Thankfully, the blockwork was in good condition so no work was necessary.

The next step was to add an airtightness seal around all pipe penetrations through the chipboard, as the chipboard was to form the airtightness barrier for the kitchen. This was done with Extoseal Magov tape.

Sealing pipe penetrations through the chipboard floor using Extoseal Magov.

The next step in installation was to tape the windproof membrane to the underside edge of the chipboard, to separate the end of the insulation from the wall. This ended up being surprisingly quick once I’d made a cutting template.

The next step was to wedge the insulation batts in the gap between each pair of joists. This was done in several layers with offset overlaps. Each batt was slightly wider than the gap between joists, so could easily be held in place with friction. This arrangement shouldn’t be prone to gaps forming in the insulation as the joists expand and contract slightly over time.

One of the positives of using jute-based insulation is that it smells of coffee and sugar (which is what the bags which the jute fibres came from were originally used to transport). One of the downsides is that the batts need to be cut with a saw and the fibres get everywhere.

Some of the batts needed to be carefully packed around (insulated) pipework, and I needed to form a box section of windproof membrane around the house’s main drainage stack in one corner of the space, since it wasn’t possible to fit insulation or the membrane behind it. I later added closed-cell plastic bubblewrap insulation around the rest of the drainage stack to reduce the chance of it freezing in winter, since the under-floor cavity should now be significantly colder.

As more of the insulation was installed, I could start to staple the windproof membrane to the underside of the joists, and seal the insulation batts in place. The room needed three runs of membrane, with 100mm taped overlaps between them.

With the insulation and membrane in place and taped, the finishing touches in the under-floor cavity were to reinstall the pipework insulation and seal it to the windproof membrane to prevent any (really minor) wind washing of the insulation from draughts through the pipe holes; to label everything; insulate the drainage stack; re-clip the mains wiring; and tie the membrane into the access hatch.

Airtightness work in the kitchen

With the insulation layer complete under the chipboard floor, the next stage in the job was to ensure a continuous airtightness layer between the kitchen walls (which are plasterboard, and hence airtight apart from penetrations for sockets which I wasn’t worried about at the time) and the chipboard floor. Each floor board is itself airtight, but the joints between each of them and between them and the walls are not.

The solution to this was to add a lot of tape: cheaper paper-based Uni tape for joining the floor boards, and Contega Solido SL for joining the boards to the walls (Uni tape is not suitable as the walls are not smooth and flat, and there are some complex corners where the flexibility of a fabric tape is really useful).

Tediously, this involved removing all the skirting board and the radiator. Thankfully, though, none of the kitchen units needed to be moved, so this was actually a fairly quick job.

Finally, with some of the leftover insulation and windproof membrane, I built an insulation plug for the access hatch. This is attached to the underside of the hatch, and has a tight friction fit with the underfloor insulation, so should be windtight. The hatch itself is screwed closed onto a silicone bead, which should be airtight and replaceable if the hatch is ever opened.

The final step was to reinstall the kitchen floor, which was fairly straightforward as it’s interlocking laminate strips. And, importantly, to print out the plans, cross-sections, data sheets, a big warning about the floor being an air tightness barrier, and a sign to point towards the access hatch, and put them in a wallet under the kitchen units for someone to find in future.

Retrospective

This was a fun job to do, and has noticeably improved the comfort of my kitchen.

I can’t give figures for how much of an improvement it’s made, or whether its actual performance matches the U-value calculations I made in planning, as I don’t have reliable measured energy loss figures for the kitchen from before installing the insulation. Next time I’d try to measure things more thoroughly in advance of a project like this, although that requires an extra level of planning and preparation which can be hard to achieve for a job done in my spare time.

I’m happy with the choice of materials and installation method. Everything was easy to work with and the job progressed without any unexpected problems.

If I were to do the planning again, I might put more thought into how to achieve a better U-value while being limited by the joist height. Extending the joists to accommodate more depth of insulation was something I explored in some detail, but it hit too many problems: the air bricks would need to be ducted (as otherwise they’d be covered up), the joist loading limits might be hit, and the method for extending the joists would have to be careful not to introduce thermal bridges. The whole assembly might have bridged the damp proof course in the walls.

It might, instead, have worked to consider a multi-layer insulation approach, where a thin layer of high performance insulation was used next to the chipboard, with the rest of the joist depth taken up with the thermojute. I can’t easily change to that now, though, so any future improvements to this floor will either have to add insulation above the chipboard (and likely another airtightness layer above that), or extend below the joists and be careful about it.

Add metadata to your app to say what inputs and display sizes it supports

The appstream specification, used for appdata files for apps on Linux, supports specifying what input devices and display sizes an app requires or supports. GNOME Software 41 will hopefully be able to use that information to show whether an app supports your computer. Currently, though, almost no apps include this metadata in their appdata.xml file.

Please consider taking 5 minutes to add the information to the appdata.xml files you care about. Thanks!

If your app supports (and is tested with) touch devices, plus keyboard and mouse, add:

<recommends>
  <control>keyboard</control>
  <control>pointing</control>
  <control>touch</control>
</recommends>

If your app is only tested against keyboard and mouse, add:

<requires>
  <control>keyboard</control>
  <control>pointing</control>
</requires>

If it supports gamepads, add:

<recommends>
  <control>gamepad</control>
</recommends>

If your app is only tested on desktop screens (the majority of cases), add:

<requires>
  <display_length compare="ge">medium</display_length>
</requires>

If your app is adaptive and works on mobile device screens through to desktops, add:

<requires>
  <display_length compare="ge">small</display_length>
</requires>

Or, if you’ve developed your app to work at a specific size (mostly relevant for mobile devices), you can specify that explicitly:

<requires>
  <display_length compare="ge">360</display_length>
</requires>

Note that there may be updates to the definition of display_length in appstream in future for small display sizes (phones), so this might change slightly.

Another example is what I’ve added for Hitori, which supports touch and mouse input (but not keyboard input) and which works on small and large screens.

See the full specification for more unusual situations and additional examples.

How your organisation’s equipment policy can impact the environment

At the Endless OS Foundation, we’ve recently been updating some of our internal policies. One of these is our equipment policy, covering things like what laptops and peripherals are provided to employees. While updating it, we took the opportunity to think about the environmental impact it would have, and how we could reduce that impact compared to standard or template equipment policies.

How this matters

For many software organisations, the environmental impact of hardware purchasing for employees is probably at most the third-biggest contributor to the organisation’s overall impact, behind carbon emissions from energy usage (in building and providing software to a large number of users), and emissions from transport (both in sending employees to conferences, and in people’s daily commute to and from work). These both likely contribute tens of tonnes of emissions per year for a small/medium sized organisation (as a very rough approximation, since all organisations are different). The lifecycle emissions from a modern laptop are in the region of 300kgCO2e, and one target for per-person emissions is around 2.2tCO2e/year by 2030.

If changes to this policy reduce new equipment purchases by 20%, that’s a 20kgCO2e/year reduction per employee.
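
That figure presumably comes from amortising a laptop’s roughly 300kgCO2e lifecycle emissions over a typical 3 year replacement cycle (an assumption on my part; the exact figures will vary by organisation):

\[ \frac{300\,\mathrm{kgCO_2e}}{3\,\mathrm{years}} \approx 100\,\mathrm{kgCO_2e/year}, \qquad 20\% \times 100\,\mathrm{kgCO_2e/year} = 20\,\mathrm{kgCO_2e/year} \]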

So, while changes to your organisation’s equipment policy are not going to have a big impact, they will have some impact, and are easy and unilateral changes to make right now. 20kgCO2e is roughly the emissions from a 150km journey in a petrol car.

What did we put in the policy?

We started with a fairly generic policy. From that, we:

  • Removed time-based equipment replacement schedules (for example, replacing laptops every 3 years) and went with a more qualitative approach of replacing equipment when it’s no longer functional enough for someone to do their job properly on.
  • Provided recommended laptop models for different roles (currently several different versions of the Dell XPS 13), which we have checked conform to the rest of the policy and have an acceptable environmental impact — Dell are particularly good here because, unlike a lot of laptop manufacturers, they publish a lifecycle analysis for their laptops
  • But also gave people the option to justify a different laptop model, as long as it meets certain requirements:

    All laptops must meet the following standards in order to have low lifetime impacts:

    All other equipment must meet relevant environmental standards, which should be discussed on a case by case basis

    If choosing a device not explicitly listed above, manufacturers who provide Environmental Product Declarations for their products should be preferred

  • These requirements aim to minimise the laptop’s carbon emissions during use (i.e. its power consumption), and increase the chance that it will be repairable or upgradeable when needed. In particular, having a replaceable battery is important, as the battery is the most likely part of the laptop to wear out.
  • The policy prioritises laptop upgrades and repairs over replacement: even when a laptop would typically be coming up for replacement after 3 years, it steers people towards upgrading it (a new hard drive, additional memory, a new battery, etc.) rather than replacing it.
  • When a laptop is functional but no longer useful, the policy requires that it’s wiped, refurbished (if needed) and passed on to a local digital inclusion charity, school, club or similar.
  • If a laptop is broken beyond repair, the policy requires that it’s disposed of according to WEEE guidelines (which is the norm in Europe, but potentially not in other countries).

A lot of this just codifies what we were doing as an organisation already — but it’s good to have the policy match the practice.

Your turn

I’d be interested to know whether others have similar equipment policies, or have better or different ideas — or if you make changes to your equipment policy as a result of reading this.

Don’t (generally) put documentation license in appdata

There have been a few instances recently where people have pointed out that GNOME Software marks some apps as not free software when they are. This is a bug in the appdata files provided by those applications, which typically include something like

<project_license>GPL-3.0+ and CC-BY-SA-3.0</project_license>

This is generally an attempt to list the license of the code and of the documentation. However, the AND operator in the resulting SPDX expression means that both licenses apply to the work as a whole, so the most restrictive interpretation of the two is taken. As a result, the expression as a whole is considered not free software (CC-BY-SA-3.0 is not a free software license as per the FSF or OSI lists).

Instead, those apps should probably just list the ‘main’ license for the project:

<project_license>GPL-3.0+</project_license>

and document the license for their documentation separately. As far as I know, the appdata format doesn’t currently have a way of listing the documentation license in a machine readable way.

If you maintain an app, or want to help out, please check the licensing is correctly listed in your app’s appdata.

There’s an issue open against the appdata spec for improving how licenses are documented in future — contributions also welcome there.

(To avoid doubt, I think CC-BY-SA-3.0 is a fine license for documentation; it’s just problematic to include it in the ‘main’ appdata license statement for an app.)

Simple HTTP profiling of applications using sysprof

This is a quick write-up of a feature I added last year to libsoup and sysprof which exposes basic information about HTTP/HTTPS requests to sysprof, so they can be visualised in GNOME Builder.

Prerequisites

  • libsoup compiled with sysprof support (-Dsysprof=enabled), which should be the default if libsysprof-capture is available on your system.
  • Your application (and ideally its dependencies) uses libsoup for its HTTP requests; this won’t work with other network libraries.

Instructions

  • Run your application in Builder under sysprof, or under sysprof on the CLI (sysprof-cli -- your-command) then open the resulting capture file in Builder.
  • Ensure the ‘Timings’ row is visible in the ‘Instruments’ list on the left, and that the libsoup row is enabled beneath that.
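
If you want a small test case to try this out on, a minimal program like the following should produce a single line in the ‘libsoup’ row (this is my own sketch using the libsoup 2.4 synchronous API, not something from libsoup itself; the URI is just an example):

/* fetch.c: build with gcc fetch.c $(pkg-config --cflags --libs libsoup-2.4) */
#include <libsoup/soup.h>

int
main (void)
{
  SoupSession *session = soup_session_new ();
  SoupMessage *msg = soup_message_new ("GET", "https://example.com/");

  /* Send the request synchronously; this transaction should show up as a
   * line in the ‘libsoup’ row of the sysprof capture. */
  guint status = soup_session_send_message (session, msg);
  g_print ("HTTP status: %u\n", status);

  g_object_unref (msg);
  g_object_unref (session);

  return 0;
}

Running it as sysprof-cli -- ./fetch and opening the resulting capture in Builder should show that one HTTPS request.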

Results

You should then be able to see a line in the ‘libsoup’ row for each HTTP/HTTPS request your application made. The line indicates the start time and duration of the request (from when the first byte of the request was sent to when the last byte of the response was received).

The details of the event contain the URI which was requested, whether the transaction was HTTP or HTTPS, the number of bytes sent and received, the Last-Modified header, If-Modified-Since header, If-None-Match header and the ETag header (subject to a pending fix).

What’s that useful for?

  • Seeing what requests your application is making, across all its libraries and dependencies — often it’s more than you think, and some of them can easily be optimised out. A request which is not noticeable to you on a fast internet connection will be noticeable to someone on a slower connection with higher request latencies.
  • Quickly checking that all requests are HTTPS rather than HTTP.
  • Quickly checking that all requests from the application set appropriate caching headers (If-Modified-Since, If-None-Match) and that all responses from the server do too (Last-Modified, ETag) — if an HTTP request can result in a cache hit, that’s potentially a significant bandwidth saving for the client, and an even bigger one for the server (if it’s seeing the same request from multiple clients).
  • Seeing a high-level overview of what bandwidth your application is using, and which HTTP requests are contributing most to that.
  • Seeing how it all ties in with other resource usage in your application, courtesy of sysprof.

Yes that seems useful

There’s plenty of scope for building this out into a more fully-featured way of inspecting HTTP requests using sysprof. Because it works from inside the process, using sysprof – rather than from outside, using Wireshark – it has visibility into TLS-encrypted conversations.

GNOME Software performance in GNOME 40

tl;dr: Use callgrind to profile CPU-heavy workloads. In some cases, moving heap allocations to the stack helps a lot. GNOME Software startup time has decreased from 25 seconds to 12 seconds (-52%) over the GNOME 40 cycle.

To wrap up the sporadic blog series on the progress made with GNOME Software for GNOME 40, I’d like to look at some further startup time profiling which has happened this cycle.

This profiling has focused on libxmlb, which gnome-software uses extensively to query the appstream data which provides all the information about apps which it shows in the UI. The basic idea behind libxmlb is that it pre-compiles a ‘silo’ of information about an XML file, which is cached until the XML file next changes. The silo format encodes the tree structure of the XML, deduplicating strings and allowing fast traversal without string comparisons or parsing. It is memory mappable, so can be loaded quickly and shared (read-only) between multiple processes. It allows XPath queries to be run against the XML tree, and returns the results.

gnome-software executes a lot of XML queries on startup, as it loads all the information needed to show many apps to the user. It may be possible to eliminate some of these queries – and some earlier work did reduce the number by binding query parameters at runtime to pre-prepared queries – but it seems unlikely that we’ll be able to significantly reduce their number further, so better speed them up instead.

Profiling work which happens on a CPU

The work done in executing an XPath query in libxmlb is largely on the CPU — there isn’t much I/O to do as the compiled XML file is only around 7MB in size (see ~/.cache/gnome-software/appstream), so this time the most appropriate tool to profile it is callgrind. I ruled out using callgrind previously for profiling the startup time of gnome-software because it produces too much data, risks hiding the bigger picture of which parts of application startup were taking the most time, and doesn’t show time spent on I/O. However, when looking at a specific part of startup (XML queries) which is largely CPU-bound, callgrind is ideal.

valgrind --tool=callgrind --collect-systime=msec --trace-children=no gnome-software

It takes about 10 minutes for gnome-software to start up and finish loading the main window when running under callgrind, but eventually it’s shown, the process can be interrupted, and the callgrind log loaded in kcachegrind:

Here I’ve selected the main() function and the callee map for it, which shows a 2D map of all the functions called beneath main(), with the area of each function proportional to the cumulative time spent in that function.

The big yellow boxes are all memset(), which is being called on heap-allocated memory to set it to zero before use. That’s low-hanging fruit to optimise.

In particular, it turns out that the XbStack and XbOperands which libxmlb creates for evaluating each XPath query were being allocated on the heap. With a few changes, they can be allocated on the stack instead, and don’t need to be zero-filled when set up, which saves a lot of time — stack allocation is a simple increment of the stack pointer, whereas heap allocation can involve page mapping, locking, and updates to various metadata structures.
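
As a general illustration of the technique (a made-up example, not the actual libxmlb change): a small, fixed-capacity scratch structure which is created and destroyed on a hot path can often be moved from the heap to the stack, eliminating both the allocator calls and the zero-filling.

#include <glib.h>

#define SCRATCH_DEPTH 16

typedef struct {
  guint values[SCRATCH_DEPTH];
  guint len;
} Scratch;

/* Before: allocated on the heap and zero-filled on every call. */
static guint
sum_first_n_heap (const guint *input, guint n)
{
  g_autofree Scratch *scratch = g_new0 (Scratch, 1);
  guint total = 0;

  for (guint i = 0; i < n && i < SCRATCH_DEPTH; i++)
    scratch->values[scratch->len++] = input[i];
  for (guint i = 0; i < scratch->len; i++)
    total += scratch->values[i];

  return total;
}

/* After: lives on the stack, and only the fields which are actually used get
 * initialised, so there is no malloc(), free() or memset() on the hot path. */
static guint
sum_first_n_stack (const guint *input, guint n)
{
  Scratch scratch;
  guint total = 0;

  scratch.len = 0;
  for (guint i = 0; i < n && i < SCRATCH_DEPTH; i++)
    scratch.values[scratch.len++] = input[i];
  for (guint i = 0; i < scratch.len; i++)
    total += scratch.values[i];

  return total;
}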

The changes are here, and should benefit every user of libxmlb without further action needed on their part. With those changes in place, the callgrind callee map is a lot less dominated by one function:

There’s still plenty left to go at, though. Contributions are welcome, and we can help you through the process if you’re new to it.

What’s this mean for gnome-software in GNOME 40?

Overall, after all the performance work in the GNOME 40 cycle, startup time has decreased from 25 seconds to 12 seconds (-52%) when starting for the first time since the silo changed. This is the situation in which gnome-software normally starts, as it sits as a background process after that, and the silo is likely to change every day or two.

There are plans to stop gnome-software running as a background process, but we are not there yet. It needs to start up in 1–2 seconds for that to give a good user experience, so there’s a bit more optimisation to do yet!

Aside from performance work, there are a number of other improvements to gnome-software in GNOME 40, including a new icon, some improvements to parts of the interface, and a lot of bug fixes. But perhaps they should be explored in a separate blog post.

Many thanks to my fellow gnome-software developers – Milan, Phaedrus and Richard – for their efforts this cycle, and my employer the Endless OS Foundation for prioritising working on this.

Use GLIB_VERSION_MIN_REQUIRED to avoid deprecation warnings

tl;dr: Define GLIB_VERSION_MIN_REQUIRED and GLIB_VERSION_MAX_ALLOWED in your meson.build to avoid deprecation warnings you don’t care about, or accidentally using GLib APIs which are newer than you want to use. You can add this to your library by copying gversionmacros.h and using its macros on public APIs in your header files.

With every new stable release, GLib adds and/or deprecates public API. If you are building a project against GLib, you probably don’t want to use the new APIs until you can reliably depend on a new enough version of GLib. Similarly, you want to be able to continue using the newly-deprecated APIs until you can reliably depend on the version of GLib which deprecated them.

In both cases, the alternative is for your project to add conditional compilation blocks which use some GLib symbols if building against the new version of GLib, and others if building against the old version. That’s lots of work, and no fun.

So to prevent projects using GLib APIs which are outside the range of GLib versions which those projects are tested to build and work against, GLib can emit deprecation warnings at compile time if APIs which are too new – or too old – are used.

You can enable this by defining GLIB_VERSION_MIN_REQUIRED and GLIB_VERSION_MAX_ALLOWED in your build environment. With Meson, add the following to your top-level meson.build, typically just after you check for the GLib dependency using dependency():

add_project_arguments(
  '-DGLIB_VERSION_MIN_REQUIRED=GLIB_VERSION_2_46',
  '-DGLIB_VERSION_MAX_ALLOWED=GLIB_VERSION_2_66',
  language: 'c')

The GLIB_VERSION_x symbols are provided by GLib, and there’s one for each stable release series. You can also use GLIB_VERSION_CUR_STABLE or GLIB_VERSION_PREV_STABLE to refer to the stable version of the copy of GLib you’re building against.

If undefined, both symbols default to the current stable version, meaning you get all new APIs and all deprecation warnings by default. This is good for keeping up to date with developments in GLib, but not so good if you are targeting an older distribution and don’t want to see all the latest deprecation warnings.
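
For example, with the Meson arguments above, calling an API which is newer than the maximum allowed version produces a compile-time warning (g_memdup2() here is just an illustration; it was added in GLib 2.68):

#include <glib.h>

static guint8 *
copy_bytes (const guint8 *data, gsize len)
{
  /* Warns when built with -DGLIB_VERSION_MAX_ALLOWED=GLIB_VERSION_2_66,
   * because g_memdup2() was only added in GLib 2.68. */
  return g_memdup2 (data, len);
}

int
main (void)
{
  const guint8 data[] = { 1, 2, 3 };
  g_autofree guint8 *copy = copy_bytes (data, sizeof data);
  return copy == NULL;
}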

None of this is new; this blog post is your periodic reminder that this versioning functionality exists and you may benefit from using it.

Add this to your library too

It’s easy enough to add to other libraries, and should be useful in situations where your library is unlikely to break API for the foreseeable future, but could add or deprecate API with every release.

Simply copy and adapt gversionmacros.h, and use its macros against every symbol in a public header. You can use them for functions, types, macros and enums.
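
As a rough sketch of what the adapted macros might look like for a hypothetical library ‘mylib’ (the names and version numbers here are invented, and it assumes GLib’s G_DEPRECATED and G_UNAVAILABLE helper macros are available):

/* mylib-version-macros.h (excerpt), adapted from GLib's gversionmacros.h.
 * MYLIB_VERSION_1_4, MYLIB_VERSION_MIN_REQUIRED and MYLIB_VERSION_MAX_ALLOWED
 * would be defined earlier in the same header, one version macro per stable
 * release. */

/* Warn if the project says it requires mylib 1.4 or newer, because this
 * symbol was deprecated in 1.4. */
#if MYLIB_VERSION_MIN_REQUIRED >= MYLIB_VERSION_1_4
# define MYLIB_DEPRECATED_IN_1_4 G_DEPRECATED
#else
# define MYLIB_DEPRECATED_IN_1_4
#endif

/* Warn if the project says it only targets versions older than 1.4, because
 * this symbol did not exist before then. */
#if MYLIB_VERSION_MAX_ALLOWED < MYLIB_VERSION_1_4
# define MYLIB_AVAILABLE_IN_1_4 G_UNAVAILABLE (1, 4)
#else
# define MYLIB_AVAILABLE_IN_1_4
#endif

/* …and then in the public headers: */
MYLIB_DEPRECATED_IN_1_4
void mylib_frobnicate (void);

MYLIB_AVAILABLE_IN_1_4
void mylib_frobnicate_full (unsigned int flags);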

The downside is that you will need to update the version macros header for each new version of your library, to add macros for the new version. There’s no way round this within the header file, as C macros may not define additional macros. It may be possible to generate the header using an external script from Meson, though, if someone wants to automate it.

Add extended information to GErrors in GLib 2.67.2

Thanks to Krzesimir Nowak, a 17-year-old feature request in GLib has been implemented: it’s now possible to define GError domains which have extended information attached to their GErrors.

You could now, for example, define a GError domain for text parser errors which includes context information about a parsing failure, such as the current line and character position. Or attach the filename of a file which was being read, to the GError informing of a read failure. Define an extended error domain using G_DEFINE_EXTENDED_ERROR(). The extended information is stored in a ‘private’ struct provided by you, similarly to how it’s implemented for GObjects with G_DEFINE_TYPE_WITH_PRIVATE().

There are code examples on how to use the new APIs in the GLib documentation, so I won’t reproduce them here.

An important limitation to note is that existing GError domains which have ever been part of a stable public API cannot be extended retroactively without breaking ABI. That’s because extending a GError domain increases the size of the allocated GError instances for that domain, and it’s possible that users of your API have stack-allocated GErrors in existing code.

Please don’t stack-allocate GErrors, as it makes future extensions of the API impossible, and doesn’t buy you notable extra performance, as GErrors should not be used on fast paths. By their very nature, they’re for failure reporting.

The new APIs are currently unstable, so please try them out and provide feedback now. They will be frozen with the release of GLib 2.67.3, scheduled for 11th February 2021.

GUADEC 2020

tl;dr: The virtual GUADEC 2020 conference had negligible carbon emissions, on the order of 100× lower than the in-person 2019 conference. Typical travel emissions for the 2019 conference used 10% of each attendee’s annual carbon budget. 2020 had increased inclusiveness, but the downside of a limited social scene. What can we do to get the best of both for 2021?

It’s been several weeks since GUADEC 2020 was held, and this release cycle of GNOME is coming to a close. It’s been an interesting year. The conference was a different experience from normal, and despite missing seeing everyone in person I thought it went very well. Many thanks to the organising team and especially the sysadmin team. I’m glad an online conference was possible, and happy that it allowed many people to attend who can’t normally do so. I hope we can incorporate the best parts of this year into future conferences.

Measuring things

During the conference, with the help of Bart, I collected some data about the resource consumption of the servers during GUADEC. After a bit of post-processing, it looks like the conference emitted on the order of 0.5–1tCO2e (tonnes of carbon dioxide equivalent, the measure of global warming potential). These emissions were from the conference servers (21% of the total), network traffic (55%), and an estimate of the power used by people’s home computers while watching talks (24%).

By way of contrast, there were estimated emissions of 110tCO2e for travel to and from GUADEC 2019 in Thessaloniki. Travel emissions are likely to be the bulk of the emissions from that conference (insufficient data is available to estimate the other costs, such as building use, food, events, etc.). Of those travel emissions, 98% were from flights, and 79% of attendees flew. The lowest emissions for a return flight were a bit under 0.3tCO2e, the highest were around 3tCO2e, and the mode was the bracket [0.3, 0.6)tCO2e.

This shows quite a contrast between in-person and virtual conferences — a factor of 100 difference in carbon emissions. The conference in Thessaloniki (which I’m focusing on because I’ve got data for it from the post-conference survey, not because it was particularly unusual) had 198 registered attendees, and modal transport emissions per attendee of 0.42tCO2e.

Does it matter?

The recommended personal carbon budget for 2019/2020 is 4.1tCO2e, and it decreases each year until we reach emissions which are compatible with 2°C of global warming in 2050. That means that everyone should only emit 4.1tCO2e or less, per year. Modal emissions of 0.42tCO2e per person attending the 2019 conference is 10% of their carbon budget.

Other emissions pathways give lower budgets sooner, and perhaps would be better goals (2°C of global warming is a lot).

Everyone is in charge of their own carbon budgeting, and how they choose to spend it. It’s possible to spend 10% of your annual budget on one conference and still come in under-budget for the year, but it’s not easy.

For this reason, and for the reasons of inclusiveness which we saw at GUADEC 2020, I hope we keep virtual participation as a first-class part of GUADEC in future. It would be good to explore ways of keeping the social aspects of an in-person conference without completely returning to the previous model of flying everyone to one place.

What about 2021?

I say ‘2021’, but please take this to mean ‘next time it’s safe to host an international in-person conference’.

Looking at the breakdown of transport emissions for GUADEC 2019 by mode, flights are the big target for emissions reductions (note the logarithmic scale):

GUADEC 2019 transport emissions by mode (note logarithmic scale)

Splitting the flights up by length shows that the obvious approach of encouraging international train travel instead of short-haul flights (emissions bins up to 1.2tCO2e/flight in the graph below) would not have got us more than 38% reduction in transport emissions for Thessaloniki, but that’s a pretty good start.

GUADEC 2019 total flight emissions breakdown by flight length (where flight length is bucketed by return emissions; a lower emissions bucket means a shorter return flight)

Would a model where we had per-continent or per-country in-person meetups, all attending a larger virtual conference, have significantly lower emissions? Would it bring back enough of the social atmosphere?

Something to think about for GUADEC 2021! If you have any comments or suggestions, or have spotted any mistakes in this analysis, please get in touch. The data is available here.

Thanks to Will Thompson for proofreading.

Controlling safety vs speed when writing files

GLib 2.65.1 has been released with a new g_file_set_contents_full() API which you should consider using instead of g_file_set_contents() for writing out a file — it’s a drop-in replacement. It provides two additional arguments, one to control the trade-off between safety and performance when writing the file, and one to set the file’s mode (permissions).

What’s wrong with g_file_set_contents()?

g_file_set_contents() has worked fine for many years (and will continue to do so). However, it doesn’t provide much flexibility. When writing a file out on Linux there are various ways to do it, some slower but safer — and some faster, but less safe, in the sense that if your program or the system crashes part-way through writing the file, the file might be left in an indeterminate state. It might be garbled, missing, empty, or contain only the old contents.

g_file_set_contents() chose a fairly safe (but not the fastest) approach to writing out files: write the new contents to a temporary file, fsync() it, and then atomically rename() the temporary file over the top of the old file. This approach means that other processes only ever see the old file contents or the new file contents (but not the partially-written new file contents); and it means that if there’s a crash, either the old file will exist or the new file will exist. However, it doesn’t guarantee that the new file will be safely stored on disk by the time g_file_set_contents() returns. It also has fewer guarantees if the old file didn’t exist (i.e. if the file is being written out for the first time).
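
For illustration, that write-to-a-temporary-file-then-rename pattern looks roughly like this at the POSIX level (a simplified sketch, not GLib’s actual implementation, and it skips details such as short writes and preserving permissions):

#include <fcntl.h>
#include <glib.h>
#include <glib/gstdio.h>
#include <string.h>
#include <unistd.h>

static gboolean
write_file_safely (const gchar *filename, const gchar *contents)
{
  g_autofree gchar *tmp_name = g_strdup_printf ("%s.tmp", filename);
  int fd = g_open (tmp_name, O_WRONLY | O_CREAT | O_TRUNC, 0666);

  if (fd < 0)
    return FALSE;

  /* Write the new contents to a temporary file, and make sure they have
   * actually reached the disk before renaming. */
  if (write (fd, contents, strlen (contents)) < 0 || fsync (fd) < 0)
    {
      close (fd);
      g_unlink (tmp_name);
      return FALSE;
    }

  close (fd);

  /* Atomically replace the old file with the new one: other processes (and a
   * reboot after a crash) see either the complete old contents or the
   * complete new contents, never a partially-written file. */
  if (g_rename (tmp_name, filename) < 0)
    {
      g_unlink (tmp_name);
      return FALSE;
    }

  return TRUE;
}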

In most situations, this is the right compromise. But not in all of them — so that’s why g_file_set_contents_full() now exists, to let the caller choose their own compromise.

Choose your own tradeoff

The level of safety/speed of g_file_set_contents_full() can be chosen using GFileSetContentsFlags.

Situations where your code might want a looser set of guarantees from the defaults might be when writing out cache files (where it typically doesn’t matter if they’re lost or corrupted), or when writing out large numbers of files where you’re going to call fsync() once after the whole lot (rather than once per file).

In these situations, you might choose G_FILE_SET_CONTENTS_NONE.

Conversely, your code might want a tighter set of guarantees when writing out files which would still be well-formed, but incorrect, if left empty or filled with zeroes (filling a file with zeroes is one of the failure modes of the existing g_file_set_contents() defaults if the file is being created), or when writing valuable user data.

In these situations, you might choose G_FILE_SET_CONTENTS_CONSISTENT | G_FILE_SET_CONTENTS_DURABLE.
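
For example (a sketch with made-up helper names), the two ends of that spectrum might look like this:

#include <glib.h>

/* Cache data: it does not matter much if this is lost or corrupted after a
 * crash, so trade safety for speed. */
static gboolean
save_cache_file (const gchar *path, const gchar *data, GError **error)
{
  return g_file_set_contents_full (path, data, -1,
                                   G_FILE_SET_CONTENTS_NONE,
                                   0600, error);
}

/* Valuable user data: keep old-or-new consistency, and fsync() so the new
 * contents have reached disk by the time this returns. */
static gboolean
save_user_document (const gchar *path, const gchar *data, GError **error)
{
  return g_file_set_contents_full (path, data, -1,
                                   G_FILE_SET_CONTENTS_CONSISTENT |
                                   G_FILE_SET_CONTENTS_DURABLE,
                                   0600, error);
}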

The default flags used by g_file_set_contents() are G_FILE_SET_CONTENTS_CONSISTENT | G_FILE_SET_CONTENTS_ONLY_EXISTING, which makes its definition:

gboolean
g_file_set_contents (const gchar  *filename,
                     const gchar  *contents,
                     gssize        length,
                     GError      **error)
{
  return g_file_set_contents_full (filename, contents, length,
                                   G_FILE_SET_CONTENTS_CONSISTENT |
                                   G_FILE_SET_CONTENTS_ONLY_EXISTING,
                                   0666, error);
}

Check your code

So, maybe now is the time to quickly grep your code for g_file_set_contents() calls, and see whether the default tradeoff is the right one in all the places you call it?