Securing the Switcher browser extension
I’ve been involved with the WonderSwitcher browser extension since its beginnings, so I’ve worked on a lot of features and fixes in that time. Recently I took on one particularly interesting project: patching security vulnerabilities exposed by a security audit.
The Switcher is unique amongst WonderProxy offerings in that it is distributed through browser vendor stores and installed by users directly in their browsers. Being able to import, customise, and activate proxy servers is great functionality, enabled by sitting one step closer to the user, but that position raises extra security considerations. To provide assurance to customers using the extension, Cure53 were commissioned to apply their expertise to a theoretical scenario: what could happen if the WonderProxy website were compromised by attackers?
As I’d never been involved in a project like this before, I was intrigued as to what the results of their review would be like (and admittedly a tiny bit anxious!). We’d been following a thorough QA process, but always centering on user needs, not malicious intentions. Cure53’s outstanding reputation preceded them, so it was reassuring to know that we were working with the right people to do this.
First stage and assessing the report
During the initial six-day review, Cure53 were able to feed back descriptions of issues they discovered while the remaining tests were ongoing. This enabled investigation to begin in some areas before they provided their report with the compiled results. Each issue was fed back complete with a description, the test code used to reproduce it, and suggestions for how it might be patched. Their approach was “Here’s what we managed to do; we wouldn’t have been able to do it if...”. The information proved invaluable in making the required changes.
Four vulnerabilities relating to the Switcher browser extension were discovered:
- WON-01-002 Extension: 3rd-party extensions can send store commands (Medium)
- WON-01-006 Extension: Missing validation for imported server data (Low)
- WON-01-003 Extension: Fake geolocation can be bypassed (Info)
- WON-01-007 Extension: PAC script injection via proxy information (Info)
WON-01-006: Missing validation for imported server data (Low) and WON-01-007: PAC script injection via proxy information (Info)
After reviewing the report, a clear first step was to implement validation on the JSON data as it is imported into the extension. Input validation would patch two of the four vulnerabilities, as they both depend on getting altered data from the website parsed into the extension. We’d previously considered that the import needed to fail gracefully if what was arriving prevented the extension from doing what it needed to, but Cure53’s example proved that additional manipulated data could change the extension’s behaviour. Covering this problem ultimately required a three-step approach in the extension’s import process:
- Check for duplicate sibling properties in the raw JSON string before parsing.
- Manually check on the parsed JS object.
- Validate the parsed JS object against a predefined schema.
Based on Cure53’s test code, we were able to carry out further internal testing with fake data containing duplicate sibling properties. When we did this, the extension user would only end up with one of the properties - always the last of the siblings. This is because JavaScript’s `JSON.parse` doesn’t treat duplicate siblings in an object as an error: it silently keeps the last one and discards the others rather than failing. Because we need to work with parsed data for our manual checks and schema validation, we first check for duplicates in the raw string so we can fail at that point if there are any.
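A scan like the following is roughly the shape of that first check. This is a minimal sketch rather than the actual Switcher code, but it illustrates catching duplicate sibling keys before `JSON.parse` silently drops them:

```js
// Minimal sketch: walk the raw JSON string and track the keys seen in each
// open object, so duplicate siblings can be detected before parsing.
function hasDuplicateSiblingKeys(raw) {
  const stack = [];        // one Set of seen keys per open object; null marks an array
  let inString = false;
  let escaped = false;
  let lastString = "";     // the most recently completed string literal
  let expectKey = false;   // true right after "{" or "," inside an object

  for (const ch of raw) {
    if (inString) {
      if (escaped) escaped = false;
      else if (ch === "\\") escaped = true;
      else if (ch === '"') inString = false;
      else lastString += ch;
      continue;
    }
    if (ch === '"') { inString = true; lastString = ""; continue; }
    if (ch === "{") { stack.push(new Set()); expectKey = true; continue; }
    if (ch === "[") { stack.push(null); continue; }
    if (ch === "}" || ch === "]") { stack.pop(); continue; }
    if (ch === ",") { expectKey = stack[stack.length - 1] instanceof Set; continue; }
    if (ch === ":" && expectKey) {
      const keys = stack[stack.length - 1];
      if (keys.has(lastString)) return true;   // duplicate sibling found
      keys.add(lastString);
      expectKey = false;
    }
  }
  return false;
}

// e.g. hasDuplicateSiblingKeys('{"port": 80, "port": 8080}') === true
```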
While acceptable sane limits for imported data were being reviewed, I looked for JavaScript data validation libraries and settled on AJV, as it would allow us to cover a good number of the checks we needed by defining a schema for it to use. Besides standard features like checking types, required properties, disallowing unexpected properties, and max/min for numbers, we also make use of some other nice AJV features like:
- Regex pattern matching for key names and string properties.
- Type checking that differentiates between “number” and “integer”.
Working with AJV is also nice because it provides useful errors showing exactly where validation fails.
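To give a flavour of what the schema looks like, here’s an illustrative example; the property names and limits are hypothetical, not the real Switcher import schema:

```js
import Ajv from "ajv";

const ajv = new Ajv({ allErrors: true });

const serverSchema = {
  type: "object",
  additionalProperties: false,                            // reject unexpected properties
  propertyNames: { pattern: "^[a-zA-Z][a-zA-Z0-9_]*$" },  // regex check on key names
  required: ["hostname", "port", "country"],
  properties: {
    hostname: { type: "string", pattern: "^[a-z0-9.-]+$" },
    port:     { type: "integer", minimum: 1, maximum: 65535 }, // "integer", not just "number"
    country:  { type: "string", pattern: "^[A-Z]{2}$" },
    latency:  { type: "number", minimum: 0 },
  },
};

const validateServer = ajv.compile(serverSchema);

const candidate = { hostname: "proxy.example.com", port: "8080", country: "FR" };
if (!validateServer(candidate)) {
  // With allErrors enabled, AJV reports every failing keyword and its path,
  // e.g. that "/port" must be an integer.
  console.log(validateServer.errors);
}
```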
It wasn’t possible to use AJV for some less-standard requirements, so those checks needed to be done before the AJV validation. As these involve looping over the data, we also take the opportunity to count the total number of incoming proxy servers, so this can be presented to the user for a final “OK” from them.
To end up with a neat `validateData(incomingString)` function, we combine the steps and wrap everything in a try/catch:
`try { /* everything */ } catch (err) { return false }`
The function then returns the number of proxy servers if the import is valid, so we can treat any other return value as an invalid import. Nice!
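Putting the steps together, the function ends up shaped something like this sketch, where `hasDuplicateSiblingKeys`, `runManualChecks`, and `validateSchema` stand in for the pieces described above:

```js
// Sketch only: combine the raw-string check, manual checks, and schema
// validation, returning the proxy count on success and false on any failure.
function validateData(incomingString) {
  try {
    if (hasDuplicateSiblingKeys(incomingString)) return false;
    const data = JSON.parse(incomingString);
    const serverCount = runManualChecks(data); // hypothetical: custom checks + proxy count
    if (!validateSchema(data)) return false;   // hypothetical wrapper around the AJV validator
    return serverCount;                        // anything other than a number means the import failed
  } catch (err) {
    return false;
  }
}
```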
WON-01-003: Fake geolocation can be bypassed (Info)
A useful feature of the Switcher is that it can override the browser’s native geolocation functions. If the user has a Switcher proxy activated while a site calls any of these JavaScript functions, it will get the geolocation results of the proxy server rather than the browser’s actual location. Cure53 discovered that in Google Chrome it was possible for websites to “find” the native un-altered geolocation functions by programmatically creating a blank iframe, which would expose a “clean” window object containing the native functions. If those are available from the page, they can be called directly or used to override the already-overridden functions.
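To picture the override itself, a rough sketch (not the actual Switcher content script) looks something like this, with `proxyCoords` standing in for whatever the extension knows about the active proxy:

```js
// Replace the geolocation functions the page sees, so callers get the
// proxy server's coordinates instead of the browser's real location.
const proxyCoords = { latitude: 48.8566, longitude: 2.3522, accuracy: 10 };

const fakePosition = () => ({ coords: proxyCoords, timestamp: Date.now() });

navigator.geolocation.getCurrentPosition = (success) => success(fakePosition());
navigator.geolocation.watchPosition = (success) => { success(fakePosition()); return 0; };
```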
Dealing with injected content scripts from your extension can get complicated, as there is a range of options which can be set in the extension’s manifest definitions to control execution. In this case I had overlooked the actual functionality of the `all_frames` property we were using, so our geolocation content script that overrides the functions was not being injected into iframes which were set to have (or ended up with) their source being `about:blank`. Cure53’s issue detail pointed us straight to the `match_about_blank` property, which specifically ensures these iframes will also include the script.
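In manifest terms, the relevant part of the content script declaration ends up looking roughly like this (the script file name is a placeholder):

```json
{
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["geolocation-override.js"],
      "all_frames": true,
      "match_about_blank": true
    }
  ]
}
```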
A further possible improvement they suggested was passing the browser’s geolocation object to JavaScript’s `Object.freeze()` to prevent any other code trying to change it back to normal, or to something else. We weren’t able to do this, as it would have also prevented the extension’s code from being able to dynamically switch the overridden functions on and off as the user changes proxies or updates the traffic filtering settings.
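For illustration, the suggestion amounts to a single call, and its blanket effect is exactly what rules it out for us:

```js
// Freezing the object stops page scripts re-assigning its methods, but it
// would equally stop the extension swapping its own overrides in and out later.
Object.freeze(navigator.geolocation);
```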
WON-01-002: 3rd-party extensions can send store commands (Medium)
With any complex extension, reliable communication between the core parts is hard to achieve. In our case, there is a background script, a popup UI, an options UI, and the per-page content scripts, all of which need to be able to communicate and stay in sync. Since we refactored the UI parts into React.js and are using Redux to manage state, the webext-redux library was the obvious choice for internal state sharing.
Unfortunately, I’d overlooked a vulnerability in the library where it would be possible for another extension installed in the same browser to send messages to our extension if it knew the extension ID (webext-redux actually documents this themselves). When Cure53 discovered this, they were able to provide us with a few lines of code they’d used in a test extension, which meant we could recreate the issue easily in a test extension of our own. They were even able to point to the two specific lines in the webext-redux source code which cause the vulnerability.
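The probing itself needs very little, since cross-extension messaging only requires the target’s ID. Something along these lines is enough to test whether messages from a foreign extension get through; the ID and payload shape here are made up, and the real message format is whatever webext-redux’s store listens for:

```js
// Run from a throwaway test extension installed in the same browser.
const TARGET_ID = "abcdefghijklmnopabcdefghijklmnop"; // hypothetical Switcher extension ID

chrome.runtime.sendMessage(TARGET_ID, { type: "DISPATCH", payload: {/* ... */} }, (response) => {
  console.log("Switcher store replied:", response);
});
```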
This level of detail was extremely useful because figuring out how to implement the fix got complicated. As Cure53 advised, there is a manifest property in Chrome called `externally_connectable` for limiting external communication. Setting a strict control with this patches the issue for Chrome, but Firefox doesn’t support the property, and adding it causes a scary warning on the extensions page. Ideally, we didn’t want to split out separate distributions for that one change, so we investigated patching the webext-redux library ourselves rather than adding `externally_connectable` to the manifest.
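For reference, a strict declaration looks something like the snippet below; once `externally_connectable` is declared without an `ids` entry, other extensions can no longer connect (the match pattern is a placeholder):

```json
{
  "externally_connectable": {
    "matches": ["https://*.example.com/*"]
  }
}
```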
Once we’d tested that removing the insecure lines would not break any existing Switcher functionality in either browser, and would definitely prevent the vulnerability in both, we had the option of maintaining our own fork of the library with those parts removed. This was also less than ideal, as we’d be taking on the overhead and risks of maintaining and updating a customised fork. To avoid that, we used patch-package, which lets you make changes directly to any package’s source code in `node_modules` and records the difference in a `patches` folder. You then add this line to the scripts in `package.json`:
`"postinstall": "patch-package"`
From then on, running `npm install` will install your dependencies as normal and then apply any patches from the `patches` folder (which is of course maintained alongside the rest of your source code under version control). This leaves us in a good place to be able to use `externally_connectable` for both browsers if Firefox supports it in the future.
Testing the fixes
The second stage involved Cure53 returning to verify that vulnerabilities had been patched sufficiently. Because the report information they provided had allowed us to accurately recreate the issues, we were able to run through an internal QA process first to verify we could no longer recreate them in the new code. Once that was done, Cure53 were invited back and they were able to quickly give us four big check marks for these issues, with no hiccups!
Conclusion
This was a positive project from start to finish, and one I’d look forward to doing again. The Switcher extension is better for users now, and there were some interesting lessons along the way. Some of the changes we needed to make were quite complicated, but working with Cure53 meant we had the information we needed to be methodical and make the right choices. I wasn’t expecting their feedback to be so helpful and concise; they owned and shared the problems they found with us rather than just handing them over.