Closed Bug 1885308 Opened 3 months ago Closed 9 days ago

"NetworkError when attempting to fetch resource." when using Firefox on Responsive mode with Service Worker

Categories

(DevTools :: Responsive Design Mode, defect, P2)

Version: Firefox 123

Tracking

(firefox129 fixed)

RESOLVED FIXED
129 Branch

People

(Reporter: miguel, Assigned: valentin)

Details

Attachments

(2 files)

Steps to reproduce:

When accessing web applications that utilize a Service Worker (SW) for resource caching, a "NetworkError when attempting to fetch resource" error occurs exclusively in Firefox's Responsive Design Mode. This error appears after the SW is installed and attempts to fetch resources via caches.match. Notably, the issue is not present in the Firefox mobile application or when using Firefox desktop outside of Responsive Design Mode. The problem has been observed on multiple sites, including https://app.clickup.com/login and https://www.voodoodreams.com/en/, which work as expected in other browsers' responsive modes and in Firefox under normal browsing conditions.
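The caching pattern involved can be sketched roughly as follows. This is an illustrative cache-first helper, not the actual worker code from the affected sites; the injected `caches`/`fetch` parameters are there only to make the sketch self-contained. The failure mode is the fall-through network `fetch()` rejecting with "NetworkError when attempting to fetch resource." in RDM:

```javascript
// Hypothetical cache-first strategy, as commonly used by service workers.
// `cacheFirst` and its injected dependencies are illustrative only.
async function cacheFirst(request, { caches, fetch }) {
  const cached = await caches.match(request);
  if (cached) return cached; // served from the SW cache
  return fetch(request);     // falls back to the network; this is what fails in RDM
}

// In a real service worker this would typically be wired up as:
// self.addEventListener("fetch", (event) => {
//   event.respondWith(cacheFirst(event.request, self));
// });
```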

Actual results:

Assets and script files don't load correctly when the service worker is installed.

Expected results:

Assets and script files are all loaded correctly through the service worker.

The Bugbug bot thinks this bug should belong to the 'Core::DOM: Networking' component, and is moving the bug to that component. Please correct in case you think the bot is wrong.

Component: Untriaged → DOM: Networking
Product: Firefox → Core
Component: DOM: Networking → Responsive Design Mode
Product: Core → DevTools

Some notes: the service worker on both websites does not install immediately. I'm not sure if we need to wait or interact with the page a bit, but it takes some time until it shows up.

I managed to reproduce the issue, but only when RDM is passing a custom user agent string, e.g.:

Mozilla/5.0 (Linux; Android 8; Pixel 3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.120 Mobile Safari/537.36

I don't get a fully blank page in my case, but a few resources fail to load with the error mentioned in the bug description.
I tried the same settings in Chrome (UA, screen size, etc.) and everything seems to load fine.

Hi Tom,

Did you already encounter websites where service worker requests only fail in RDM with Firefox, but work fine in other browsers' RDM, or in Firefox Mobile? Maybe you already have a bug filed for a similar issue?

Flags: needinfo?(twisniewski)

Given that it doesn't fail when setting an empty user agent in RDM, I was wondering if that could relate to that particular bit of RDM.

We end up triggering this particular code:
https://searchfox.org/mozilla-central/rev/b73676a106c1655030bb876fd5e0a6825aee6044/netwerk/protocol/http/HttpBaseChannel.cpp#545-552
which adds a User-Agent header to all network requests.

But I couldn't really understand how this could cause the various issues we see in the console:

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://app-cdn.clickup.com/fr-FR/runtime.fd15602909f7f6ee.js. (Reason: CORS request did not succeed). Status code: (null).

Failed to load ‘’. A ServiceWorker intercepted the request and encountered an unexpected error.   ngsw-worker-entry.js:1473:13

HTTP 504 when loading https://app-cdn.clickup.com/fr-FR/styles.f75be9f801c70619.css from the Service Worker

Pardon my delay here. No, I haven't run into this yet, and am not aware of any other bugs filed on it.

Flags: needinfo?(twisniewski)

The severity field is not set for this bug.
:nchevobbe, could you have a look please?

For more information, please visit BugBot documentation.

Flags: needinfo?(nchevobbe)

We'll discuss it in our next triage meeting.

Clearing the ni so it appears in our dashboard.

Flags: needinfo?(nchevobbe)

(In reply to Alexandre Poirot [:ochameau] from comment #4)

Given that it doesn't fail when setting an empty user agent in RDM, I was wondering if that could relate to that particular bit of RDM.

We end up triggering this particular code:
https://searchfox.org/mozilla-central/rev/b73676a106c1655030bb876fd5e0a6825aee6044/netwerk/protocol/http/HttpBaseChannel.cpp#545-552
which adds a User-Agent header to all network requests.

But I couldn't really understand how this could cause the various issues we see in the console:

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://app-cdn.clickup.com/fr-FR/runtime.fd15602909f7f6ee.js. (Reason: CORS request did not succeed). Status code: (null).

Failed to load ‘’. A ServiceWorker intercepted the request and encountered an unexpected error.   ngsw-worker-entry.js:1473:13

HTTP 504 when loading https://app-cdn.clickup.com/fr-FR/styles.f75be9f801c70619.css from the Service Worker

Valentin: do you know if setting a custom user agent for a request at https://searchfox.org/mozilla-central/rev/b73676a106c1655030bb876fd5e0a6825aee6044/netwerk/protocol/http/HttpBaseChannel.cpp#545-552 could cause issues in general with requests coming from service workers? Otherwise it might be something coming from the server?

Flags: needinfo?(valentin.gosu)

I don't think it "should" cause issues, but for some reason it does.
I tried to track down the flow of headers into the service worker and found this part of the code:

https://searchfox.org/mozilla-central/rev/7a8904165618818f73ab7fc692ace4a57ecd38c9/netwerk/protocol/http/nsHttpChannel.cpp#10365-10378

// Some APIs, like fetch(), allow content to set non-standard headers.
// Normally these APIs are responsible for copying these headers across
// redirects.  In the e10s parent-side intercept case, though, we currently
// "hide" the internal redirect to the InterceptedHttpChannel.  So the
// fetch() API does not have the opportunity to move headers over.
// Therefore, we do it automatically here.
//
// Once child-side interception is removed and the internal redirect no
// longer needs to be "hidden", then this header copying code can be
// removed.
nsCOMPtr<nsIHttpHeaderVisitor> visitor =
    new CopyNonDefaultHeaderVisitor(intercepted);
rv = VisitNonDefaultRequestHeaders(visitor);
NS_ENSURE_SUCCESS(rv, rv);

I found that removing this code block seems to make the bug go away 🙂
Andrew, I think child-side interception has been removed from the tree, right?

Flags: needinfo?(valentin.gosu) → needinfo?(bugmail)
Assignee: nobody → valentin.gosu
Status: UNCONFIRMED → ASSIGNED
Ever confirmed: true

(In reply to Valentin Gosu [:valentin] (he/him) from comment #10)

Andrew, I think child-side interception has been removed from the tree, right?

Recapping https://phabricator.services.mozilla.com/D210478#7233068 and https://phabricator.services.mozilla.com/D210478#7234156: child-side intercept code was removed, but this is explicitly parent-intercept logic; more context there.

Some additional investigation into the redirection hasn't turned up anything particularly useful. I did, however, rediscover bug 1704877, which has https://phabricator.services.mozilla.com/D112675 about giving the intercepted channel its own id; that raises the general meta issue around interception where:

  • We extracted InterceptedHttpChannel out of nsHttpChannel for complexity reasons; it allows decoupling interception logic from the core http channel logic.
  • Telling the content process about the redirect would seem to necessarily involve more IPC and potentially a new HttpChannelChild being set up, with the redirect from the non-intercepted HttpChannelChild to the yes-intercepted HttpChannelChild, I think?

Note that I did some additional digging which I'll make in the next comment because it's somewhat of a separate thought.

Flags: needinfo?(bugmail)

So the 504 gateway error is synthetic, being generated by the ServiceWorker for a fetch that throws:

      async safeFetch(req) {
        try {
          return await this.scope.fetch(req);
        } catch (err) {
          this.debugger.log(err, `Driver.fetch(${ req.url })`);
          return this.adapter.newResponse(null, {
            status: 504,
            statusText: 'Gateway Timeout'
          });
        }
      }

I had some trouble setting breakpoints/logpoints via the debugger on the prettified source (although it seemed to figure out how to map them back if I reloaded, they just didn't work), so I hooked the synthetic response generation via:

savedResponse = Response;
Response = function (...args) {
  console.trace();
  console.log("resp", ...args);
  return new savedResponse(...args);
};

and then I set a breakpoint in that code I'd added by clicking on the "debugger eval code:2.9" in the console to get the source view and set a breakpoint on the console.trace.

The request that is throwing looks like:

bodyUsed: false
cache: "reload"
credentials: "same-origin"
destination: "font"
headers: Headers
integrity: ""
method: "GET"
mode: "cors"
redirect: "follow"
referrer: "https://fonts.googleapis.com/"
referrerPolicy: "strict-origin-when-cross-origin"
signal: AbortSignal
url: "https://fonts.gstatic.com/s/roboto/v30/KFOlCnqEu92Fr1MmSU5fBBc4.woff2"

Manually calling [...req.headers.entries()] gets us:

Array(3) [ (2) […], (2) […], (2) […] ]
0: Array [ "accept", "application/font-woff2;q=1.0,application/font-woff;q=0.9,*/*;q=0.8" ]
1: Array [ "accept-language", "en-US,en;q=0.5" ]
2: Array [ "user-agent", "Mozilla/5.0 (Linux; Android 11; SAMSUNG SM-G973U) AppleWebKit/537.36 (KHTML, like Gecko) SamsungBrowser/14.2 Chrome/87.0.4280.141 Mobile Safari/537.36" ]
length: 3

Note that there is logic that intentionally propagates the headers from the fetch event that was received:

      newRequestWithMetadata(url, options) {
        return this.adapter.newRequest(url, {
          headers: options.headers
        });
      }
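The propagation above works because `user-agent` is not a forbidden header name in the Fetch standard, so a Headers object (and hence the worker's header-copying logic) stores and round-trips it like any ordinary header. A minimal illustration, runnable in any environment with WHATWG Headers (e.g. Node 18+); the UA string here is just a placeholder:

```javascript
// "user-agent" is not forbidden per the Fetch spec, so Headers keeps it.
// This is why the ServiceWorker-visible request can carry the RDM override
// along as a plain request header.
const h = new Headers();
h.append("user-agent", "Mozilla/5.0 (Example UA Override)");
console.log(h.get("user-agent"));      // "Mozilla/5.0 (Example UA Override)"
console.log([...h.entries()].length);  // 1
```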

The browser will help explain what's going on if we run the equivalent fetch from an uncontrolled voodoodreams page (use ctrl-shift-refresh to bypass serviceworker interception). Run:

frh = new Headers();
frh.append("user-agent", "Mozilla/5.0 (Linux; Android 11; SAMSUNG SM-G973U) AppleWebKit/537.36 (KHTML, like Gecko) SamsungBrowser/14.2 Chrome/87.0.4280.141 Mobile Safari/537.36");
fr = new Request("https://fonts.gstatic.com/s/roboto/v30/KFOlCnqEu92Fr1MmSU5fBBc4.woff2", { mode: "cors", headers: frh });
z = fetch(fr);

We get errors:

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://fonts.gstatic.com/s/roboto/v30/KFOlCnqEu92Fr1MmSU5fBBc4.woff2. (Reason: header ‘user-agent’ is not allowed according to header ‘Access-Control-Allow-Headers’ from CORS preflight response).

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://fonts.gstatic.com/s/roboto/v30/KFOlCnqEu92Fr1MmSU5fBBc4.woff2. (Reason: CORS request did not succeed). Status code: (null).

Uncaught (in promise) TypeError: NetworkError when attempting to fetch resource.

So the problem here, as I understand it, is somewhat emergent behavior arising from:

  • We are surfacing the synthetic user-agent header to the intercepted channel.
  • That header is then propagated through a "cors" request.
  • The server does not allow-list that header.
  • The fetch throws.
  • The ServiceWorker turns it into a 504 response.
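The chain hinges on `user-agent` not being a CORS-safelisted request header, which forces a preflight the server then fails. A simplified sketch of the Fetch spec's safelist check; `isCorsSafelisted` is an illustrative helper (the real spec additionally restricts header value bytes and caps total safelisted value length):

```javascript
// Simplified sketch of the CORS-safelisted request-header check from the
// Fetch standard. Any header outside this list (like "user-agent") makes the
// request non-simple, triggering a preflight; the server must then allow it
// via Access-Control-Allow-Headers, or the fetch fails.
function isCorsSafelisted(name, value = "") {
  const n = name.toLowerCase();
  if (n === "accept" || n === "accept-language" || n === "content-language") {
    return true; // (the spec also restricts the value bytes)
  }
  if (n === "content-type") {
    // Only three media types are safelisted for content-type.
    const essence = value.split(";")[0].trim().toLowerCase();
    return ["application/x-www-form-urlencoded", "multipart/form-data",
            "text/plain"].includes(essence);
  }
  return false;
}

console.log(isCorsSafelisted("accept"));     // true
console.log(isCorsSafelisted("user-agent")); // false -> preflight required
```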

Presumably we should avoid propagating the synthetic header to the intercepted channel. Maybe the visitor could be made smart enough to understand that the user-agent override is synthetic and should not match the eFilterSkipDefault filter at https://searchfox.org/mozilla-central/rev/f60bb10a5fe6936f9e9f9e8a90d52c18a0ffd818/netwerk/protocol/http/HttpBaseChannel.cpp#2041-2042:

return mRequestHead.VisitHeaders(visitor,
                                 nsHttpHeaderArray::eFilterSkipDefault);

Thank you for the detailed analysis, Andrew. That made it really easy to figure out that the problem here is that the code setting the userAgentOverride doesn't use the correct flag:

https://searchfox.org/mozilla-central/rev/b476ffaef761ff85c012e2d93050cf444ff7be34/netwerk/protocol/http/HttpBaseChannel.cpp#538,552

HttpBaseChannel::SetDocshellUserAgentOverride() {
...
  nsresult rv = SetRequestHeader("User-Agent"_ns, utf8CustomUserAgent, false);

Like we do here:

https://searchfox.org/mozilla-central/rev/b476ffaef761ff85c012e2d93050cf444ff7be34/netwerk/protocol/http/nsHttpChannel.cpp#6229-6234

if (!LoadIsUserAgentHeaderModified()) {
  rv = mRequestHead.SetHeader(
      nsHttp::User_Agent,
      gHttpHandler->UserAgent(nsContentUtils::ShouldResistFingerprinting(
          this, RFPTarget::HttpUserAgent)),
      false, nsHttpHeaderArray::eVarietyRequestEnforceDefault);

Passing eVarietyRequestEnforceDefault should resolve this issue. I'll update the patch ASAP.

Attachment #9401909 - Attachment description: Bug 1885308 - Remove CopyNonDefaultHeaderVisitor code r=asuth → Bug 1885308 - Make HttpBaseChannel::SetDocshellUserAgentOverride use eVarietyRequestEnforceDefault for the header.
Severity: -- → S3
Priority: -- → P2
Pushed by valentin.gosu@gmail.com:
https://hg.mozilla.org/integration/autoland/rev/2c96a82a6b09
Make HttpBaseChannel::SetDocshellUserAgentOverride use eVarietyRequestEnforceDefault for the header. r=asuth,necko-reviewers,jesup
Status: ASSIGNED → RESOLVED
Closed: 9 days ago
Resolution: --- → FIXED
Target Milestone: --- → 129 Branch
