
Protocol handler API #2

Closed · 7 of 12 tasks
Gozala opened this issue May 16, 2018 · 40 comments


Gozala commented May 16, 2018

The primary goal of this milestone is to enable a Firefox add-on to provide a custom network protocol implementation. This will not enable an add-on to provide full IPFS, Dat, or SSB implementations (as that would, for example, require exposing a UDP socket API to add-ons), but it would enable an add-on to register an associated protocol handler and possibly delegate the actual networking to a local node via the native messaging API or through a public gateway. Unlike the current support for decentralized network protocols, this will not redirect requests but rather load them under the specified URL, with the content origin set to scheme://${hostname}.

Option 1

Allow an add-on to register a ServiceWorker against a custom network protocol, allowing Firefox to delegate all network requests to the dedicated ServiceWorker running in a separate background thread.

Option 2

There is an already established Electron API, used by Beaker Browser and the IPFS extension for the Brave browser, to do exactly that. A compatible Firefox add-on API could be exposed to enable this in Firefox. Firefox could leverage existing components like nsIProtocolHandler, nsIStandardURL, and nsIURL to expose such an API.

🐞 1271553
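
For reference, the Electron API in question looks roughly like this (sketched from memory of Electron's protocol module of that era; treat the exact signature as an assumption rather than gospel):

const { protocol } = require("electron");

// Register a handler for dat:// URLs. The callback replies with a buffer
// and a mime type, which Electron turns into the response.
protocol.registerBufferProtocol("dat", (request, callback) => {
  callback({ mimeType: "text/html", data: Buffer.from("<h1>hi</h1>") });
});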

API Draft

The current work-in-progress implementation exposes the following API under the browser.protocol namespace:

interface ProtocolAPI {
  registerProtocol(scheme: string, handler: (request: Request) => Response): Promise<void>;
}

interface Request {
  url: string;
}

interface Response {
  contentCharset?: string; // default: "utf-8"
  contentLength?: number;  // default: -1 (unknown)
  contentType?: string;    // default: "text/html"
  content: AsyncIterator<ArrayBuffer>;
}
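
For illustration, a handler registered with this draft API could look like the following sketch (the dweb scheme and the page body are made up for the example):

// Background script sketch using the draft API above; requires the
// "protocol" permission. Everything besides registerProtocol itself
// is illustrative.
browser.protocol.registerProtocol("dweb", request => {
  const body = `<h1>Hello from ${request.url}</h1>`;
  return {
    contentType: "text/html",
    content: (async function* () {
      // The draft expects an async iterator of ArrayBuffer chunks.
      yield new TextEncoder().encode(body).buffer;
    })()
  };
});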

Status

Gozala added this to the Protocol handler milestone May 16, 2018
Gozala self-assigned this May 16, 2018

Gozala commented May 21, 2018

There is some relevant discussion regarding going with a Service Worker-like API at https://github.com/Gozala/libdweb/issues/8#issuecomment-390437124


Gozala commented May 21, 2018

Made some progress on this issue last week. As things stand now, the above-described API is available to the background script if the protocol permission is requested. The current implementation still does not handle back-pressure or cancellation (as in, a consumer attempting to pause or cancel the underlying channel), but I'll work towards getting this wired up as well.

The only other thing I'm considering is going with a ProtocolWorker API, for the reasons described at https://github.com/Gozala/libdweb/issues/8#issuecomment-390437124. Other than performance, it's also just a lot simpler to manage multiple messaging channels with a worker than it is with the add-on's background script.


Gozala commented May 21, 2018

I observe uncaught exceptions being reported in the browser console, but can't seem to figure out where they're coming from:

(screenshot: uncaught exceptions in the browser console)


Gozala commented May 21, 2018

It seems that, for whatever reason, the loading spinner for a new tab into which content from the custom protocol handler is loaded never finishes spinning. Oddly enough, it works fine for the first tab. Maybe it's related to #issuecomment-390724031?


Gozala commented May 21, 2018

In a separate conversation @olizilla asked if we could support CSP headers in custom protocols. At least, that is the mechanism in Electron.

Support for HTTP headers in a non-HTTP protocol seems odd on one hand, but on the other hand it's a reasonable request. According to the [nsIChannel](https://developer.mozilla.org/en-US/docs/Mozilla/Tech/XPCOM/Reference/Interface/nsIChannel) page on MDN:

After a request has been completed, the channel is still valid for accessing protocol-specific results. For example, QueryInterfacing to nsIHttpChannel allows response headers to be retrieved for the corresponding http transaction.

I wonder if implementing the nsIHttpChannel interface on our custom channels would allow us to do that.

@olizilla Also, as I mentioned in that conversation, there are some protocol flags that could be used to control policies, though to a much lesser degree.


lidel commented May 21, 2018

@Gozala a somewhat related question: are these flags taken into consideration when the browser decides if a loaded resource is a Secure Context? Asking because localhost is marked as a Secure Context (even on HTTP without encryption), and I see flags like URI_IS_LOCAL_RESOURCE and URI_IS_LOCAL_FILE, which could be kinda true for things like a locally running IPFS node.


Gozala commented May 21, 2018

@Gozala a somewhat related question: are these flags taken into consideration when the browser decides if a loaded resource is a Secure Context?

I honestly don't know, but that is what I would expect.

Turns out protocolFlags do not affect isSecureContext. I tried using the same flags as the file protocol:

Ci.nsIProtocolHandler.URI_IS_LOCAL_FILE |
Ci.nsIProtocolHandler.URI_IS_LOCAL_RESOURCE |
Ci.nsIProtocolHandler.URI_STD |
Ci.nsIProtocolHandler.URI_NOAUTH

But the loaded page still reports self.isSecureContext as false.

Asking because localhost is marked as a Secure Context (even on HTTP without encryption), and I see flags like URI_IS_LOCAL_RESOURCE and URI_IS_LOCAL_FILE, which could be kinda true for things like a locally running IPFS node.

In the case of URI_IS_LOCAL_RESOURCE, my guess is it's what's used for resource:/// URLs, which are Firefox-bundled content; as the description says, it "would not hit the network", which I don't think is true of IPFS content. It also does not imply a secure context: resource:/// URLs inherit that from the loader.

As for URI_IS_LOCAL_FILE, I'm confused by the description of this flag, but I also think it's pretty much for file:/// URLs.

At the moment the protocol handler uses URI_STD | URI_INHERITS_SECURITY_CONTEXT, which I assumed was enough for all intended purposes, but it turns out self.isSecureContext is false, which is not great.


Gozala commented May 22, 2018

Regarding isSecureContext, I did some research, and according to nsIContentSecurityManager.idl:

/**
   * Implementation of
   * https://w3c.github.io/webappsec-secure-contexts/#is-origin-trustworthy
   *
   * The value returned by this method feeds into the Secure Context
   * algorithm that determines the value of Window.isSecureContext and
   * WorkerGlobalScope.isSecureContext.
   *
   * This method returns false instead of throwing upon errors.
   */
boolean isOriginPotentiallyTrustworthy(in nsIPrincipal aPrincipal);

Looking at the current implementation of isOriginPotentiallyTrustworthy, it does not seem like any protocol, channel, or URI flags affect the outcome; it's a hard-coded list of protocol schemes that are granted "secure context".

😞


Gozala commented May 22, 2018

Turns out there is bug 1328695, "isOriginPotentiallyTrustworthy should consider URI Flags", and as far as I can tell from the attached changes it adds a URI_SAFE_TO_LOAD_IN_SECURE_CONTEXT flag that is then used in the isOriginPotentiallyTrustworthy implementation instead of the hard-coded protocol scheme list, which is exactly what we'd need to enable isSecureContext in custom protocol implementations.

😌 😀
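
Assuming the flag lands as described in that bug, enabling a secure context should then just be a matter of OR-ing it into the handler's protocolFlags; a sketch:

// Hedged sketch: the new flag from bug 1328695 added to the flag set the
// implementation already uses, assuming it is exposed on
// Ci.nsIProtocolHandler once the patch lands.
const flags =
  Ci.nsIProtocolHandler.URI_STD |
  Ci.nsIProtocolHandler.URI_INHERITS_SECURITY_CONTEXT |
  Ci.nsIProtocolHandler.URI_SAFE_TO_LOAD_IN_SECURE_CONTEXT;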


Gozala commented May 22, 2018

The infinite loading issue is finally resolved!


Gozala commented May 22, 2018

Unfortunately I discovered yet another issue; this time it's a claim that the connection is not secure:

(screenshot: "connection is not secure" warning)

I have also verified it's not related to isSecureContext, as the same error appears even with the system principal. I suspect it's related to the securityInfo field of nsIChannel.


Gozala commented May 23, 2018

@lidel @olizilla @mafintosh the "connection is not secure" message above raised some interesting questions, specifically:

  1. What do we want to display there for ipfs and dat URLs? If you interact with it on an https:// page you can drill down into more details. I assume in the case of IPFS it would be ensuring that the content hashes to the same hash as in the URL. In the case of Dat, do you validate the content against the same public key?

  2. At which point can we report connection security? I think the stream reports progress and status updates via nsIProgressEventSink, and currently a connection can have one of these nsISocketTransport statuses (inlined below for convenience):

    STATUS_RESOLVING      | 0x804b0003 | Transport is resolving the host. Usually a DNS lookup.
    STATUS_RESOLVED       | 0x804b000b | Transport has resolved the host.
    STATUS_CONNECTING_TO  | 0x804b0007 |
    STATUS_CONNECTED_TO   | 0x804b0004 |
    STATUS_SENDING_TO     | 0x804b0005 |
    STATUS_WAITING_FOR    | 0x804b000a |
    STATUS_RECEIVING_FROM | 0x804b0006 |

    There are even more states that can be reported via nsIWebProgressListener (not sure how yet). The question is what the protocol handler API should expose, and in which form. I could use some input on this from implementers.


pfrazee commented May 23, 2018

Dat does validate all of its content against the public key. We also only use DNS names which we can receive from authenticated sources (spec here).


Gozala commented May 24, 2018

@pfrazee maybe I did not describe what I meant properly. I assume there is some auditing happening that the content you get is authored by the owner of the private key; I imagine wanting to update the status of the stream when this validation is done.


pfrazee commented May 24, 2018

@Gozala oh you mean the timing? Yeah the data is verified when it is received, prior to it ever being used.

I suppose one of the tricks here is that "secure" has a different meaning than with https, since https is referring to the TLS session. If you're looking at just whether dat's data is verified, it always will be and invalid data is basically just ignored -- so arguably it's always secure? But there is channel encryption, and that has to be negotiated too.


Gozala commented May 24, 2018

@Gozala oh you mean the timing? Yeah the data is verified when it is received, prior to it ever being used.

I suppose one of the tricks here is that "secure" has a different meaning than with https, since https is referring to the TLS session.

Yeah, I meant this in a broader sense: what kind of security properties does a protocol need to communicate to the browser, and what API would make sense to expose to do so?

If you're looking at just whether dat's data is verified, it always will be and invalid data is basically just ignored -- so arguably it's always secure?

What if some data is invalid, shouldn't that be communicated in some way to the browser? What if you encounter alternate forks of the same archive? I imagine that is another thing worth communicating to the browser in some form.

But there is channel encryption, and that has to be negotiated too.

In more concrete terms, I'm thinking about how we should change the current draft of the protocol handler interface to allow a network protocol implementation to communicate status updates for a request to the browser, and how we make this not tied to an individual protocol like Dat or IPFS, as they might have different properties to communicate. What other properties does a protocol handler need to communicate to the browser? Examples that come to my mind:

  • Number of peers seeding, or being seeded to.
  • Even progress in terms of overall % doesn't necessarily make sense; maybe chunks make more sense?

I could really use some help in terms of API change suggestions.
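
One purely illustrative direction (not part of the current draft) would be to let the returned Response carry an optional stream of protocol-defined status events that the browser could surface however it sees fit:

// Hypothetical extension of the draft Response; the status field and the
// event shapes are invented for illustration and do not exist today.
browser.protocol.registerProtocol("dat", request => ({
  contentType: "text/html",
  content: readContent(request.url), // AsyncIterator<ArrayBuffer>, hypothetical helper
  status: (async function* () {
    yield { type: "resolving" };                     // looking for peers
    yield { type: "connected", peers: 4 };           // protocol-specific detail
    yield { type: "receiving", chunksVerified: 10 };
  })()
}));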


pfrazee commented May 24, 2018

What if some data is invalid, shouldn't that be communicated in some way to the browser?

Since any peer can show up and start giving bad data, I'm not sure I'd communicate that. It's not something the user needs to know (IMO).

What if you encounter alternate forks of the same archive? I imagine that is another thing worth communicating to the browser in some form.

That's something that might be worth communicating, because it currently means the Dat is corrupted and can't be recovered. We don't yet do that in Beaker though (one day at a time).

In more concrete terms, I'm thinking about how we should change the current draft of the protocol handler interface to allow a network protocol implementation to communicate status updates for a request to the browser.

Maybe a good issue for https://github.com/datprotocol/DEPs


lidel commented May 25, 2018

@Gozala to answer your two questions from #2 (comment):

  1. Things to show in the protocol popup
    Protocols such as IPFS rely on content-addressing, so if we were successful in receiving data, it means a chunk has been verified by the node before being passed to the browser.
    • For ipfs:// and other content-addressed protocols, putting a simple Content-addressed in green (or blue, to indicate different guarantees than HTTPS) would be a good default.
    • For mutable protocols such as ipns://, information about the real source could be displayed (e.g. the current /ipfs/ path behind the latest record published to IPNS).
    • For IPFS, the browser extension could provide a DAG/IPLD Explorer, to which a link could be put there as well.
    • If the additional info screen available after click-through could be a page similar to browser or page actions, we could put additional, fetched-on-demand diagnostics there, such as the number of known peers hosting the content, but I am not sure how useful such hard-to-access UI would be. I'd just go with the one sentence from the first bullet point.
  2. At which point we could report progress
    • For IPFS, the minimum set of states worth reporting as I see it is:
      1. request is being prepared
      2. request was sent to the local IPFS node
      3. waiting for a response from the node
      4. started receiving data
        • we could provide download progress here in the future, or if we provide the expected total size to the browser, it could be calculated on the fly
      5. all data was received
    • For immutable, content-addressed protocols like ipfs://, the moment to report connection security would be when we got some data, confirmed the hashes match, and started pushing data to the content iterator (STATUS_RECEIVING_FROM).
    • For "mutable" protocols such as ipns:// (which resolves to an immutable /ipfs/ path behind the scenes) it would be useful to have the ability to report STATUS_RESOLVING and STATUS_RESOLVED, but it is rather low priority, as the resolve happens inside the node and is not exposed to the outside yet.

Will think about it some more tho.
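
For what it's worth, these states line up loosely with the nsISocketTransport codes quoted earlier in the thread; a purely illustrative mapping (no such table exists in any implementation):

// Illustrative mapping of the proposed IPFS request states onto existing
// nsISocketTransport status codes; the names on the left are invented.
const IPFS_STATUS = {
  preparingRequest: Ci.nsISocketTransport.STATUS_CONNECTING_TO,
  sentToLocalNode:  Ci.nsISocketTransport.STATUS_SENDING_TO,
  awaitingResponse: Ci.nsISocketTransport.STATUS_WAITING_FOR,
  receivingData:    Ci.nsISocketTransport.STATUS_RECEIVING_FROM
};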


Gozala commented May 25, 2018

@lidel Thanks for the feedback. I suggest thinking about the browser as a node itself, rather than as something that talks to an IPFS node. If you angle things that way, I think communicating back any status / progress update would make sense. I also suspect that the existing status flags and security info in Gecko don't quite translate to IPFS, and I don't suggest we shoehorn them, but it is worth documenting what we'd like to be able to do. In terms of UI, I also agree that the current door-hanger makes little sense, but how we present that info, if at all, is a separate problem; ideally we still want to provide the browser with all the info as it arrives.

I also do understand that both IPFS and Dat audit data as it arrives; nevertheless, I imagine there could be an instance where a node receives some malicious data, and I imagine that is worth reporting to the browser. And if valid data can't be found, that's also worth reporting.

On a somewhat related note, does anything like HTTP status codes or headers make sense in either of these contexts?


pfrazee commented May 25, 2018

nevertheless, I imagine there could be an instance where a node receives some malicious data, and I imagine that is worth reporting to the browser. And if valid data can't be found, that's also worth reporting.

Open to debate, but IMO you only need to escalate in Dat if it's a corruption event. Misbehaving non-author peers will degrade performance, but that's all. If valid data can't be found, you just get a timeout (but keep searching).

On a somewhat related note, does anything like HTTP status codes or headers make sense in either of these contexts?

The way that I've approached this is by thinking of the Dat protocol handler like a server. It's another area that's open to debate but you can see how we've approached it here.


Gozala commented May 25, 2018

The way that I've approached this is by thinking of the Dat protocol handler like a server. It's another area that's open to debate but you can see how we've approached it here.

I wonder if that was somewhat driven by the Electron API itself? In Gecko, the protocol handler does not impose HTTP headers or status codes; that is part of the HTTP protocol handler implementation itself. If you approach it from this angle, you might start questioning whether replicating HTTP headers and status codes makes any sense, and if it does, wouldn't it make sense to do that at the protocol level rather than at the protocol handler level?

To be clear, this doesn't block us here, but as I face these issues it's helpful to discuss them with protocol authors.

Another thing I have been thinking about is that, unlike HTTP, in the Dat / IPFS space a completed request is not necessarily complete, in the sense that you might continue seeding as long as you have that resource loaded. In theory it might be useful to display graphs of activity (kind of like we see on servers). I wonder if it would make sense to let the protocol channel keep reporting status updates even after the content is loaded, or whether it makes sense to have a separate service that the browser UI can subscribe to for network activity updates on that resource.


pfrazee commented May 27, 2018

I wonder if that was somewhat driven by the Electron API itself?

Sure. We're basically using the headers and status codes as a way to configure the browser handling. It wouldn't have to be that way.


lidel commented May 28, 2018

On a somewhat related note, does anything like HTTP status codes or headers make sense in either of these contexts?

Main uses I see:

  • Status codes
    a way to signal a redirect to a different URI, or to return a clear error code when the hash/URI is invalid or network connectivity is down (e.g. no peers, or no peers with the data within a timeout window).

  • Headers
    a way to pass some additional metadata to the protocol handler and/or return it along with the response.
    It would enable us not only to have redirects, but also open the possibility of making things like XHR's .getAllResponseHeaders() work. Later in this comment I suggest things like passing additional metadata about the request, such as the original string.

IMO these are generic building blocks that could be useful for different protocols, even if we make them a lot less powerful than the HTTP ones. Headers could be optional, and a response without an explicitly defined Status could have an implicit OK.

An open question is if/how much of HTTP abstractions should be copied.

a) My initial intuition is that going with HTTP-like semantics (Headers with a mandatory Status Code following HTTP conventions: 2xx == OK, 3xx == Redirect, etc.) would simplify wiring new protocols into the browser (e.g. dev tools) and reduce cognitive load for developers that consume the API, like it did in Beaker. But I also understand it may not be possible due to constraints/complexity that I am not aware of.

b) It may also be better to keep the API small and provide a simple DSL/API for the mentioned uses, similar to redirectUrl from onBeforeSendHeaders.
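
To make option (a) concrete, a hypothetical HTTP-flavoured extension of the draft Response might look like this (statusCode and headers are invented fields, not part of the current implementation):

// Hypothetical redirect response; a missing statusCode would imply 200/OK.
async function* empty() {} // no body needed for a redirect

browser.protocol.registerProtocol("ipns", request => ({
  statusCode: 303,
  headers: { Location: "ipfs://<cid>" }, // placeholder target URI
  content: empty()
}));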


On a related note, I noticed some additional edge cases specific to ipfs://, but which may be worth mentioning here as additional data points:

  • contentType/contentEncoding
    In IPFS we do not store mime-types along with the data, so we don't know it before data starts arriving. Public HTTP gateways do mime-type sniffing, and we will have to do the same in the browser.
    Question: is it possible for Firefox to provide access to its own mime-type sniffer? Perhaps setting contentType to auto or null could enable the already existing heuristics described here?
    Reusing it would be more elegant than doing mime-sniffing in userland.

  • on-the-fly normalization of case-sensitive identifiers in ipfs://<cid>
    AFAIK the URI is force-lowercased before it is passed to the handler.
    Not sure if this is technically feasible, but it would be really useful if we could automatically detect legacy, case-sensitive hashes (CIDv0) and convert+redirect them to a case-insensitive encoding (CIDv1 in Base32) that can be safely used in ipfs://<cid> contexts.

    My initial thinking is that it may not be trivial, as it requires the browser to flex the rules a bit:

    1. Find a way to pass the original (case-sensitive) address string to the handler function (just like the current redirect-based one does). Right now the URI is forced to lowercase before it is passed to the handler, and we don't want to change how the URI object is created by the browser. Perhaps a Header with the original string could be added when the original and lowercased versions differ? It would be generic enough to enable other protocols to do their own checks/normalization. Not sure how realistic this is. Thoughts?
    2. convert the CID
    3. make it possible for the handler function to return a redirect to a different URI (or do everything in webRequest/onBeforeRequest or onBeforeSendHeaders).

    Without the mentioned normalization we will have to just display an error informing the user that CIDv0 is not allowed in ipfs:// and suggesting conversion to the case-insensitive CIDv1. It is not a deal-breaker for us, but if we could have transparent conversion to CIDv1 that would be a huge UX win for users.


flying-sheep commented May 28, 2018

Why copy anything from HTTP? Let's keep the API simple.

If someone wants to add headers and redirections to their protocol, let them parse them from the content themselves, right?


pfrazee commented May 28, 2018

I think it's a benefit to be able to use the features provided by HTTP headers. Content Security Policy and Caching immediately come to mind; I'm sure there will be more over time. If we say, "These new protocols are not HTTP so they shouldn't use HTTP codes & headers," I think we may find ourselves porting over the functionality repeatedly.

Can we generalize "HTTP" headers into "Request and Response Headers"? With that line of thinking, we're able to inherit all of the features and minimize the amount of novel work, and then we add new features to them.

There are definitely some differences in terms of potential response states, for instance:

  • Not finding any peers
  • Found a peer but didn't find the file
  • Found a peer, found the metadata for the file, but couldn't find the file in time

And so on. But my gut reaction is, we should adapt the HTTP behaviors unless it completely fails to map in some key fashion.


flying-sheep commented May 29, 2018

Maybe I’m missing something, but we want to support arbitrary protocols, right?

But if we add abstractions for HTTP-like headers, and a protocol sends data that looks like HTTP headers but isn't, it will be annoying or impossible to get the data out of the headers back into its original form in order to parse it properly. E.g. HTTP headers are just ASCII. What if some protocol sends:

Set: Ø

Hey!

This would parse to

Response {
  headers: {
    Set: <ASCIIDecodeError>,
  },
  content: AsyncIterator([
    ArrayBuffer('Hey!'),
  ]),
}

If we do something like that, the header part of the API should be an adapter or wrapper around the raw content API.
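
A sketch of that adapter idea, assuming headers are an optional ASCII block terminated by a blank line at the front of the raw stream, and that the whole block fits in the first chunk (all helper names invented):

// Try to split an HTTP-like header block off the raw content stream,
// falling back to treating everything as content. Works on byte offsets
// so binary bodies are never decoded and re-encoded.
async function* prepend(first, content) {
  if (first && first.byteLength > 0) yield first;
  for await (const chunk of content) yield chunk;
}

async function parseHeaders(content) {
  const { value, done } = await content.next();
  if (done) return { headers: {}, content: prepend(null, content) };
  const bytes = new Uint8Array(value);
  let split = -1; // byte offset of "\r\n\r\n", if any
  for (let i = 0; i + 3 < bytes.length; i++) {
    if (bytes[i] === 13 && bytes[i + 1] === 10 &&
        bytes[i + 2] === 13 && bytes[i + 3] === 10) { split = i; break; }
  }
  if (split < 0) return { headers: {}, content: prepend(value, content) };
  const headers = {};
  const head = new TextDecoder("ascii").decode(bytes.subarray(0, split));
  for (const line of head.split("\r\n")) {
    const colon = line.indexOf(":");
    if (colon > 0) headers[line.slice(0, colon).trim()] = line.slice(colon + 1).trim();
  }
  return { headers, content: prepend(bytes.slice(split + 4).buffer, content) };
}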

sammacbeth commented

It seems the use-cases for making HTTP headers part of this API are tied to wanting some aspects of the browser security model to seamlessly apply to new protocols. As we are loading these pages on a web browser, we expect the tooling to behave like the web platform does. That means CSP and cross-origin restrictions etc. However, these features are tightly coupled with HTTP, and rely on HTTP verbs and headers which may not have equivalents on different protocols.

If this is to be a header-less protocol handler, it may be useful to enable some configurability of the browser environment. The concept of origins may or may not be appropriate, depending on the protocol, and questions arise such as whether pages on custom protocols can then load third-party content from other protocols.


flying-sheep commented May 30, 2018

That’s connected to whether a protocol handles URLs or the more general URIs:

authority = [userinfo@]host[:port]
link = [scheme:][//authority][path][?query][#fragment]
URI  =  scheme: [//authority][path][?query][#fragment]
URL  =  scheme:  //authority [path][?query][#fragment]

Yes, myproto: is a valid URI. A URL is a URI with an authority. Links have all parts optional; e.g. a link starting with //authority is a protocol-relative link (PRL).

The protocol handler API should accept URIs, which means that the concept of origin (= authority) doesn’t necessarily apply to the things we handle.

Therefore my suggestion: give users a nice wrapper to create HTTP-like protocols with all the goodies we’re accustomed to, but leave the base API maximally powerful.


pfrazee commented Jun 1, 2018

My main interest is minimizing the amount of effort this takes. I'm not in favor of changing the definitions of origins (i.e. authority = origin) if it means a lot of work-arounds; there's no reason any of the new protocols need to change from the standard URL form.

My point is that the HTTP semantics are not hard to map to these protocols, and so I think we ought to be approaching this from the standpoint of asking "how do the request/response semantics actually need to change?" rather than assuming we need to start from a blank slate.

For Dat, we need mechanisms that are functionally equivalent to status codes and HTTP headers. It's also useful that fetch() works with Dat URLs just the same as HTTP.


Gozala commented Jun 7, 2018


The bug preventing secure context was fixed and is available in Nightly builds. I have also landed 5751026 to make use of it in our protocol API, and with that we have a secure context:

(screenshot: page loaded via the custom protocol reporting a secure context)


Gozala commented Jun 7, 2018

contentType/contentEncoding
In IPFS we do not store mime-types along with the data, so we don't know it before data starts arriving. Public HTTP gateways do mime-type sniffing, and we will have to do the same in the browser.
Question: is it possible for Firefox to provide access to its own mime-type sniffer? Perhaps setting contentType to auto or null could enable the already existing heuristics described here?
Reusing it would be more elegant than doing mime-sniffing in userland.

I believe omitting contentType / contentEncoding would do exactly that.
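
Under that assumption, an IPFS handler could simply leave contentType out and let Gecko's sniffer decide (fetchBlocks is a hypothetical helper):

// Sketch: no contentType on the response, on the assumption that Gecko
// then falls back to its built-in MIME-sniffing heuristics.
browser.protocol.registerProtocol("ipfs", request => ({
  content: fetchBlocks(request.url) // AsyncIterator<ArrayBuffer>, hypothetical
}));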


Gozala commented Jun 7, 2018

on-the-fly normalization of case-sensitive identifiers in ipfs://
AFAIK the URI is force-lowercased before it is passed to the handler.
Not sure if this is technically feasible, but it would be really useful if we could automatically detect legacy, case-sensitive hashes (CIDv0) and convert+redirect them to a case-insensitive encoding (CIDv1 in Base32) that can be safely used in ipfs:// contexts.

@lidel It would be challenging, but might be doable. I would encourage you to give it a shot. I personally would rather spend time on other issues, because:

  1. Even with redirects, I think the user experience is not going to be great.
  2. It seems like you're on board with switching to the case-insensitive encoding by default.
  3. The complexity added to support that would require justification if we were to land these APIs into Firefox.

That being said, here are some pointers that might provide the necessary context to try implementing this.

  • The nsIProtocolHandler used in the implementation is actually passed the URL string to be parsed in newURI; if you look at the logs you can see those coming in. The nsIStandardURLMutator used there is what causes it to be normalized into the case-insensitive form. Now, the problem is that newURI needs to return synchronously, but it runs in a different process from add-ons, so there is no way for us to call into a function that would do the transcoding.
  • The good news is you can technically return an nsIURI instance there, which does not require a lowercase hostname and can be done with nsISimpleURIMutator (example). The drawback, though, is that if we used them we would have content that either has a "null" origin, or the origin would be the full URI, which would break CSP.
  • What might work, though, is to use nsISimpleURIMutator in the newURI method, but then in newChannel2 return a channel that on asyncOpen would asynchronously redirect via asyncOnChannelRedirect. Truth is, I don't know how to do that exactly, but usually you can figure it out by searching for existing code, like this one.

One more thing to consider is that I do not know how to wrap all this into a nice API where users that don't care about such redirects won't end up with nsIURIs instead of nsIStandardURLs, or how to facilitate redirects without imposing a round-trip to the add-on each time.

My initial thinking is that it may not be trivial, as it requires the browser to flex the rules a bit:

  • Find a way to pass the original (case-sensitive) address string to the handler function (just like the current redirect-based one does). Right now the URI is forced to lowercase before it is passed to the handler, and we don't want to change how the URI object is created by the browser. Perhaps a Header with the original string could be added when the original and lowercased versions differ? It would be generic enough to enable other protocols to do their own checks/normalization. Not sure how realistic this is. Thoughts?

I think my comment above should clarify what the options and challenges are here. The only other thing I could add: there might be an option to return some bastardized nsIStandardURL from newURI that would retain the original spec, and then do the actual parse in newChannel2. That would allow passing the originalURL to the protocol handler and provide some way to do redirects.

  • convert the CID

I'm not entirely sure what you're suggesting here; if it's just switching to case-insensitive CIDs, 👍. Otherwise you'd need all the things I've described above. Also keep in mind that you have to choose between standard URLs, which have a notion of relative URLs and proper origin support but require a case-insensitive hostname, and URIs, which have neither but are case-sensitive.

  • make it possible for the handler function to return a redirect to a different URI (or do everything in webRequest/onBeforeRequest or onBeforeSendHeaders).

I'm not actually sure how webRequest works (like, when in the process it is triggered); if it can be used to do redirects, I would absolutely recommend going with that, as it would bypass all the complications I've described.

Without the mentioned normalization we will have to just display an error informing the user that CIDv0 is not allowed in ipfs:// and suggesting conversion to the case-insensitive CIDv1. It is not a deal-breaker for us, but if we could have transparent conversion to CIDv1 that would be a huge UX win for users.

I totally understand. How the migration costs compare to the implementation + added complexity costs, I don't know.

I'd suggest exploring whether the current behavior can be preserved by using nsISimpleURIMutator in newURI but then using nsIStandardURLMutator to produce the nsIStandardURL that will be used in the channel. If that works, it would allow us to pass the originalURL as part of the request, which in turn would allow redirecting.
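
A sketch of that suggested starting point, with the exact wiring treated as an assumption rather than tested code:

// Chrome-side newURI sketch: keep the original, case-sensitive spec by
// producing a simple URI instead of a standard (lowercasing) URL.
newURI(spec, charset, baseURI) {
  return Cc["@mozilla.org/network/simple-uri-mutator;1"]
    .createInstance(Ci.nsIURIMutator)
    .setSpec(spec)
    .finalize();
}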


Gozala commented Jun 7, 2018

I think it's a benefit to be able to use the features provided by HTTP headers. Content Security Policy and Caching immediately come to mind; I'm sure there will be more over time. If we say, "These new protocols are not HTTP so they shouldn't use HTTP codes & headers," I think we may find ourselves porting over the functionality repeatedly.

The problem is we'd have to port them for custom protocols anyway. On one hand it sure makes it easier for the ipfs, dat, and ssb folks not to think about that; on the other hand, if we do think about it and decide on a subset, it would make porting easier.

Can we generalize "HTTP" headers into "Request and Response Headers"? With that line of thinking, we're able to inherit all of the features and minimize the amount of novel work, and then we add new features to them.

The way Gecko is set up, I don't think it's possible. It's the individual protocol handler implementations that parse and interpret those into specific actions. So nsIHTTPProtocolHandler does that in Gecko, which means I'd need to replicate all that it does; that is not only challenging, but I'm almost certain it also makes use of some non-scriptable APIs, which means it would have to be done in Gecko and land in Firefox itself. That is why I would very much prefer to minimize the scope by considering what does and does not make sense. An additional benefit is that considering all this would be helpful for any standardization effort we might pursue.

Also, from what I can tell, Request Headers don't make any sense in this context, or am I missing something?


Gozala commented Jun 7, 2018

It seems the use-cases for making HTTP headers part of this API are tied to wanting some aspects of the browser security model to seamlessly apply to new protocols. As we are loading these pages on a web browser, we expect the tooling to behave like the web platform does. That means CSP and cross-origin restrictions etc. However, these features are tightly coupled with HTTP, and rely on HTTP verbs and headers which may not have equivalents on different protocols.

If this is to be a header-less protocol handler, it may be useful to enable some configurability of the browser environment. The concept of origins may or may not be appropriate, depending on the protocol, and questions arise such as whether pages on custom protocols can then load third-party content from other protocols.

Most of these are tied to origin and isSecureContext; the current implementation supports both. There are also CSP headers, but some support for those would be much easier to expose than supporting all of HTTP.

Cross-protocol interaction is another good topic to discuss, and I suspect Mozilla Security will have opinions in this regard too.


Gozala commented Jun 8, 2018

My main interest is minimizing the amount of effort this takes. I'm not in favor of changing the definitions of origins (ie authority = origin) if it means a lot of work-arounds; there's no reason any of the new protocols need to change from the standard URL form.

👍 I totally agree.

My point is that the HTTP semantics are not hard to map to these protocols, and so I think we ought to be approaching this from the standpoint of asking "how do the request/response semantics actually need to change?" rather than assuming we need to start from a blank slate.

I don't really suggest starting from a blank slate; rather, I'm trying to reduce the scope. Think of it not as reinventing status codes / headers, but rather as picking just the subset we need, as that would greatly simplify implementation efforts.

For Dat, we need mechanisms that are functionally equivalent to status codes and HTTP headers. It's also useful that fetch() works with Dat URLs just the same as HTTP.

Custom protocols in Gecko are more similar to file:/// than they are to http://; note that fetch and XHR do work in both, as do pretty much all other APIs. But there are no redirects or headers.

I think what you really want is a small subset of the features provided by the combination of HTTP status codes and headers. If we could figure out what that subset is, it would be a lot easier to add support for it.


pfrazee commented Jun 8, 2018

If it's not possible to reuse any existing HTTP handling code, then I can understand why my suggestion (to not reinvent) doesn't get us much.

Here's what we currently use from the HTTP semantics for dat in Beaker. I'll rate how important each is for us, but that's just to say whether we do/don't need the functionality; it could be provided by other mechanisms.

Requests:

  • Method: GET (usually) HEAD (rarely & not vital)
  • Range: vital
  • If-None-Match: nice to have (along with all other If-* headers)
  • Accept: don't use this yet, could be nice to have (along with all Accept-*)
  • Cache-Control: don't use this yet, could be nice to have

Responses:

  • Codes: 200, 206 (range reply), 204 (HEAD reply), 303 (redirects), 304 (caching response), 404, 405 (bad method), 500 (error with the dataset), 504 (timeout searching for dat on network)
  • Content-Type: vital
  • Content-Disposition: pretty useful (download as .zip)
  • Content-Security-Policy: vital
  • Access-Control-Allow-Origin: probably don't need it? Not sure, we set it to '*'
  • Location: vital
  • Accept-Ranges: vital
  • Content-Range: vital
  • Content-Length: vital
  • Cache-Control: useful not vital
  • ETag: useful not vital

Hope that's helpful


Gozala commented Jun 8, 2018

@pfrazee thanks for this list, it's very helpful. Just one follow-up question: in which context are these request methods / headers used? Is it just to support the XHR / fetch APIs, or are there other use cases? And if it's used by XHR / fetch, how is that preferable over using the DatArchive API?


pfrazee commented Jun 8, 2018

Mainly they are used for browsing - for visiting pages and for embedded resources. (The range requests come in for video/audio, for instance.) Generally the DatArchive API is the ideal choice, but XHR / fetch is useful when you want to be transport-agnostic. A dat url works just as well as an http url in that API.


Gozala commented Jun 19, 2018

I was attending the Mozilla All Hands meeting last week, and that gave me a chance to talk to Necko (Gecko networking stack) engineers about some of the still-open issues regarding custom protocol handlers. More coordination and research + work will be required to address the remaining limitations, but here are some takeaways:

  1. Async redirects - It turns out we could do async redirects, which should enable @lidel to redirect requests from the old encoding to a new encoding. Specifically, it seems that a channel could invoke asyncOnChannelRedirect on the corresponding nsIChannelEventSink; unfortunately I do not know how to get a reference to it, but I hope to find that out soon from the Necko engineer who's helping me out with this.

  2. Response (HTTP) status codes - It turns out nsIHttpChannel reports these by passing an nsresult as the statusCode parameter to onStopRequest of the nsIStreamListener which was passed to the channel's asyncOpen. I've asked the Necko team to point me to the source that does the mapping between HTTP status codes and nsresult codes, and once I have that it should be fairly easy to support status codes. Please note that very likely things like redirects, caching, and the like won't just magically start doing the right thing; most likely those don't have a corresponding nsresult, and it's the protocol handler implementation that does the right thing there instead, which is to say caching is on the protocol implementer. As for redirects, I'm inclined to just let the protocol handler return a response with a different URL and data stream than requested, as that would reduce messaging overhead, but we'll see once there's more clarity on 1. Async redirects. (A sketch of the status-code path follows after this list.)

  3. Request Headers - Things sadly seem far more complicated here, but it is also clear that this is really important for proper support of video / audio tags, which just assume an HTTP backend. It was suggested to me that we could do the following:

    • Expose plain data from the request so that the protocol handler can parse HTTP headers or an alternative encoding of the metadata.
    • Expose the existing HTTP parser so it could be reused. There was quite a reluctance on this, though, for reasons I did not fully understand (part of it was that it's written in C++ and could not be easily extracted for parsing-only use). It was highly recommended to instead just support a more strictly encoded subset of the headers, as anything sent from video / audio tags would fit.

    Unfortunately it seems that supporting request headers would require C++ work and landing the corresponding changes into Firefox, which is to say it's likely going to take a while and will likely come after we have illustrated that people are actually building on this work.

  4. Response Headers - Things don't seem all that great here either. From what I understood, there is logic that branches based on whether url.scheme is http(s), and if so, that is where the channel is cast to nsIHttpChannel. It was suggested to add another interface, like nsIHttpHeadersChannel, that would be able to provide HTTP headers, after which we can update the earlier-mentioned logic branch to support headers on non-http(s) protocols as well. As with Request Headers, this seems to require C++ changes, which makes it more difficult and requires landing it in Firefox, although the Necko team seemed 👍 on accepting such changes.

Sadly, I don't have the C++ experience to tackle Request / Response headers. If anyone interested in this work does, please let me know and I'll be happy to get you started and connect you to people who can mentor you through the process. Otherwise I'll probably end up tackling this myself, but only after everything else here is solid.
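
For item 2 above, the shape of the status-code path as described would be roughly the following (the chosen nsresult is an illustrative stand-in for whatever a given HTTP status ends up mapping to):

// Sketch: a custom channel ending a request and reporting failure through
// the nsresult passed to the listener's onStopRequest.
finish(listener, context, ok) {
  const status = ok ? Cr.NS_OK : Cr.NS_ERROR_NET_TIMEOUT; // illustrative mapping
  listener.onStopRequest(this, context, status);
}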


Gozala commented Jul 23, 2018

Moved out remaining items into individual issues. Closing this one.
