Issue 1 convert to polyfill #12

Open

piranna wants to merge 9 commits into master

Conversation

@piranna commented May 2, 2013

It's not a complete redesign, but only some tune-ups to help compatibility and improve my DataChannel-polyfill. All the management needed to work as a polyfill is there (ShareIt-project/DataChannel-polyfill@bf63d0d), although it would be interesting to integrate it here and make 'reliable' an independent polyfill (tell me if you like the idea and I'll do it; in fact, the code would be cleaner in both projects...).

The major downside is that the 'onmessage' method (now an EventListener) receives the message inside a 'data' attribute ({data: }) instead of it being served directly () as the specification says.
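
A quick illustration of the two calling conventions being contrasted (a sketch; 'channel' stands for whatever object the polyfill hands back):

// Direct delivery, as described above: the handler receives the payload itself.
channel.onmessage = function (message) {
  console.log(message);
};

// What the polyfill currently does instead: the payload arrives wrapped in a 'data' attribute.
channel.onmessage = function (event) {
  console.log(event.data);
};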

@michelle (Owner) commented May 2, 2013

A lot of good stuff here. I would prefer that Reliable inherit from PeerJS's ported version of the node.js EventEmitter (which is already a dependency of PeerJS) rather than have its own Event system.
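
For reference, a minimal sketch of what that could look like (assuming the ported EventEmitter is in scope as 'EventEmitter' with the usual node.js on/emit API; the actual PeerJS helper may differ):

function Reliable(dc) {
  EventEmitter.call(this);  // initialise the inherited emitter state
  this._dc = dc;            // the underlying DataChannel being made reliable
}

// node.js-style prototypal inheritance instead of a custom Event system
Reliable.prototype = Object.create(EventEmitter.prototype);
Reliable.prototype.constructor = Reliable;

// Consumers would then subscribe through the EventEmitter API, e.g.:
// reliable.on('message', function (msg) { /* complete, reassembled message */ });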

@michelle (Owner) commented May 2, 2013

Also the fix for #11 looks like it will also improve performance and close #10. I never looked too deeply into it, so it's great that it's such a simple fix. With this the exponentially decaying sliding window currently used to ensure that images get through at all can also be removed and transfer times will be faster in general.

@piranna (Author) commented May 2, 2013

> A lot of good stuff here.

Thanks :-)

> I would prefer that Reliable inherit from PeerJS's ported version of node.js EventEmitter (which is already a dependency of PeerJS) rather than have its own Event system.

It makes sense... the only thing is that it has its own custom EventTarget attributes because it masks the message event and forwards the other ones to the native DataChannel. The best approach would be to take the original, native DataChannel object and add the needed functionality (that's what a polyfill does, by the way... :-P) instead of being a wrapper class like it is now. I didn't want to do it that way because it would completely break your code (it would be almost a complete rewrite) and backward compatibility, but I think if we take this path it will be easy to adapt it to use the PeerJS EventEmitter class. What do you think?

"Si quieres viajar alrededor del mundo y ser invitado a hablar en un monton
de sitios diferentes, simplemente escribe un sistema operativo Unix."
– Linus Tordvals, creador del sistema operativo Linux
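
A rough sketch of the "take the native DataChannel object and add the needed functionality" idea above (the helper and its internals are made up for illustration, not the real Reliable or DataChannel-polyfill code):

var MTU = 500;  // the same safe default discussed later in this thread

function polyfillReliable(channel) {
  var nativeSend = channel.send.bind(channel);
  var incoming = '';
  var userHandler = null;

  // Mask 'onmessage' so raw chunks never reach the application; only complete messages do.
  Object.defineProperty(channel, 'onmessage', {
    get: function () { return userHandler; },
    set: function (handler) { userHandler = handler; }
  });

  // Outgoing: split string messages into MTU-sized pieces, flagging the last one.
  channel.send = function (message) {
    for (var i = 0; i < message.length; i += MTU) {
      nativeSend(JSON.stringify({
        last: i + MTU >= message.length,
        piece: message.slice(i, i + MTU)
      }));
    }
  };

  // Incoming: reassemble the pieces and only then notify the application.
  channel.addEventListener('message', function (event) {
    var packet = JSON.parse(event.data);
    incoming += packet.piece;
    if (packet.last) {
      var complete = incoming;
      incoming = '';
      if (userHandler) userHandler({data: complete});
    }
  });

  return channel;  // the application keeps the very same object createDataChannel() returned
}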

@piranna (Author) commented May 2, 2013

> Also the fix for #11 (https://github.com/michellebu/reliable/issues/11) looks like it will also improve performance and close #10. I never looked too deeply into it, so it's great that it's such a simple fix.

Well, I find the fix for #11 almost a hack, but it seems it does the job.

> With this the exponentially decaying sliding window currently used to ensure that images get through at all can also be removed and transfer times will be faster in general.

Please yes! It's completely annoying how slow it is now! I don't know if it's related to the reliable library or to Chrome's implementation of DataChannels, but it's about 10x slower than my WebSockets-based polyfill, or more... :-/

"Si quieres viajar alrededor del mundo y ser invitado a hablar en un monton
de sitios diferentes, simplemente escribe un sistema operativo Unix."
– Linus Tordvals, creador del sistema operativo Linux

@michelle (Owner) commented May 3, 2013

On further inspection the slowness is not so simple. Will have to rewrite some stuff as mentioned in #10. Whether this is worth the work or not is questionable though, because Chrome is making fast progress with reliable/binary: https://code.google.com/p/webrtc/issues/detail?id=1408

@piranna (Author) commented May 3, 2013

I think so. The only (personal) reasons I see to work on this are to add backward compatibility, so my application (and my polyfill) could work since Chrome v21, and to have a nice demo at the final of the championship... I totally agree that if Chrome implements it soon it isn't worth the effort, so give it a low priority.


@michelle (Owner) commented May 3, 2013

Just fixed a bunch of stuff. Reliable is now pretty fast, as seen on chat.html.

@piranna (Author) commented May 3, 2013

I'll take a look.

Conflicts:
	dist/reliable.js
	dist/reliable.min.js
	lib/reliable.js
@piranna (Author) commented May 3, 2013

I have merged your improvements with my branch and yes, it seems to be somewhat faster and more responsive, thanks! :-D "Big files" (more than 30 KB) take a desperately long time to transfer, but smaller ones are almost instant :-)

I have been checking the differences (ShareIt-project/reliable@master...issue_1__Convert_to_polyfill) and it seems that the main points are sending using UTF-8, the events-based interface, and the reliable attribute. The first two are more debatable, but the last one could be easily integrated, couldn't it?

2013/5/3 [email protected] [email protected]

I'll take a look.
El 03/05/2013 07:03, "Michelle Bu" [email protected] escribió:

Just fixed a bunch of stuff. Reliable is now pretty fast, as seen on

chat.html.


Reply to this email directly or view it on GitHubhttps://github.com//pull/12#issuecomment-17378704
.

"Si quieres viajar alrededor del mundo y ser invitado a hablar en un monton
de sitios diferentes, simplemente escribe un sistema operativo Unix."
– Linus Tordvals, creador del sistema operativo Linux

@michelle (Owner) commented May 3, 2013

Mmm I haven't looked at your code, but are you using the higherBandwidthSDP hack?

@michelle (Owner) commented May 3, 2013

For reference, I can do ~1.6MB in 6 seconds and about 1 second for ~50KB files.

@piranna (Author) commented May 3, 2013

Yes, I'm using the higherBandwidthSDP inside my DataChannel-polyfill. You can take a look at https://github.com/piranna/DataChannel-polyfill/blob/chrome_canary/datachannel.js#L448:

// Only patch when there is a native createDataChannel() that lacks reliable mode
// and the Reliable shim is available.
if(createDataChannel && !supportReliable && Reliable)
{
  var createOffer  = RTCPeerConnection.prototype.createOffer;
  var createAnswer = RTCPeerConnection.prototype.createAnswer;

  // Wrap createOffer() so every generated offer carries the bandwidth-boosted SDP.
  RTCPeerConnection.prototype.createOffer = function(successCallback,
                                                     failureCallback,
                                                     constraints)
  {
    createOffer.call(this, function(offer)
    {
      offer.sdp = Reliable.higherBandwidthSDP(offer.sdp);
      successCallback(offer);
    },
    failureCallback, constraints);
  };

  // Same treatment for createAnswer().
  RTCPeerConnection.prototype.createAnswer = function(successCallback,
                                                      failureCallback,
                                                      constraints)
  {
    createAnswer.call(this, function(answer)
    {
      answer.sdp = Reliable.higherBandwidthSDP(answer.sdp);
      successCallback(answer);
    },
    failureCallback, constraints);
  };
}


@piranna (Author) commented May 3, 2013

I'm doing the tests over two tabs on the same Chrome v28 instance, and my internet connection is 10 Mbps down / 1 Mbps up (and 25€ + taxes... yes, internet here in Spain sucks and it's really expensive... ¬¬)

2013/5/4 [email protected] [email protected]

Yes, I'm using the higherBandwidthSDP inside my DataChannel-polyfill. You
can take a look at
https://github.com/piranna/DataChannel-polyfill/blob/chrome_canary/datachannel.js#L448
:

if(createDataChannel && !supportReliable && Reliable)
{
var createOffer = RTCPeerConnection.prototype.createOffer;
var createAnswer = RTCPeerConnection.prototype.createAnswer;

RTCPeerConnection.prototype.createOffer = function(successCallback,
                                                   failureCallback,
                                                   constraints)
{
  createOffer.call(this, function(offer)
  {
    offer.sdp = Reliable.higherBandwidthSDP(offer.sdp);
    successCallback(offer)
  },
  failureCallback, constraints)
}

RTCPeerConnection.prototype.createAnswer = function(successCallback,
                                                    failureCallback,
                                                    constraints)
{
  createAnswer.call(this, function(answer)
  {
    answer.sdp = Reliable.higherBandwidthSDP(answer.sdp);
    successCallback(answer)
  },
  failureCallback, constraints)
}

}

2013/5/4 Michelle Bu [email protected]

For reference, I can do ~1.6MB in 6 seconds and about 1 second for ~50KB
files.


Reply to this email directly or view it on GitHubhttps://github.com//pull/12#issuecomment-17424082
.

"Si quieres viajar alrededor del mundo y ser invitado a hablar en un
monton de sitios diferentes, simplemente escribe un sistema operativo Unix."
– Linus Tordvals, creador del sistema operativo Linux

"Si quieres viajar alrededor del mundo y ser invitado a hablar en un monton
de sitios diferentes, simplemente escribe un sistema operativo Unix."
– Linus Tordvals, creador del sistema operativo Linux

@piranna (Author) commented May 4, 2013

I just remembered that my code splits files into chunks of 64 KB due to legacy code, so in fact the reliable library is sending "files" that big. Maybe this is the reason why it is so slow?

I have been thinking for some time about reducing the chunk size to just 1 KB, which is the size of the standard chunk in a Tiger tree hash (TTH) and is also used to identify and check them, but I don't know if the 1 KB of data + Tiger chunk hash + chunk ID + Tiger TTH file hash would all fit inside a DataChannel packet (ideally chunks would then be sent faster, in just one packet, and wouldn't need reliable transfer at all, just something UDP-like... :-) ).

Also, I'm now a bit reluctant to do it, since some people told me that my application is somewhat similar to Gnutella and I was thinking about adapting the protocol to be compatible with it (so the user base is greater :-D ), but then I wouldn't be able to use TTH... :-( The other option is to forget about compatibility and try to keep designing a protocol that best fits web technologies... What do you think?

@michelle (Owner) commented May 4, 2013

I'm not sure I understand--Reliable chunks files for you into 500 byte pieces.

@piranna (Author) commented May 4, 2013

I know that Reliable has an MTU of 500 bytes to be sure that each chunk is sent inside just one Ethernet packet and doesn't get fragmented, but on top of it, and independently, I'm splitting my files into chunks of 64 KB + metadata without taking the MTU into consideration, since it should be transparent to the application. Maybe this is the reason why it's being so slow?

Also, I was thinking about lowering my chunk size from 64 KB + metadata to 1 KB + metadata, so it could fit inside the theoretical 1500-byte limit of Ethernet datagrams, maybe also allowing the chunks to be transferred non-reliably, UDP style. Would this make sense or improve the transfer speed? What's the current DataChannel datagram size limit? I remember reading something about 1070 bytes... what's the reason for that number? Has it been increased? What relationship does it have with Reliable's 500-byte MTU?

(Yes, I'm somewhat illiterate in network programming on these topics, sorry... :-P)
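
For concreteness, a quick count of what a 64 KB application chunk costs with the current MTU (a sketch; per-packet header overhead is ignored):

var APP_CHUNK = 64 * 1024;  // 64 KB application-level chunk
var MTU = 500;              // Reliable's default packet payload size

console.log(Math.ceil(APP_CHUNK / MTU));  // 132 Reliable packets per application chunk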

@piranna (Author) commented May 5, 2013

On http://stackoverflow.com/questions/14315556/send-arraybuffer-over-datachannel-dom-exception-12#comment19890381_14315556 it's said they managed to send data up to 1100 bytes per packet. Do you think it's feasible to raise the reliable library's _mtu attribute from 500 bytes? What problems would it have? According to my calculations, with MessagePack and Tiger TTH I would need just 1087 bytes to be able to send 1 KB chunks with the file TTH and chunk ID, and also have enough space to add the chunk Tiger hash as a checksum, and it would go down to 1080 using offset-based fields... :-)
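
A back-of-the-envelope check of those numbers (only the 1 KB chunk size and the 24-byte Tiger digest size are known here; the split of the remaining overhead is an assumption chosen to match the 1087-byte total above):

var CHUNK_DATA   = 1024;  // 1 KB of file data
var TIGER_DIGEST = 24;    // Tiger produces 192-bit (24-byte) digests
var OVERHEAD     = 15;    // assumed chunk ID + MessagePack framing

console.log(CHUNK_DATA + 2 * TIGER_DIGEST + OVERHEAD);  // 1087 bytes, under the ~1100-byte packet size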

@michelle (Owner) commented May 5, 2013

I had it at 500 to be safe. Yes, you can simply change the MTU to mess around with it :). Keep in mind that the chunk # will take space too.

Michelle
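
A minimal sketch of the experiment being suggested (assuming the MTU lives in the per-instance '_mtu' property mentioned above; the exact property name and constructor signature may differ):

var reliable = new Reliable(dataChannel);  // dataChannel: a channel from createDataChannel()
reliable._mtu = 1000;  // raise from the safe 500-byte default, leaving headroom under the
                       // ~1100-byte packet limit for the chunk # and other framing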


@piranna (Author) commented May 5, 2013

I know, and I have taken it into account ;-) I'll try playing with the MTU, but I think it would work. Now the problem would be the files list, which would be really big and can't be split easily. Anyway, I'll talk tomorrow with my teacher :-)

