respect the msize value for read/write #27
Conversation
Fix read/write to take msize into account (and chunking data if necessary) Signed-off-by: Simon Ferquel <[email protected]>
Current coverage is 15.52% (diff: 13.33%)

@@            master     #27    diff @@
==========================================
  Files           13      15     +2
  Lines         1128    1179    +51
  Methods          0       0
  Messages         0       0
  Branches         0       0
==========================================
+ Hits           180     183     +3
- Misses         900     948    +48
  Partials        48      48
ping @stevvooe :-)
Msize should be handled in the channel, not the client.
@@ -38,7 +38,8 @@ func clientnegotiate(ctx context.Context, ch Channel, version string) (string, e
 		return "", fmt.Errorf("unsupported server version: %v", version)
 	}

-	if int(v.MSize) > ch.MSize() {
+	if int(v.MSize) < ch.MSize() {
Why reverse this condition?
The condition was wrong: the client and server negotiate to determine which has the smallest msize, and agree to stick with that.
The client already sent their size. This scales down the local msize if the server responds with a smaller value. The protocol works like this:
- Client sends msize.
- Server responds with msize.
- If v.Msize > msize, use the server's msize.

At least, this is how I understand it from studying 9p. Do you have documentation countering this? What does plan9 do?
Also, please move this to a separate PR.
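Both sides of this thread seem to converge on the same rule: after the Tversion/Rversion exchange, the effective msize is the smaller of the two offers. A minimal sketch of that rule, with `negotiateMSize` as a hypothetical helper name (the actual package keeps this logic inline in `clientnegotiate`):

```go
package main

import "fmt"

// negotiateMSize returns the msize both sides should use after version
// negotiation: the server may only answer with a value at or below what
// the client offered, so the effective msize is the smaller of the two.
func negotiateMSize(local, server int) int {
	if server < local {
		return server
	}
	return local
}

func main() {
	fmt.Println(negotiateMSize(65536, 8192)) // server is smaller: scale down to 8192
	fmt.Println(negotiateMSize(8192, 65536)) // client is smaller: keep 8192
}
```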
if !ok {
	return 0, ErrUnexpectedMsg

// size[4] Rread tag[2] count[4] data[count]
const rreadMessageEnvelopeSize int = 4 + 1 + 2 + 4
This calculation should be a part of the message type. Please modify this correctly in codec.
maxChunkSize := c.msize - rreadMessageEnvelopeSize
totalRead := 0

for {
This breaks the abstraction: there should only be a single message sent per method call. You can create a helper, similar to io.ReadFull, to handle chunking.
Signed-off-by: Simon Ferquel <[email protected]>
fcall.Message = MessageTread{Offset: m.Offset, Fid: m.Fid, Count: m.Count - uint32(overflow)}
	}
}

p, err := ch.codec.Marshal(fcall)
Should this sizing be represented by the codec? Just test len(p) against the returned buffer. We can add a Size method to codec to avoid alloc.
}

// ReadFull reads the whole file and returns it in a slice
func ReadFull(s Session, ctx context.Context, fid Fid, offset int64) (p []byte, err error) {
Go back and look at io.ReadFull. It takes a buffer. This is going to cause unnecessary allocs.
After some thought here, it would be much better to implement an io.Reader or io.ReadSeeker that tracks offset and fid. This way, the io utility functions can be used composably. The signature would be something like this:

NewReadSeeker(ctx context.Context, s Session, fid Fid) (io.ReadSeeker, error)

This should open up correct buffering.
// ReadFull reads the whole file and returns it in a slice
func ReadFull(s Session, ctx context.Context, fid Fid, offset int64) (p []byte, err error) {
	var buf [4096]byte
Right here is the problem: this fixes the buffer size and removes control from the caller.
@@ -31,3 +31,30 @@ type Session interface {
 	// session implementation.
 	Version() (msize int, version string)
 }

+// WriteFull writes the whole p slice to the specified fid
+func WriteFull(s Session, ctx context.Context, fid Fid, p []byte, offset int64) error {
Please order the context argument correctly.
Place these in a separate file. They are just methods to use with session. See readdir.go for an example.
Signed-off-by: Simon Ferquel <[email protected]>
…ayer + expose / read write operations as standard io.Reader|Writer interfaces Signed-off-by: Simon Ferquel <[email protected]>
Signed-off-by: Simon Ferquel <[email protected]>
Signed-off-by: Simon Ferquel <[email protected]>
Signed-off-by: Simon Ferquel <[email protected]>
Should be ready to merge. I have successfully tested it in the context of Docker for Windows / Mac.
This also includes a change to the package signature (newly exposed NewFileReader / Writer functions, and the Codec newly exposed from the session, so that 9pr can be reliable).
This way of dealing with the problem is too invasive; prefer #29.
There was a typo in the msize negotiation code, and for large reads/writes the msize value was not taken into account: if the size of the data to write plus the size of the message header exceeds msize, we must chunk the data.
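The chunking rule described above can be sketched on the write side. The 23-byte figure assumes the standard 9P2000 Twrite layout (size[4] type[1] tag[2] fid[4] offset[8] count[4]); `chunkSizes` is an illustrative helper, not the package's API:

```go
package main

import "fmt"

// twriteOverhead is the fixed Twrite header under the standard 9P2000
// layout: size[4] type[1] tag[2] fid[4] offset[8] count[4] = 23 bytes.
const twriteOverhead = 4 + 1 + 2 + 4 + 8 + 4

// chunkSizes shows how a write of n bytes must be split so that each
// message (header + data) stays within the negotiated msize.
func chunkSizes(n, msize int) []int {
	max := msize - twriteOverhead
	var out []int
	for n > 0 {
		c := n
		if c > max {
			c = max
		}
		out = append(out, c)
		n -= c
	}
	return out
}

func main() {
	// msize 1024 leaves 1001 data bytes per message; 2500 bytes need 3 writes.
	fmt.Println(chunkSizes(2500, 1024))
}
```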