This repository has been archived by the owner on Mar 11, 2020. It is now read-only.

respect the msize value for read/write #27

Conversation

simonferquel
Contributor

There was a typo in the msize negotiation code, and for large reads/writes the msize value was not taken into account (if the size of the data to write plus the size of the message header exceeds msize, we must chunk the data).

Fix read/write to take msize into account (and chunking data if necessary)

Signed-off-by: Simon Ferquel <[email protected]>
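The chunking rule from the description (each payload must fit in msize minus the fixed message envelope) can be sketched as follows; `chunks` and its parameters are illustrative names, not the PR's actual code:

```go
package main

import "fmt"

// chunks splits data into pieces no larger than msize minus the
// fixed message-envelope overhead, as the PR description suggests.
func chunks(data []byte, msize, envelope int) [][]byte {
	max := msize - envelope
	var out [][]byte
	for len(data) > 0 {
		n := len(data)
		if n > max {
			n = max
		}
		out = append(out, data[:n])
		data = data[n:]
	}
	return out
}

func main() {
	// With msize 24 and an 11-byte envelope, 30 bytes of data
	// must be split into chunks of at most 13 bytes each.
	cs := chunks(make([]byte, 30), 24, 11)
	fmt.Println(len(cs), len(cs[0]), len(cs[1]), len(cs[2])) // 3 13 13 4
}
```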
@codecov-io

codecov-io commented Sep 30, 2016

Current coverage is 15.52% (diff: 13.33%)

Merging #27 into master will decrease coverage by 0.43%

@@             master        #27   diff @@
==========================================
  Files            13         15     +2   
  Lines          1128       1179    +51   
  Methods           0          0          
  Messages          0          0          
  Branches          0          0          
==========================================
+ Hits            180        183     +3   
- Misses          900        948    +48   
  Partials         48         48          

Powered by Codecov. Last update 5a23906...9fd7daa

@dgageot

dgageot commented Sep 30, 2016

ping @stevvooe :-)

Contributor

@stevvooe stevvooe left a comment


Msize should be handled in the channel, not the client.

@@ -38,7 +38,8 @@ func clientnegotiate(ctx context.Context, ch Channel, version string) (string, e
return "", fmt.Errorf("unsupported server version: %v", version)
}

if int(v.MSize) > ch.MSize() {
if int(v.MSize) < ch.MSize() {
Contributor


Why reverse this condition?

Contributor Author


The condition was wrong: client and server negotiate to determine which has the smallest msize, and agree to stick with that.

Contributor


The client already sent its size. This scales down the local msize if the server responds with a smaller value. The protocol works like this:

  1. Client sends msize.
  2. Server responds with msize.
  3. If v.Msize < msize, use the server's msize.

At least, this is how I understand it from studying 9p. Do you have documentation countering this? What does Plan 9 do?
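The negotiation being discussed amounts to both sides adopting the smaller of the two advertised sizes. A minimal sketch (hypothetical helper, not the library's actual code):

```go
package main

import "fmt"

// negotiateMSize picks the effective msize after the Tversion/Rversion
// exchange: both sides must use the smaller of the two advertised
// values. (Sketch only; the real logic lives in clientnegotiate.)
func negotiateMSize(local, server int) int {
	if server < local {
		return server
	}
	return local
}

func main() {
	fmt.Println(negotiateMSize(65536, 8192)) // server smaller: scale down
	fmt.Println(negotiateMSize(8192, 65536)) // server larger: keep local
}
```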

Contributor


Also, please move this to a separate PR.

if !ok {
return 0, ErrUnexpectedMsg
// size[4] Rread tag[2] count[4] data[count]
const rreadMessageEnvelopeSize int = 4 + 1 + 2 + 4
Contributor


This calculation should be a part of the message type. Please modify this correctly in codec.

maxChunkSize := c.msize - rreadMessageEnvelopeSize
totalRead := 0

for {
Contributor


This breaks the abstraction: there should only be a single message sent per method call.

You can create a helper, similar to io.ReadFull, to handle chunking.

@simonferquel simonferquel changed the title Fix msize negociation and respect the msize value for read/write [DO NOT MERGE / WIP] Fix msize negociation and respect the msize value for read/write Oct 3, 2016
fcall.Message = MessageTread{Offset: m.Offset, Fid: m.Fid, Count: m.Count - uint32(overflow)}
}
}

p, err := ch.codec.Marshal(fcall)
Contributor


Should this sizing be represented by the codec? Just test len(p) against the returned buffer. We can add a Size method to codec to avoid alloc.
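One way to model the suggested Size method on the codec; the interface and arithmetic here are purely illustrative of the alloc-free size check, not the library's actual codec:

```go
package main

import "fmt"

// sizer models the proposed addition: a codec that can report a
// message's marshaled size without allocating (hypothetical interface).
type sizer interface {
	Size(msg interface{}) int
}

type codec struct{}

// Size here just models "envelope + payload length" for a byte payload.
func (codec) Size(msg interface{}) int {
	const envelope = 4 + 1 + 2 + 4 // size[4] type[1] tag[2] count[4]
	if p, ok := msg.([]byte); ok {
		return envelope + len(p)
	}
	return envelope
}

func main() {
	var c sizer = codec{}
	fmt.Println(c.Size([]byte("hello"))) // 11-byte envelope + 5 bytes = 16
}
```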

}

// ReadFull reads the whole file and returns it in a slice
func ReadFull(s Session, ctx context.Context, fid Fid, offset int64) (p []byte, err error) {
Contributor


Go back and look at io.ReadFull. It takes a buffer. This is going to cause unnecessary allocs.

Contributor


After some thought here, it would be much better to implement an io.Reader or io.ReadSeeker that tracks the offset and fid. This way, the io utility functions can be used composably. The signature would be something like this:

NewReadSeeker(ctx context.Context, s Session, fid Fid) (io.ReadSeeker, error)

This should open up correct buffering.


// ReadFull reads the whole file and returns it in a slice
func ReadFull(s Session, ctx context.Context, fid Fid, offset int64) (p []byte, err error) {
var buf [4096]byte
Contributor


Right here is the problem: this fixes the buffer size and removes control from the caller.

@@ -31,3 +31,30 @@ type Session interface {
// session implementation.
Version() (msize int, version string)
}

// WriteFull writes the whole p slice to the specified fid
func WriteFull(s Session, ctx context.Context, fid Fid, p []byte, offset int64) error {
Contributor


Please order the context argument correctly.

@@ -31,3 +31,30 @@ type Session interface {
// session implementation.
Version() (msize int, version string)
}

// WriteFull writes the whole p slice to the specified fid
func WriteFull(s Session, ctx context.Context, fid Fid, p []byte, offset int64) error {
Contributor


Place these in a separate file. They are just methods to use with session. See readdir.go for an example.


…ayer

+ expose / read write operations as standard io.Reader|Writer interfaces

Signed-off-by: Simon Ferquel <[email protected]>
@simonferquel simonferquel changed the title [DO NOT MERGE / WIP] Fix msize negociation and respect the msize value for read/write respect the msize value for read/write Oct 4, 2016
Signed-off-by: Simon Ferquel <[email protected]>
Signed-off-by: Simon Ferquel <[email protected]>
Signed-off-by: Simon Ferquel <[email protected]>
@simonferquel
Contributor Author

Should be ready to merge. I have successfully tested it in the context of Docker for Windows / Mac.
Some structural changes have been made:

  • The codec now knows the maximum size it can use to marshal messages
  • This required some plumbing for tests and 9pr to build and run
  • High-level clients should prefer the new NewFileReader / NewFileWriter, which implement io.Reader / io.Writer behaviors for accessing data streams

This also changes the package signature (newly exposed NewFileReader / NewFileWriter functions, and the Codec is newly exposed from the session so that 9pr can be reliable).

@simonferquel
Contributor Author

This way of dealing with the problem is too invasive; prefer #29.


5 participants