
AI Code Review for azblob pkg #23477

Open
feiyushi opened this issue Sep 20, 2024 · 2 comments
Assignees
Labels
  • customer-reported — Issues that are reported by GitHub users external to the Azure organization.
  • needs-team-attention — Workflow: This issue needs attention from Azure service team or SDK team.
  • needs-team-triage — Workflow: This issue needs the team to triage.
  • question — The issue doesn't require a change to the product in order to be resolved. Most issues start as that.
  • Storage — Storage Service (Queues, Blobs, Files).

Comments

@feiyushi

Bug Report

  • import path of package in question: github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
  • SDK version: 1.4.0
  • output of go version: go version go1.22.6 linux/amd64
  • What happened?
    We have the azblob package vendored in our repo, and when enlisting the package, the merlin bot posted some code review comments. We don't know whether they have merit, but we want to share them.
  • What did you expect or want to happen?
    Please take a look at the comments below. This issue is an FYI.
  • How can we reproduce it?
    n/a
  1. referring to github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/internal/shared/buffer_manager.go
    AI code review (iteration 1)
    In the Grow method, there is a potential issue: the buffer is sent on the channel without checking whether the channel is already full. This could lead to a deadlock if the channel is full and no goroutine is reading from it. It might be better to use a select statement around the channel send so that it does not block indefinitely when the channel is full. Here is the suggested code:
func (pool *mmbPool) Grow() (int, error) {
    if pool.count < pool.max {
        buffer, err := NewMMB(pool.size)
        if err != nil {
            return 0, err
        }
        select {
        case pool.buffers <- buffer:
            pool.count++
        default:
            // Handle the case when the channel is full, e.g., log a warning or error
        }
    }
    return pool.count, nil
}
  2. referring to github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/internal/shared/batch_transfer.go
    AI code review (iteration 1)
    It might be better to handle the case where o.Operation is nil to avoid potential runtime panics. You can add a check at the beginning of the DoBatchTransfer function to return an error if o.Operation is not provided.

Here is the suggested code:

if o.Operation == nil {
    return errors.New("Operation cannot be nil")
}
@github-actions github-actions bot added the Client, customer-reported, needs-team-attention, question, Service Attention, and Storage labels on Sep 20, 2024.

Thanks for the feedback! We are routing this to the appropriate team for follow-up. cc @xgithubtriage.

@jhendrixMSFT jhendrixMSFT removed Service Attention Workflow: This issue is responsible by Azure service team. Client This issue points to a problem in the data-plane of the library. labels Sep 20, 2024
@github-actions github-actions bot added the needs-team-triage Workflow: This issue needs the team to triage. label Sep 20, 2024
@jhendrixMSFT
Member

Here are my thoughts; I'll leave it to the storage team to decide how to proceed.

Regarding Grow(), I think the bot is wrong. The buffers channel is a buffered channel created with capacity pool.max, and since we only grow up to pool.max buffers, a send can never exceed the channel's capacity or block.

DoBatchTransfer is an internal API, so we have complete control over the calling context; Operation will never be nil.
