
RUM-6307 Make sure ConsentAwareFileOrchestrator is thread safe #2309

Conversation

mariusc83
Collaborator

What does this PR do?

We had an issue in iOS where switching the tracking consent from PENDING to GRANTED asynchronously caused the loss of some events that were about to be written from a different queue. In this PR we make sure this use case is covered with integration tests, and we also mark the delegateOrchestrator as @Volatile in the ConsentAwareFileOrchestrator, in case the Executor re-spawns its worker thread.
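
For context, here is a minimal, self-contained sketch of the idea behind the @Volatile change; the class shape, member names and constructor are simplified assumptions, not the SDK's exact code.

// Simplified sketch (assumed names): the delegate is swapped when consent changes,
// and @Volatile guarantees that a worker thread re-spawned by the Executor reads
// the delegate matching the latest consent value.
import java.io.File

enum class TrackingConsent { PENDING, GRANTED, NOT_GRANTED }

interface FileOrchestrator {
    fun getWritableFile(): File?
}

object NoOpFileOrchestrator : FileOrchestrator {
    override fun getWritableFile(): File? = null
}

class ConsentAwareFileOrchestrator(
    initialConsent: TrackingConsent,
    private val pendingOrchestrator: FileOrchestrator,
    private val grantedOrchestrator: FileOrchestrator
) : FileOrchestrator {

    @Volatile
    private var delegateOrchestrator: FileOrchestrator = resolve(initialConsent)

    // called when the tracking consent changes, possibly from another thread
    fun onConsentUpdated(newConsent: TrackingConsent) {
        delegateOrchestrator = resolve(newConsent)
    }

    override fun getWritableFile(): File? = delegateOrchestrator.getWritableFile()

    private fun resolve(consent: TrackingConsent): FileOrchestrator = when (consent) {
        TrackingConsent.PENDING -> pendingOrchestrator
        TrackingConsent.GRANTED -> grantedOrchestrator
        TrackingConsent.NOT_GRANTED -> NoOpFileOrchestrator
    }
}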

Motivation

What inspired you to submit this pull request?

Additional Notes

Anything else we should know when reviewing?

Review checklist (to be filled by reviewers)

  • Feature or bugfix MUST have appropriate tests (unit, integration, e2e)
  • Make sure you discussed the feature or bugfix with the maintaining team in an Issue
  • Make sure each commit and the PR mention the Issue number (cf the CONTRIBUTING doc)

@mariusc83 mariusc83 self-assigned this Oct 4, 2024
codecov-commenter commented Oct 4, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 70.31%. Comparing base (7ebe827) to head (979fc06).
Report is 6 commits behind head on develop.

Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #2309      +/-   ##
===========================================
- Coverage    70.32%   70.31%   -0.01%     
===========================================
  Files          736      736              
  Lines        27466    27472       +6     
  Branches      4607     4607              
===========================================
+ Hits         19315    19316       +1     
+ Misses        6871     6865       -6     
- Partials      1280     1291      +11     
Files with missing lines | Coverage Δ
...ence/file/advanced/ConsentAwareFileOrchestrator.kt | 96.77% <ø> (ø)

... and 28 files with indirect coverage changes

@mariusc83 mariusc83 marked this pull request as ready for review October 4, 2024 15:03
@mariusc83 mariusc83 requested review from a team as code owners October 4, 2024 15:03
switchConsentThread.start()
switchConsentThread.join()

// When
Contributor

The "// When" section here is empty.

Comment on lines 172 to 182
val thread = Thread {
    featureScope.withWriteContext { _, eventBatchWriter ->
        eventBatchWriter.write(
            rawBatchEvent,
            fakeBatchMetadata,
            eventType
        )
    }
}
thread.start()
thread.join()
Contributor

I'm not quite clear why we need to spin up a new thread for every write request we make; why not put the forEach call into a single dedicated thread? By using .join inside every .forEach iteration you are making the .withWriteContext invocations sequential anyway.
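
For reference, a sketch of the single-dedicated-thread alternative described here; fakeBatches is a hypothetical name for the collection being iterated, the other names come from the snippet above. It is behaviourally equivalent to the thread-per-write version, since the writes stay sequential either way.

// all writes submitted from one dedicated thread, still strictly in order
val writeThread = Thread {
    fakeBatches.forEach { batch ->
        featureScope.withWriteContext { _, eventBatchWriter ->
            eventBatchWriter.write(
                batch,
                fakeBatchMetadata,
                eventType
            )
        }
    }
}
writeThread.start()
writeThread.join()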

Collaborator Author

I wanted to increase the chance of a collision, basically to create the worst-case scenario.

Contributor

But that doesn't increase the chance of a collision, does it? There is no race condition because of the .join: all write requests are strictly sequential.

Collaborator Author

Ah yes, you are totally right: using join there will not work, as it forces the main thread to wait until the thread in progress has finished. I always confuse those :(. I need a CountDownLatch in there.
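
One possible shape for that is a start-gate latch, sketched below; fakeBatches is a hypothetical name, the rest reuses names from the snippets above. All writer threads block on the gate and are released together, so the writes and the consent switch actually run concurrently.

// requires java.util.concurrent.CountDownLatch and java.util.concurrent.TimeUnit
import java.util.concurrent.CountDownLatch
import java.util.concurrent.TimeUnit

val startGate = CountDownLatch(1)
val writersDone = CountDownLatch(fakeBatches.size)

fakeBatches.forEach { batch ->
    Thread {
        // every writer blocks here until the gate opens
        startGate.await()
        featureScope.withWriteContext { _, eventBatchWriter ->
            eventBatchWriter.write(batch, fakeBatchMetadata, eventType)
        }
        writersDone.countDown()
    }.start()
}

// open the gate and switch consent while the writes are in flight
startGate.countDown()
Datadog.setTrackingConsent(TrackingConsent.GRANTED)

// wait for all writes to be submitted before asserting
writersDone.await(5, TimeUnit.SECONDS)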

@mariusc83 mariusc83 force-pushed the mconstantin/rum-6307/add-integration-test-for-async-consent-switching branch 3 times, most recently from d925d60 to af8cecd Compare October 7, 2024 09:15
@mariusc83 mariusc83 force-pushed the mconstantin/rum-6307/add-integration-test-for-async-consent-switching branch from af8cecd to 021b878 Compare October 7, 2024 09:43
@mariusc83 mariusc83 force-pushed the mconstantin/rum-6307/add-integration-test-for-async-consent-switching branch from 021b878 to 979fc06 Compare October 7, 2024 11:16
testedInternalSdkCore.registerFeature(stubFeature)
val featureScope = testedInternalSdkCore.getFeature(fakeFeatureName)
checkNotNull(featureScope)
val countDownLatch = CountDownLatch(1)
Contributor

There is a problem here: the CountDownLatch is initialized with a count of 1, so it reaches 0 as soon as the first batch is written; the countDown() calls for the remaining batches have no further effect.

It means that countDownLatch.await(TimeUnit.SECONDS.toMillis(5), TimeUnit.MILLISECONDS) will be unblocked when the first batch is written, not when all batches are written.

We won't notice the issue, because we also have the ConditionWatcher which waits for the data, but I suppose that if the original intention was "write at least one batch in pending mode and then continue with the tracking consent switch", then this await should be moved inside:

val switchConsentThread = Thread {
    Datadog.setTrackingConsent(TrackingConsent.GRANTED)
}

The switchConsentThread.join() is probably not needed either, if we are relying on the waiting ConditionWatcher (but having it doesn't hurt).
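
Concretely, that suggestion would look something like the sketch below, reusing the countDownLatch and switchConsentThread names from the test:

val switchConsentThread = Thread {
    // wait until at least one batch was written while consent was PENDING
    countDownLatch.await(TimeUnit.SECONDS.toMillis(5), TimeUnit.MILLISECONDS)
    Datadog.setTrackingConsent(TrackingConsent.GRANTED)
}
switchConsentThread.start()
// join() is optional here, since the ConditionWatcher further down already waits
// for the written data, but keeping it doesn't hurt
switchConsentThread.join()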

@mariusc83 mariusc83 closed this Oct 7, 2024