It sounds like you're creating a circular dependency.
The yarn package depends on the spark-core package; any attempt to pull that code back into spark-core will expose you to bugs in the future and be a nightmare of reflection and code injection.
There are a few solutions I can think of based on this issue:
1. Pull whatever dependencies you need back into spark-core (though judging by our IRL conversation, that might not go over so well).
2. Create a new package that has dependencies on spark-core and yarn.
3. Move the hadoop delegation code into its own package and modify the YARN package to depend on the hadoop-delegation-token package.
Personally, options 2 and 3 seem like the best stop-gap solutions, and either can easily transition into a more ideal setup at a future date.
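To make option 3 concrete, here is a minimal sbt-style sketch of what the module layout could look like. The module and artifact names (hadoop-delegation-token, spark-hadoop-delegation-token) and the Hadoop version are assumptions for illustration, not anything decided in this thread; the point is only that both core and yarn depend on the shared token module, so nothing in yarn has to be pulled back into spark-core.

```scala
// Sketch only: a standalone hadoop-delegation-token module that both core and
// yarn depend on, avoiding the circular dependency described above.
// All names and versions here are illustrative assumptions.
lazy val hadoopDelegationToken = (project in file("hadoop-delegation-token"))
  .settings(
    name := "spark-hadoop-delegation-token",
    libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.7.3" % Provided
  )

lazy val core = (project in file("core"))
  .dependsOn(hadoopDelegationToken)   // core reuses the token code directly
  .settings(name := "spark-core")

lazy val yarn = (project in file("resource-managers/yarn"))
  .dependsOn(core, hadoopDelegationToken)  // yarn keeps its YARN-specific providers
  .settings(name := "spark-yarn")
```

Option 2 would look similar, except the new package would sit on top of core and yarn rather than underneath them.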
While investigating Secure HDFS support, I have found that the recent PRs that move the Delegation Token Renewal logic into Spark Core are instrumental to providing a clean implementation.
The PR of focus is SPARK-20434: Move Hadoop delegation token code from yarn to core (apache/spark#17723, by mgummelt), which is an initial step toward SPARK-16742: Mesos Kerberos Support (apache/spark#18519, by mgummelt). Because we will be re-using a lot of this logic, what is the strategy for re-using the most recent commits instead of using ugly reflection to access private methods in private packages (if that is even possible)?
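For context, the "ugly reflection" alternative would look something like the sketch below: looking up a package-private Spark method by name at runtime. The class and method names (HadoopDelegationTokenManager, obtainDelegationTokens) are assumed for illustration; the fragility is the point, since nothing checks the name or signature at compile time and any upstream refactor breaks it.

```scala
// Hypothetical illustration of reflective access to a private[spark] method
// from outside the org.apache.spark package. Names are assumptions, not a
// real integration; this is the pattern the question hopes to avoid.
import java.lang.reflect.Method

object ReflectiveTokenAccess {
  def obtainTokensViaReflection(manager: AnyRef, creds: AnyRef): AnyRef = {
    // Resolve the method by string name at runtime; no compile-time safety.
    val m: Method = manager.getClass.getMethods
      .find(_.getName == "obtainDelegationTokens")  // assumed method name
      .getOrElse(throw new NoSuchMethodException("obtainDelegationTokens"))
    m.setAccessible(true)
    m.invoke(manager, creds)
  }
}
```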
This issue is in reference to this PR: #373