[Test] [Test Hive Connector V2] Test the Hive Source Connector V2 and record the problems encountered in the test #2793
Comments
Please assign it to me; I will try it.
Which issue do you want to work on? I am running the test now and will record all the problems found during testing and add a todo list here. You can take whichever issue you want to handle.
I want to work on #2792, but it seems too difficult for me.
I have already fixed #2792. You can look at other tasks. We have many issues marked as
This issue has been automatically marked as stale because it has not had recent activity for 30 days. It will be closed in the next 7 days if no further activity occurs.
This issue has been closed because it has not received a response for too long. You can reopen it if you encounter similar problems in the future.
SeaTunnel version: dev
Hadoop version: Hadoop 2.10.2
Flink version: 1.12.7
Spark version: 2.4.3, scala version 2.11.12
Problems found to be fixed
Hive Source Connector
1. test text file format table
Total rows: 40
1.1 Test in Flink Engine
1.1.1 Job Config File
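The config file itself was not captured in this text. A minimal sketch of a SeaTunnel V2 batch job reading the Hive text table and printing to the console (the table name, metastore URI, and file path are placeholders, not the values used in the actual test):

```hocon
env {
  # batch read with a single parallel task
  execution.parallelism = 1
  job.mode = "BATCH"
}

source {
  Hive {
    # placeholder values; point these at the real test table and metastore
    table_name = "default.test_hive_source_text"
    metastore_uri = "thrift://localhost:9083"
  }
}

sink {
  # print rows to stdout so the result can be checked in the job log
  Console {}
}
```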
1.1.2 Submit Job Command
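The submit command is missing from this capture. On the dev branch it would typically look like the following (the script name and config path are illustrative):

```bash
# submit the connector-V2 job to the Flink 1.12.x cluster
${SEATUNNEL_HOME}/bin/start-seatunnel-flink-connector-v2.sh \
  --config ./config/hive_source_to_console.conf
```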
1.1.3 Check Result in Flink Job log
Some fields in the data are lost; this can be fixed in the future (#2473).
The row count is right.
1.2 Test in Spark Engine
1.2.1 Job Config File
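The Spark config file was also lost in extraction. The source and sink blocks are the same as in the Flink test; only the env block carries Spark-specific settings. A sketch with illustrative values:

```hocon
env {
  # Spark resource settings; values here are illustrative only
  spark.app.name = "SeaTunnel_Hive_Source_Test"
  spark.executor.instances = 1
  spark.executor.cores = 1
  spark.executor.memory = "1g"
}

source {
  Hive {
    # placeholder values; point these at the real test table and metastore
    table_name = "default.test_hive_source_text"
    metastore_uri = "thrift://localhost:9083"
  }
}

sink {
  Console {}
}
```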
1.2.2 Submit Job Command --deploy-mode CLIENT --master local
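A sketch of the local-mode submission matching the flags named above (script name and config path are illustrative):

```bash
# local-mode run, matching --deploy-mode CLIENT --master local
${SEATUNNEL_HOME}/bin/start-seatunnel-spark-connector-v2.sh \
  --master local \
  --deploy-mode client \
  --config ./config/hive_source_to_console.conf
```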
1.2.3 Check data result
Some fields in the data are lost; this can be fixed in the future.
The row count is right.
1.2.4 Submit Job Command --deploy-mode client --master yarn
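The same job submitted to YARN in client deploy mode would look like this (again, script name and config path are illustrative):

```bash
# YARN submission in client deploy mode
${SEATUNNEL_HOME}/bin/start-seatunnel-spark-connector-v2.sh \
  --master yarn \
  --deploy-mode client \
  --config ./config/hive_source_to_console.conf
```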
1.2.5 Check data result
2. test orc file format table
The test data is the same as for the text file format table.
2.1 Test in Flink engine
2.1.1 Job Config File
2.1.2 Submit Job Command
2.1.3 Check Data Result
The fields in the data are right.
The row count is right.
2.2 Test in Spark Engine
2.2.1 Job Config File
2.2.2 Submit Job Command
2.2.3 Check Data Result
3. test parquet file format table