
Hdxdsys 843 Add DTM data #168

Merged

turnerm merged 5 commits into main from HDXDSYS-843-add-dtm on Sep 19, 2024
Conversation

@turnerm (Member) commented Sep 17, 2024

Finally getting the DTM data into HAPI. This isn't the end of the story though: I will also need to split the UNHCR data into the refugees and returnees endpoints.

Please also see OCHA-DAP/hapi-sqlalchemy-schema#64.

The tests will pass once I merge the schema PR above.

github-actions bot commented Sep 17, 2024

Test Results

17 tests +1   17 ✅ +1   11m 20s ⏱️ +59s
 1 suite ±0    0 💤 ±0
 1 file  ±0    0 ❌ ±0

Results for commit 155a9aa. ± Comparison against base commit 8dd2e14.

♻️ This comment has been updated with latest results.

@turnerm turnerm requested review from mcarans and b-j-mills and removed request for mcarans September 17, 2024 13:53
hxl_tags = admin_results["headers"][1]  # the second header row holds the HXL tags
values = admin_results["values"]        # one dict per value column, keyed by admin code
admin_codes = values[0].keys()
for admin_code in admin_codes:
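For reference, a sketch of the shape this loop assumes for the configurable scraper output; the layout is inferred from the indexing above, and the sample values are invented:

# Assumed shape of admin_results (illustrative values only)
admin_results = {
    "headers": (
        ["Admin 2 PCode", "IDPs"],         # column names
        ["#adm2+code", "#affected+idps"],  # HXL tags, i.e. headers[1]
    ),
    "values": [
        {"AF0101": "1200", "AF0102": "800"},  # one dict per value column, keyed by admin code
    ],
}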
@mcarans (Contributor) commented Sep 18, 2024

Given we're probably going to pause HAPI development this quarter this may be a moot point, but for data that has already been put into the right form by the pipeline, I wonder if the simplicity of the YAML-configurable scraper for reading the data is cancelled out by the complexity of the database upload code with its nested for loops.

That was my thinking for humanitarian needs, where it looked much simpler (and more efficient) to read the file and upload to the db in one step, not using a configurable scraper at all. This probably indicates we're missing the right kind of configurable reader, as there would likely be commonality between the IDPs and humanitarian needs reading. Just something to think about.

@turnerm (Member, Author) replied

I was wondering why you didn't use the configurable scraper for HNO, and now that makes so much sense. Indeed, there are a lot of inefficiencies in my DTM implementation, but let's just close our eyes and get it out the door. If we end up moving forward with HAPI / standardization, then I'd imagine it will get refactored into a DTM scraper.
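For what it's worth, a minimal sketch of the one-step pattern described above, assuming a CSV already in the right form and a SQLAlchemy session; the import path, the model name DBIDPs, and the column names are assumptions standing in for the real hapi-sqlalchemy-schema API:

import csv

from hapi_schema.db_idps import DBIDPs  # module path and model name are assumptions


def upload_idps(session, path):
    """Read a prepared IDPs CSV and insert rows in a single pass,
    with no configurable scraper in between."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["population"].startswith("#"):
                continue  # skip the HXL tag row under the headers
            session.add(
                DBIDPs(
                    admin2_ref=row["admin2_code"],
                    reporting_round=int(row["reporting_round"]),
                    operation=row["operation"],
                    population=int(row["population"]),
                )
            )
    session.commit()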

    reporting_round,
    operation,
)  # final fields of the duplicate_row_check key tuple (earlier fields elided in the diff)
if duplicate_row_check in duplicate_rows:
@mcarans (Contributor) commented

Out of curiosity, why are there duplicate rows?

@turnerm (Member, Author) replied

I have the same question; I plan on emailing DTM to ask.
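Until DTM clarifies, the check above implies a first-wins deduplication over a key tuple. A minimal sketch of that pattern, where the key fields other than reporting_round and operation are assumptions:

def deduplicate(rows):
    """Keep only the first occurrence of each key tuple; later exact
    key collisions are dropped rather than double-counted."""
    seen = set()
    unique = []
    for row in rows:
        key = (row["admin_code"], row["reporting_round"], row["operation"])
        if key in seen:
            continue
        seen.add(key)
        unique.append(row)
    return unique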

@coveralls commented

Pull Request Test Coverage Report for Build 10940531302

Details

  • 56 of 56 (100.0%) changed or added relevant lines in 2 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage increased (+0.2%) to 93.337%

Totals
Change from base Build 10922190712: +0.2%
Covered Lines: 1583
Relevant Lines: 1696

💛 - Coveralls

@turnerm turnerm merged commit 27eda92 into main Sep 19, 2024
4 checks passed
@turnerm turnerm deleted the HDXDSYS-843-add-dtm branch September 19, 2024 13:58