Arguments to assessments function cause error #31
Hi, here is another way I checked. The following queries all the 2020 assessment results for Kentucky as JSON, and I can filter by assessment unit:

```r
library(tidyjson)
library(dplyr)

df_2020 <- assessments(state_code = 'KY', organization_id = '21KY', reporting_cycle = '2020', tidy = FALSE)

df_2020 |>
  enter_object("items") |>
  gather_array() |>
  spread_all() |>
  select(-c("array.index", "document.id")) |>
  enter_object("assessments") |>
  gather_array() |>
  spread_all(recursive = TRUE) |>
  filter(assessmentUnitIdentifier == 'KY-1749')
```

That returned a one-row tibble with the assessment results for the KY-1749 assessment unit.
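For anyone unfamiliar with tidyjson, the same enter/gather/spread pattern can be tried on a small inline JSON string, with no API call needed. The field names below are simplified stand-ins for illustration, not the real ATTAINS schema:

```r
library(tidyjson)
library(dplyr)

# Toy document shaped roughly like the ATTAINS response: an "items" array
# whose elements contain an "assessments" array (illustrative fields only)
toy <- '{"items": [
  {"organizationId": "21KY",
   "assessments": [
     {"assessmentUnitIdentifier": "KY-1749", "epaIRCategory": "5"},
     {"assessmentUnitIdentifier": "KY-0001", "epaIRCategory": "2"}
   ]}
]}'

result <- toy %>%
  enter_object("items") %>%                 # step into the items array
  gather_array() %>%                        # one row per item
  spread_all() %>%                          # flatten scalar fields to columns
  select(-c("array.index", "document.id")) %>%  # avoid name clashes downstream
  enter_object("assessments") %>%
  gather_array() %>%                        # one row per assessment
  spread_all(recursive = TRUE) %>%
  filter(assessmentUnitIdentifier == "KY-1749") %>%
  as_tibble()
```

Swapping `toy` for the raw API response gives the pipeline used above.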
Repeat this for 2014:

```r
df_2014 <- assessments(state_code = 'KY', organization_id = '21KY', reporting_cycle = '2014', tidy = FALSE)

df_2014 |>
  enter_object("items") |>
  gather_array() |>
  spread_all() |>
  select(-c("array.index", "document.id")) |>
  enter_object("assessments") |>
  gather_array() |>
  spread_all(recursive = TRUE) |>
  filter(assessmentUnitIdentifier == 'KY-1749')
```

This returned a zero-row tibble, indicating no results for that assessment unit. I am obviously not familiar with the particulars of that site, but it seems it was either not assessed by the state or not included in the data the state uploaded.

**Alternative approach**

I don't know your workflow, but it might work better to download the assessment data for all the assessment units once and filter it locally, instead of using the web API to filter:

```r
df_2020_tidy <- assessments(state_code = 'KY', organization_id = '21KY', reporting_cycle = '2020', tidy = TRUE)

df_2020_tidy$use_assessment |>
  filter(assessment_unit_identifier == "KY-1749")
```

and

```r
df_2020_tidy$parameter_assessment |>
  filter(assessmentUnitIdentifier == "KY-1749")
## I just noticed I missed cleaning the column names here, sorry!
```
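For reference, `janitor::clean_names()` is what converts the API's camelCase names to snake_case in the tidied output. A minimal illustration, with made-up column names:

```r
library(janitor)
library(tibble)

# Hypothetical columns in the camelCase style the API returns
df <- tibble(assessmentUnitIdentifier = "KY-1749",
             epaIRCategory = "5")

# clean_names() rewrites them in snake_case, e.g.
# assessmentUnitIdentifier -> assessment_unit_identifier
df_clean <- clean_names(df)
names(df_clean)
```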
BUT when I went back to 2014 and 2012 with this approach, I got an error:

```r
## Of course this returns an error
df_2014_tidy <- assessments(state_code = 'KY', organization_id = '21KY', reporting_cycle = '2014', tidy = TRUE)
```

So we can write a little function to tidy up the raw JSON instead:

```r
## make sure we have the packages we need:
library(tidyjson)
library(dplyr)
library(tidyr)
library(purrr)

tidy_assessment_data <- function(content) {
  ## return documents
  content %>%
    enter_object("items") %>%
    gather_array() %>%
    spread_all() %>%
    select(-c("array.index", "document.id")) %>%
    enter_object("documents") %>%
    gather_array() %>%
    spread_all(recursive = TRUE) %>%
    select(-"array.index") %>%
    as_tibble() -> content_docs

  ## return use assessment data
  content %>%
    enter_object("items") %>%
    gather_array() %>%
    spread_all() %>%
    select(-c("array.index", "document.id")) %>%
    enter_object("assessments") %>%
    gather_array() %>%
    spread_all(recursive = TRUE) %>%
    select(-c("array.index", "agencyCode")) %>%
    mutate(
      probableSources = map(.data$..JSON, ~{
        .x[["probableSources"]] %>% {
          tibble(
            sourceName = map_chr(., "sourceName"),
            sourceConfirmedIndicator = map_chr(., "sourceConfirmedIndicator"),
            associatedCauseName = map(., ~{
              .x[["associatedCauseNames"]] %>% {
                tibble(
                  causeName = map_chr(., "causeName")
                )}
            })) %>%
            unnest("associatedCauseName", keep_empty = TRUE)
        }})
    ) %>%
    tibble::as_tibble() %>%
    janitor::clean_names() -> content_use_assessments

  ## return parameter assessment data
  content %>%
    enter_object("items") %>%
    gather_array() %>%
    spread_all() %>%
    select(-c("array.index", "document.id")) %>%
    enter_object("assessments") %>%
    gather_array() %>%
    spread_all(recursive = TRUE) %>%
    select(-c("array.index", "agencyCode")) %>%
    enter_object("parameters") %>%
    gather_array() %>%
    spread_all(recursive = TRUE) %>%
    select(-c("array.index")) %>%
    enter_object("associatedUses") %>%
    gather_array() %>%
    select(-"array.index") %>%
    spread_all(recursive = TRUE) %>%
    mutate(seasons = map(.data$..JSON, ~{
      .x[["seasons"]] %>% {
        tibble(
          seasonStartText = map_chr(., "seasonStartText"),
          seasonEndText = map_chr(., "seasonEndText")
        )
      }
    })) %>%
    unnest("seasons", keep_empty = TRUE) -> content_parameter_assessments

  return(list(documents = content_docs,
              use_assessment = content_use_assessments,
              parameter_assessment = content_parameter_assessments))
}
```
```r
## now make the query again but return the raw json
df_2014_raw <- assessments(state_code = 'KY', organization_id = '21KY', reporting_cycle = '2014', tidy = FALSE)
## and tidy it
df_2014_tidy <- tidy_assessment_data(df_2014_raw)
df_2014_tidy
```
This is pretty close to what the other functions return. I don't have any idea why it isn't returning a 303(d) document for 2014, though.
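When a document type seems to be missing, tidyjson's `gather_object()` and `json_types()` are handy for inspecting what the raw response actually contains. The toy JSON below is a simplified stand-in for the real response:

```r
library(tidyjson)
library(dplyr)

# Illustrative structure only; the real ATTAINS documents object is richer
toy <- '{"items": [
  {"documents": [
    {"agencyCode": "S",
     "documentTypes": [{"documentTypeCode": "305(b) Report"}]}
  ]}
]}'

doc_keys <- toy %>%
  enter_object("items") %>% gather_array() %>%
  enter_object("documents") %>% gather_array() %>%
  gather_object() %>%   # one row per key in each document
  json_types()          # adds a column showing each value's JSON type
```

Scanning the `name` and `type` columns shows which keys (and which document types) the state actually uploaded for a given cycle.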
Thank you so much for your quick and thorough response! I think you are right: downloading all assessments and then filtering in R might be the best approach, since it reduces the number of queries to the API. Thank you for providing a function to clean up the raw JSON results. I am adding your name to the acknowledgments and look forward to citing this package in our manuscript!
Thanks! Looking forward to seeing the paper! Feel free to reach out if there are any other issues; this package is definitely a work in progress.
@TraciPopejoy you may want to try the development version in the dev-flatten branch:

```r
remotes::install_github("mps9506/rATTAINS", ref = "dev-flatten")
```

It is going to be a while before I can upload to CRAN because I'm waiting for the tibblify package this release depends on to mature. Hopefully this returns more consistent data structures across all queries to EPA ATTAINS.

```r
assessments(assessment_unit_id = 'KY-1749', state_code = 'KY', organization_id = '21KY', reporting_cycle = '2016')
```
Thank you so much for this update! It works perfectly and has made my life a lot easier. I really appreciate all the hard work you've put into this R package.
Hello,
I am hoping to use this package to query the ATTAINS data for specific areas across the eastern United States. I think most of the data I'm interested in is in the use_assessment tibble produced by assessments(), but I'm having difficulty using the assessments() function with assessment_unit_id designated. I would also like to know EPA IR categories since ~2010, which I think means I'll need to run assessments() multiple times, as shown in your example tutorial (though at the state level).
I can only get the assessments() function to complete a query with an assessment_unit_id when both state_code and organization_id are also supplied. Is this the expected behavior? I also run into an error when I try to specify any reporting cycle other than 2020 (which I think is the most recent one, based on the default arguments). Adding any agency_code arguments doesn't affect the error.

```r
assessments(assessment_unit_id = 'KY-1749', state_code = 'KY', organization_id = '21KY') # works
assessments(assessment_unit_id = 'KY-1749', state_code = 'KY', organization_id = '21KY', reporting_cycle = '2020') # works
assessments(assessment_unit_id = 'KY-1749', state_code = 'KY', organization_id = '21KY', reporting_cycle = '2016') # does not work
```
The error I receive is:

```
Error in `dplyr::select()`:
! Can't subset columns that don't exist.
x Column `agencyCode` doesn't exist.
Run `rlang::last_error()` to see where the error occurred.
```

I downloaded the CRAN version of the package this morning and am running R version 4.1.1 on Windows.
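For what it's worth, the error seems to come from select() being asked to drop a column (agencyCode) that some reporting cycles simply don't return. A defensive pattern for this, sketched below with a made-up tibble and not a description of how rATTAINS is implemented, is tidyselect's `any_of()`, which drops a column when present and silently ignores it otherwise:

```r
library(dplyr)

# Hypothetical result from an older cycle that lacks the agencyCode column
df_no_agency <- tibble::tibble(
  assessmentUnitIdentifier = "KY-1749",
  epaIRCategory            = "5"
)

# select(-agencyCode) would error here; any_of() tolerates the absence
out <- df_no_agency %>%
  select(-any_of(c("array.index", "agencyCode")))
```

With `any_of()`, the same pipeline works whether or not the API returned the column.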
I appreciate your help with this issue and for writing this package! It is going to immensely help and speed up the process; we have about 70 sites across 13 states. Thanks!
Traci