Conversation

@jzemmels
Collaborator

@jzemmels jzemmels commented Dec 29, 2025

Still to-do:

  • unpack the "values" and "percentiles" columns when computation_type includes "percentile"
  • function argument descriptions -- do we want to generalize the get_params, get_description, and base_url functions to pull the docs for /statistics too?
  • arg type checking (is that done in other waterdata functions?)
  • pull apart get_statistics_data into separate functions to match other read_waterdata patterns
  • Figure out if other read_ogc_data helpers are needed:
    • rejigger_cols
    • deal_with_empty
    • switch_arg_id
    • switch_properties_id
  • resolve merge conflicts
  • (Nice to have) determine if there's a more efficient way to join everything together than the final return_list <- do.call(rbind, return_list_tmp), as it's a slight bottleneck (see the sketch after this list).
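
For context on that last item, here's a minimal sketch of the kind of swap being considered. It assumes return_list_tmp is a list of data frames with matching columns; this is illustrative only, not the PR's actual code:

# Illustrative only: assumes return_list_tmp is a list of data frames
# with matching columns, as built up inside get_statistics_data.
# Current base R combine (the bottleneck noted above):
return_list <- do.call(rbind, return_list_tmp)

# data.table alternative: rbindlist avoids repeated copying and is
# typically much faster for long lists of small data frames.
return_list <- data.table::rbindlist(return_list_tmp, fill = TRUE)
data.table::setDF(return_list)  # convert back to a plain data.frame if needed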

And there's Laura's list of to-dos here: https://code.usgs.gov/water/dataRetrieval/-/issues/450

@jzemmels jzemmels self-assigned this Dec 29, 2025
@jzemmels
Collaborator Author

Regarding the discussion of whether base R or data.table is faster: I've implemented a data.table version of the get_statistics_data function and temporarily put it in the read_waterdata_stats_datatable.R script until we decide whether to add data.table as a dependency in this PR.

I've profiled the rbind-heavy base R version of the function against the data.table version, and the comparison is striking: the data.table version is much faster and uses far less memory.

[Figure: bench::mark results comparing the data.table and base R implementations]

Here's the benchmark code I ran:

bench::mark(
  "data.table" = {
    dat1 <- 
      get_statistics_data_data_table(args = list(approval_status = "approved", 
                                            # state_code = "US:42",
                                            county_code = "US:42:103",
                                            parameter_code = "00095", 
                                            mime_type = "application/json",
                                            computation_type = c("arithmetic_mean", "percentile")),
                                service = "Intervals")
    
    list(
      non_missing_ave = mean(dat1$value, na.rm = TRUE),
      missing_obs = sum(is.na(dat1$value))
    )
  },
  "base R" = {
    dat2 <- 
      get_statistics_data(args = list(approval_status = "approved", 
                                      # state_code = "US:42",
                                      county_code = "US:42:103",
                                      parameter_code = "00095", 
                                      mime_type = "application/json",
                                      computation_type = c("arithmetic_mean", "percentile")),
                          service = "Intervals")
    
    list(
      non_missing_ave = mean(dat2$value, na.rm = TRUE),
      missing_obs = sum(is.na(dat2$value))
    )
  }, min_iterations = 5
)
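
(For anyone reproducing this: the columns worth comparing in the bench::mark() output are median and mem_alloc, and a relative summary makes the gap easier to read. The bm object name below is just a placeholder for the result of the call above.)

# `bm` is a placeholder for the result of the bench::mark() call above
bm[, c("expression", "median", "mem_alloc")]
summary(bm, relative = TRUE)  # times and allocations relative to the fastest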

@ldecicco-USGS
Collaborator

No surprises there; if it weren't for RDB, I would have switched many years ago. Let's go for it. We can start swapping out other code later (off the top of my head, I can think of several places where I've been tempted in the past). Let's leave those types of edits for a dedicated PR later, though.

combined <- data.table::rbindlist(combined_list)
combined <- combined[return_list, on = "rid"]

col_order <- c("parent_time_series_id", "monitoring_location_id", "monitoring_location_name", "parameter_code",
Collaborator

Ideally we can figure out a way to not write these out by hand. On the off-chance they add or rename columns, the function will start to fail (and yes, similar hard-coding has caused some headaches in the past with dataRetrieval). We added some logic (I think just in the current PR that removes max_results) to move all "id" columns to the far right (because users don't want to see those big ol' hashes first) UNLESS they are special, like monitoring_location_id.
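
(A minimal sketch of what that "push _id columns to the right unless special" rule could look like; the function name and default below are made up for illustration, not the actual dataRetrieval code:)

# Illustrative sketch only -- not the dataRetrieval implementation.
# Move every column ending in "_id" to the far right, except "special"
# identifiers that users expect to see up front.
reorder_id_cols <- function(df, keep_first = "monitoring_location_id") {
  id_cols <- setdiff(grep("_id$", names(df), value = TRUE), keep_first)
  df[, c(setdiff(names(df), id_cols), id_cols), drop = FALSE]
}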

Collaborator Author

The column order is no longer hard-coded in the function, but I don't think the current order we get from unpacking the nested JSON is ideal:

c("parameter_code", "unit_of_measure", "parent_time_series_id", 
"value", "percentile", "sample_count", "approval_status", "computation_id", 
"computation", "time_of_year", "time_of_year_type", "monitoring_location_id", 
"monitoring_location_name", "site_type", "site_type_code", "country_code", 
"state_code", "county_code", "geometry") 

I can merge the newest changes into this branch to see if what you mentioned changes anything. Otherwise, what would you recommend?
