As in Chapter 11, we merge the different datasets to produce the data table used in the local projections, except that here the data are quarterly rather than monthly.
library(tidyverse)
── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
✔ dplyr 1.1.4 ✔ readr 2.1.4
✔ forcats 1.0.0 ✔ stringr 1.5.0
✔ ggplot2 3.4.4 ✔ tibble 3.2.1
✔ lubridate 1.9.3 ✔ tidyr 1.3.0
✔ purrr 1.0.2
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ dplyr::lag() masks stats::lag()
ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
library(cli)
11.1 Load Intermediate Files
The (quarterly) weather data (Chapter 1) can be loaded:
Weather <- weather_regions_df |>
  # Add ENSO data
  left_join(
    ONI_temp |>
      mutate(quarter = quarter(date)) |>
      mutate(
        year = as.numeric(Year),
        quarter = as.numeric(quarter)
      ) |>
      group_by(year, quarter) |>
      summarise(ONI = mean(ONI)),
    by = c("year" = "year", "quarter" = "quarter")
  ) |>
  group_by(IDDPTO, quarter) |>
  mutate(
    temp_min_dev_ENSO   = temp_min   - mean(temp_min),
    temp_max_dev_ENSO   = temp_max   - mean(temp_max),
    temp_mean_dev_ENSO  = temp_mean  - mean(temp_mean),
    precip_sum_dev_ENSO = precip_sum - mean(precip_sum)
  ) |>
  ungroup() |>
  labelled::set_variable_labels(
    temp_min_dev_ENSO   = "Deviation of Min. Temperature from ENSO Normals",
    temp_max_dev_ENSO   = "Deviation of Max. Temperature from ENSO Normals",
    temp_mean_dev_ENSO  = "Deviation of Mean Temperature from ENSO Normals",
    precip_sum_dev_ENSO = "Deviation of Total Rainfall from ENSO Normals"
  )
This section outlines a two-step procedure for detrending agricultural production data at the regional level for a specific crop and quarter. The procedure involves handling missing values and then performing the detrending itself.
Step 1: Handling Missing Values
In the first step, we address missing values by linear interpolation. This approach helps us estimate the missing values by considering the neighboring data points.
Step 1.1: Imputing missing values with linear interpolation.
The missing values are replaced by linear interpolation. However, if there are more than two consecutive missing values, they are not interpolated. Instead, the series for the specific crop in the given region is split at the locations of the missing values. The split with the largest number of consecutive non-missing values is retained, while the other splits are discarded.
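To illustrate how the interpolation behaves, here is a small, purely hypothetical example using imputeTS::na_interpolation(), the function relied on later in detrend_production(); the maxgap argument controls how long a run of consecutive NAs may be before it is left untouched:
library(imputeTS)

x <- c(10, NA, 14, NA, NA, NA, 22, 24)
# Runs of up to `maxgap` consecutive NAs are filled by linear interpolation;
# longer runs would be left as NA
na_interpolation(x, maxgap = 3)
#> [1] 10 12 14 16 18 20 22 24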
Step 1.2: Dropping Series with Remaining Missing Values
After imputing missing values using linear interpolation, we check whether any missing values remain in the dataset. If some remain for a particular series, we exclude that series from further analysis. By doing so, we ensure that the subsequent detrending process is performed only on reliable and complete data.
Step 2: Detrending the data
Once we have addressed the missing values, we proceed to the second step, which involves detrending the data. Detrending aims to remove the long-term trend or seasonality from the dataset, leaving behind the underlying fluctuations and patterns. To remove the trend from the data, we follow a three-step process.
Step 2.1: Demeaning
First, we normalise each raw data point \(y_{c,i,q,t}^{raw}\) by dividing it by the average of the raw data points for the given crop \(c\), region \(i\), and calendar quarter \(q\) over the entire time period \(T_c\), yielding \(y_{c,i,q,t}^{demeaned}\): \[y_{c,i,q,t}^{demeaned} = \frac{y^{raw}_{c,i,q,t}}{\frac{1}{n_{T_c}}\sum_{t=1}^{T_c}y^{raw}_{c,i,q,t}}\] Here, \(n_{T_c}\) represents the total number of data points for the given crop, region, and quarter.
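As a toy numerical illustration (hypothetical values), normalising a short series by its average gives:
y_raw <- c(80, 100, 120)
# Ratio of each observation to the average of the series
y_raw / mean(y_raw)
#> [1] 0.8 1.0 1.2
Note that in the implementation below, the series is actually normalised by the quarter-specific median rather than the mean, which is more robust to outliers.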
Step 2.2: Quadratic Trend Estimation
Next, we estimate a quadratic trend using ordinary least squares (OLS) regression. We model the demeaned data points \(y_{c,i,q,t}^{demeaned}\) as a quadratic function of time \(t\): \[y^{demeaned}_{c,i,q,t}=\beta_{c,i,q} t + \gamma_{c,i,q} t^{2} + \varepsilon_{c,i,q,t}\] In this equation, \(\varepsilon_{c,i,q,t}\) is the error term, assumed to be normally distributed.
Step 2.3: Detrending
Once we have estimated the coefficients \(\beta_{c,i,q}\) and \(\gamma_{c,i,q}\) by OLS, we can remove the quadratic trend from the data. The detrended data \(y_{c,i,q,t}^{d}\) simply consist of the residuals \(e_{c,i,q,t}\): \[y^{d}_{c,i,q,t} = e_{c,i,q,t}\] The resulting detrended series \(y_{c,i,q,t}^{d}\) represents the original data with the quadratic trend component removed. We also allow the results to be expressed in logarithms.
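The following minimal sketch (toy numbers, base R only) mirrors the regression specification used in detrend_production() below: a quadratic trend without intercept, with the detrended series obtained as the residuals recentred on the mean fitted value:
# Toy demeaned series and its time index (hypothetical values)
y_demeaned <- c(0.80, 0.90, 1.00, 1.05, 1.10, 1.15)
t <- seq_along(y_demeaned)

# Step 2.2: quadratic trend estimated by OLS (no intercept)
ols_fit <- lm(y_demeaned ~ -1 + t + I(t^2))

# Step 2.3: detrended series = residuals, recentred on the mean fitted value
y_detrended <- residuals(ols_fit) + mean(predict(ols_fit))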
Let us implement this process in R. First, we need to define two functions to handle the missing values:
The get_index_longest_non_na() function retrieves the indices of the longest consecutive sequence without missing values from a given input vector. It helps us identify the positions of elements in that sequence.
The keep_values_longest_non_na() function uses the obtained indices to create a logical vector. Each element of this vector indicates whether the corresponding element in the input vector belongs to the longest consecutive sequence of non-missing values. This allows us to filter the data and retain only the values from the longest consecutive sequence without missing values.
These two functions combined help us handle missing data in the weather series and ensure that we work with the most complete sequences for each region and crop.
The first function:
#' Returns the index of the longest sequence of non NA values in a vector
#'
#' @param y vector of numerical values
#' @export
get_index_longest_non_na <- function(y) {
  split_indices <- which(is.na(y))
  nb_obs <- length(y)

  if (length(split_indices) == 0) {
    res <- seq_len(nb_obs)
  } else {
    idx_beg <- c(1, split_indices)
    if (idx_beg[length(idx_beg)] != nb_obs) {
      idx_beg <- c(idx_beg, nb_obs)
    }
    lengths <- diff(idx_beg)
    ind_max <- which.max(lengths)
    index_beginning <- idx_beg[ind_max]
    if (!index_beginning == 1 | is.na(y[index_beginning])) {
      index_beginning <- index_beginning + 1
    }
    index_end <- idx_beg[ind_max] + lengths[ind_max]
    if (is.na(y[index_end])) {
      index_end <- index_end - 1
    }
    res <- seq(index_beginning, index_end)
  }
  res
}
The second one:
#' Returns a logical vector that identifies the longest sequence of non NA
#' values within the input vector
#'
#' @param y numeric vector
keep_values_longest_non_na <- function(y) {
  ids_to_keep <- get_index_longest_non_na(y)
  ids <- seq(1, length(y))
  ids %in% ids_to_keep
}
Note
Those two functions are defined in weatherperu/R/utils.R.
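To see what these helpers do, consider a toy vector whose longest run of non-missing values spans positions 4 to 8:
y <- c(1, NA, NA, 4, 5, 6, 7, 8, NA, 10)
get_index_longest_non_na(y)
#> [1] 4 5 6 7 8
keep_values_longest_non_na(y)
#>  [1] FALSE FALSE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE FALSE FALSE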
We define a function, detrend_production(), that takes the data frame as input, as well as a crop name and a region ID. It returns the detrended production for that region, as a tibble. In the resulting tibble, the detrended values are stored in a column named y.
#' Detrends the production data, using OLS
#'
#' @param df data
#' @param crop_name name of the crop
#' @param region_id id of the region
#' @param in_log if TRUE, the detrended values are expressed in log
#'
#' @returns data frame with the product, the region id, the date, and the
#'   detrended value for the production. If `in_log = TRUE` and there are zeros
#'   the function returns `NULL`. Similarly, if there are two or more
#'   contiguous zeros in the production series, the function returns `NULL`.
#' @export
#' @importFrom dplyr filter arrange mutate select row_number group_by
#' @importFrom tidyr nest unnest
#' @importFrom purrr map
#' @importFrom imputeTS na_interpolation
#' @importFrom stats lm predict residuals
detrend_production <- function(df,
                               crop_name,
                               region_id,
                               in_log = FALSE) {
  # The current data
  df_current <- df |>
    filter(
      product_eng == !!crop_name,
      region_id == !!region_id
    ) |>
    arrange(year, quarter)

  ## Dealing with missing values ----
  # Look for negative production values
  df_current <- df_current |>
    mutate(
      y_new = ifelse(Value_prod < 0, NA, Value_prod)
    )

  if (any(is.na(df_current$y_new))) {
    # Replacing NAs by interpolation
    # If there are more than two contiguous NAs, they are not replaced
    df_current <- df_current |>
      mutate(
        y_new = imputeTS::na_interpolation(y_new, maxgap = 3)
      )

    # Removing obs at the beginning/end if they are still missing
    df_current <- df_current |>
      mutate(
        row_to_keep = !(is.na(y_new) & row_number() %in% c(1:2, (n()-1):(n())))
      ) |>
      filter(row_to_keep) |>
      select(-row_to_keep)

    # Keeping the longest series of continuous non-NA values
    df_current <- df_current |>
      mutate(
        row_to_keep = keep_values_longest_non_na(y_new)
      ) |>
      filter(row_to_keep) |>
      select(-row_to_keep)
  }

  rle_y_new <- rle(df_current$y_new)
  check_contiguous_zeros <- rle_y_new$lengths[rle_y_new$values == 0]

  if (any(check_contiguous_zeros >= 2)) {
    resul <- NULL
  } else {
    ## Detrending
    df_current <- df_current |>
      group_by(quarter) |>
      mutate(
        y_new_normalized = y_new / median(y_new)
      )

    if (any(is.infinite(df_current$y_new_normalized))) {
      resul <- NULL
    } else {
      resul <- df_current |>
        select(product_eng, region_id, year, quarter, y_new, y_new_normalized) |>
        group_by(quarter) |>
        arrange(year) |>
        mutate(t = row_number()) |>
        ungroup() |>
        nest(.by = c(product_eng, region_id, quarter)) |>
        # distinct OLS per quarter
        mutate(
          ols_fit = map(data, ~ lm(y_new_normalized ~ -1 + t + I(t^2), data = .x)),
          resid = map(ols_fit, residuals),
          fitted = map(ols_fit, predict)
          # intercept = map(ols_fit, ~ coef(.x)[["(Intercept)"]])
        ) |>
        # unnest(cols = c(data, resid, intercept)) |>
        unnest(cols = c(data, resid, fitted)) |>
        group_by(quarter) |>
        mutate(
          y = resid + mean(fitted)
          # y = resid
        ) |>
        select(product_eng, region_id, year, quarter, y_new, y) |>
        ungroup() |>
        arrange(year)

      if (in_log) {
        resul <- resul |> mutate(y = log(y))
      }
    }
  }
  resul
}
We can apply this function to all crops of interest, in each region. Let us define a table that contains all the possible combinations of crops and regions, and then apply detrend_production() to each of them.
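The corresponding chunk is not reproduced here; a minimal sketch of what it could look like is given below, assuming the quarterly production data frame is named data_total (a hypothetical name) and contains the product_eng and region_id columns used above:
# All crop x region combinations present in the data
product_and_regions <- data_total |>
  distinct(product_eng, region_id)

# Detrend each series; detrend_production() may return NULL, and
# list_rbind() silently drops those elements
df_detrended <- map2(
  product_and_regions$product_eng,
  product_and_regions$region_id,
  ~ detrend_production(df = data_total, crop_name = .x, region_id = .y)
) |>
  list_rbind()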
We can have a look at the number of quarters with 0 values for the agricultural production. Recall that the series in which two or more contiguous zeros were observed were discarded from the result.
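A possible way to compute these counts, again assuming the raw quarterly production data are stored in the hypothetical data_total object:
data_total |>
  group_by(product_eng) |>
  summarise(nb_quarters_zero = sum(Value_prod == 0, na.rm = TRUE)) |>
  arrange(desc(nb_quarters_zero))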