{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "sublime-sociology",
   "metadata": {
    "papermill": {
     "duration": 0.072467,
     "end_time": "2021-07-20T08:35:08.082626",
     "exception": false,
     "start_time": "2021-07-20T08:35:08.010159",
     "status": "completed"
    },
    "tags": []
   },
   "source": [
    "# **Location of the stress factor in potential evapo-transpiration models**"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "collect-acrylic",
   "metadata": {
    "papermill": {
     "duration": 0.062722,
     "end_time": "2021-07-20T08:35:08.210370",
     "exception": false,
     "start_time": "2021-07-20T08:35:08.147648",
     "status": "completed"
    },
    "slideshow": {
     "slide_type": "slide"
    },
    "tags": []
   },
   "source": [
    "# Part I - Methodology \n",
    "\n",
    "## <u> Motivation </u> \n",
    "\n",
    "### Theoretical background\n",
    "\n",
    "This notebook focuses mainly on the physical meaning of the stress factor whenn put in front of the potential evapo-transpiration model. Thus it will mainly investigate the *constant surface conductance model* which is expressed as:  \n",
    "\n",
    "\\begin{align}\n",
    "    E_{a, cst}  = f_{PAR}.S(\\theta).E_{p,PM}(\\textbf{X})\n",
    "\\end{align}\n",
    "\n",
    "For more details about the formulation of the model and the associated potential evapo-transpiration model, see *location_stress_factor.ipynb* notebook. \n",
    "\n",
    "In the literature, the stress factor put in front of the potential evapo-transpiration model is often interpretated as the shrinkage in the leaf area cover over time. In this notebook we investigate the information contained within the stress factor and the meaning of the stress factor in order to better interprete this coefficient\n",
    "\n",
    "### Modelling experiements\n",
    "\n",
    "Two main experiments are carried out to infer the hypothesis : \n",
    "1. The stress factor is reconstructed out of the observations and compared to our artificial stress factor\n",
    "2. The model is computed with and without the fPAR coefficient to infer the information contained in this time serie\n",
    "\n",
    "🚧 All the newly constructed models in this notebook are not multiply by the fPAR factor (classical Penman-Monteith model, varying surface conductance and constant surface conductance models) 🚧"
   ]
  },
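  {
   "cell_type": "markdown",
   "id": "sketch-stress",
   "metadata": {},
   "source": [
    "Before the detailed computations, the sketch below illustrates the idea behind experiment 1: since $E_{a,cst} = f_{PAR} \\cdot S(\\theta) \\cdot E_{p,PM}(\\textbf{X})$, an observation-based stress factor can be obtained by inverting this relation. This is only a minimal sketch: the arrays `E_obs`, `E_p` and `fPAR` are hypothetical placeholders, not variables defined elsewhere in this notebook.\n",
    "\n",
    "```python\n",
    "# Minimal sketch (not the calibrated workflow of this notebook): reconstruct the\n",
    "# stress factor by inverting E_a = fPAR * S * E_p. The inputs are assumed to be\n",
    "# equally sized arrays of observed evaporation, potential evaporation and fPAR.\n",
    "import numpy as np\n",
    "\n",
    "def reconstruct_stress_factor(E_obs, E_p, fPAR):\n",
    "    E_obs = np.asarray(E_obs, dtype=float)\n",
    "    E_p = np.asarray(E_p, dtype=float)\n",
    "    fPAR = np.asarray(fPAR, dtype=float)\n",
    "    S = np.full_like(E_obs, np.nan)           # keep NaN where E_p or fPAR is unusable\n",
    "    valid = (fPAR * E_p) > 0\n",
    "    S[valid] = E_obs[valid] / (fPAR[valid] * E_p[valid])\n",
    "    return np.clip(S, 0.0, 1.0)               # a stress factor is expected to lie in [0, 1]\n",
    "```"
   ]
  },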
  {
   "cell_type": "markdown",
   "id": "conventional-uruguay",
   "metadata": {
    "papermill": {
     "duration": 0.068018,
     "end_time": "2021-07-20T08:35:08.338529",
     "exception": false,
     "start_time": "2021-07-20T08:35:08.270511",
     "status": "completed"
    },
    "tags": []
   },
   "source": [
    "# Part II - Functions set up"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fatal-roller",
   "metadata": {
    "papermill": {
     "duration": 0.056005,
     "end_time": "2021-07-20T08:35:08.458221",
     "exception": false,
     "start_time": "2021-07-20T08:35:08.402216",
     "status": "completed"
    },
    "tags": []
   },
   "source": [
    "## Importing relevant packages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "shaped-operations",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-07-20T08:35:08.596187Z",
     "iopub.status.busy": "2021-07-20T08:35:08.590824Z",
     "iopub.status.idle": "2021-07-20T08:35:13.176429Z",
     "shell.execute_reply": "2021-07-20T08:35:13.177275Z"
    },
    "papermill": {
     "duration": 4.655975,
     "end_time": "2021-07-20T08:35:13.178429",
     "exception": false,
     "start_time": "2021-07-20T08:35:08.522454",
     "status": "completed"
    },
    "tags": []
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "WARNING (aesara.link.c.cmodule): install mkl with `conda install mkl-service`: No module named 'mkl'\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "WARNING (aesara.tensor.blas): Using NumPy C-API based implementation for BLAS functions.\n"
     ]
    }
   ],
   "source": [
    "# data manipulation and plotting\n",
    "import pandas as pd\n",
    "import matplotlib.pyplot as plt\n",
    "from matplotlib._layoutgrid import plot_children\n",
    "from matplotlib.patches import Polygon\n",
    "from collections import OrderedDict\n",
    "from IPython.display import display\n",
    "import os # to look into the other folders of the project\n",
    "import importlib.util # to open the .py files written somewhere else\n",
    "#sns.set_theme(style=\"whitegrid\")\n",
    "\n",
    "# Sympy and sympbolic mathematics\n",
    "from sympy import (asin, cos, diff, Eq, exp, init_printing, log, pi, sin, \n",
    "                   solve, sqrt, Symbol, symbols, tan, Abs)\n",
    "from sympy.physics.units import convert_to\n",
    "init_printing() \n",
    "from sympy.printing import StrPrinter\n",
    "from sympy import Piecewise\n",
    "StrPrinter._print_Quantity = lambda self, expr: str(expr.abbrev)    # displays short units (m instead of meter)\n",
    "from sympy.printing.aesaracode import aesara_function\n",
    "from sympy.physics.units import *    # Import all units and dimensions from sympy\n",
    "from sympy.physics.units.systems.si import dimsys_SI, SI\n",
    "\n",
    "# for ESSM, environmental science for symbolic math, see https://github.com/environmentalscience/essm\n",
    "from essm.variables._core import BaseVariable, Variable\n",
    "from essm.equations import Equation\n",
    "from essm.variables.units import derive_unit, SI, Quantity\n",
    "from essm.variables.utils import (extract_variables, generate_metadata_table, markdown, \n",
    "                                  replace_defaults, replace_variables, subs_eq)\n",
    "from essm.variables.units import (SI_BASE_DIMENSIONS, SI_EXTENDED_DIMENSIONS, SI_EXTENDED_UNITS,\n",
    "                                  derive_unit, derive_baseunit, derive_base_dimension)\n",
    "\n",
    "# For netCDF\n",
    "import netCDF4\n",
    "import numpy as np\n",
    "import xarray as xr\n",
    "import warnings\n",
    "from netCDF4 import Dataset\n",
    "\n",
    "# For regressions\n",
    "from sklearn.linear_model import LinearRegression\n",
    "\n",
    "# Deactivate unncessary warning messages related to a bug in Numpy\n",
    "warnings.simplefilter(action='ignore', category=FutureWarning)\n",
    "\n",
    "# for calibration\n",
    "from scipy import optimize\n",
    "\n",
    "from random import random"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "funny-statement",
   "metadata": {
    "papermill": {
     "duration": 0.065427,
     "end_time": "2021-07-20T08:35:13.310676",
     "exception": false,
     "start_time": "2021-07-20T08:35:13.245249",
     "status": "completed"
    },
    "tags": []
   },
   "source": [
    "## Path of the different files (pre-defined python functions, sympy equations, sympy variables)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "plastic-valentine",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-07-20T08:35:13.442520Z",
     "iopub.status.busy": "2021-07-20T08:35:13.441623Z",
     "iopub.status.idle": "2021-07-20T08:35:13.445612Z",
     "shell.execute_reply": "2021-07-20T08:35:13.444820Z"
    },
    "papermill": {
     "duration": 0.072797,
     "end_time": "2021-07-20T08:35:13.445821",
     "exception": false,
     "start_time": "2021-07-20T08:35:13.373024",
     "status": "completed"
    },
    "tags": [
     "parameters"
    ]
   },
   "outputs": [],
   "source": [
    "path_variable = '../../theory/pyFile_storage/theory_variable.py'\n",
    "path_equation = '../../theory/pyFile_storage/theory_equation.py' \n",
    "path_analysis_functions = '../../theory/pyFile_storage/analysis_functions.py'\n",
    "path_data = '../../../data/eddycovdata/'\n",
    "dates_fPAR = '../../../data/fpar_howard_spring/dates_v5'\n",
    "\n",
    "stress_factor_reconstruct = \"stress_factor_reconstruct.png\"\n",
    "constant_VS_fPAR = \"constant_VS_fPAR.png\"\n",
    "complete_VS_incomplete = \"complete_VS_incomplete.png\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "blond-stage",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-07-20T08:35:13.592371Z",
     "iopub.status.busy": "2021-07-20T08:35:13.591294Z",
     "iopub.status.idle": "2021-07-20T08:35:13.594463Z",
     "shell.execute_reply": "2021-07-20T08:35:13.593407Z"
    },
    "papermill": {
     "duration": 0.075531,
     "end_time": "2021-07-20T08:35:13.594715",
     "exception": false,
     "start_time": "2021-07-20T08:35:13.519184",
     "status": "completed"
    },
    "tags": [
     "injected-parameters"
    ]
   },
   "outputs": [],
   "source": [
    "# Parameters\n",
    "path_variable = \"notebooks/theory/pyFile_storage/theory_variable.py\"\n",
    "path_equation = \"notebooks/theory/pyFile_storage/theory_equation.py\"\n",
    "path_analysis_functions = \"notebooks/theory/pyFile_storage/analysis_functions.py\"\n",
    "path_data = \"data/eddycovdata/\"\n",
    "dates_fPAR = \"data/fpar_howard_spring/dates_v5\"\n",
    "stress_factor_reconstruct = (\n",
    "    \"notebooks/Finished_project/meaning_stress_factor/stress_factor_reconstruct.png\"\n",
    ")\n",
    "constant_VS_fPAR = (\n",
    "    \"notebooks/Finished_project/meaning_stress_factor/constant_VS_fPAR.png\"\n",
    ")\n",
    "complete_VS_incomplete = (\n",
    "    \"notebooks/Finished_project/meaning_stress_factor/complete_VS_incomplete.png\"\n",
    ")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "positive-warning",
   "metadata": {
    "papermill": {
     "duration": 0.069195,
     "end_time": "2021-07-20T08:35:13.742273",
     "exception": false,
     "start_time": "2021-07-20T08:35:13.673078",
     "status": "completed"
    },
    "tags": []
   },
   "source": [
    "## Importing the sympy variables and equations defined in the theory.ipynb notebook"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "comic-grass",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-07-20T08:35:13.933390Z",
     "iopub.status.busy": "2021-07-20T08:35:13.896381Z",
     "iopub.status.idle": "2021-07-20T08:35:15.381907Z",
     "shell.execute_reply": "2021-07-20T08:35:15.382939Z"
    },
    "papermill": {
     "duration": 1.583014,
     "end_time": "2021-07-20T08:35:15.383166",
     "exception": false,
     "start_time": "2021-07-20T08:35:13.800152",
     "status": "completed"
    },
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "theta_sat\n",
      "theta_res\n",
      "alpha\n",
      "n\n",
      "m\n",
      "S_mvg\n",
      "theta\n",
      "h\n",
      "S\n",
      "theta_4\n",
      "theta_3\n",
      "theta_2\n",
      "theta_1\n",
      "L\n",
      "Mw\n",
      "Pv\n",
      "Pvs\n",
      "R\n",
      "T\n",
      "c1\n",
      "T0\n",
      "Delta\n",
      "E\n",
      "G\n",
      "H\n",
      "Rn\n",
      "LE\n",
      "gamma\n",
      "alpha_PT\n",
      "c_p\n",
      "w\n",
      "kappa\n",
      "z\n",
      "u_star\n",
      "VH\n",
      "d\n",
      "z_om\n",
      "z_oh\n",
      "r_a\n",
      "g_a\n",
      "r_s\n",
      "g_s\n",
      "c1_e\n",
      "c2_e\n",
      "e\n",
      "T_min\n",
      "T_max\n",
      "RH_max\n",
      "RH_min\n",
      "e_a\n",
      "e_s\n",
      "iv_T\n",
      "T_kv\n",
      "P\n",
      "rho_a\n",
      "VPD\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "eq_m_n\n",
      "eq_MVG_neg_case\n",
      "eq_MVG\n",
      "eq_sat_degree\n",
      "eq_MVG_h\n",
      "eq_h_FC\n",
      "eq_theta_4_3\n",
      "eq_theta_2_1\n",
      "eq_water_stress_simple\n",
      "eq_Pvs_T\n",
      "eq_Delta\n",
      "eq_PT\n",
      "eq_PM\n",
      "eq_PM_VPD\n",
      "eq_PM_g\n",
      "eq_PM_inv\n"
     ]
    }
   ],
   "source": [
    "for code in [path_variable,path_equation]:\n",
    "    name_code = code[-20:-3]\n",
    "    spec = importlib.util.spec_from_file_location(name_code, code)\n",
    "    mod = importlib.util.module_from_spec(spec)\n",
    "    spec.loader.exec_module(mod)\n",
    "    names = getattr(mod, '__all__', [n for n in dir(mod) if not n.startswith('_')])\n",
    "    glob = globals()\n",
    "    for name in names:\n",
    "        print(name)\n",
    "        glob[name] = getattr(mod, name)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "conventional-transparency",
   "metadata": {
    "papermill": {
     "duration": 0.060053,
     "end_time": "2021-07-20T08:35:15.507355",
     "exception": false,
     "start_time": "2021-07-20T08:35:15.447302",
     "status": "completed"
    },
    "tags": []
   },
   "source": [
    "## Importing the performance assessment functions defined in the analysis_function.py file"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "pediatric-donor",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-07-20T08:35:15.642258Z",
     "iopub.status.busy": "2021-07-20T08:35:15.641333Z",
     "iopub.status.idle": "2021-07-20T08:35:15.652892Z",
     "shell.execute_reply": "2021-07-20T08:35:15.652296Z"
    },
    "papermill": {
     "duration": 0.083373,
     "end_time": "2021-07-20T08:35:15.653101",
     "exception": false,
     "start_time": "2021-07-20T08:35:15.569728",
     "status": "completed"
    },
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "AIC\n",
      "AME\n",
      "BIC\n",
      "CD\n",
      "CP\n",
      "IoA\n",
      "KGE\n",
      "MAE\n",
      "MARE\n",
      "ME\n",
      "MRE\n",
      "MSRE\n",
      "MdAPE\n",
      "NR4MS4E\n",
      "NRMSE\n",
      "NS\n",
      "NSC\n",
      "PDIFF\n",
      "PEP\n",
      "R4MS4E\n",
      "RAE\n",
      "RMSE\n",
      "RMedSE\n",
      "RVE\n",
      "bias\n",
      "np\n",
      "nt\n"
     ]
    }
   ],
   "source": [
    "for code in [path_analysis_functions]:\n",
    "    name_code = code[-20:-3]\n",
    "    spec = importlib.util.spec_from_file_location(name_code, code)\n",
    "    mod = importlib.util.module_from_spec(spec)\n",
    "    spec.loader.exec_module(mod)\n",
    "    names = getattr(mod, '__all__', [n for n in dir(mod) if not n.startswith('_')])\n",
    "    glob = globals()\n",
    "    for name in names:\n",
    "        print(name)\n",
    "        glob[name] = getattr(mod, name)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "danish-virtue",
   "metadata": {
    "papermill": {
     "duration": 0.060051,
     "end_time": "2021-07-20T08:35:15.777945",
     "exception": false,
     "start_time": "2021-07-20T08:35:15.717894",
     "status": "completed"
    },
    "tags": []
   },
   "source": [
    "## Data import, preprocess and shape for the computations"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "naked-passing",
   "metadata": {
    "papermill": {
     "duration": 0.071829,
     "end_time": "2021-07-20T08:35:15.914338",
     "exception": false,
     "start_time": "2021-07-20T08:35:15.842509",
     "status": "completed"
    },
    "tags": []
   },
   "source": [
    "### Get the different files where data are stored\n",
    "\n",
    "Eddy-covariance data from the OzFlux network are stored in **.nc** files (NetCDF4 files) which is roughly a panda data frame with meta-data (see https://www.unidata.ucar.edu/software/netcdf/ for more details about NetCDF4 file format). fPAR data are stored in **.txt** files"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "owned-empty",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-07-20T08:35:16.074579Z",
     "iopub.status.busy": "2021-07-20T08:35:16.073075Z",
     "iopub.status.idle": "2021-07-20T08:35:16.081863Z",
     "shell.execute_reply": "2021-07-20T08:35:16.083115Z"
    },
    "papermill": {
     "duration": 0.093672,
     "end_time": "2021-07-20T08:35:16.083483",
     "exception": false,
     "start_time": "2021-07-20T08:35:15.989811",
     "status": "completed"
    },
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['data/eddycovdata/fpar_adelaide_v5.txt', 'data/eddycovdata/fpar_daly_v5.txt', 'data/eddycovdata/fpar_dry_v5.txt', 'data/eddycovdata/fpar_howard_v5.txt', 'data/eddycovdata/fpar_sturt_v5.txt']\n",
      "['data/eddycovdata/AdelaideRiver_L4.nc', 'data/eddycovdata/DalyUncleared_L4.nc', 'data/eddycovdata/DryRiver_L4.nc', 'data/eddycovdata/HowardSprings_L4.nc', 'data/eddycovdata/SturtPlains_L4.nc']\n"
     ]
    }
   ],
   "source": [
    "fPAR_files = []\n",
    "eddy_files = []\n",
    "\n",
    "for file in os.listdir(path_data):\n",
    "    if file.endswith(\".txt\"):\n",
    "        fPAR_files.append(os.path.join(path_data, file))\n",
    "    elif file.endswith(\".nc\"):\n",
    "        eddy_files.append(os.path.join(path_data, file))\n",
    "        \n",
    "fPAR_files.sort()\n",
    "print(fPAR_files)\n",
    "eddy_files.sort()\n",
    "print(eddy_files)"
   ]
  },
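  {
   "cell_type": "markdown",
   "id": "inspect-netcdf",
   "metadata": {},
   "source": [
    "As an optional sanity check (a minimal sketch, not part of the analysis), one of the NetCDF files listed above can be opened with `xarray`, which was imported earlier. This assumes the file can be decoded by the default netCDF4 backend; the printed variables should include the fluxes used later (`Fe`, `Fn`, `Fg`, ...).\n",
    "\n",
    "```python\n",
    "# Open the HowardSprings file (index 3 in the sorted list above) and inspect it\n",
    "ds_check = xr.open_dataset(eddy_files[3])\n",
    "print(list(ds_check.data_vars)[:10])                      # first few variable names\n",
    "print(ds_check.time.values[0], ds_check.time.values[-1])  # time span of the record\n",
    "ds_check.close()\n",
    "```"
   ]
  },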
  {
   "cell_type": "markdown",
   "id": "aging-allah",
   "metadata": {
    "papermill": {
     "duration": 0.11457,
     "end_time": "2021-07-20T08:35:16.276528",
     "exception": false,
     "start_time": "2021-07-20T08:35:16.161958",
     "status": "completed"
    },
    "tags": []
   },
   "source": [
    "### Define and test a function that process the fPAR data\n",
    "In the **.txt** files, only one value per month is given for the fPAR. The following function takes one .txt file containing data about the fPAR coefficients, and the related dates, stored in the a seperate file. The fPAR data (date and coefficients) are cleaned (good string formatting), mapped together and averaged to output one value per month (the fPAR measurement period doesn't spans the measurement period of the eddy covariance data)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "destroyed-motor",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-07-20T08:35:16.452496Z",
     "iopub.status.busy": "2021-07-20T08:35:16.451059Z",
     "iopub.status.idle": "2021-07-20T08:35:16.454896Z",
     "shell.execute_reply": "2021-07-20T08:35:16.455419Z"
    },
    "papermill": {
     "duration": 0.095643,
     "end_time": "2021-07-20T08:35:16.455636",
     "exception": false,
     "start_time": "2021-07-20T08:35:16.359993",
     "status": "completed"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "def fPAR_data_process(fPAR_file,dates_fPAR):\n",
    "    \n",
    "    fparv5_dates = np.genfromtxt(dates_fPAR, dtype='str', delimiter=',')\n",
    "    fparv5_dates = pd.to_datetime(fparv5_dates[:,1], format=\"%Y%m\")\n",
    "    dates_pd = pd.date_range(fparv5_dates[0], fparv5_dates[-1], freq='MS')\n",
    "\n",
    "    fparv5_howard = np.loadtxt(fPAR_file,delimiter=',', usecols=3 )\n",
    "    fparv5_howard[fparv5_howard == -999] = np.nan\n",
    "    fparv5_howard_pd = pd.Series(fparv5_howard, index = fparv5_dates)\n",
    "    fparv5_howard_pd = fparv5_howard_pd.resample('MS').max()\n",
    "\n",
    "    # convert fparv5_howard_pd to dataframe\n",
    "    fPAR_pd = pd.DataFrame(fparv5_howard_pd)\n",
    "    fPAR_pd = fPAR_pd.rename(columns={0:\"fPAR\"})\n",
    "    fPAR_pd.index = fPAR_pd.index.rename(\"time\")\n",
    "\n",
    "    # convert fPAR_pd to xarray to aggregate the data\n",
    "    fPAR_xr = fPAR_pd.to_xarray()\n",
    "    fPAR_agg = fPAR_xr.fPAR.groupby('time.month').max()\n",
    "\n",
    "    # convert back to dataframe\n",
    "    fPAR_pd = fPAR_agg.to_dataframe()\n",
    "    Month = np.arange(1,13)\n",
    "    Month_df = pd.DataFrame(Month)\n",
    "    Month_df.index = fPAR_pd.index\n",
    "    Month_df = Month_df.rename(columns={0:\"Month\"})\n",
    "\n",
    "    fPAR_mon = pd.concat([fPAR_pd,Month_df], axis = 1)\n",
    "    \n",
    "    return(fPAR_mon)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "figured-membrane",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-07-20T08:35:16.606677Z",
     "iopub.status.busy": "2021-07-20T08:35:16.592515Z",
     "iopub.status.idle": "2021-07-20T08:35:16.646200Z",
     "shell.execute_reply": "2021-07-20T08:35:16.644996Z"
    },
    "papermill": {
     "duration": 0.133026,
     "end_time": "2021-07-20T08:35:16.646483",
     "exception": false,
     "start_time": "2021-07-20T08:35:16.513457",
     "status": "completed"
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>fPAR</th>\n",
       "      <th>Month</th>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>month</th>\n",
       "      <th></th>\n",
       "      <th></th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>0.78</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>0.84</td>\n",
       "      <td>2</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>0.79</td>\n",
       "      <td>3</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>0.84</td>\n",
       "      <td>4</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>5</th>\n",
       "      <td>0.71</td>\n",
       "      <td>5</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>6</th>\n",
       "      <td>0.75</td>\n",
       "      <td>6</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>7</th>\n",
       "      <td>0.60</td>\n",
       "      <td>7</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>8</th>\n",
       "      <td>0.54</td>\n",
       "      <td>8</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>9</th>\n",
       "      <td>0.52</td>\n",
       "      <td>9</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>10</th>\n",
       "      <td>0.67</td>\n",
       "      <td>10</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>11</th>\n",
       "      <td>0.73</td>\n",
       "      <td>11</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>12</th>\n",
       "      <td>0.78</td>\n",
       "      <td>12</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "       fPAR  Month\n",
       "month             \n",
       "1      0.78      1\n",
       "2      0.84      2\n",
       "3      0.79      3\n",
       "4      0.84      4\n",
       "5      0.71      5\n",
       "6      0.75      6\n",
       "7      0.60      7\n",
       "8      0.54      8\n",
       "9      0.52      9\n",
       "10     0.67     10\n",
       "11     0.73     11\n",
       "12     0.78     12"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "fPAR_data_process(fPAR_files[3],dates_fPAR)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "reflected-honor",
   "metadata": {
    "papermill": {
     "duration": 0.066689,
     "end_time": "2021-07-20T08:35:16.785489",
     "exception": false,
     "start_time": "2021-07-20T08:35:16.718800",
     "status": "completed"
    },
    "tags": []
   },
   "source": [
    "### fPARSet function\n",
    "Map the fPAR time serie to the given eddy-covariance data. Takes two dataframes as input (one containing the fPAR data, the other containing the eddy-covariance data) and returns a data frame where the fPAR monthly values have been scaled to the time scale of the eddy covariance data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "beginning-genome",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-07-20T08:35:16.947645Z",
     "iopub.status.busy": "2021-07-20T08:35:16.946752Z",
     "iopub.status.idle": "2021-07-20T08:35:16.950943Z",
     "shell.execute_reply": "2021-07-20T08:35:16.950147Z"
    },
    "papermill": {
     "duration": 0.083974,
     "end_time": "2021-07-20T08:35:16.951153",
     "exception": false,
     "start_time": "2021-07-20T08:35:16.867179",
     "status": "completed"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "def fPARSet(df_add, fPAR_pd):\n",
    "    \n",
    "    # construct the time serie of the fPAR coefficients\n",
    "    dummy_len = df_add[\"Fe\"].size\n",
    "    fPAR_val = np.zeros((dummy_len,))\n",
    "    \n",
    "    dummy_pd = df_add\n",
    "    dummy_pd.reset_index(inplace=True)\n",
    "    dummy_pd.index=dummy_pd.time\n",
    "    \n",
    "    month_pd = dummy_pd['time'].dt.month\n",
    "    \n",
    "    for i in range(dummy_len):\n",
    "        current_month = month_pd.iloc[i]\n",
    "        line_fPAR = fPAR_pd[fPAR_pd['Month'] == current_month]\n",
    "        fPAR_val[i] = line_fPAR['fPAR']\n",
    "    \n",
    "    # transform fPAR_val into dataframe to concatenate to df:\n",
    "    fPAR = pd.DataFrame(fPAR_val, index = df_add.index)\n",
    "    df_add = pd.concat([df_add,fPAR], axis = 1)\n",
    "    df_add = df_add.rename(columns = {0:\"fPAR\"})\n",
    "    \n",
    "    return(df_add)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "metallic-witch",
   "metadata": {
    "papermill": {
     "duration": 0.067815,
     "end_time": "2021-07-20T08:35:17.087005",
     "exception": false,
     "start_time": "2021-07-20T08:35:17.019190",
     "status": "completed"
    },
    "tags": []
   },
   "source": [
    "### DataChose function\n",
    "\n",
    "Function taking the raw netcdf4 data file from the eddy covariance measurement and shape it such that it can be used for the computations. Only relevant variables are kept (latent heat flux, net radiation, ground heat flux, soil water content, wind speed, air temperature, VPD, bed shear stress). The desired data period is selected and is reshaped at the desired time scale (daily by default). Uses the fPARSet function defined above\n",
    "\n",
    "List of variable abbreviation : \n",
    "* `Rn` : Net radiation flux\n",
    "* `G` : Ground heat flux \n",
    "* `Sws` : soil moisture\n",
    "* `Ta` : Air temperature\n",
    "* `RH` : Relative humidity\n",
    "* `W` : Wind speed\n",
    "* `E` : measured evaporation\n",
    "* `VPD` : Vapour pressure deficit"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "finnish-greek",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-07-20T08:35:17.237421Z",
     "iopub.status.busy": "2021-07-20T08:35:17.234497Z",
     "iopub.status.idle": "2021-07-20T08:35:17.239293Z",
     "shell.execute_reply": "2021-07-20T08:35:17.240780Z"
    },
    "papermill": {
     "duration": 0.091746,
     "end_time": "2021-07-20T08:35:17.241233",
     "exception": false,
     "start_time": "2021-07-20T08:35:17.149487",
     "status": "completed"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "def DataChose(ds_ref, period_sel, fPAR_given, Freq = \"D\", sel_period_flag = True):\n",
    "    \"\"\"Take subset of dataset if Flag == True, entire dataset else\n",
    "    \n",
    "    ds_ref: xarray object to be considered as the ref for selecting attributes\n",
    "    agg_flag: aggregate the data at daily time scale if true\n",
    "    Flag: select specific period if true (by default)\n",
    "    period: time period to be selected\n",
    "    ----------\n",
    "    Method : \n",
    "    - transform the xarray in panda dataframe for faster iteration\n",
    "    - keep only the necessary columns : Fe, Fn, Fg, Ws, Sws, Ta, ustar, RH\n",
    "    - transform / create new variables : Temperature in °C, T_min/T_max, RH_min/RH_max\n",
    "    - create the Data vector (numpy arrays)\n",
    "    - create back a xarray \n",
    "    - return an xarray\n",
    "    ----------\n",
    "    \n",
    "    Returns an xarray and a Data vector\n",
    "    \"\"\"\n",
    "    \n",
    "    if sel_period_flag:\n",
    "        df = ds_ref.sel(time = period_sel) \n",
    "        # nameXarray_output = period + \"_\" + nameXarray_output\n",
    "    else : \n",
    "        df = ds_ref\n",
    "        \n",
    "    # keep only the columns of interest\n",
    "    df = df[[\"Fe\",\"Fn\",\"Fg\",\"Ws\",\"Sws\",\"Ta\",\"ustar\",\"RH\", \"VPD\",\"ps\",\"Fe_QCFlag\",\"Fn_QCFlag\",\"Fg_QCFlag\",\"Ws_QCFlag\",\"Sws_QCFlag\",\"Ta_QCFlag\",\"ustar_QCFlag\",\"RH_QCFlag\", \"VPD_QCFlag\"]]\n",
    "    \n",
    "    # convert to dataframe\n",
    "    df = df.to_dataframe()\n",
    "    \n",
    "    # aggregate following the rule stated in freq\n",
    "    df = df.groupby([pd.Grouper(level = \"latitude\"), pd.Grouper(level = \"longitude\"), pd.Grouper(level = \"time\", freq = Freq)]).mean()\n",
    "    \n",
    "    # convert data to the good units : \n",
    "    df[\"Fe\"] = df[\"Fe\"]/2.45e6 # divide by latent heat of vaporization\n",
    "    df[\"Ta\"] = df[\"Ta\"]+273 # convert to kelvin\n",
    "    df[\"VPD\"] = df[\"VPD\"]*1000 # convert from kPa to Pa\n",
    "    \n",
    "    # construct the time serie of the fPAR coefficients\n",
    "    df = fPARSet(df,fPAR_given)\n",
    "    \n",
    "    # initialise array for the error\n",
    "    Error_obs = np.zeros((df.Fe.size,))\n",
    "    \n",
    "    for i in range(df.Fe.size):\n",
    "        size_window_left, size_window_right = min(i,7),min(df.Fe.size - i-1, 7)\n",
    "        #print(size_window_left, size_window_right)\n",
    "        sub_set = df.Fe[i-size_window_left : i+size_window_right].to_numpy()\n",
    "        mean_set = np.mean(sub_set)\n",
    "        sdt_set = np.std(sub_set)\n",
    "        error_obs = 2*sdt_set\n",
    "        Error_obs[i] = error_obs\n",
    "    \n",
    "    ErrorObs = pd.DataFrame(Error_obs, index = df.index)\n",
    "    \n",
    "    df = pd.concat([df,ErrorObs], axis = 1)\n",
    "    df = df.rename(columns = {0:\"error\"})\n",
    "\n",
    "        \n",
    "    return(df)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "demonstrated-muscle",
   "metadata": {
    "papermill": {
     "duration": 0.081244,
     "end_time": "2021-07-20T08:35:17.401088",
     "exception": false,
     "start_time": "2021-07-20T08:35:17.319844",
     "status": "completed"
    },
    "tags": []
   },
   "source": [
    "## Compile the different functions defined in the symbolic domain\n",
    "All functions defined with sympy and ESSM are defined in the symbolic domain. In order to be efficiently evaluated, they need to be vectorized to allow computations with numpu arrays. We use the *aesara* printing compiler from the sympy package. Note that this printer replace the older one (*theano*) which is deprecated. A comparison of the performances between the two packages can be found in the aesara repository."
   ]
  },
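  {
   "cell_type": "markdown",
   "id": "aesara-sketch",
   "metadata": {},
   "source": [
    "As a minimal illustration of this compilation step (a sketch with throw-away symbols, not the ESSM variables of this notebook; the `dims`/`dtypes` keywords are assumed to mirror the former `theano_function` interface), a symbolic expression can be turned into a vectorized numerical function as follows:\n",
    "\n",
    "```python\n",
    "# Compile a toy sympy expression into a function acting on 1-D numpy arrays\n",
    "xx, yy = symbols('xx yy')\n",
    "expr = xx**2 + sin(yy)\n",
    "f_num = aesara_function([xx, yy], [expr],\n",
    "                        dims={xx: 1, yy: 1},\n",
    "                        dtypes={xx: 'float64', yy: 'float64'})\n",
    "vals = f_num(np.linspace(0.0, 1.0, 5), np.linspace(0.0, 1.0, 5))  # element-wise evaluation\n",
    "```"
   ]
  },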
  {
   "cell_type": "markdown",
   "id": "editorial-lesbian",
   "metadata": {
    "papermill": {
     "duration": 0.069559,
     "end_time": "2021-07-20T08:35:17.592312",
     "exception": false,
     "start_time": "2021-07-20T08:35:17.522753",